
Commit

Merge pull request #16 from holysoles/hyperlink_support
parse paragraphs for hyperlinks and convert
holysoles authored Dec 14, 2024
2 parents e70b128 + 65675da commit 7d72aa3
Showing 8 changed files with 99 additions and 35 deletions.
26 changes: 21 additions & 5 deletions app.py
@@ -1,10 +1,10 @@
-from flask import Flask, request, render_template
-from werkzeug.middleware.proxy_fix import ProxyFix
-from flask_minify import minify
+import yaml
+import re
 from os import listdir
 from os.path import isfile, join, splitext
-import yaml
+from flask import Flask, request, render_template
+from werkzeug.middleware.proxy_fix import ProxyFix
+from flask_minify import minify
 
 app = Flask(__name__)

@@ -55,16 +55,32 @@ def load_post_data(yaml_files, timeline = {}):
         with open(join(posts_dir, yaml_file), 'r') as file:
             post_data = yaml.safe_load(file)
 
-        # parse and preload code snippet files
+        # perform pre-processing
         if post_data.get('body'):
             for body in post_data['body']:
+                # parse and preload code snippet files
                 if body.get('code'):
                     with open(join(code_snippet_path, body['code']), 'r') as file:
                         code_snippet = file.read()
                     body['code'] = code_snippet
+                if body.get('text'):
+                    body['text'] = parse_hyperlinks(body['text'])
         post_array.append(post_data)
     return post_array
 
+def parse_hyperlinks(paragraph: str) -> str:
+    markdown_link_re = re.compile(r'\[[^\[\]]*\]\([^\(\)]*\)')
+    text_re = re.compile(r'(?<=\[)[^\[\]]*(?=\])')
+    link_re = re.compile(r'(?<=\()[^\(\)]*(?=\))')
+    all_hyperlinks = markdown_link_re.finditer(paragraph)
+    for hyperlink in all_hyperlinks:
+        hyperlink_match = hyperlink.group()
+        text = text_re.search(hyperlink_match).group()
+        link = link_re.search(hyperlink_match).group()
+        new_hyperlink = f"<a href=\"{link}\">{text}</a>"
+        paragraph = paragraph.replace(hyperlink_match, new_hyperlink)
+    return paragraph
 
 @app.route("/", methods=['GET'])
 def home():
     yaml_files, timeline = get_posts()
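For reference, the `parse_hyperlinks` helper introduced in this diff can be exercised on its own. A minimal standalone sketch (the copied function body is from the diff above; the sample sentence is invented for illustration):

```python
import re

# Standalone copy of the parse_hyperlinks helper added in this commit.
def parse_hyperlinks(paragraph: str) -> str:
    # Match a whole markdown link: [text](url)
    markdown_link_re = re.compile(r'\[[^\[\]]*\]\([^\(\)]*\)')
    # Extract the label between the square brackets
    text_re = re.compile(r'(?<=\[)[^\[\]]*(?=\])')
    # Extract the URL between the parentheses
    link_re = re.compile(r'(?<=\()[^\(\)]*(?=\))')
    for hyperlink in markdown_link_re.finditer(paragraph):
        hyperlink_match = hyperlink.group()
        text = text_re.search(hyperlink_match).group()
        link = link_re.search(hyperlink_match).group()
        new_hyperlink = f'<a href="{link}">{text}</a>'
        # Rewrite every occurrence of this markdown link in the paragraph
        paragraph = paragraph.replace(hyperlink_match, new_hyperlink)
    return paragraph

sample = 'I chose [WireGuard](https://www.wireguard.com/) as the VPN protocol.'
print(parse_hyperlinks(sample))
# → I chose <a href="https://www.wireguard.com/">WireGuard</a> as the VPN protocol.
```

Because the rewrite uses a plain string `replace`, repeated identical markdown links in one paragraph are all converted by the first match, and text containing no links passes through unchanged.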
6 changes: 3 additions & 3 deletions blog/posts/2020_4_13.yaml
@@ -1,8 +1,8 @@
 title: "Donating CPU Cycles to Fight COVID-19! "
 date: "4/13/2020"
 body:
-- text: "Reading the title of this post might be a bit confusing, so let me start from the beginning. I was browsing Youtube this weekend when I saw ETAPRIME, who does Raspberry Pi and other SBC Computer projects, made a video with a similar title. I watched his video and learned about a program called BOINC, the Berkely Open Infrastructure for Network Computing. This program allows multiple computers to be given a distributed workload for solving complex problems, effectively acting as a supercomputer."
-- text: "There are various projects that are run through the infrastructure, but one in particular, called Rosetta@home, is run by The Baker Lab at the University of Washing, which is dedicated to understanding the complex structure and function of proteins. They have turned their attention to SARS-COV2 virus and recently were given extra help to allow their software to run on ARM based computers."
+- text: "Reading the title of this post might be a bit confusing, so let me start from the beginning. I was browsing YouTube this weekend when I saw [ETAPRIME](https://www.youtube.com/@ETAPRIME/), who does Raspberry Pi and other SBC computer projects, made a video with a similar title. I watched his video and learned about a program called BOINC, the Berkeley Open Infrastructure for Network Computing. This program allows multiple computers to be given a distributed workload for solving complex problems, effectively acting as a supercomputer."
+- text: "There are various projects that are run through the infrastructure, but one in particular, called [Rosetta@home](https://boinc.bakerlab.org/), is run by The Baker Lab at the University of Washington, which is dedicated to understanding the complex structure and function of proteins. They have turned their attention to the SARS-CoV-2 virus and recently were given extra help to allow their software to run on ARM-based computers."
 - text: "I spent the weekend setting up my Raspberry Pi 4 with 4GB RAM to run uninterrupted on the Rosetta@home Project, since I wasn't currently using it for any personal projects. However after checking the available workload page, I noticed there was a lot more work available to be done by Intel based CPUs, so I also set up my PC to run BOINC and Rosetta@home overnight, as I wouldn't be using my laptop at night anyway. I'm happy to be contributing in any way I can while following stay at home orders!"
 - text: "You can view my total contributions at the link below. Moving forward in the next few days I will be setting up BOINC on my old laptop, as well as another SBC computer I have, an ODROID XU4, so that I can contribute as much as possible!"
-- text: "https://www.boincstats.com/stats/-1/user/detail/43a209fd451d84a19b724a29d3a6bc8f"
+- text: "Check out [my stats here](https://www.boincstats.com/stats/-1/user/detail/43a209fd451d84a19b724a29d3a6bc8f)!"
2 changes: 1 addition & 1 deletion blog/posts/2020_4_5.yaml
@@ -2,7 +2,7 @@ title: "Sneaker Bot (Python and Selenium)"
 date: "4/5/2020"
 body:
 - text: "Having been an avid sneaker collector since high school, I've always had an eye open to combining my technical knowledge with my love for sneakers. In the sneaker community it has become very commonplace to use \"bots\" to purchase the latest sneaker releases, since time and repetitive attempts are key to being able to get a pair. Back in high school had written a simple Javascript program to automate a few button clicks through a Chrome browser extension, but now that my coding experience and knowledge has increased, I thought I would take a crack at making a fully encapsulated desktop program."
-- text: "I chose to use Python as the primary language for this project as I haven't built a GUI before in Python and I thought that would be a good exercise. I was able to find the popular automated web testing framework Selenium. Selenium Webdriver gives the ability to control a browser window through code, such as window locations, button clicks, and more. It is a very straightforward package to install and use, as well as having a popular community with good resources. I built the GUI using elements from PyQt, a Python binding of the popular cross-platform toolkit Qt. This gave me easy access to pre-built elements like buttons, text field, and more, which I was able to access as objects through my code."
+- text: "I chose to use Python as the primary language for this project as I haven't built a GUI before in Python and I thought that would be a good exercise. I was able to find the popular automated web testing framework [Selenium](https://www.selenium.dev/). Selenium WebDriver gives the ability to control a browser window through code, such as window locations, button clicks, and more. It is a very straightforward package to install and use, as well as having a popular community with good resources. I built the GUI using elements from PyQt, a Python binding of the popular cross-platform toolkit Qt. This gave me easy access to pre-built elements like buttons, text fields, and more, which I was able to access as objects through my code."
 - section_title: "Current Features"
 - text: "- The list of URLs, sizes selected, and proxy info can be saved to a custom \".hcp\" format file allowing for import and export."
 - text: "- Preview images of the selected sneaker."
2 changes: 1 addition & 1 deletion blog/posts/2022_5_13.yaml
@@ -1,7 +1,7 @@
 title: "Configuring a Hub-Spoke VPN with WireGuard"
 date: "5/13/2022"
 body:
-- text: "When I recently moved, I unfortunately found that my ISP used double-NAT for their customers. This meant for services I run on my home network that don't support IPv6, such as Plex, or my file share, I was unable to access them externally. To address this, I identified that a hub-spoke VPN configuration would allow me to access my home network when on the go. I choose WireGuard as the VPN protocol for a multitude of reasons: it is highly efficient compared to older protocols like OpenVPN or IPSec, is natively included in the Linux Kernel starting with version 5.6, and is configurable via a typical network interface."
+- text: "When I recently moved, I unfortunately found that my ISP used double-NAT for their customers. This meant for services I run on my home network that don't support IPv6, such as Plex, or my file share, I was unable to access them externally. To address this, I identified that a hub-spoke VPN configuration would allow me to access my home network when on the go. I chose [WireGuard](https://www.wireguard.com/) as the VPN protocol for a multitude of reasons: it is highly efficient compared to older protocols like OpenVPN or IPSec, is natively included in the Linux kernel starting with version 5.6, and is configurable via a typical network interface."
 - text: "By utilizing a server that is publicly accessible, you can route bi-directional traffic from a client not on-prem, into your home network:"
   image: 'hub-spoke-setup-diagram.png'
 - text: "DNS for clients is routed back to my home DNS server (Pi-hole), with my internal domain configured as the search domain. This allows me to perform DNS lookups for clients on my home network, as well as my pi-hole for ad blocking on the go"
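The hub-spoke layout this post describes is usually expressed as a pair of WireGuard configuration files. A minimal sketch, not taken from the post: every key, address, subnet, and the endpoint hostname below is a placeholder.

```ini
# Hub (publicly accessible server) -- /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <hub-private-key>

[Peer]
# Roaming client (spoke)
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32

# Spoke (roaming client) -- wg0.conf
[Interface]
Address = 10.0.0.2/24
PrivateKey = <client-private-key>
# Send DNS back through the tunnel (e.g. to a Pi-hole behind the hub)
DNS = 10.0.0.1

[Peer]
PublicKey = <hub-public-key>
Endpoint = vpn.example.com:51820
# Route the VPN subnet and the home LAN through the hub
AllowedIPs = 10.0.0.0/24, 192.168.1.0/24
# Keep the NAT mapping alive so the hub can reach the roaming client
PersistentKeepalive = 25
```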
6 changes: 3 additions & 3 deletions blog/posts/2023_11_23.yaml
@@ -3,7 +3,7 @@ date: "11/23/2023"
 body:
 - section_title:
   text: "A recent project I worked on was improving the fault tolerance of my home network, specifically DNS. Previously, I was running a single instance of
-Pi-hole, which filters out unwanted DNS queries, and forwarded the rest to my upstream Windows Domain Controllers with integrated DNS. From there, queries go
+[Pi-hole](https://pi-hole.net/), which filters out unwanted DNS queries, and forwarded the rest to my upstream Windows Domain Controllers with integrated DNS. From there, queries go
 out to a public resolver. This approach had a few drawbacks. Two issue stemmed from that the Pi-hole instance was running bare-metal on a Raspberry Pi, which
 while usually reliable, was not tolerant of hardware issues. Patching the Raspberry Pi or rebooting it for other reasons would also cause a DNS service outage,
 which was undesirable. The Raspberry Pi was also being used for other services which occasionally could introduce undesirable system load. Another issue I
@@ -13,7 +13,7 @@ body:
 Pi-hole instances."
   image: "DNS_diagram.png"
 - section_title: "NetScaler Configuration"
-  text: "Getting a NetScaler instance up and running is actually pretty easy, since as of v12.1, Citrix offers a Freemium licensing option, which is bandwidth
+  text: "Getting a NetScaler instance up and running is actually pretty easy, since as of v12.1, Citrix offers a [Freemium licensing option](https://docs.netscaler.com/en-us/citrix-adc/current-release/licensing.html), which is bandwidth
 restricted to 20 Mbps and doesn't provide access to certain features like GSLB or Citrix Gateway, but neither limitation is an issue for this use case.
 Configuring a simple load balancer for servers on a NetScaler isn't particularly difficult and many general guides exist. At a high level, you need to:"
 - text: "- Define the servers that will provide the DNS service."
@@ -26,7 +26,7 @@ body:
   image: "netscaler_healthcheck.png"
 - section_title: "Pi-hole Container Setup"
   text: "The upstream Pi-hole instances are configured with Docker Compose, deployed as containers on a Docker Swarm cluster, and managed via Portainer.
-I opted for Docker Swarm over a more complex tool like Kubernetes given the relatively low complexity of this project's requirements.
+I opted for [Docker Swarm](https://docs.docker.com/engine/swarm/) over a more complex tool like Kubernetes given the relatively low complexity of this project's requirements.
 I may follow up with migrating these containers to being managed with Kubernetes in the future. Creating a Docker Swarm and joining nodes to it is fairly
 straightforward, and Docker's own documentation is pretty great for those steps (link). Managing these Pi-hole containers via Docker Compose and deploying
 them to the cluster was more complex since not a lot of reference documentation existed. To the side is the Docker Compose YAML used for this. A couple
