<p>Doesmycodework - My code doesn't work - https://doesmycode.work/en-us</p> <h1>The internet is becoming obsolete</h1> <p><em>How governments are taking the free internet away from us.</em> Fri, 08 Aug 2025</p> <p>You have probably seen the recent push by governments towards mass surveillance under the banner of "protecting children". Since we still have the right to free speech, I have my own take on it too.</p> <h2>More and more tracking</h2> <p>A few years ago (I am not talking about the 90s or even the 2000s, but the 2015-2020 era) the internet was a free place where you could chat, play games and discover things without compromising your anonymity at any point. Since the coronavirus pandemic, governments all around the world have started tracking their people, for better or for worse.</p> <p>We reached a point where we would need to send an SMS to the government just to be able to go for a walk. Nevertheless, we accepted this <em>unnecessary</em> tracking and we moved on. But when the pandemic ended, governments lost the ability to track people...so a new idea arose: chat control and ID verification.</p> <p>The European Union started pushing for stricter tracking on the internet with the idea of "protecting children". All this in a world where children can learn how to use the most random free VPN app on the app store to bypass restrictions (and probably sell their data while doing so).</p> <h2>Letting the government choose what's appropriate and what's not</h2> <p>With the new EU ideas for chat control and ID verification, we are essentially allowing the government to become our "big brother" and decide what we should or should not see or type. As you can imagine, these systems could either be exploited by malicious actors, leaking millions if not billions of chat messages that can contain very sensitive information, or leak ID cards and passports, making identity theft easier than ever.
Apart from this, having access to (and probably being able to control) the private messages of people can be used as a massive cyber weapon by governments to control people's views on different matters. All this can be avoided by educating parents on how to safely monitor and protect their children online, something they should be doing themselves without relying on the government. If parents feel like they can't control their children on the internet, they might as well not allow their children to access it until they are at an appropriate age to understand how to protect themselves.</p> <h2>Developers are on their own when it comes to implementing monitoring systems</h2> <p>Even though the EU forces websites to comply and implement ID verification, it provides neither a platform to handle the verification nor guides on how to properly handle user data. This means that every developer is forced to implement such a system without any prior knowledge of how to do so, resulting in two outcomes. They will either implement their own system that will probably be insecure and leak thousands of identities (looking at you, Tea), or they will trust a third-party company to handle the verification, which can and probably will get hacked, leaking millions of identities. In the end it's a loss for both the developers and the end users.</p> <p>Apart from verification, the EU is planning to monitor every single message you send for what it considers malicious. We don't yet know how and if this will be implemented, but it will probably be the same situation as the above. The developer will be responsible for monitoring the messages, and if they don't, fines and more fines. This results in the same issue as above: one mistake by the developer and thousands of messages get leaked, one mistake by a third party and millions of messages are leaked.
Apart from this, chat control will have to be implemented client side (since server side wouldn't be possible with end-to-end encryption), meaning that every chat app will either have to get a shiny backdoor breaking any encryption or leave the EU market.</p> <h2>Democracy is not a thing anymore</h2> <p>Whether you like it or not, democracy is just not a thing anymore. It's not about asking people their opinions on a subject, but about crafting the subject in a way that tricks them, just like chat control and ID verification. The idea of protecting children online is convincing enough for enough parents to agree with mass surveillance. The worst part is that nobody is doing anything to stop it. In order for the EU to implement such an idea they have to make it pass. Since there isn't a limit on how many times they can "suggest" the same thing, they will keep pushing it under different names and slightly modified (for the worse) ideas. First we had chat control, now ProtectEU, and probably some stupid name in the future, until enough people are convinced to make it happen.</p> <h2>Final thoughts</h2> <p>Maybe in the future enough people will have understood how terrible this idea is and will push the EU to revert it. Until then we will have to prepare, because the internet will soon no longer be free.</p> <h1>Extending LUKS volumes in Windows and Fedora dual boot</h1> <p><em>Turns out resizing encrypted partitions is not that easy.</em> Thu, 10 Jul 2025</p> <p>Sometimes you have a setup where you first installed Windows and then some distribution like Fedora, where you can opt in for disk encryption using LUKS and a btrfs volume. This is amazing until you need to extend one of the operating systems' partitions; this is where things get <em>tricky</em>.
After spending a reasonable amount of time trying to make my own laptop work without losing any data, I am confident I can share a guide on how you can do it too.</p> <h2>Requirements</h2> <p>Before starting this process, I <strong>strongly</strong> advise you to create a backup, since a lot of things can go wrong. Additionally, you will need an 8GB+ USB drive which we will use to live boot in order to resize the partitions.</p> <h2>Flash the USB drive</h2> <p>First you will need to flash the USB drive with an operating system of your choice. Since I am using Fedora as my daily Linux distribution, I chose it for the process of resizing the partitions. I recommend <strong>against</strong> the GParted live boot option, as it is limited to partitioning only and lacks networking drivers, so you won't be able to copy-paste any commands or browse the internet.</p> <h2>Creating a new btrfs partition</h2> <p>After you boot into the live environment, I recommend first connecting to the internet and installing GParted. This is because we will use a combination of GParted and Gnome Disk Utility to create our partitions.</p> <p>After you have both tools installed, launch GParted, unlock your drive and shrink it. I decided to split it exactly in half, but you can create a larger volume if you like. In any case, the new partition <strong>must</strong> be the <strong>same</strong> size or <strong>larger</strong>. Keep in mind that the space you leave to your current partition will be the free space you will be able to allocate to other partitions later on. After you split the partition you can apply the changes, and you should end up with some free space next to your Linux partition.</p> <p>Back in Gnome Disk Utility, select your free space and click the <code>+</code> icon to create a new partition. For storage select all available space, and for Type select Other; you can also set a volume name if you like.
In the next window select your file system (which will be btrfs) and enable LUKS encryption if you like. Finally, make sure both your new and old partitions are unlocked, and note down the Device paths of both partitions.</p> <h2>Copying the btrfs data</h2> <blockquote> <p>[!NOTE] All commands from now on should be run as <code>root</code>.</p> </blockquote> <p>Now that we have our new partition, we need to copy the data from the old one. This can be done by first creating a mountpoint for our data:</p> <pre><code>mkdir /mnt/data </code></pre> <p>Then we can mount the partition using:</p> <pre><code>mount /dev/mapper/luks-uuid-of-old-partition /mnt/data </code></pre> <blockquote> <p>[!NOTE] In this example it's assumed you are using LUKS encryption. If you are not using encryption, your device path will probably look something like <code>/dev/sdaX</code>.</p> </blockquote> <p>Now it's time to add the new partition to our current btrfs file system. This can be done with:</p> <pre><code>btrfs device add /dev/mapper/luks-uuid-of-new-partition /mnt/data </code></pre> <p>Now our file system contains two partitions but only one holds data. Let's change that by balancing our files:</p> <pre><code>btrfs balance start --full-balance /mnt/data </code></pre> <blockquote> <p>[!WARNING] This process can take from a few minutes to several hours depending on your files; it took ~5 minutes for my ~50GB worth of files, but it could take more or less time depending on your system specs.
<strong>Do not</strong> interrupt the balance process, as if something goes wrong you risk losing access to your data.</p> </blockquote> <p>This command is silent, but you can monitor the balance process by running:</p> <pre><code>btrfs balance status /mnt/data </code></pre> <p>After the balance process finishes, we need to confirm that everything was copied correctly. We can do so by using the file system show command:</p> <pre><code>btrfs filesystem show /mnt/data </code></pre> <p>You should see your files split across the two partitions. We only need the new partition, so let's remove the old one:</p> <pre><code>btrfs device remove /dev/mapper/luks-uuid-of-old-partition /mnt/data </code></pre> <p>This command will also take some time, since it needs to move all the files from the old partition to the new one. After it's done we can check our new file system with the show command:</p> <pre><code>btrfs filesystem show /mnt/data </code></pre> <p>If everything went well you should only have one partition with all of your data.</p> <h2>Fixing GRUB and crypttab</h2> <p>Before continuing we need to change some things in GRUB and in crypttab so that our drive is recognized and boots normally.</p> <p>For the crypttab part, you need to edit the <code>/etc/crypttab</code> file with your favorite editor and replace the <code>luks-some-uuid</code> and <code>UUID=some-uuid</code> entries with the new UUID of the partition.</p> <p>For GRUB, you need to edit <code>/etc/default/grub</code> and replace <code>GRUB_CMDLINE_LINUX="rd.luks.uuid=luks-some-uuid-of-old-partition rhgb quiet"</code> with your new partition UUID, along with the <code>luks-</code> prefix.</p> <p>Now you can unmount the filesystem by running <code>umount /mnt/data</code>.</p> <h2>Fixing the boot partition</h2> <p>The boot partition still holds some outdated UUIDs we need to update. This can be done by mounting the boot partition, which is the tiny 1GB partition right before the Fedora one.
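</p> <p>If you are not sure which device that is, a quick way to identify it is to list every partition with its size and filesystem (a sketch; column names may vary slightly between <code>lsblk</code> versions):</p>

```shell
# List block devices with size and filesystem type; the ~1GB ext4
# partition sitting right before the LUKS/root partition is /boot.
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
```

<p>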
After you obtain the path, you can create a mountpoint:</p> <pre><code>mkdir /mnt/boot </code></pre> <p>And mount the partition using:</p> <pre><code>mount /dev/your-partition /mnt/boot </code></pre> <p>Now we need to edit the <code>/mnt/boot/grub2/grub.cfg</code> file and replace all instances of the old UUID with the new one. This can be done with the <code>sed</code> command like so:</p> <pre><code>sed -i 's/luks-uuid-of-old-partition/luks-uuid-of-new-partition/g' /mnt/boot/grub2/grub.cfg </code></pre> <p>We also need to do the same for the GRUB boot entries. This can be done by going to the entries directory:</p> <pre><code>cd /mnt/boot/loader/entries </code></pre> <p>And again using the <code>sed</code> command to replace the old UUIDs with the new ones:</p> <pre><code>sed -i 's/luks-uuid-of-old-partition/luks-uuid-of-new-partition/g' *.conf </code></pre> <p>And now we can also unmount the boot partition with:</p> <pre><code>umount /mnt/boot </code></pre> <h2>Moving the free space</h2> <p>Since our data now lives in the right partition, we can delete the left one and gain some free space. Then we can move the boot partition all the way to the right into this free space with GParted. Make sure <strong>not</strong> to change the size of the boot partition; it should be exactly 1024MB. After moving the boot partition you will have the free space next to your Windows installation, and from there you can use the Windows Disk Management tool to extend your partition to the maximum available space.</p> <h2>Booting into Fedora and final touches</h2> <p>After rebooting you should be able to boot into Fedora normally; I recommend pressing the escape key while loading to check for any errors. After you boot, I recommend regenerating the GRUB config: even though we fixed it with the <code>sed</code> commands, this ensures we didn't miss anything, since GRUB will use the <code>/etc/default/grub</code> config file to generate everything.
You can generate the GRUB config by using:</p> <pre><code>grub2-mkconfig -o /etc/grub2.cfg
grub2-mkconfig -o /etc/grub2-efi.cfg
</code></pre> <p>Reboot one last time to use the updated GRUB configs and you should be good to go!</p> <h2>Conclusion</h2> <p>All in all, even though the process may seem dangerous and complex, it can definitely help you avoid the need to format just to reorganize your storage. This guide should also apply for <em>borrowing</em> storage from the Windows partition by following the steps in reverse order, <em>although I have not tested this</em>. In any case, you should ensure you have proper backups before attempting such procedures.</p> <h1>My new project, Tinyauth</h1> <p><em>I just made Tinyauth, the easiest way to secure your apps with a login screen.</em> Fri, 07 Feb 2025</p> <p>First of all, happy new year everyone!</p> <p>Sooo, I recently created a new project called Tinyauth. Tinyauth is a simple authentication middleware made (mostly) for traefik, but I am planning to extend it to other proxies too; <em>it already works with caddy!</em> I wanted to share my journey making this with you and why I am so hyped about it. So let's get started!</p> <h2>Why make Tinyauth</h2> <p>I always wanted some simple authentication for my apps. Of course, there is Authentik and Authelia, but I consider them way too heavy for my use case. Why do I have to edit 100 config files or use a ton of resources for a simple login page? That's where Tinyauth comes in: it is super lightweight since it's written in Go and only uses environment variables for configuration.
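</p> <p>To give a rough idea of what wiring a forward-auth middleware into traefik looks like, here is a config sketch using standard traefik Docker labels. The middleware address (port and path) is an assumption for illustration, not Tinyauth's documented endpoint; check the Tinyauth docs for the real values.</p>

```shell
# Protect a demo app behind a forward-auth middleware in traefik.
# The routers/middlewares label keys are standard traefik dynamic
# configuration; the tinyauth address below is a placeholder.
docker run -d --name whoami \
  --label "traefik.http.routers.whoami.rule=Host(\`whoami.example.com\`)" \
  --label "traefik.http.routers.whoami.middlewares=tinyauth" \
  --label "traefik.http.middlewares.tinyauth.forwardauth.address=http://tinyauth:3000/api/auth/traefik" \
  traefik/whoami
```

<p>With this in place, traefik consults the forward-auth service before every request to the app, which is exactly the flow described below.</p> <p>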
It is also completely stateless, requiring no database or persistent storage.</p> <h2>Understanding how forward auth works</h2> <p>At first, forward auth seemed really simple. The basic idea is that the user makes a request to some app; then, instead of immediately forwarding the request to the app, the proxy asks the forward auth middleware and expects either a 200 to allow the flow to continue or a 400 to block access. traefik and caddy also support a 302 status code to immediately redirect the user to the app. This should be really simple, right? Well, not really. It took me some time to figure out how to correctly redirect the user to the login page and back, but I eventually figured it out and authentication became easy. Since the app is stateless, everything is handled client side. This means that when the user logs in, the Gin sessions middleware generates a small cookie containing the user email and an expiration date that's later sent to the client. This unfortunately doesn't allow the server to invalidate a session, but it is the best possible login method that doesn't require storage, which I don't really want to implement in the app.</p> <h2>OAuth is easy</h2> <p>I initially thought that it would be super hard to implement, but after making a simple Go method to quickly initialize OAuth clients, I just needed to make a robust and extensible provider system that can automatically use the correct provider, so that my API never needs to know which provider to use but can instead call the same functions and let the providers handle everything. After some rewrites my code worked perfectly, and now everything is organized and I can add a new provider with very few lines. Apart from the backend, the frontend only needed to store a redirect cookie that will automatically redirect the user to the app once the OAuth provider does the callback. That's it!
OAuth is done!</p> <h2>Documentation</h2> <p>Documentation was undoubtedly the hardest part of this project. I needed to write multiple guides for the different OAuth providers, and when dealing with the 50 different screens of Google, documentation quickly becomes a huge pain. Additionally, I needed to make sure my documentation was up to date with all the changes I was making to the configuration and all of the new features I was adding. But, with the help of the awesome framework VitePress, I managed to spin up a beautiful documentation site that's fast and integrates with the GitHub CI/CD, so I can just commit and everything gets deployed automatically.</p> <h2>Conclusion</h2> <p>All in all, Tinyauth is my favorite project so far; it makes authentication in the homelab space fast and easy. Feel free to check it out on GitHub <a href="https://github.com/steveiliop56/tinyauth">here</a>. I also made a <a href="https://discord.com/invite/eHzVaCzRRd">Discord server</a> so we can chat about self-hosting and fix your Tinyauth issues. That's all for now, have fun!</p> <h1>How to network boot your Raspberry Pi</h1> <p><em>A simple guide on setting up one or multiple Raspberry Pis to network boot from a Debian server.</em> Tue, 24 Dec 2024</p> <p>Sooo, I always found the idea of just plugging in a Raspberry Pi with an ethernet cable and a power cord and then magically booting from a server without an SD card fascinating! For this reason I decided to dig a little deeper into how PXE booting works and how I can make my own Raspberry Pi compatible PXE boot server. I compiled the guide below on how to set up your own server, using all the information I could find on the internet. So here it is…</p> <h2>How does PXE boot work?</h2> <p>PXE boot consists of two parts: the TFTP server and the NFS server.
The TFTP server is responsible for detecting the Raspberry Pi and serving the boot files; this includes the kernel, overlays and configuration files like <code>cmdline.txt</code> and <code>config.txt</code>. If you like, you can put the boot files on an SD card and only retrieve the root filesystem from the server, but then network booting would be pointless, so in this guide we will use both the TFTP server and the NFS server so we can have a truly SD card free boot experience. The way the Raspberry Pi works with the TFTP server is that it looks for boot files either in the root of the server or in a subdirectory named after the Raspberry Pi's serial number. We can use this to our advantage, since we can provide different configuration files for each Raspberry Pi and point them to different NFS shares for different root filesystems.</p> <h2>PiServer</h2> <p>Back in the old days there used to be a piece of software called PiServer that was available on the desktop version of Raspberry Pi OS <em>the one you run on x86 machines</em>, but this desktop version no longer seems maintained, since it is stuck on Debian buster, which is really not suitable for the newer models.
This is why we will need to set everything up manually without a fancy GUI.</p> <h2>Requirements</h2> <p>But first, requirements… We need some supplies to make this happen.</p> <ul> <li>An x86 machine running Debian bookworm (or the latest version of Debian; bookworm is the latest one at the time of writing), ideally connected over ethernet to your network</li> <li>A Raspberry Pi <em>of course</em> (everything will work except the Zeros, since they don't support network booting)</li> <li>An SD card (we will need that to make sure our firmware is up to date)</li> <li>A power supply for the Raspberry Pi</li> <li>An ethernet switch (your router's ethernet ports will work too)</li> <li>An ethernet cable</li> <li>A monitor (we need to get the Raspberry Pi serial number)</li> <li>A micro HDMI to HDMI cable to connect the Raspberry Pi to the screen</li> </ul> <blockquote> <p>[!NOTE] You could get the Raspberry Pi serial number and MAC address through Raspberry Pi OS too; I just used the go-to method of plugging it into a display and reading the status.</p> </blockquote> <h2>Steps</h2> <p>So enough of the technical part, let's set it up! Here is a step by step guide on all of the steps you should follow:</p> <blockquote> <p>[!WARNING] You will need to run all commands as root on your Debian server, so if you aren't root please switch user with <code>sudo su</code> or <code>su</code>.</p> </blockquote> <h3>Step 1</h3> <p>Connect an SD card to your computer and flash the latest bootloader firmware for network boot. You can do this by opening Raspberry Pi Imager, selecting your Raspberry Pi version (e.g. 4, 5, 3 etc.), and for operating system selecting misc utility images, then bootloader, and the last option, which should be network boot.
After you finish writing, plug the SD card into your Raspberry Pi, power it on and wait until the green LED starts flashing. When it does, power off the Pi and remove the SD card; we will not need it anymore.</p> <h3>Step 2</h3> <p>Now you should plug your Raspberry Pi in with a monitor attached and look for the <code>board</code> section when the bootloader screen appears. It should look something like this:</p> <p><code>board: abcdef abcdefgh ab:ab:ab:ab:ab:ab</code></p> <p>From there, the second string is your serial number (so in the example above the <code>abcdefgh</code>) and the final string (the <code>ab:ab:ab:ab:ab:ab</code>) is your MAC address. Please note these down, as we will need them in the next steps of the guide.</p> <h3>Step 3</h3> <p>Now, back in our fresh Debian install, we need to create some directories.</p> <p>Firstly, for holding our operating system:</p> <p><code>mkdir -p /srv/nfs/pi4-1</code></p> <blockquote> <p>[!NOTE] You can use any name you like here, I just used <code>pi4-1</code> for convenience. If you do change the name, make sure to replace it in all of the commands below.</p> </blockquote> <p>And secondly, for holding our boot files:</p> <p><code>mkdir -p /srv/tftpboot/your-pi-serial-number</code></p> <p>We also need to set the permissions of this directory with the following command:</p> <p><code>chmod -R 777 /srv/tftpboot</code></p> <h3>Step 4</h3> <p>Now it is time to set a static IP address for our server. In order to do this you need to get your interface name. To get it, I made a small one-liner that gives you both your current IP address and interface name. Type this command:</p> <p><code>ip route get 1 | awk '{print $7; print $5}' | paste -sd ' '</code></p> <p>And it should give you an output like <code>192.168.1.1 eth0</code>, where the first part is your IP and the second one is your interface name.</p> <p>So now it is time to disable DHCP for our interface.
To do this, create a file using nano:</p> <p><code>nano /etc/systemd/network/10-your-interface.netdev</code></p> <p>Add the following content:</p> <pre><code>[Match]
Name=your-interface

[Network]
DHCP=no
</code></pre> <p>Then save and exit with <code>CTRL</code> + <code>X</code>, then <code>Y</code> and <code>ENTER</code>.</p> <p>Now we need to manually assign an IP address to our interface. To do so we need to create a network file, which can be done with the following command:</p> <p><code>nano /etc/systemd/network/11-your-interface.network</code></p> <p>Inside this file we add the following content:</p> <pre><code>[Match]
Name=your-interface

[Network]
Address=10.42.0.211/24
DNS=10.42.0.1

[Route]
Gateway=10.42.0.1
</code></pre> <p>In this file you need to set the <code>Address</code> to the server IP we got before, making sure to attach the <code>/24</code> at the end. Additionally, you need to set the <code>DNS</code> and <code>Gateway</code> to your router's IP. Your router's IP is most likely your server's IP with a 1 at the end; for example, if your server's IP is <code>192.168.1.15</code>, your router's address should be <code>192.168.1.1</code>.</p> <p>Finally, we need to set our DNS correctly. This can be done by editing the following file:</p> <p><code>nano /etc/systemd/resolved.conf</code></p> <p>And replacing the contents with:</p> <pre><code>[Resolve]
DNS=10.42.0.1
</code></pre> <p>Make sure to set the <code>DNS</code> to the IP address of your router.</p> <p>Last but not least, enable the networking service with:</p> <p><code>systemctl enable systemd-networkd</code></p> <p>And reboot:</p> <p><code>reboot</code></p> <h3>Step 6</h3> <p>Now it is time to install <code>dnsmasq</code>; this service will host our TFTP directory containing our boot files.
To install it, run the following command:</p> <p><code>apt install tcpdump dnsmasq -y</code></p> <p>Then enable it with:</p> <p><code>systemctl enable dnsmasq</code></p> <p>Now it is time for a small test to make sure everything is set up correctly up to this point. To check, run this command (replacing <code>eth0</code> with your interface name):</p> <p><code>tcpdump -i eth0 port bootpc</code></p> <p>And plug your Raspberry Pi in (without an SD card) and connect an ethernet cable <em>of course</em>. After some time you should see something like this:</p> <p><code>IP 0.0.0.0.bootpc &gt; 255.255.255.255.bootps: BOOTP/DHCP, Request from ab:ab:ab:ab:ab:ab...</code></p> <p>Where <code>ab:ab:ab:ab:ab:ab</code> should be your Pi's MAC address. If you see more requests from other devices, don't worry; you can safely ignore them, we are just looking for the Pi's MAC.</p> <p>After ensuring everything works, it's time to configure <code>dnsmasq</code>. To do so, first clear the config with:</p> <p><code>echo | tee /etc/dnsmasq.conf</code></p> <p>Then edit it with:</p> <p><code>nano /etc/dnsmasq.conf</code></p> <p>And add the following content:</p> <pre><code>port=0
dhcp-range=10.42.0.255,proxy
log-dhcp
enable-tftp
tftp-root=/srv/tftpboot
pxe-service=0,"Raspberry Pi Boot"
</code></pre> <p>Make sure to replace <code>10.42.0.255</code> with your own network's broadcast address. To get it, take your server's IP from the previous step and replace the last part with 255. For example, if your server's IP is <code>192.168.1.2</code>, your broadcast address is <code>192.168.1.255</code>.</p> <h3>Step 7</h3> <p>Now it is time to set up our operating system.
To do so, we need to download the image with:</p> <pre><code>cd /srv
wget https://downloads.raspberrypi.com/raspios_arm64/images/raspios_arm64-2024-11-19/2024-11-19-raspios-bookworm-arm64.img.xz
</code></pre> <p>This is the <code>arm64</code> version; if your Raspberry Pi doesn't support arm64 you can use this URL:</p> <p><code>https://downloads.raspberrypi.com/raspios_armhf/images/raspios_armhf-2024-11-19/2024-11-19-raspios-bookworm-armhf.img.xz</code></p> <blockquote> <p>[!WARNING] These are the latest versions of the images at the time of writing. Before following the next steps, please visit https://www.raspberrypi.com/software/operating-systems/, right click the appropriate download button and get the latest image link.</p> </blockquote> <p>Now it's time to extract it with:</p> <p><code>unxz 2024-11-19-raspios-bookworm-arm64.img.xz</code></p> <p>And create some loop devices from our image with:</p> <p><code>kpartx -av 2024-11-19-raspios-bookworm-arm64.img</code></p> <blockquote> <p>[!NOTE] If you get <code>bash: kpartx: command not found</code>, you can install it with <code>apt install -y kpartx</code></p> </blockquote> <p>Now it's time to mount our loop devices. This can be done by first creating our mountpoints:</p> <p><code>mkdir -p /tmp/{boot,os}</code></p> <p>And then mounting the loop devices (the <code>loop0</code> numbering may differ on your system; check the <code>kpartx</code> output):</p> <p><code>mount -o loop /dev/mapper/loop0p1 /tmp/boot/</code></p> <p><code>mount -o loop /dev/mapper/loop0p2 /tmp/os/</code></p> <p>After this, we need to copy our data with:</p> <p><code>cp -r /tmp/os/* /srv/nfs/pi4-1</code></p> <p><code>mkdir -p /srv/nfs/pi4-1/boot</code></p> <p><code>rm -rf /srv/nfs/pi4-1/boot/*</code></p> <p><code>cp -r /tmp/boot/* /srv/nfs/pi4-1/boot</code></p> <p>Finally, we can unmount our mounts and loop devices with:</p> <p><code>umount /tmp/os/</code></p> <p><code>umount /tmp/boot/</code></p> <p><code>kpartx -dv 2024-11-19-raspios-bookworm-arm64.img</code></p> <p>And restart <code>dnsmasq</code>:</p> <p><code>systemctl restart
dnsmasq</code></p> <h3>Step 8</h3> <p>Now it is time to install our NFS server. This can be done with:</p> <p><code>apt install nfs-kernel-server -y</code></p> <p>Before proceeding, we need to create a bind mount for our boot directory. This can be done with the following command:</p> <p><code>echo "/srv/nfs/pi4-1/boot /srv/tftpboot/your-pi-serial-number none defaults,bind 0 0" &gt;&gt; /etc/fstab</code></p> <p>And mount the directories with:</p> <p><code>systemctl daemon-reload</code></p> <p><code>mount -a</code></p> <p>Then we need to create our exports:</p> <p><code>echo "/srv/nfs/pi4-1 *(rw,sync,no_subtree_check,no_root_squash)" | tee -a /etc/exports</code></p> <blockquote> <p>[!NOTE] You can replace the asterisk (<code>*</code>) with the IP that your router gives to your Pi, so only this Pi can access this filesystem. To get your Pi's IP, hook it up without an SD card to a monitor with an ethernet cable attached and look for <code>YI_ADDR</code></p> </blockquote> <p>Finally, we need to restart all services to pick up the new files:</p> <pre><code>systemctl enable rpcbind
systemctl restart rpcbind
systemctl enable nfs-kernel-server
systemctl restart nfs-kernel-server
</code></pre> <h3>Step 9</h3> <p>Now we need to remove the unused mounts from our <code>fstab</code>. This can be done with:</p> <p><code>sed -i /UUID/d /srv/nfs/pi4-1/etc/fstab</code></p> <p>We also need to tell the Pi where the NFS share is by editing the <code>cmdline</code> file with this command:</p> <p><code>nano /srv/nfs/pi4-1/boot/cmdline.txt</code></p> <p>Delete everything after <code>root</code> (including it) and add the following:</p> <p><code>root=/dev/nfs nfsroot=your-ip:/srv/nfs/pi4-1,vers=3 rw ip=dhcp rootwait</code></p> <p>And finally, create our user account with:</p> <p><code>echo pi:$(openssl passwd -6 raspberry) &gt; /srv/nfs/pi4-1/boot/userconf.txt</code></p> <p>This command creates an account with username <code>pi</code> and password <code>raspberry</code>.</p> <p>(Optional) We
can also enable SSH by running this command:</p> <p><code>touch /srv/nfs/pi4-1/boot/ssh</code></p> <p>Lastly, I noticed that we need to fix some small permissions issues, which can be done by running these commands:</p> <p><code>chown -R 1000:1000 /srv/nfs/pi4-1/home/pi</code></p> <p><code>chown root:root /srv/nfs/pi4-1/usr/bin/sudo</code></p> <p><code>chmod 4755 /srv/nfs/pi4-1/usr/bin/sudo</code></p> <h3>Step 10</h3> <p>And we are done! Now you should be able to plug your Raspberry Pi in with nothing but the power cable and an ethernet cable, and it should automatically pick up the OS and boot!</p> <h2>Conclusion</h2> <p>So that's it! We just made a Raspberry Pi boot from a server automatically through the network! With this setup you can create as many filesystems as you like and network boot a lot of Raspberry Pis for companies, schools, homelabs and for fun <em>of course</em>. Thanks a lot to <a href="https://www.reddit.com/r/raspberry_pi/comments/l7bzq8/guide_pxe_booting_to_a_raspberry_pi_4/">this</a> reddit post for providing a lot of information on how to achieve network boot and match the functionality of the PiServer software. That's it for now…see ya!</p> <h1>Rebuilding my website from scratch using Astro</h1> <p><em>Let me show you how I rewrote my website from the ground up using the Astro web framework!</em> Thu, 21 Nov 2024</p> <p>Sooo, I recently came across a cool framework called <a href="https://astro.build/">Astro</a>. Astro is a web framework designed for statically generated sites like blogs and documentation websites. It's relatively new, being just 3 years old, but it's widely used by the community, so I thought why not give it a try?
And now here we are with my website fully powered by Astro!</p> <h2>Why switch from Hugo?</h2> <p>From the beginning of my blog I wasn't really happy with Hugo. While it's true that it is an extremely fast and powerful static site generator, it has some things I don't like. To begin with, while I love the Go programming language, I hate Go templating. Coming from React, Go templating seems way too complicated and messy and I am definitely not willing to learn it. On top of that, I also didn't really like Hugo's file structure. All these layouts and partials seem way too complex for me. Can I just RTFM and understand how to use them? Yes. Will I? No. Lastly, with Hugo it felt like I didn't have the control I wanted over my website. It didn't feel like I built this website, I just took a template and filled it with content, and I didn't like that.</p> <h2>My experience with Astro</h2> <p>Developing my website with Astro was a really fun and interesting experience. I initially wanted to base my website off a default template since I didn't have any experience building blogs, making it a challenge to follow all the best practices, but later I decided to build it from scratch because…why not? I am a big <a href="https://tailwindcss.com/">Tailwind CSS</a> fan because I find it way more organized to have my styles in the appropriate elements instead of a massive 300-line CSS file. So, I was really happy to see that Astro supported both Tailwind and TypeScript out of the box! Apart from all these, writing my blog was relatively easy. I created my base layout files, which was super easy since it's just HTML and CSS. Concerning the posts section, Astro handles everything by default, even pagination! I just had to make the posts section and a button that when clicked would go to <code>/posts/some-number</code> and that's it!
I also really liked the fact that Astro has some cool transition hooks built-in to make your site feel smooth with just one line of code, yes one line! Last but not least, I had to make my main, about and projects pages, which was an extremely easy and fast process.</p> <h2>The hardest part</h2> <p>For me the hardest part was making the comments section. I am using <a href="https://giscus.app/">giscus</a>, which is basically a comments section for static sites based on GitHub discussions. It's really nice and it works amazingly by default. The issue is when you want to make it switch between dark and light themes. The only solution I found for using a custom theme based on user preference was to render the comments section after the page has loaded, so we can read the theme from local storage/device preference. After the initial load you need to send a message to the giscus iframe to change the theme. Not the best solution, but if it works…don't touch it.</p> <h2>Conclusion</h2> <p>On balance, switching to Astro was a very fun and educational experience. I learned a lot about how to make a website get 100% on the Lighthouse report and how to make a responsive design. If you are interested in making your own blog from scratch I strongly advise you to give Astro a try and trust me, you won’t regret it!</p> Hyperpixel screen with Raspberry Pi OS Bookworm, the ultimate guide: The ultimate guide to getting the Hyperpixel screen working in Raspberry Pi OS Bookworm. Tue, 29 Oct 2024 00:00:00 GMT<p>Sooo, a lot of things have changed in Raspberry Pi OS since the Buster version that the Hyperpixel screen was originally configured to work with. To be more specific, there have been a lot of major changes, including new kernels, rewritten parts of the desktop and, most importantly, the switch from X11 to Wayland.
Pimoroni has tried its best to maintain compatibility with all these changes and they have managed to get the screen working, but by working we mean that the screen displays the desktop; the rest is left to us. I have spent a lot of time trying to create the optimal setup because I refuse to buy the official 7 inch display due to how heavy and big it is. So let me show you how I set up my Raspberry Pi 5 with the Hyperpixel 4-inch display to have the best experience.</p> <blockquote> <p>[!NOTE] This blog post is about the October 22nd 2024 release of Raspberry Pi OS, which switched from the wayfire compositor to the labwc one.</p> </blockquote> <blockquote> <p>[!NOTE] I recommend you use a physical keyboard and mouse instead of a VNC connection because of the rotations the screen will do and the changes that will happen to the cursor.</p> </blockquote> <h2>Changes in <code>config.txt</code></h2> <p>As far as the <code>config.txt</code> file goes, we only need to make some small changes to enable the screen driver in the kernel. This can easily be done by editing the <code>/boot/firmware/config.txt</code> file and adding the following lines after the <code>[all]</code> section:</p> <pre><code># Hyperpixel
dtoverlay=vc4-kms-dpi-hyperpixel4
dtparam=rotate=90
</code></pre> <p>This basically loads the Hyperpixel kernel overlay and rotates the screen 90 degrees, into a landscape configuration (meaning that the USB-C and HDMI ports will be on the top side and the USB ports on the left). After you edit your config, save, exit and reboot. When the Pi boots up again you should see the desktop in a portrait configuration (meaning that the USB-C and HDMI ports are on the right and the USB ports on the top) but don’t worry, we will fix that now.</p> <h2>Rotating the screen correctly</h2> <p>Now it is time to set our screen to a landscape configuration. This can be done by firstly opening the Screen Configuration tool (located in the Preferences menu).
There you should see your main display, called DPI-1. If you have more displays attached, make sure to apply all the changes to this specific one. Right click on the display and set the following options:</p> <p>Active: <code>yes</code> (should be enabled by default)</p> <p>Resolution: <code>480×800</code> (should be the only one)</p> <p>Frequency: <code>60.061Hz</code> (should be the only one)</p> <p>Orientation: <code>Right</code></p> <p>Touchscreen: <code>11-005d Goodix Capacitive TouchScreen</code> (you may not have the exact same one, but you should only have one option, so pick that one)</p> <p>Brightness: <code>100%</code> (should be the default one)</p> <p>When you are done click Apply; the screen should rotate along with the touch input, and the Screen Configuration tool will show you a small popup with a countdown asking you to click OK if everything is good, so there just click OK.</p> <blockquote> <p>[!NOTE] If you are using VNC your cursor will get inverted when you click Apply, so you will have to close the connection and reconnect, and then you can click OK in the popup.</p> </blockquote> <p>And we are almost done.</p> <h2>Enabling the on-screen keyboard</h2> <p>Chances are that you want an on-screen keyboard if you are using a touchscreen. Raspberry Pi OS should technically autodetect the screen and enable it for you, but in my testing I noticed that it wouldn’t show the keyboard on things like the login screen. Don’t worry though, because we can ensure that the keyboard is always on. This can be done by opening the Raspberry Pi Configuration tool (located in the Preferences menu), then going to the Display section and changing the On-Screen Keyboard to Enabled Always. After enabling it you should also see a small keyboard icon on the top right of your screen which can be used to toggle the keyboard (the keyboard auto appears on text inputs but sometimes it can be annoying, so this button allows you to hide it e.g.
on terminal windows).</p> <h2>Screen blanking</h2> <p>As of 31/10/2024 we managed to identify and solve the issue in the wayvnc server, which would control the display's power state at all times, meaning that <code>wlopm</code> wouldn't work. This is now fixed and a new wayvnc package has probably been released as an update, meaning that the only step you have to take to enable screen blanking is to update your system, turn on the setting in the Raspberry Pi Configuration tool, and everything should work.</p> <h2>That’s it!</h2> <p>And that’s it! You should now have a fully functional setup with your Raspberry Pi and the Hyperpixel 4-inch display. If you have a Raspberry Pi 5 and a 3D printer you may be interested in printing yourself a cool little case I made <a href="https://www.printables.com/model/1026124-hyperpixel-raspberry-pi-5-case-v2">here</a>. I also have a version for the Raspberry Pi 4 <a href="https://www.printables.com/model/867835-hyperpixel-4-inch-display-and-raspberry-pi-5-case">here</a>, but I would recommend the one made by asmoll01 <a href="https://www.thingiverse.com/thing:4095591">here</a>. I hope this guide helped you make sense of this mess and I will try to keep it up to date if any changes occur. Have fun!</p> Making presency: I made presency, a simple way to customize the Discord Rich Presence because...why not? Sat, 12 Oct 2024 00:00:00 GMT<p>Sooo, Discord has a really cool feature called Rich Presence, which allows Discord to show your friends the current game you are playing, your current activity etc. Some projects like <a href="https://github.com/leonardssh/vscord">vscord</a> use this Rich Presence API to display what projects you are currently working on.
So, I decided to make my own little helper app that allows you to set any status you want in Rich Presence because…it’s cool and why not?</p> <h2>Electron is bad</h2> <p>My first Google search was “how to make a GUI app with TypeScript”, and the first link was Electron <em>should have never clicked that</em>. In the beginning it looked cool, a framework that allows you to make cross platform desktop apps with frameworks like React. I was quite impressed so I started playing around with it. I immediately found out that I had to use another tool called Electron Forge, which is specifically built for TypeScript support. So, I created an app using their script, installed some dependencies <em>a whole lot of dependencies</em> and started developing. The development experience wasn’t bad since I was using React and Vite, so everything looked familiar. After making my UI, I decided to try and build the app to test if it worked as expected on Windows. I ran the command and…it failed. Apparently I needed to install both Mono and Wine for it to compile the app. After installing 2GB worth of dependencies that I won’t ever use again, I tried the command again and guess what? It failed, saying something about no such file or directory in some <code>/tmp</code> path. I tried debugging it but had no luck, so I did the same thing any developer like me would do: I <code>rm -rf</code>'d the entire directory and started looking for another way to make GUI apps.</p> <h2>Wails is amazing</h2> <p>Since I had no luck with TypeScript based solutions, I started looking for a framework written in my next favorite language, Golang. After some research I found an impressive number of frameworks, but Wails seemed the best <em>and it actually was the best</em>, so I chose that. The installation was extremely easy, I just installed their CLI with Go and ran the create app command.
After this, I ran their awesome <code>wails doctor</code> command, which automatically detected and installed the missing dependencies, which were <strong><em>far</em></strong> fewer than what Electron required. And since it is written in Go, there is no need for things like Wine because Go compiles natively.</p> <h2>The developer experience with Wails</h2> <p>A very important feature all frameworks should have is a developer experience that doesn’t suck, and Wails has this feature. To begin with, it uses React and Vite for the frontend with TypeScript app creation built in, so the whole frontend part was extremely easy since I could install my favorite libraries and frameworks like React Hook Forms and Tailwind. Additionally, an awesome feature Wails has is that it doesn’t try to integrate Go into the React part but instead separates them in a way that is type safe and automatic. By this I mean that when you write your Go functions handling the backend, you write them like this:</p> <pre><code>func (a *App) Hello(name string) string {
    return fmt.Sprintf("Hello %s!", name)
}
</code></pre> <p>And in turn Wails automatically adds a new function in JavaScript:</p> <pre><code>export function Hello(arg1) {
  return window["go"]["main"]["App"]["Hello"](arg1);
}
</code></pre> <p>And its types in TypeScript:</p> <pre><code>export function Hello(arg1: string): Promise&lt;string&gt;;
</code></pre> <p>In this way you can import all your Go backend functions in your frontend and use them like this:</p> <pre><code>import { useEffect, useState } from "react";
import { Hello } from "../wailsjs/go/main/App";

export function HelloText() {
  // the generated binding returns a Promise, so resolve it before rendering
  const [greeting, setGreeting] = useState("");
  useEffect(() =&gt; { Hello("Stavros").then(setGreeting); }, []);
  return &lt;h1&gt;{greeting}&lt;/h1&gt;;
}
</code></pre> <p>I find this extremely simple and amazing to work with.
The only inconvenience I had was that for some reason my Zod object arrived in the Go function as a map. It was easily solvable, but not that type safe, since I had to use <code>type Form map[string]string</code> for the type.</p> <h2>Writing Presency</h2> <p>Writing Presency was fairly simple as it’s not a complex project, so most of my time was spent on the CSS to ensure the UI looked as close to the Discord UI as possible. In the beginning I didn’t plan to make the project public, but I think it would be nice to demonstrate how awesome Wails is and also make a fun Discord helper app. The project is available on my GitHub <a href="https://github.com/steveiliop56/presency">here</a>.</p> <h2>Presency Limitations</h2> <p>Due to how the Rich Presence API works, in order to use custom images you need to create an app in the Discord Developer Portal <a href="https://discord.com/developers/applications">here</a> and upload all your icons under the Rich Presence - Art Assets section. Then just copy your Client ID from the OAuth2 section and paste it into the app. Another tiny limitation Presency has is that it doesn’t have an option to add a button <em>you know, the Join button</em> but I may <em>or may not</em> add it in the future.</p> <h2>Conclusion</h2> <p>Making Presency was a really cool experience for me since I got to learn how to make cool GUI apps with my favorite programming languages. Additionally, now I can set my Rich Presence to whatever I want, which is a bit funny. Was it worth the 3 hours I spent? Hell no. Do I find it cool? Hell yeah!</p> How to get Let's Encrypt SSL certificates with Nginx Proxy Manager: Getting Let's Encrypt SSL certificates is this easy with this tool! Tue, 01 Oct 2024 00:00:00 GMT<p>Sooo, most of us use an IP:Port combination for our self-hosted services, but what if I told you that you can get trusted SSL certificates generated by Let’s Encrypt for free!
Wouldn’t that be awesome? Well it is awesome and it is possible with Nginx Proxy Manager. Let me guide you through how to set it up.</p> <h2>Requirements</h2> <p>In order to follow this tutorial you need the following things:</p> <ul> <li>A virtual machine/LXC/server with a static IP address and ports 80, 81 and 443 free</li> <li>A domain (you can use DuckDNS and get a <code>domain.duckdns.org</code> subdomain if you like).</li> <li>An email address</li> </ul> <h2>Setting up the DNS records</h2> <p>Firstly we need to set up the DNS records of our domain to point to the IP address of our server. This is quite different for every domain registrar, so you will have to do a quick Google search on how to add DNS records to your domain. Once you find out how, you need to add these 2 records:</p> <ul> <li><code>domain.com</code> pointing to <code>your-server-internal-ip</code> as an A record</li> <li><code>*.domain.com</code> pointing to <code>your-server-internal-ip</code> as an A record</li> </ul> <blockquote> <p>[!NOTE] If you don’t want to use the root of your domain, you can also do <code>somesubdomain.domain.com</code> and <code>*.somesubdomain.domain.com</code></p> </blockquote> <h2>Obtaining an API token</h2> <p>Now we need to get our API token. Again, this is different for every domain registrar, but a quick Google search on how to get an API token should do the trick. For registrars like Cloudflare where you can select a template, make sure to select the <code>DNS Edit</code> template as it will make everything easier.</p> <h2>Setting up Nginx Proxy Manager</h2> <p>Setting up Nginx Proxy Manager (which I will refer to as npm) is relatively easy since it is only a docker container. So make sure you have docker and docker compose installed on your server. When you are done, create an <code>nginx-proxy-manager</code> directory and inside create a <code>data</code> directory; this folder will contain your npm data.
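</p> <p>As a quick sketch, those two directories can be created in one go from a terminal (the folder names are the ones this guide uses):</p>

```shell
# create the Nginx Proxy Manager folder with its data directory, then enter it
mkdir -p nginx-proxy-manager/data
cd nginx-proxy-manager
```

<p>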
Now create a <code>docker-compose.yml</code> file with the following content:</p> <pre><code>services:
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      - 80:80
      - 81:81
      - 443:443
    volumes:
      - ./data/data:/data
      - ./data/letsencrypt:/etc/letsencrypt
</code></pre> <blockquote> <p>[!NOTE] I am using the <code>latest</code> image tag here, which is not the best practice. I would recommend you visit the <a href="https://github.com/NginxProxyManager/nginx-proxy-manager">GitHub</a> page and use the latest version as the tag.</p> </blockquote> <p>Now you can simply do <code>docker compose up -d</code> and after a minute or so npm should be accessible on <code>http://yourmachinesip:81</code>. There you can log in with <code>[email protected]</code> and password <code>changeme</code>. After logging in you will be immediately prompted to change your password and set a new email address and name.</p> <h2>Generating the SSL certificates</h2> <p>Everything is now ready for us to generate our certificates, so spin up your npm dashboard and go to the SSL Certificates tab. There click the Add Certificate button and select Let’s Encrypt. In the menu that pops up, add <code>domain.com</code> and <code>*.domain.com</code> in the Domain Names section (or a subdomain if you used that). After this, fill in your email address and flip both the Use DNS Challenge and the I Agree to the Let’s Encrypt Terms of Service switches. When you flip them an extra menu will appear prompting you to choose your domain registrar. After you select your option, a textbox will appear where you should fill in your API token (your email address may be required too for some providers). When you are done click Save and npm will start generating the certificates. There is a chance it fails the first time, but don’t worry, just click Save again and it should generate them.
When it’s done a new entry should appear in the SSL Certificates table with your domain and that’s it, your certificates are now generated!</p> <h2>Adding your first service</h2> <p>Adding services is really easy, just click the Hosts button and select Proxy Hosts from the drop down menu. There click Add Proxy Host and a menu should appear. In this menu firstly add your service’s domain (for example <code>myservice.domain.com</code>). For Scheme select whatever your service is using, for example, Portainer uses SSL on port 9443, so if you use that make sure to select https as the scheme. Now fill in the IP address and port where your service is hosted. Finally flip the Websockets Support switch (a lot of apps use websockets for communication, so if you don’t enable this you may face issues with some apps). For the SSL part go to the SSL tab, select the certificate we just generated from the drop down menu and, if you want to only access the app with https, flip the Force SSL switch. Last but not least, click Save and that’s it! When you go to <code>myservice.domain.com</code> your browser should say it is secure and show the lock icon.</p> <h2>Conclusion</h2> <p>All in all, generating certificates with npm is really easy. I believe a lot of people aren’t using it because they think it’s complex, but with apps like npm, SSL has become easy. Another reason people aren’t using SSL is that they think they need to buy a domain, but in reality you can just use DuckDNS and get your own free subdomain. That’s it for now, see ya!</p> Hugo vs Jekyll vs Ghost, what should you choose? There are a lot of blog-building engines and frameworks out there, but what should you choose? Mon, 09 Sep 2024 00:00:00 GMT<p>So, I recently started a blog (you probably noticed) and I was using Ghost in the beginning but I later switched to Hugo, a static website generator, but why? And what's best for your own blog?
Let's dive into it...</p> <h2>Why did I switch?</h2> <p>So Ghost was a quite "expensive" option for my tiny blog. By this I mean that in order to run Ghost you need a database and the Ghost app itself running on a docker host. At the time my router didn't have port forwarding enabled (due to some ISP configurations), so my only option was Cloudflare Tunnels. It seemed to work fine in the beginning but then it got terribly slow, like 4-6 seconds to load the website, which was massive for a tiny blog like mine. Furthermore, in August I went on holidays and unfortunately I had to power off my server where the website was hosted. I could move to a VPS, but for a website this small it's a bit stupid to do so. For the 15 days I was on holidays my website showed the maintenance page; I of course lost all visitors and got removed from Google search results, which was terrible. The plan was to move to something that could be hosted on GitHub Pages or something similar and that's why I switched to Hugo.</p> <h2>Ghost</h2> <p>As I mentioned above, Ghost was my first blogging platform. It's really a full blogging platform with a sleek UI and everything you could ever need for your blog; it is also designed for handling large audiences and it has support for newsletters, member exclusive posts etc.</p> <h4>Pros</h4> <ul> <li>Very sleek UI</li> <li>Support for members and member only posts</li> <li>Support for newsletters</li> <li>Very user-friendly and easy to customize</li> <li>Optimized for a big audience</li> </ul> <h4>Cons</h4> <ul> <li>Requires an email server</li> <li>You cannot convert posts back to markdown (without an external tool)</li> <li>Requires a VPS or your own server to run since it needs a container platform like docker</li> <li>It has the least stars on GitHub</li> </ul> <h2>Hugo</h2> <p>Hugo is what powers my current website. It is very inexpensive to run, but of course it has its drawbacks too.</p> <h4>Pros</h4> <ul> <li>Crazy fast</li> <li>The most starred blogging 
platform on GitHub</li> <li>It can be hosted on GitHub Pages or any CI/CD that supports deploying a website</li> <li>Single file configuration for the entire website</li> <li>A ton of themes, including not only blog related ones</li> <li>Single binary file to create and compile the whole website</li> </ul> <h4>Cons</h4> <ul> <li>It uses Go templates to create or customize themes, which are hard to understand</li> <li>The file structure can be confusing to new users</li> <li>Can be tricky to extend</li> </ul> <h2>Jekyll</h2> <p>Jekyll is another static site generator written in Ruby. The main reasons for its popularity are the ability to extend it using plugins and the fact that it uses Liquid templates.</p> <h4>Pros</h4> <ul> <li>Can be easily extended through the use of plugins</li> <li>Uses Liquid templates which are easier to understand</li> <li>Older and bigger community</li> <li>It can be hosted on GitHub Pages or any CI/CD that supports deploying a website</li> <li>Single file configuration for the entire website</li> </ul> <h4>Cons</h4> <ul> <li>Binary not provided, you need to install Ruby to use it</li> <li>Fewer supported content types (compared to Hugo)</li> <li>Requires a lot of plugins for almost everything (compared to Hugo which comes with a lot out of the box)</li> </ul> <h2>Should you use a static site generator?</h2> <p>The most famous question among these projects is whether you should use a static site generator like Jekyll or Hugo, or a full blogging platform like Ghost. I believe it all comes down to the audience and blog type. For a personal blog that may be getting 50-100 visitors (or even more) per month, a static site generator is for you. I would also add that Hugo and Jekyll can double as informational sites, which Ghost can't.
Now on the other hand, if you have a blog with 1000+ monthly visitors, Ghost is for you due to its ability to manage big blogs and handle comments very well.</p> <h2>Conclusion</h2> <p>All in all, these 3 platforms are extremely powerful and guarantee that you will have a great experience making your blog. It all comes down to your audience, post writing preference (using a GUI or your terminal/code editor) and language skills (Go/Ruby/none). I would personally recommend Hugo if you want a static site generator and just want to use a theme and forget about it, Jekyll if you know Ruby and/or want to extend your site a lot, and Ghost for beginners. The thing is, it doesn't matter what you use when starting out, just pick what you like best and start writing!</p> Trying to use Ubuntu: I tried to (once again) use Ubuntu and (once again) failed. Sat, 24 Aug 2024 00:00:00 GMT<p>So, I've been using Windows since the day I got my first laptop, why? Because I am loving it. But I am starting to see Microsoft changing things in a way I don't like. These changes don't affect me since I am using my Windows laptop with a Microsoft account, so I am avoiding all the bloat. However, that doesn't mean I think it's fine for people to pay 120 euros or more for a Windows key and still get ads in the operating system they paid for. So, after my friends annoyed me for a while, I decided to try Ubuntu (once again). Let's see what happened...</p> <h2>Installing Ubuntu</h2> <p>Installing Ubuntu is really easy, you flash a USB drive with the ISO, reboot and install...not this time though. This time Microsoft fixed a security issue in the GRUB bootloader and in doing so <em>accidentally</em> killed all dual boot setups with an error message saying <code>Something has gone seriously wrong</code>...yeah, I think we noticed, Microsoft.
For dual booters it was quite a pain to fix, but for me it was as easy as disabling Secure Boot before installing and then enabling it again. After doing this I was able to install just fine.</p> <h2>Fractional scaling</h2> <p>One of the biggest issues I am facing with Linux is fractional scaling, which essentially zooms your screen. Why is that useful? Because I have a HiDPI display and with <code>100%</code> zoom I can't see anything <em>I mean I can but everything is way too small</em>. I could use X11 and fractional scaling, but since it's not really that stable, I simply changed my font scaling to <code>1.5</code> and increased my icon size to 48 pixels. Everything looked good.</p> <h2>Customizing</h2> <p>The next step was to customize everything a bit because the default Ubuntu desktop is just...meh. I installed the <a href="https://www.gnome-look.org/p/1357889/">Orchis</a> theme which is really nice, installed the <a href="https://github.com/PapirusDevelopmentTeam/papirus-icon-theme">Papirus Icon Theme</a>, disabled the default Ubuntu dock and installed <a href="https://extensions.gnome.org/extension/307/dash-to-dock/">Dash To Dock</a>, a much better dock with much more customizability. Finally I installed <a href="https://snapcraft.io/install/bing-wall/ubuntu">Bing Wall</a> to get the awesome Bing wallpapers.</p> <h2>Installing my apps</h2> <p>Installing my apps was relatively easy, I opened the app store, clicked some install buttons and everything worked. I also downloaded some <code>deb</code> packages for some extra software and everything was looking good.</p> <h2>The disaster</h2> <p>I wanted to install two of my favorite apps, Lunar Client and Viber. So, I opened Chrome, searched <code>Viber Linux download</code>, clicked the first link and I was presented with...an AppImage, really? An AppImage? Anyway, I downloaded it, made it executable and ran <code>./Viber.AppImage</code>, error, fuse is required.
No big deal, I opened Chrome, searched <code>Install fuse ubuntu</code>, clicked the first link...<code>sudo apt install fuse3</code>, I run the command, fuse installs, I run the app image, Viber launches, I am happy. Now in order to make it easier to run I made a desktop file to add Viber to my apps (same for Lunar Client). Everything seems fine until...I notice my file manager is missing. I said <code>Huh that's weird</code> and opened the apps menu and searched for <code>Files</code>, nothing. What? I opened the terminal, run <code>nautilus</code>, command not found. What the f*ck? I opened Chrome, searched <code>Ubuntu nautilus missing</code>, I stumble across a forum post saying <code>Ubuntu desktop removed after installing fuse</code>, I think <code>Oh no</code>, I reboot, login screen, enter my password, nothing, I click the session setting and would you look at that...MY WHOLE DESKTOP JUST GOT UNINSTALLED!</p> <h2>Removing Ubuntu</h2> <p>Yeah, I am not going to sit here reinstalling the whole thing, I straight up rebooted to the UEFI settings, set <code>Windows Boot Manager</code> as the first option, booted Windows, opened disk management and deleted the Ubuntu partition without a second thought. Then I opened a CMD prompt, mounted the <code>System</code> partition and deleted GRUB. <em>did I mention I hate GRUB?</em></p> <h2>Conclusion</h2> <p>Ubuntu is just...not for me. I hate it. A lot. From now on I will never install Linux on my laptop again, except WSL, we all know WSL is the best developer thing, right? Now somebody will say <code>You should have been more careful and read the message before hitting enter</code> and I respond to that by saying my operating system should have been more careful by not letting me uninstall my desktop without saying <code>ARE YOU SURE BRO?</code>. Have you heard of Windows uninstalling itself? Because I haven't.
Anyway, from now on I will be the biggest Windows fanboy out there until...either Microsoft makes this operating system bloatware or Ubuntu gets too good. That's it. Windows for life!</p> Using OneDev for Repository Backups: I just discovered this awesome tool called OneDev, so let's use it for local repository backups! Sat, 17 Aug 2024 00:00:00 GMT<p>You have probably seen the recent <a href="https://www.theverge.com/2024/8/14/24220685/github-down-website-pull-request">GitHub Outage</a> where GitHub was down globally! I was never afraid that I would lose any data, but I thought "What if something happened and some of my repositories were affected?". I don't have any backups, so the thought of losing the code I spent hours or days working on, and that I use as a reference for my new projects, was a bit scary. So, let's make offline mirrors!</p> <h2>My issues with Gitea</h2> <p>As you probably know, <a href="https://gitea.com">Gitea</a> is the most popular self-hosted git server, but it's definitely not the best, at least for me. I used to make mirrors with Gitea using <a href="https://github.com/varunsridharan/github-gitea-mirror">this</a> awesome little script that automated the entire process. The script worked perfectly, but I didn't like Gitea a lot because of 2 things. Firstly, the UI: I am not a fan of the Gitea UI at all. Secondly, the actions: they have the same format as GitHub Actions, but apart from that everything else is just bad. The actions require a worker on some machine and...THERE IS NO MANUAL WORKFLOW BUTTON! LIKE ARE YOU KIDDING ME? <em>sigh</em> So that's when I started looking for a better alternative and that's when I came across OneDev.</p> <h2>OneDev</h2> <p>OneDev is just awesome!
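</p> <p>Before we go on, here is a rough sketch of what a <code>docker-compose.yml</code> for OneDev could look like, in the style of the Nginx Proxy Manager setup from an earlier post. Note that the image name (<code>1dev/server</code>), the ports (6610 for the web UI, 6611 for SSH) and the <code>/opt/onedev</code> data path are from my memory of the OneDev docs, so double-check them before using this:</p>

```yaml
# a sketch only -- verify the image tag, ports and data path against the OneDev docs
services:
  onedev:
    image: 1dev/server
    container_name: onedev
    restart: unless-stopped
    ports:
      - 6610:6610 # web UI (assumed)
      - 6611:6611 # SSH (assumed)
    volumes:
      - ./data:/opt/onedev # OneDev state (assumed mount point)
      - /var/run/docker.sock:/var/run/docker.sock # lets CI jobs run containers
```

<p>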
It can be deployed with one docker run line, it has built-in package registries, CI/CD, kanbans, dashboards and the best thing...an awesome UI that satisfies my OCD. So yes, let’s use OneDev on my Raspberry Pi 5 for offline repository backups! After looking through the documentation I discovered that it has a Repository Clone step that allows you to clone any repository from any git server. Let’s build our workflow!</p> <h2>The issue with the Repository Clone</h2> <p>So, when building my workflow to back up the repositories I stumbled across something funny... When you use the Repository Clone step, if you don't use the force option it will fail, and if you use the force option... it will delete the workflow! So, it's really just deleting itself every time, funny right? Let’s fix it. So, after about 1 hour of work, I came up with this workflow:</p> <pre><code>version: 35
jobs:
- name: Clone Repo
  steps:
  - !CheckoutStep
    name: Checkout
    cloneCredential: !HttpCredential
      accessTokenSecret: access-token # Make sure you have an access token configured
    withLfs: false
    withSubmodules: true
    condition: ALL_PREVIOUS_STEPS_WERE_SUCCESSFUL
  - !PullRepository
    name: Pull Repo
    remoteUrl: https://github.com/someusername/somerepo # Change to source repo url
    refs: main # Change branch
    withLfs: false
    force: true
    condition: ALL_PREVIOUS_STEPS_WERE_SUCCESSFUL
  - !CommandStep
    name: Restore OneDev Workflow
    runInContainer: true
    image: ubuntu
    interpreter: !DefaultInterpreter
      commands: |
        # Install Git
        apt update
        apt install -y git
        # Configure Git
        git config --global user.name "User" # Change your name
        git config --global user.email "someone@@example.com" # Change your email, make sure to use @@
        git config --global http.sslverify false
        git config --global pull.rebase true
        # Backup workflow
        cp .onedev-buildspec.yml ../workflow.yml
        # Pull
        git pull
        # Restore workflow
        cp ../workflow.yml .onedev-buildspec.yml
        # Commit workflow
        git add .onedev-buildspec.yml
        git commit -m "ci: add workflow back"
        # Push
        git push origin main:main
    useTTY: true
    condition: ALL_PREVIOUS_STEPS_WERE_SUCCESSFUL
  triggers: # Delete this section to only run manually
  - !ScheduleTrigger
    cronExpression: 0 15 10 ? * * # Change time (right now it is every day at 10:15AM)
  retryCondition: never
  maxRetries: 3
  retryDelay: 30
  timeout: 3600
</code></pre> <blockquote> <p>[!WARNING] Make sure to replace the workflow values with your own values.</p> </blockquote> <p>It is completely different from GitHub Workflows but it gets the job done. The only issue is that when you run commands and install packages, you have to do everything in one step, otherwise everything that's not in your current working directory will be deleted. No problem though, it works perfectly. Let me explain what it does:</p> <ul> <li>Firstly, we check out the current code</li> <li>Secondly, we use the Repository Clone step (which replaces our current repository's contents and deletes our workflow <em>lol</em>)</li> <li>Thirdly, we install git, configure it, back up our current workflow file, pull (resetting the changes), add the workflow back and commit it</li> </ul> <p>And that's it! We just backed up our repository! The workflow will automatically run every day at 10:15AM, but you can delete the entire <code>triggers</code> section to run it only manually; I use that for some archived repos.</p> <h2>Conclusion</h2> <p>So overall I am extremely happy with how my workflow turned out and how easy it was to make in OneDev. If you are using Gitea, I wholeheartedly recommend giving OneDev a try and I am sure you will like it! That's it for today...see ya!</p> Migrating my blog from ghost to hugohttps://doesmycode.work/posts/undefined/https://doesmycode.work/posts/undefined/It's time to move to a static website!Mon, 12 Aug 2024 00:00:00 GMT<p>So, you probably saw that my website changed a lot and that’s because I decided to switch my production website from Ghost on premises to Hugo on GitHub Pages and I believe that was a really good decision. 
Let’s discuss what I did and why.</p> <h2>Why did I switch?</h2> <p>For the last 2 weeks I was on holidays and unfortunately had to shut down my main server running Ghost, which means downtime, and downtime is bad because Google deindexes the website from search results. So, I decided to switch to a cloud provider to not have any downtime, but moving a Ghost website and a MySQL database to the cloud would be expensive and pointless. Ghost is made for bigger blogs with multiple subscribers, not for my tiny personal blog, so hosting it on the cloud would cost money for no real benefit. Additionally, I don’t need features like comments and newsletters, and I wanted to learn something new, so let’s switch my blog to Hugo, integrate it with GitHub Pages and set up auto deployments with GitHub Actions.</p> <h2>Getting started with Hugo</h2> <p>Getting started was relatively easy: I downloaded the Hugo binary and created a new website, then chose a theme and started adding content. The theme of my choice was PaperMod which you can see <a href="https://github.com/adityatelange/hugo-PaperMod">here</a>. I really liked it because it’s fast, responsive and well documented. Adding it was really easy, I just had to add a git submodule, configure it and that’s it! My site was ready!</p> <h2>Migrating my data from Ghost</h2> <p>The hardest part of moving to Hugo was migrating the data from Ghost, and that’s because while you can write a Ghost post in markdown, once it’s saved you cannot export it back to a markdown file. So I had to copy everything as plain text and then add back the markdown specific things like the hashtags <code>##</code> for titles and these things <code>`</code> for code. 
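</p> <p>For anyone wanting to do the same, the getting-started steps above boil down to just a few commands. This is a sketch based on Hugo's and PaperMod's documentation; <code>myblog</code> is a placeholder name, and your config file may be <code>hugo.toml</code>, <code>hugo.yaml</code> or something else depending on your setup:</p>

```shell
# Create a new Hugo site and add the PaperMod theme as a git submodule
hugo new site myblog
cd myblog
git init
git submodule add https://github.com/adityatelange/hugo-PaperMod themes/PaperMod
echo 'theme = "PaperMod"' >> hugo.toml   # enable the theme
hugo server -D                           # preview locally, drafts included
```

<p>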
Now all my blog posts are in markdown, licensed under the GPL-3.0 license and stored on my GitHub repository which you can find <a href="https://github.com/steveiliop56/steveiliop56.github.io">here</a>, this repository also contains my configuration for Hugo.</p> <h2>The disadvantages of Hugo and the theme I chose</h2> <p>While I am really happy with Hugo and PaperMod, I didn’t like 2 things. Firstly, due to Hugo being entirely markdown I couldn’t make these nice information boxes like in Ghost and I had to resort to using the quote method <code>&gt;</code> of markdown. Secondly, with PaperMod I didn’t like the menu on mobile: the menu buttons are simply below the title and it doesn’t look good, I would prefer an option to use a hamburger menu.</p> <h2>Writing a GitHub workflow to automatically deploy my website on GitHub Pages</h2> <p>One major thing I wanted to do with Hugo was to automate the deployment of the website on every commit, because who wants to build the website manually? The workflow was really easy to set up and that's because Hugo provides a ready to go workflow which I just copy pasted and everything worked perfectly. That didn't stop me from making my own workflow in my personal <a href="https://onedev.io/">OneDev</a> instance so I can test locally.</p> <h2>Conclusion</h2> <p>Overall I am really happy with my new blog and that I managed to integrate it well with GitHub Pages so I don’t have downtime. If you have any recommendations or issues with my blog you can open an issue on my GitHub repository <a href="https://github.com/steveiliop56/steveiliop56.github.io">here</a>, there you can also give me some feedback on whether you like the new look compared to the old one because you know…I still have backups of Ghost ;).</p> Doesmycode.work is up again!https://doesmycode.work/posts/undefined/https://doesmycode.work/posts/undefined/I am back! And my server is back up!Sun, 11 Aug 2024 00:00:00 GMT<p>Hello! 
I am happy to announce that my blog is up again and I will continue posting stuff! Enjoy!</p> Automating my homelab with semaphorehttps://doesmycode.work/posts/undefined/https://doesmycode.work/posts/undefined/Let me show you how I automated my entire homelab using ansible and semaphore.Wed, 17 Jul 2024 00:00:00 GMT<p>My homelab has a very big problem: I never update it, like never, unless it's absolutely necessary. I wanted to solve this problem since being a couple of kernel versions behind on my Proxmox server wasn't really ideal. Let's see how I managed to automate updating everything through a simple web UI.</p> <h2>Trying to reinvent the wheel</h2> <p>While I know Ansible and I could have used it in the first place, I decided it would be fun to reinvent the wheel and make my own update checking tool called puck. Puck, standing for Package Update Checking Kit, is a simple CLI tool written in Go. I made it so you can easily specify all your servers in a YAML config and by running one command you get which servers need updates; it doesn't update them though. After being roasted on Reddit, I understood that I indeed was trying to reinvent the wheel and make it worse: who wants to just check for updates and not update the system? It's Linux, not Windows. So I decided that it was time to fix my mistake and use Ansible.</p> <h2>The problem with vanilla Ansible</h2> <p>Ansible is very powerful and makes it very easy to automate literally everything; the only "problem" is that in some cases I don't want to run the playbook manually through a terminal, that's boring. Sure, I could set up a simple LXC container and run Ansible with a simple cronjob, but that's just boring. Is there a better solution? Of course there is and it's called Semaphore!</p> <h2>Semaphore UI</h2> <p>Semaphore is an awesome little tool designed to help you run your Ansible playbooks for daily tasks (like updating) with one click from a nice web UI. It also runs in docker! So let's deploy it! 
Since it runs with docker we need a docker compose file. Here is a very simple one I made which also runs postgres as the database:</p> <pre><code>version: "3.9"
services:
  semaphore:
    container_name: semaphore
    image: semaphoreui/semaphore:v2.19.10
    restart: unless-stopped
    volumes:
      - ./data/repositories:/repositories
    environment:
      - SEMAPHORE_DB_USER=semaphore
      - SEMAPHORE_DB_PASS=somereallysecurepassword
      - SEMAPHORE_DB_HOST=semaphore-db
      - SEMAPHORE_DB_PORT=5432
      - SEMAPHORE_DB_DIALECT=postgres
      - SEMAPHORE_DB=semaphore
      - SEMAPHORE_PLAYBOOK_PATH=/tmp/semaphore
      - SEMAPHORE_ADMIN_PASSWORD=somereallysecurepasswordagain
      - SEMAPHORE_ADMIN_NAME=somebody
      - SEMAPHORE_ADMIN_EMAIL=somebody@example.com
      - SEMAPHORE_ADMIN=somebody
      - SEMAPHORE_ACCESS_KEY_ENCRYPTION=somethingreallysecure
      - SEMAPHORE_LDAP_ACTIVATED=no
      - TZ=SomeTimezone
    ports:
      - 3000:3000
  semaphore-db:
    container_name: semaphore-db
    image: postgres:14
    restart: unless-stopped
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=semaphore
      - POSTGRES_PASSWORD=somereallysecurepassword
      - POSTGRES_DB=semaphore
</code></pre> <blockquote> <p>At the time of writing the image versions are the latest ones, make sure to replace the versions with the newest ones when running the docker compose file. You don't need to change the postgres version.</p> </blockquote> <blockquote> <p>Make sure to replace the passwords and the encryption keys throughout the compose file with something secure, especially if you are planning to use this compose file in a production environment.</p> </blockquote> <blockquote> <p>Semaphore is available on the Runtipi Appstore too, so if you use Runtipi you can simply install it directly from there.</p> </blockquote> <p>So now we are ready to launch! <code>docker compose up -d</code> and Semaphore should be listening on port 3000.</p> <h2>Configuration</h2> <p>Now it's time to do some basic configuration of Semaphore, the initial setup. 
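</p> <p>Before moving on you can quickly verify that the container actually came up. Semaphore exposes a simple ping endpoint (assuming the port mapping above; this is a sanity-check sketch, not part of the official setup steps):</p>

```shell
# Hit Semaphore's health endpoint; a running instance should answer on port 3000
curl -s http://localhost:3000/api/ping
```

<p>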
When you visit Semaphore for the first time it will ask you to sign in and create a project. I named mine <code>homelab</code>. When you finish you should be presented with the Semaphore UI.</p> <p>Firstly I set up my SSH credentials, this is very easy to do: just click on the Key Store tab and add your SSH keys or login credentials. After that I added my inventory files: again, head over to the Inventory tab and add all your inventory files; when you add a new inventory you have to specify the SSH credentials which we set up earlier. The inventory file is just like the Ansible one, so you can specify all the variables you are familiar with, like <code>ansible_become_password</code> which I used a lot.</p> <p>Time for our repository. Semaphore works with git repositories by default but it allows you to use folders too: in the compose file I showed you above I added a <code>/repositories</code> bind volume where you can place your playbooks without the need to use git. I prefer this option because sometimes playbooks can include sensitive information like passwords and tokens. Additionally we need to set up the environment, that's very simple: I created a new environment and added a new variable with key <code>ANSIBLE_HOST_KEY_CHECKING</code> and value <code>false</code> to disable host key checking, otherwise my playbooks would fail. Last but not least we need to create a task. A task is basically all of the above steps combined so the playbook can run: you simply need to go to the Tasks tab, create a new task and fill in the values; all the important ones are dropdowns where you just select what you created in the previous steps.</p> <p>When you are done you can click the task you just created and click run. A terminal will pop up where you can see the output live, but you can also close it and the task will continue in the background.</p> <h2>My most used playbook</h2> <p>I think it's not hard to guess what my most used playbook is, it's of course the updating one. 
So I just created a dead simple Ansible playbook:</p> <pre><code>- name: Update hosts
  hosts: all
  become: true
  tasks:
    - name: Update and upgrade
      apt:
        update_cache: yes
        upgrade: full
        autoclean: true
        autoremove: true
        clean: true
</code></pre> <blockquote> <p>As you can see I used <code>hosts: all</code> to make the workflow work on every inventory file I use in my task.</p> </blockquote> <p>That wasn't enough though, triggering it manually is boring, so I just went to the Schedule tab and created a simple task to run my playbook every Monday at 8 p.m.</p> <p>But that's still not enough! We need notifications right? Of course we need notifications, Discord notifications! Discord notifications are super easy: you just need to go to your Discord server, create a new channel, open the settings and go to the integrations tab. There you can create a new webhook and copy the URL. Your URL will look something like this: <code>https://discord.com/api/webhooks/somelongstring/someotherlongstring</code> where <code>somelongstring</code> is your webhook id and <code>someotherlongstring</code> is your webhook token, keep note of that. Now we simply add a new task to our playbook to notify us when our servers are done updating. Here is my full playbook:</p> <pre><code>- name: Update hosts
  hosts: all
  become: true
  tasks:
    - name: Update and upgrade
      apt:
        update_cache: yes
        upgrade: full
        autoclean: true
        autoremove: true
        clean: true
    - name: Send notification to discord
      community.general.discord:
        webhook_id: webhook id
        webhook_token: webhook token
        username: Semaphore
        content: ✅ Server {{ inventory_hostname }} up to date!
</code></pre> <p>In this playbook just change the webhook id and webhook token to what you got above from Discord. The <code>{{ inventory_hostname }}</code> part will send you a notification with the IP/hostname of each server in your inventory file. And that's it! 
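</p> <p>As a small aside, if you ever want to script this instead of copying the two parts by hand, the id and token can be peeled out of the webhook URL with plain shell parameter expansion (a hedged helper, nothing Semaphore or Ansible requires):</p>

```shell
# Split a Discord webhook URL into its id and token parts
url="https://discord.com/api/webhooks/somelongstring/someotherlongstring"
path="${url#https://discord.com/api/webhooks/}"  # drop the fixed prefix
webhook_id="${path%%/*}"    # everything before the first slash
webhook_token="${path#*/}"  # everything after the first slash
echo "$webhook_id"
echo "$webhook_token"
```

<p>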
That's how I automated my homelab!</p> <h2>Conclusion</h2> <p>From now on my servers will be up to date and I will receive notifications on Discord about my updates! That's super awesome! And that's just the beginning of my automation journey with Ansible...</p> Trying out Olivetinhttps://doesmycode.work/posts/undefined/https://doesmycode.work/posts/undefined/I just discovered this awesome tool called Olivetin and it helped me automate my life.Thu, 04 Jul 2024 00:00:00 GMT<p>Sooo, I host a Minecraft server for me and my friends using Lodestone and then, using SSH tunneling, I expose it to the internet. My problem is that running the SSH command every time is annoying, but at the same time I don't want to have my Minecraft server exposed to the internet all day. Furthermore I wanted to play with a new self-hosted tool and that's when I discovered Olivetin, an app that does exactly this: predefined commands from a web UI.</p> <h2>Installation</h2> <p>The installation was very simple and straightforward, Olivetin offers literally every possible installation method including rpm and deb packages, docker containers, Kubernetes and of course building from source. I personally used docker compose to test the app and then proceeded to add it to Runtipi and continue using it this way.</p> <h2>Writing my own theme</h2> <p>The first thing I did was to find a way to use another theme because, to be honest, the default theme is meh. Luckily Olivetin has support for writing your own CSS theme and it even has some default themes. I didn't like the defaults so I decided to write my own, and after 5 hours and 300 lines of CSS my theme was ready and I loved it. Later I opened a pull request to Olivetin to add my theme and now I can present to you <a href="https://www.olivetin.app/themes/posts/buttonbox">ButtonBox</a>, a clean, modern and stylish dark mode theme (no white mode here).</p> <h2>Features</h2> <p>I was honestly impressed with how many features this app has. 
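</p> <p>To give a taste of how those buttons are defined: actions live in OliveTin's YAML config file. Here is a rough sketch for a tunnel use case like mine (the titles, port and VPS host are made up, and the exact keys should be checked against the OliveTin docs):</p>

```yaml
# /etc/OliveTin/config.yaml -- hypothetical actions for the Minecraft tunnel
actions:
  - title: Start Minecraft tunnel
    icon: ping
    shell: ssh -f -N -R 25565:localhost:25565 user@my-vps
    timeout: 15
  - title: Stop Minecraft tunnel
    icon: box
    shell: pkill -f "ssh -f -N -R 25565"
    timeout: 5
```

<p>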
To begin with, it allows you to add arguments to your buttons, for example asking for confirmation, a value, or a selection from a list that gets passed to the command, awesome! When an action gets executed it stores the output so you can later view the logs from the web UI, and it also allows you to dynamically see the output of a command while it gets executed! Last but not least it allows you to have multiple pages with different buttons and even folders, and it has countless integrations including ping and docker support!</p> <h2>Community</h2> <p>The developer behind the app is an awesome guy interested in adding new features and making the app awesome. Another cool thing is the response time: when I asked a question in Discord I got a response in minutes!</p> <h2>Conclusion</h2> <p>This app definitely deserves a place in my homelab and I am really happy with how it integrates with the rest of my workflow. Check it out <a href="https://www.olivetin.app">here</a> and if you like it give it a <a href="https://github.com/OliveTin/OliveTin">star</a>.</p> How to set up a Ghost websitehttps://doesmycode.work/posts/undefined/https://doesmycode.work/posts/undefined/Let me show you how to set up a Ghost website like mine!Sat, 29 Jun 2024 00:00:00 GMT<p>My website is a fun little project I made hosted on my homelab, but let's see how you can set up your own Ghost website for almost free too!</p> <h2>Requirements</h2> <p>In order to make your own public blog you will need the following:</p> <ul> <li>A domain name</li> <li>A Cloudflare account</li> <li>A Gmail/Mailgun/Amazon account (depending on what mail server you want to use)</li> <li>Some kind of server or virtual machine to host the website</li> </ul> <blockquote> <p>I am using Proxmox for my hypervisor, if you are using Proxmox too you can host Ghost in an LXC container instead of a virtual machine.</p> </blockquote> <h2>Before you begin</h2> <p>Before exposing your Ghost server to the internet I would 
recommend installing and enabling unattended upgrades to make sure your server stays secure. I would also recommend you set up the server in a VLAN if possible, for even better security.</p> <h2>Setting up your email server</h2> <p>If you are planning to use Gmail for the email server you can skip this step, but if you want to use Amazon SES or Mailgun you will need to set up some DNS records and probably some other settings. You can check their documentation on how to do it.</p> <h2>Setting up Ghost</h2> <p>The Ghost setup is very easy and straightforward, all you need is a server/virtual machine/LXC container that can run docker. Then you can simply run this docker compose file to get Ghost up and running:</p> <pre><code>version: "3.9"
services:
  ghost:
    container_name: ghost
    image: ghost:5.86.2
    restart: always
    ports:
      - 80:2368
    environment:
      database__client: mysql
      database__connection__host: ghost-db
      database__connection__user: root
      database__connection__password: somesupersecurepassword
      database__connection__database: ghost
      mail__transport: SMTP
      mail__options__host: amazonsmtpserver
      mail__options__port: 465
      mail__options__service: SES
      mail__options__auth__user: amazonsesuser
      mail__options__auth__pass: amazonsespassword
      mail__from: "'Somebody' &lt;somebody@example.com&gt;"
      url: https://mywebsite.com
    volumes:
      - ./data/content:/var/lib/ghost/content
  ghost-db:
    container_name: ghost-db
    image: mariadb:11.4.2
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: supersecurepassword
    volumes:
      - ./data/database:/var/lib/mysql
</code></pre> <blockquote> <p>Here you can see I am using <code>mariadb</code> for the database but you can use <code>mysql</code> too, I just had issues pulling the image.</p> </blockquote> <blockquote> <p>At the time of writing the versions in the docker compose are the latest; before deploying I would recommend checking Docker Hub for newer versions so you can start with the newest version immediately.</p> </blockquote> <p>Here you only need to 
replace the root password of your database, the <code>mail__from</code> environment variable and the <code>url</code> variable. If you prefer to use another service for emails instead of Amazon SES you can check the configuration page <a href="https://ghost.org/docs/config/#mail">here</a>. I would recommend against using Gmail since the emails Ghost sends will come from a gmail.com address, while with Mailgun or Amazon SES they will look like they are coming from your domain.</p> <p>After you make your changes you can start Ghost with the command <code>docker compose up -d</code> and then you should be able to access Ghost at your configured URL. Right now we don't have the domain configured, so you can simply access Ghost by the IP address of your server.</p> <h2>Setting up Watchtower</h2> <p>Since we want to handle updates automatically we can use Watchtower to automatically update our docker containers to the latest version. The Watchtower compose file is very simple and you don't need to modify anything.</p> <pre><code>version: "3.9"
services:
  watchtower:
    container_name: watchtower
    image: containrrr/watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
</code></pre> <blockquote> <p>By default Watchtower will update every container but you can use the configuration file to configure it as you like. <a href="https://containrrr.dev/watchtower">Here</a> is the documentation for it.</p> </blockquote> <p>After you finish modifying your docker compose file you can start Watchtower using <code>docker compose up -d</code>.</p> <h2>Setting up Cloudflare Tunnels</h2> <p>Since we don't want to just punch holes in our router's firewall, we are going to use Cloudflare Tunnels to securely expose our application to the internet. In order to do that you can go to the Zero Trust dashboard, then Networks and lastly Tunnels. There you will need to click Create tunnel and select Cloudflared. Then you will need to set a name for your tunnel. 
On the Install and run connectors page you need to select docker and copy the command. It will look something like this:</p> <pre><code>docker run cloudflare/cloudflared:latest tunnel --no-autoupdate run --token somereallybigtoken
</code></pre> <p>From this command you only need the big token, which you can use in this compose file:</p> <pre><code>version: "3.9"
services:
  cloudflared:
    container_name: cloudflared
    image: cloudflare/cloudflared:2024.6.1
    restart: unless-stopped
    command: tunnel --no-autoupdate run --token pasteyourtokenhere
    extra_hosts:
      - host.docker.internal:host-gateway
</code></pre> <blockquote> <p>Again, here I am using the latest version that exists at the time of writing, please check for newer versions on Docker Hub and use the latest one.</p> </blockquote> <p>Now run <code>docker compose up -d</code> and your connector should pop up in the Cloudflare dashboard. There you can select it and, in the next section, select your domain from the dropdown and add a subdomain/suffix if you like; for type select HTTP and for URL use <code>host.docker.internal</code>. Lastly click save and congratulations! Your site should be up and running on your domain name!</p> <h2>Next steps</h2> <p>If you want your site to get more viewers I would recommend adding it to Google Search Console so it can get indexed in Google and you will be able to see stats about the visitors etc.</p> <h2>Conclusion</h2> <p>By now you should have a Ghost site running on your domain so you can start documenting your homelab adventures just like me. Happy blogging!</p> Trying out arch linuxhttps://doesmycode.work/posts/undefined/https://doesmycode.work/posts/undefined/I can now say that I use arch btwFri, 28 Jun 2024 00:00:00 GMT<p>We all have heard of the famous "I use arch btw" phrase to indicate that someone for some reason is using Arch, the hardest operating system out there. 
Well today I tried it myself so I can say that I use arch too, btw.</p> <h2>Fetching the ISO</h2> <p>I am quite embarrassed to say that it took me half an hour to find where to download the ISO installer. I am used to clicking the big old Download ISO button which just downloads the ISO file, but not with Arch: with Arch you need to pick a mirror manually and then go to the mirror and download the thing, very complicated.</p> <h2>Installation</h2> <p>The installation is a pain. There is a tool that's like the Alpine installer which does everything for you, but then... What's the point? I of course chose the hardest route and started reading their installation guide. I started with setting my keyboard layout, language and timezone. Then I formatted my disk using <code>fdisk</code> which was hard because I couldn't understand that I simply had to type <code>+4G</code> for the partition to be created, spent another half an hour there. So after creating my partitions I installed the kernel, firmware and the Linux package, lol, that was funny, and then installed GRUB. Then I rebooted only to see that I forgot to generate the GRUB config, so here we go again with the installer creating the GRUB config. After I finished with all that, Arch booted! Amazing! Let's try to install a fricking editor, because it has neither nano nor vim. Oh come on, no internet? Yeah of course, I forgot to configure the networking, one last trip to the installer to add some config and yeah! We have internet!</p> <h2>Installing a desktop environment</h2> <p>Now that we have a system to work with we can easily install the GNOME desktop with one command, <code>pacman -S gnome</code>. After 5 minutes all packages were installed successfully, then I had to enable the desktop environment using <code>systemctl enable --now gdm</code> and that was it! The desktop popped up immediately, I was honestly impressed! 
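</p> <p>Pieced together from the narration above, the whole dance looked roughly like this. Treat it as a sketch, not a copy-paste install script: partitioning, mounting and the exact GRUB flags depend on your machine, so follow the official install guide for the real thing:</p>

```shell
# Rough sketch of the steps described above, run from the live ISO
fdisk /dev/sda                            # create partitions (+4G etc.)
pacstrap /mnt base linux linux-firmware   # kernel, firmware, "the Linux package"
arch-chroot /mnt
pacman -S grub
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=GRUB
grub-mkconfig -o /boot/grub/grub.cfg      # the config-generation step I forgot
pacman -S gnome                           # the desktop environment
systemctl enable --now gdm                # bring up the login screen
```

<p>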
Then I just made some small customizations and everything worked fine.</p> <h2>Documentation</h2> <p>The documentation on Arch is one of the best docs sites I have ever seen, it is really helpful, <em>I mean it needs to be with how hard the OS is</em>, and everything it says actually works. The only "issue" is that sometimes it may have a small section describing something important that I don't notice, so I skip an important step, <em>networking</em>.</p> <h2>Final thoughts</h2> <p>After installing the desktop environment I stopped there, that was enough for me, but I have to admit it was a fun experience. Would I use it as my daily driver? No. Will I start telling my friends "I use arch btw"? Yes of course I will. Arch Linux is really fun but definitely not something you would want to run on your main computer, unless you know what you are doing, I don't, so I will keep being a hardcore Windows user <em>lol</em>.</p> Casa OS vs Umbrel vs Runtipi what should you choose?https://doesmycode.work/posts/undefined/https://doesmycode.work/posts/undefined/There are a lot of docker management/self-hosting automation tools but what's the best?Thu, 27 Jun 2024 00:00:00 GMT<p>Sooo, a lot of people like using some sort of automated app deployment platform, so they can easily deploy and update their apps. The user base can range from experienced users using these apps so they don't have to manually write the compose files, to complete self-hosting beginners that just want to deploy Immich and Nextcloud. There are many apps that can help you achieve this, but the most popular ones are Runtipi, Casa OS and Umbrel. All the projects are really nice and can help you a lot when deploying your apps, but they have some small differences that may make you prefer one over the other. 
Let's see which is the best for your homelab.</p> <h2>Project Maintenance</h2> <p>All projects are actively maintained, but they have some differences in funding/apps/updates.</p> <h3>Casa OS</h3> <p>Casa OS is built and maintained by Ice Whale Tech, which makes the popular SBCs Zimaboard and Zimablade, meaning it has full-time developers. The development is really active with frequent updates. Their app store is also actively maintained, with automated updates for apps.</p> <h3>Umbrel</h3> <p>Umbrel is the same as Casa OS: they also sell their own hardware called Umbrel Home, which is an N100-based mini-PC with their OS preinstalled. This means that they also have full-time developers, but their updates are a bit slower since they focus on making major releases with multiple features. Concerning the app store, it doesn't have an automated way to update apps so everything happens manually through contributions, making updates a lot slower.</p> <h3>Runtipi</h3> <p>Unlike Casa and Umbrel, Runtipi doesn't have any funding apart from the donations made to its developer Nicolas. But this doesn't stop him from releasing multiple updates with new features and fixes for any issues that may occur; of course implementing complex features takes a bit more time, but it is worth the wait. 
The app store is very active since Runtipi uses the Renovate Bot to automate updates, so users get updates right after they are released.</p> <h2>Installation</h2> <p>The installation for both Runtipi and Casa OS happens with an easy one-line bash script that configures everything for you and gets you up and running in less than 5 minutes.</p> <p>On the other hand, in order to install Umbrel you need to download their ISO image and install it on your computer, erasing all previous data, which I think isn't really ideal.</p> <p>All three apps also have Raspberry Pi images alongside their install script, which is a big plus for me.</p> <p>The initial setup of all the apps is almost the same, as you are asked to create an account and you are ready to go.</p> <h2>Initial impression</h2> <p>Casa OS and Umbrel have a homepage where you can have app widgets, manage your applications and install new ones through the app store. The whole experience feels like a tablet, since you are simply opening the app store and installing apps like you would on a tablet. Umbrel also includes a navigation dock on the bottom of the screen, which you can use to open settings, go to the app store etc.</p> <p>On the other hand, Runtipi aims for a much simpler look, having only a header with the menu containing simple navigation links for the dashboard, apps, etc. The dashboard has 3 widgets (disk usage, RAM usage and CPU usage) and unfortunately doesn't offer the ability to add custom ones.</p> <h2>App Store</h2> <p>Here the winner is Runtipi since it comes by default with a list of over 200 apps, in comparison with Umbrel and Casa OS which only have around 100. However, both Umbrel and Casa OS offer the functionality to add extra app stores for more apps. 
Runtipi doesn't have this functionality; it only allows you to change the app store, not add new ones.</p> <p>I also have to mention that, because of Umbrel's past as a Bitcoin node, it comes with an extensive list of Bitcoin-related apps.</p> <h2>Additional features</h2> <p>All three apps have some features that make them special.</p> <h3>Casa OS</h3> <p>Casa OS offers:</p> <ul> <li>File browser with support for Samba sharing</li> <li>External links on the dashboard</li> <li>Adding your own apps by importing a Docker CLI command or Docker Compose file, or by filling in a form</li> <li>Customizing app compose files through the UI</li> <li>Shell access to the host</li> <li>Logs (not for apps)</li> <li>Disk merging (something like RAID)</li> <li>Restart/shutdown buttons</li> </ul> <h3>Umbrel</h3> <p>Umbrel offers:</p> <ul> <li>Extensive widget support</li> <li>Shell access to the host and apps</li> <li>Log viewer for both apps and Umbrel</li> <li>Authentication for apps</li> <li>Exposing apps to the Tor network</li> <li>Applications that integrate with the UI (Back that Mac up)</li> <li>OTA updates</li> <li>File browser</li> <li>Restart/shutdown buttons</li> <li>Resource monitor</li> </ul> <h3>Runtipi</h3> <p>Runtipi offers:</p> <ul> <li>Built-in Traefik that securely exposes your apps using the HTTP/DNS challenge</li> <li>External links on the dashboard</li> <li>Guest dashboard for sharing apps without being authenticated</li> <li>Log viewer for both apps and Runtipi itself</li> <li>Documented API for controlling it without the dashboard (which is why apps like <a href="https://github.com/steveiliop56/tipimate">tipimate</a> exist)</li> <li>Really nice CLI for starting/restarting/updating</li> <li>Ability to extend both an app's compose file and Runtipi's own compose file</li> <li>The entire app is dockerized and self-contained in a single directory on the filesystem</li> </ul> <h2>Developer experience</h2> <p>Since I am a "developer" myself, I like checking the codebase of the apps I use, so I can fix
potential bugs or add new features. I have worked on the Runtipi codebase, and I have to say it is the best of the three: the code is clean, well documented and follows best practices. Next is Casa OS, which is written in Go (Runtipi and Umbrel are written in TypeScript), meaning it is really lightweight and fast. I found it a bit harder to navigate because it uses a different file structure than what I am used to seeing in Go apps, but once you understand it, the code is really nice. Finally, we have Umbrel, which unfortunately doesn't have a very good codebase. The code is messy and pulls in a lot of libraries for built-in TypeScript functionality; additionally, it is mostly frontend, as app deployment is handled with bash scripts, which is not ideal for an app like this.</p> <h2>Conclusion</h2> <p>All in all, every app has a lot of features and a really active community. If I had to pick the best, I would choose Runtipi or Casa OS for their well-written code (fewer potential bugs) and active communities. On the other hand, if you want Bitcoin-related apps or a homepage full of widgets, Umbrel is your choice. It all comes down to your personal UI preferences and the feature set you want from an app like this.</p> Trying to make screen blanking work in labwc — Let me show you how I finally managed to get screen blanking working in labwc. Wed, 26 Jun 2024 00:00:00 GMT<blockquote> <p>[!NOTE] As of 31/10/2024 this issue has been fixed; it turned out to be an issue in the wayvnc server.
<a href="https://doesmycode.work/posts/hyperpixel-raspberry-pi-os-bookworm">Here</a> is my latest post on the best setup with the HyperPixel screen and the new Raspberry Pi OS with the labwc compositor.</p> </blockquote> <p>Sooo, I use my Raspberry Pi 5 with a HyperPixel screen, but my setup has an issue: screen blanking doesn't work, and that's a big problem! In this debugging hell I will try to find a solution to make screen blanking work on the Pi, or at least try.</p> <h2>The problem</h2> <p>Due to some problems with the wayfire compositor "not mapping the pointer to something" (I don't really know what that means), when the screen was rotated and I connected to the Pi with a VNC client, the mouse would be inverted. Big problem. After opening a thread on the Raspberry Pi forums, an engineer told me to switch to the new beta compositor called labwc, so I installed the package, enabled it, rotated the screen, tried to VNC in, and it worked! VNC worked! But then I had another issue... screen blanking wouldn't work. Screen blanking is just turning the screen off after some idle time, nothing else; it is a big deal for me, and it was very annoying that it wouldn't work. I tried to get some help on the Raspberry Pi forums, but unfortunately no luck there, so I had to take matters into my own hands.</p> <h2>The initial investigation</h2> <p>The first thing I did was, of course, check how screen blanking works. I read the source code of raspi-config and found the options for enabling screen blanking on labwc and wayfire. The key difference is that wayfire has a built-in way of doing this using a config option, while labwc uses a combination of the swayidle and wlopm commands to control the display power mode. The swayidle command is very simple: it just runs one command after some idle time and another when activity resumes, so we are not interested in that. The wlopm command, though, is what actually controls the display.
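To make that concrete, here is a rough sketch of how the swayidle and wlopm pair fits together. This is my paraphrase, not the exact raspi-config invocation; the 600-second timeout and the <code>\*</code> output wildcard are assumptions:

```shell
# Sketch of the swayidle/wlopm combination (assumed values, not the
# exact raspi-config script): after 600 s of inactivity, turn every
# output off; turn them back on as soon as there is activity again.
swayidle -w \
    timeout 600 'wlopm --off \*' \
    resume 'wlopm --on \*'
```

A line like this would normally live in the compositor's autostart file, so the idle daemon runs for the whole session.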
So I launched a terminal, ran <code>wlopm --off DPI-1</code>, and got a beautiful error message saying it failed to set the power mode. Meh, I expected something like this. At that point I pretty much gave up, since I thought the wlopm command was the problem.</p> <h2>The second investigation</h2> <p>The second investigation was a bit more thorough. I tried reading the wlopm source code, but since I don't know C I couldn't understand much, so the next step was to open an issue on labwc itself. After opening the issue, I was told to build labwc with some extra logging so we could figure out what was happening; after three reflashes of Pi OS and repeated attempts to build labwc, I kept getting linker errors and could not figure out a way to make it work. The investigation paused for a second time.</p> <h2>Final investigation</h2> <p>The final investigation was the most in-depth one, as I truly took matters into my own hands. First, I launched a new TTY session on my Pi, started labwc (or rather labwc-pi, which is simply a polished labwc starter script adding some extra styles), and tried the wlopm command there; to my surprise, the display turned off when I ran it. Amazing! So launching labwc ourselves works, but that doesn't make a lot of sense, since lightdm already launches it with our user... So maybe it's lightdm? I looked through every single config file, enabled the highest log level I could set, and tried to debug labwc to see if something was wrong, but unfortunately nothing: no errors, nothing, we were back where we started. So at this point we know it's not lightdm, and probably not labwc? Let's make sure: I disabled the graphical user interface in raspi-config, and when the Pi booted up I simply ran the labwc-pi command to load the desktop, then tried running wlopm, and to nobody's surprise it failed. I was really stuck now!
Let's try the wayfire compositor: I opened raspi-config, enabled wayfire, rebooted, opened a terminal, ran <code>wlopm --off DPI-1</code>, and guess what? Failed to set power mode! wlopm is our problem here! But since I cannot fix it myself, not knowing C, I started researching similar tools.</p> <h2>So close to the solution</h2> <p>After some research I found an awesome little tool called brightnessctl. It is a simple tool that controls the brightness of your display, and luckily it is available in the Debian package repositories. So I installed it on my Pi, ran it, and the screen turned off! Screen blanking actually worked! I immediately replaced the old wlopm command with my brightnessctl command in the labwc-pi script, waited 10 minutes, and the display turned off! Finally! My happiness didn't last long, though, as I then discovered that this command only works on displays that support backlight control, like DSI/DPI displays, which is really disappointing; I essentially only found a solution for my own screen... It is still a win, though, since screen blanking is working!</p> <h2>Conclusion</h2> <p>All in all, I am quite happy my screen is not on 24/7, but I would be happier if that were the case for everyone using the labwc compositor, which will soon become the default. I will keep investigating and hopefully figure out a better solution... or the Pi engineers will.</p>
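If your screen exposes a backlight like mine does, a minimal sketch of the replacement idle hook looks like this. The 600-second timeout is an assumption, and by default brightnessctl picks the first backlight device it finds (you can list yours with <code>brightnessctl -l</code>):

```shell
# Sketch of an idle hook using brightnessctl instead of wlopm
# (assumed timeout; this goes wherever the wlopm line used to be,
# e.g. the labwc autostart). --save/--restore keep the old brightness
# across the blank so the screen comes back at the same level.
swayidle -w \
    timeout 600 'brightnessctl --save set 0' \
    resume 'brightnessctl --restore'
```

As noted above, this only helps on displays with backlight control (DSI/DPI); HDMI monitors will still need wlopm, or a fix upstream.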