witcher's blog on wiredspace.de
https://wiredspace.de/
Recent content in witcher's blog on wiredspace.de. Copyright 2020-2025 Thomas Böhler.

Flashing a Liatris Microcontroller
https://wiredspace.de/microblog/2026-01-01_flashing-liatris-microcontroller/
Thu, 01 Jan 2026 20:33:48 +0100

Last year I built myself a Lily58 split keyboard, after having already built a crkbd. The fact that I never really finished the software is apparent from my local qmk_firmware branch still being called aurora-lily58-test.
For the Lily58 I used a Liatris microcontroller, which is pin-compatible with the Pro Micro. Because I never finished writing any documentation for the keyboard, controller, or my layout, I struggled a bit to figure out how to flash it again.
So maybe the following instructions will help someone out. You can use either make or the qmk CLI tool; the official documentation covers compiling and flashing the firmware with the latter. All instructions are given for my keyboard, controller, and keymap.
Compile the firmware using make splitkb/aurora/lily58/rev1:witcher CONVERT_TO=liatris or qmk compile -e CONVERT_TO=liatris -kb splitkb/aurora/lily58 -km witcher. Connect one half of your split keyboard and enter the bootloader by pressing the reset button twice while the keyboard is connected. Mount the newly appeared mass storage device (if in doubt, look for the MSD with dmesg), copy the resulting firmware file (make puts it in your qmk_firmware worktree as splitkb_aurora_lily58_rev1_witcher_liatris.uf2) onto the MSD, then sync and umount. Repeat the same steps for the other half of the keyboard.
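The procedure can be sketched as a small script. This is only an illustration: the mount point and device node are assumptions, and the filename derivation simply mirrors what make produced for me (keyboard path with slashes replaced by underscores, plus keymap and converter suffix).

```shell
#!/bin/sh
# Sketch of the flashing procedure above; adjust paths for your system.
kb="splitkb/aurora/lily58/rev1"
km="witcher"

# The .uf2 name follows from keyboard and keymap ('/' becomes '_'):
fw="$(printf '%s' "$kb" | tr '/' '_')_${km}_liatris.uf2"
echo "$fw"   # splitkb_aurora_lily58_rev1_witcher_liatris.uf2

# make "$kb:$km" CONVERT_TO=liatris   # compile the firmware
# dmesg | tail                        # identify the bootloader's MSD
# mount /dev/sdX1 /mnt                # hypothetical device node
# mv "$fw" /mnt && sync && umount /mnt
```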
Success :)
Happy new year 2026!
Updated GPG public key for 2026
https://wiredspace.de/microblog/2025-12-15_updated-gpg-public-key/
Mon, 15 Dec 2025 11:41:44 +0100

My GPG key expired last month, so I renewed it. It’s the same as the old one, except the expiration date was pushed to the very end of 2026. You can find the updated public key on my about page.
While at it, I also updated my security.txt.
Thoughts on running my own homeserver
https://wiredspace.de/blog/thoughts_on_running_my_own_homeserver/
Sun, 09 Jun 2024 09:47:30 +0200

Since finally moving out of “shared housing” and into “my own” apartment, I’ve been thinking about setting up a homeserver, something I’ve wanted to do for quite a few years now.
In preparation for this, I kept my old workstation when I upgraded to my current one in late 2021. It’s one I’d had since around 2014 and it was getting a little slow, but I decided it would be more than enough for a homeserver at some point in the future.
Fast forward to a little less than a year ago, I purchased some missing parts for the server, mostly storage, and got it up and running with ZFS. I decided to cheap out on the boot storage so it’s currently running on spinning rust instead of flash storage, but I don’t really mind.
The server is equipped with an Intel i5-4690K (4C/4T), 16 GB of DDR3 RAM and a total of 20 TB of HDD storage (2x 2 TB + 2x 8 TB, each set of drives set up in a mirrored ZFS pool, bringing the usable storage space down to 10 TB).
It’s running Proxmox (using the 2 TB drives as boot and VM storage drives) with a TrueNAS VM to act as a, well, NAS (using the 8 TB drives as storage drives, passed through to the VM so it can manage them directly).
While idle, the server draws pretty much exactly 61 watts, and under load around 85 watts. At a price of 29 ct/kWh, a constant 61 W would cost about 13€ per month in electricity. Factoring in roughly a quarter of the time under load, this goes up to about 17€ a month. This, of course, doesn’t include the cost of acquiring (some of) the hardware.
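The idle figure checks out with a quick back-of-the-envelope calculation. A sketch, using 730 as the average number of hours in a month (365 × 24 / 12):

```shell
#!/bin/sh
# Monthly electricity cost at a constant draw, using the numbers above.
watts=61            # idle draw
ct_per_kwh=29       # electricity price
hours=730           # ~ hours per month

cost_eur=$(awk -v w="$watts" -v p="$ct_per_kwh" -v h="$hours" \
    'BEGIN { printf "%.2f", w / 1000 * h * p / 100 }')
echo "${cost_eur} EUR per month"   # 12.91 EUR per month
```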
My motivation for wanting a homeserver in the first place was that I want to host my data at home, not in a cloud, and run services for myself. The former is done with the TrueNAS VM, the second with various VMs and Linux Containers (LXC).
Now, what services am I hosting exactly? Arguably, the most important one is an OpenBSD VM acting as a firewall and a DHCP and DNS server. It’s responsible for giving my other VMs static IP addresses and making them accessible through DNS. The firewall is responsible for setting up a subnet for all Proxmox machines as well as a DMZ for services that are or could be vulnerable for various reasons.
With basic infrastructure out of the way, other services I’m running are a Wireguard server, Jellyfin for media streaming (primarily audio and video), rss-email, a Minecraft server and a Mumble server. Of course, all of this is running for myself and just a select number of friends.
Almost all VMs and containers are set up with Ansible playbooks stored in a private repository. There is a huge trend of moving to NixOS and Nix for reproducible servers but, after trying it for about half a year, I ultimately dislike it. It forces me to use NixOS, whereas Ansible allows me to use any distro of my choice that supports Python. There are other reasons I dislike Nix, such as the language and the absolutely terrible or non-existent documentation, but I won’t dig into that here. If you’re interested, there are more than enough blog posts about this.
Recently, I’d been meaning to set up paperless-ngx, which I did do, but in the end I realised it’s a bit pointless for me, as I don’t really need any of its features. It’s nice to be able to search through your documents, but I’ve never actually had to do this, and if I had to, I would find a specific document in about a minute: I sort my documents and don’t have that many to begin with. Thus I decided not to bother maintaining a paperless-ngx instance, at least until it’s actually worth it for me to do so.
While trying out paperless-ngx, I couldn’t help but wonder if all of this is really worth it to me. I liked learning while setting everything up, and it’s nice to have my own server, but the server is essentially never under full load. I also couldn’t help but notice the trend of hosting everything in containers (i.e. Docker, Podman, etc.), which I personally just can’t get behind and have no fun maintaining.
Ideally, I would like what I currently have, just scaled down a little. Then again, buying new hardware for that would cost much more than I pay for the electricity bill, so it doesn’t make a whole lot of sense.
In the end I’m happy that I do have all of this set up and will keep it around. There are still things to do, like migrating my Nextcloud instance or finally finding an alternative as I’m incredibly unhappy with Nextcloud. It also gives me a space to play around and try things which I value a lot.
Boycotting Twitch
https://wiredspace.de/blog/boycotting-twitch/
Wed, 21 Jun 2023 19:59:40 +0200

After thinking about this for a few months now, I have made up my mind. Despite all the friends I made and the fun I had1, I decided to boycott Twitch for good. I won’t be watching streams on Twitch at all, I’m blocking the site entirely, and I definitely won’t stream on there again.
The reasons are actually quite simple: Twitch, owned by Amazon, is exploiting its users on multiple levels such as income, privacy, and a healthy life. It’s made to be addictive2, keeping you on it for as long as possible, making you spend money on some of your favourite streamers, while in reality, most of your money goes straight to Amazon, not the person you wanted it to go to.
After Twitch draws its users in, it wants them to stay, so it incentivizes parasocial relationships, making a viewer think they’re doing so great giving money to their favourite friend streamer, when in reality Twitch keeps a significant percentage of the money spent. It recommends building a community around the streamer, not so that the streamer has a community they can thrive with, but so that Amazon makes more money.
Money is what drives Amazon, and thus, by extension, Twitch. It is yet another example of capitalism in its pure form: exploiting others to benefit oneself. Bezos sure would like another unimaginable sum of money, so he will get it, even if it means that individuals with poor spending habits cheer as many bits and gift as many subs as they can, all while looking the other way when yet another streamer gets swatted or becomes the target of another hate raid.
The best thing about this is: You don’t have a say in any of this. You are driving the income of this company that really couldn’t care any less about you. It doesn’t care if you’re healthy; it doesn’t care if you’re being stalked; it just cares about money. As long as the users stay and money flows their way, in their eyes, they’ve done everything right.
Twitch has struggled3 with keeping both streamers and viewers of the platform happy, leaving streamers alone in figuring out how to manage their channels. But streamers also can’t really switch to an alternative because of their community: not everyone will switch with the streamer, so they would make less revenue.
But oh, the money they could make; this is all that’s in the mind of the Twitch operators.
It’s disgusting. I refuse to support something like this, and that is why I’m deleting my account and will stop visiting the site for good.
If you’re someone that streams on Twitch but really doesn’t make any money off it, consider using an alternative platform. Find a Peertube instance, or get together with some friends and host your own.
Thank you to everyone that I was able to share great memories with. I’m not gone, I’m just elsewhere…
I started watching Twitch streams around 2013, 10 years ago, and eventually ended up streaming speedruns myself, getting to know some great people through it. ↩︎
Most big sites do this (see YouTube, Twitter, etc.), but that doesn’t mean it’s right. ↩︎
struggled is a strong word; the reality is closer to slightly noticed. ↩︎
On leaving Reddit for greener pastures
https://wiredspace.de/blog/reddit-api-drama/
Sun, 18 Jun 2023 08:25:54 +0200

It’s been a few years since I’ve used Reddit myself, but through Louis Rossmann’s videos I’ve been following the current drama a bit.
I must say, I’m not the least bit sad or surprised about Reddit’s current situation. Quite the contrary: I’m happy about it. The site has been a shit show at least since they introduced the new Reddit. It became a sluggish, JavaScript ridden mess that I wanted to avoid using at all costs, and so I did.
Now that Reddit’s changes to their API pricing are through, moderators of various subreddits have started protesting by locking their subreddits. Reddit’s response was to play this down, saying it’s “noise” that will pass like every previous backlash from the user base, and that all will continue as planned. A few days after this, they forcefully unlocked the subreddits that were protesting.
To be frank, this is as far as I want to follow this. It’s a shit show, which is very much on brand for Reddit.
The reason I’m writing this is not that this is happening, it’s what the considered alternatives are. You’d hope that the users learned their lesson and would avoid trusting companies with a platform to speak and with their data, but no. It’s been getting worse, because the alternative that seems to be sprouting from this is… Discord.
How the FUCK does anyone go from Reddit, a link aggregator and forum, to Discord, a private chat platform? Not only is Discord owned by a company, just like Reddit, and not the slightest bit open source or user owned in any way1, it also has nothing, and I absolutely mean NOTHING, to do with a forum.
The recent trend of open source projects building their communities on Discord has been bad enough for accessibility and searchability of information, but Reddit users trying to shove their forum into Discord is another nightmare completely.
I think at this point in my blogging journey my readers are aware that I despise Discord with a passion and wish a quick and painful death upon it and platforms like it that do or will exist in the future. Seeing it possibly grow from this pisses me off to such a degree that I felt like contributing to the noise that is the Reddit drama. But what makes my blood boil even more is the users’ apparent inability to learn from their mistakes and to choose a platform that benefits them rather than a company and its shareholders, one that doesn’t use them as a way to make more money2.
If you are a user of Reddit and want to follow the community, recommend truly open alternatives, like Lemmy, to the community. Be at the forefront of the possible move to a better platform and avoid making the same mistakes as before at all costs. Moving to yet another platform where the user isn’t in control of their data, particularly Discord, is a mistake, and this decision will bite Reddit users in their asses again in the future.
Lastly, if you, dear reader, keep using Reddit for no good reason, you should really ask yourself if you want to support the way this company is treating you and others. Is this really, truly something you want to voice your opinion for, or do you want to see others and yourself some place better in the future? If you want your community to thrive, you should leave Reddit at once and find refuge in greener pastures. Your community isn’t bound to a platform, and this is the best time to move on.
Don’t blindly follow the majority in their decisions on what platform to migrate to. Form your own opinion on this matter by researching the alternatives and make a change for the better. Voice your opinion for a free, community owned platform that can thrive far into the future. That is my plea to you as a supporter for a free and open internet.
Never mind Discord not supporting end-to-end encryption. Reddit users that migrate to Discord will use it for private messaging, this is inevitable, and Discord will gladly read along in their private messages and intently follow their private calls. What a world to be alive in where this is considered okay. ↩︎
Oh, capitalism. It’s amazing how we, as a society, view the selfish people at the top as saints, while we are being exploited by those same saints. Viewing capitalism as the only viable way to live is lying to oneself. ↩︎
Password Managers: Part 2
https://wiredspace.de/blog/password-managers-part-2/
Sat, 18 Feb 2023 21:12:14 +0100

Around half a year ago, I wrote an article about Password Managers. Since then, a few things have changed, so I wanted to write down my reasons for the current situation.
If you don’t feel like reading too much of this, my current password manager of choice is KeePassXC. For the people that would like to continue reading, here are my reasons for using it.
At the time of my last article about password managers, the one I used was Bitwarden, although I was already looking for a different solution, with the alternatives being pass and KeePassXC. Ultimately, even though Bitwarden is open source, its API isn’t, as far as I know. This puts tooling in a weird place, as the API has to be reverse engineered, and I’m not aware of any documentation on it. I mentioned rbw, a command line tool to interact with Bitwarden, which is nice to use.
In contrast, pass is completely open source and easy to understand. It’s bliss to use pass, since it’s simply a small script that wraps GPG to store secrets. What I love about this is that if you ever have problems, they’re easy to debug.
On paper it’s the best solution for me so far, as I value the simplicity and the ability to reason about and debug it… if you don’t count the fact that it exposes a bunch of metadata to anyone whose hands the vault might fall into one way or another. The way I would store secrets, in a tree with paths like example.org/username, would not only give a hint where I have accounts, but also what the username for each account is. Personally, that is a bit too much information for me, and I don’t want to store secrets any other way while using pass, so I decided not to use it, even though everything else about it sounds good.
Lastly, KeePassXC. It’s the only KeePass desktop client that I think is really viable to use, so that’s what I’ll be talking about.
KeePassXC is a lot more complicated, having a custom database format, but the database is also way more expressive than that of pass. There are attributes (standard ones like username, password, etc., but also custom attributes one can define), notes, even attachments if that’s something you want.
What really convinced me to start using it again and keep using it over pass was that no metadata is leaked. You have a database file and that’s it. Everything inside, including what accounts you have, is hidden and encrypted.
In order to use KeePassXC like I would use pass to get passwords, usernames, etc., I wrote a little script that fetches that info for me using the keepassxc-cli(1) tool that is shipped with KeePassXC.
#!/bin/sh

database="$1"
shift

password=$(secret-tool lookup database password)
choice=$(printf "%s" "$password" | keepassxc-cli ls -Rf "$database" | sed '/\/$/d' | dmenu -i)
printf "%s" "$password" | keepassxc-cli clip "$@" "$database" "$choice"

The integration with secret-tool(1) makes it possible to look up the database password, which is used to unlock the database, from the keyring, having created an entry for it there. The script lists all entries in the database, filters out the folders, and presents the rest for selection through dmenu(1). The selected entry is then queried for an attribute, given on the command line when calling the script: passing no arguments gives me the password of the entry, --attribute username gives me the username, and --attribute totp gives me a generated TOTP code. Setting up keybinds for these makes it super simple to get each attribute.
In the future I want to do more to keep my passwords safe, though. My passwords are still accessible on the internet, even if they’re still behind a username and password. Ideally I would set up a small home server in the future, something like a Raspberry Pi would already be enough for this, and store my passwords on there, without access from the internet at all, minus a Wireguard interface to access my home network. This feels like the most secure method to use my passwords, so that is the plan.
Why I use this Microblog
https://wiredspace.de/microblog/why-i-use-this-microblog/
Mon, 06 Feb 2023 14:05:57 +0100

With Twitter’s demise and Mastodon’s rise in popularity, I must admit I’ve been feeling drawn towards Mastodon, even though I deleted my Twitter account shortly before this whole fiasco began.
The reason was that I posted and browsed Twitter mindlessly, wasting my own and other people’s time, and nothing good came of it. Most of the time I felt like I had wasted the past half hour scrolling through my feed; sometimes I was even angry at Twitter and the community when I was done browsing.
So I decided to delete Twitter and have none of it anymore. And the choice is one I’d make again without hesitation.
I feel the urge to get a Mastodon account, but I have to remind myself of the reasons I quit microblogging in that way in the first place, and that Mastodon will be no different from Twitter. Different technology, same purpose.
The reason I started making this microblog instead is that I would have to think about what I want to write, at least to some extent. I can’t come here when I’m bored and post my way out of boredom; I like putting the slightest bit of work in my writing here.
That is why I use this microblog and not Mastodon.
Off-site Backup Solution
https://wiredspace.de/blog/offsite-backup-solution/
Sun, 05 Feb 2023 11:41:55 +0100

Quite a while ago I set up a backup solution that works for me but isn’t ideal. I have an external hard drive that I plug into my workstation, run my usual backup command using restic, and disconnect it again.
This works, but it’s not handy. Getting the drive, connecting it, running the command, waiting for it to finish, ejecting the drive, and unplugging it is tedious and so I only do this about once a month.
Add to this that the backup lives essentially right next to the workstation, so it only protects me from malware or oopsies on my end. If my workstation ends up in flames, the house collapses, a thief breaks in, or similar, the backup is of no use.
As a “soft backup”, a lot of my important files are on my Nextcloud instance. I call it a “soft backup” because this is live data, not a backup, but it can act as one; although this is not recommended.
A few days ago, I set up a backup solution that uses off-site storage to store my backups. Precisely, it’s the Storage Box BX11 from Hetzner, which gives me 1TB of storage for a little less than 4€ a month. Crucially, it supports SFTP, a protocol that restic can use as a backend to interact with a repository.
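restic addresses an SFTP-backed repository with a URL of the form sftp:user@host:path. A minimal sketch, where the user and hostname are hypothetical placeholders (the real ones come from Hetzner when you order the Storage Box):

```shell
#!/bin/sh
# Build the repository URL restic's sftp backend expects.
user="u123456"                        # hypothetical Storage Box user
host="u123456.your-storagebox.de"     # hypothetical hostname
repo="sftp:${user}@${host}:./workstation/backup"
echo "$repo"

# One-time initialisation, then backups (needs network and credentials):
# restic -r "$repo" init
# restic -r "$repo" backup "$HOME"
```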
After a long first backup I now feel like my data is safe. It’s not perfect, but it’s good enough.
What I still have to figure out, though, is how to guarantee that a backup is made daily, because that is what I want. I can’t just create a cronjob for 3am, because I shut my workstation down when I’m not using it, to avoid wasting power. But I also don’t use my workstation at the same time each day, so I really can’t ensure that a backup runs daily with cron.
Alternatively, as I am using Arch Linux which comes with systemd, I could set up a systemd service that fires when my workstation boots, but since I don’t want to use systemd and it could be that I start the system more than once a day, this doesn’t seem like a good option.
Again, I still have to figure out how to automatically run the backup, but for now, just running my script is fine:
#!/bin/sh -eu

exec >> /var/log/backup/remote.log 2>&1
date

# backup home directory to external server using restic.
# steps:
# 1. backup
# 2. keep last x snapshots

. "${XDG_CONFIG_HOME:-${HOME}/.config}/backup/remote/config.rc"

backup() {
    printf "Backing up ${BACKUP_DIRECTORY} to remote ${REMOTE_HOSTNAME}\n"
    restic "${RESTIC_OPTIONS}" \
        backup \
        --exclude-file "${EXCLUDE_FILE_PATH}" \
        --files-from "${FILES_FROM_PATH}" \
        "${BACKUP_DIRECTORY}"
}

forget() {
    printf "Running forget, keeping last ${REMOTE_FORGET_KEEP} snapshots\n"
    restic "${RESTIC_OPTIONS}" \
        forget \
        --keep-last="${REMOTE_FORGET_KEEP}"
}

backup
forget

My configuration file looks like this:
REMOTE_PROTOCOL="sftp"
REMOTE_USERNAME="user"
REMOTE_HOSTNAME="backup.example.org"
REMOTE_URL="${REMOTE_PROTOCOL}:${REMOTE_USERNAME}@${REMOTE_HOSTNAME}"
REMOTE_REPOSITORY_PATH="./workstation/backup"
REMOTE_FORGET_KEEP="30"

CONFIG_HOME="${XDG_CONFIG_HOME:-${HOME}/.config}"
CONFIG_PATH="${CONFIG_HOME}/backup/remote"

REPOSITORY_PASSWORD="supersecretpassword"
EXCLUDE_FILE_PATH="${CONFIG_PATH}/excludes.txt"
FILES_FROM_PATH="${CONFIG_PATH}/files_from.txt"
BACKUP_DIRECTORY="${HOME}"
RESTIC_OPTIONS="--quiet"

export RESTIC_REPOSITORY="${REMOTE_URL}:${REMOTE_REPOSITORY_PATH}"
export RESTIC_PASSWORD="${REPOSITORY_PASSWORD}"

Feel free to use both if you have any use for them.
Update 2023-02-06 I have now successfully set up daily automatic backups with anacron(8), which works like cron(8) but doesn’t assume that the machine it is running on is running continuously.
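For reference, a daily job in anacrontab(5) is declared with a period in days, a delay in minutes, a job identifier, and the command to run. A sketch, where the job name and script path are made up:

```
# /etc/anacrontab fragment (job name and path are hypothetical):
# period  delay  job-identifier  command
1         10     backup.remote   /home/user/bin/backup-remote.sh
```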
Thanks to rkta for mentioning this!
Nextcloud Client takes a while to start
https://wiredspace.de/microblog/slow-nextcloud-client-startup/
Sun, 29 Jan 2023 18:35:06 +0100

I’ve had a Nextcloud instance for about a year now, and I was always wondering why the Nextcloud client on my workstation starts up so slowly while it doesn’t even take a second on my laptop.
As it turns out, the problem was my local folder synchronisation setting. On my workstation the whole Documents folder is selected to sync, but only a few subfolders actually have synchronisation activated. On my laptop, each folder has its own synchronisation entry in the settings.
After using strace(1) to figure out what the problem might be, it was apparent that the client was scanning folders that aren’t even selected for synchronisation. In addition, it looks like it struggles a lot with small files, and having a bunch of git repositories in there didn’t really help.
Removing the large number of small files was enough to make it faster, but clearly the ideal way to solve this is to set up a separate synchronisation connection for each folder, which is what I’ll do in the future.
Speedrun.com, its issues, and about building an alternative
https://wiredspace.de/blog/a-speedrun-com-alternative/
Wed, 18 Jan 2023 21:09:46 +0100

The speedrunning community is big: Speedrun.com notes that, at the time of writing, 3.5 million speedruns have been registered on the site by 1.4 million users.1
Speedrun.com is the website for all your speedrunning needs. It sports a database of runs grouped by games, with groups of categories per game, a forum, news, and a “streams” section which lists speedrunners that are currently live. It is the platform most used by speedrunning communities and the first stop when building a new one.
But I, and many others for that matter, have issues with speedrun.com. Here’s a short list of people in the community complaining about the acquisition of speedrun.com by Elo:
https://nitter.unixfox.eu/PinkyNoice/status/1514718536383541254#m
https://nitter.unixfox.eu/Corvimae/status/1514366975174135811#m
https://nitter.unixfox.eu/OrrinAccount/status/1423468217788424192#m
https://nitter.unixfox.eu/Glitchman24/status/1421056722249023490#m
https://nitter.unixfox.eu/levelengineSR/status/1413955311938023424#m
https://nitter.unixfox.eu/fGeorjje/status/1413694359594229760#m

This is by no means a complete list. Feel free to have a look yourself.
I think the last post in the list above is especially interesting: since acquiring speedrun.com, Elo has supposedly changed the license of leaderboards, which were previously licensed under CC-BY-NC, without first checking with users. Despite this obviously being unethical as well as probably illegal2, the company did it without remorse.
Other than that, a bunch of changes hit the site when it got acquired: it is now filled with ads and received questionable UI decisions, the latter of which seem to have been addressed by now, after a huge amount of criticism and backlash from the community.
The site is heading in a corporate direction, ignoring the users and putting the well-being of the company first. Even though speedrun.com was closed source before, its previous operators still cared about the users more than the current owners do. It was good enough, which is why I think there have been no alternative solutions in development at all.3
But the time to develop alternatives is now, and these alternatives should be easy to use, robust, and put the interest of the user first, among other things.
I used to speedrun games and frequent the page myself, but I haven’t been active in the community for a while; this coincidentally started around the time the website received the overhaul.
The one big issue I had with speedrun.com even before it was acquired by Elo was that it wasn’t open source, and it still isn’t.
“Why should it be open source? Why should I care?”, I hear you ask. The reasons are rather simple: it not only ensures that the interests of users are put first, it also promotes extensibility, customizability, and interaction with the community, so that changes to the site are discussed with the community instead of made behind closed doors.
And one more thing: it prevents the site from going rogue. If there’s a feature you don’t like, spin up another instance yourself, or even fork the code. It’s your (and the community’s) choice to make. Nothing is forced down your throat; you, the user, as well as the community, are in control of what happens and what doesn’t.
Because of the reasons listed, among others, I propose this: An open source, distributed alternative to speedrun.com that can be hosted by each community and gives users a standardized interaction while also giving communities the freedom they deserve.
The philosophical aspects To make sure a project like this becomes successful it needs thorough planning. Most importantly, these guidelines should be followed when designing the software:
The interest of users stands above all else The data on these instances should be available to everyone, guaranteeing an open nature of the community Submitting data should be easy, and reviewing this data should be possible Several instances should be able to interact with each other in one way or another to further interaction between communities The software should be as lightweight as possible, not require modern hardware to run an instance, view data, or browse the website The data accumulated by an instance should be viewable in several formats, like, but not limited to, human readable and machine readable formats The data should be able to be exported by everyone, including the hoster, administrators of an instance, and logged-in as well as logged-out users Migration from speedrun.com and other websites should be as straight forward as possible To enforce these guidelines, the software should be licensed under the GNU Affero General Public License (AGPL)4, which expects a host to make the hosted source code available to the public, and forks of the project to be open sourced under the same license as well.
Additionally, it should answer the following questions:
- How will users interact with the site, both in the short and long term?
- Will users possibly need an account on each instance of the leaderboard, or does one account suffice for interaction with other leaderboards?
- In what way will the software guarantee that the interest of users stands above everything else?

Lastly, every successful piece of software needs to establish non-goals for the project:
- The project should not fulfil more needs than necessary, i.e. a forum can alternatively be hosted in an open fashion
- Other forms of data, like videos and other resources, should not be hosted by the project but elsewhere

While these guidelines, questions, and non-goals are a recommendation for a new project that solves the issues of speedrun.com, they are not carved in stone. Adjustments to each point can be made, but do keep in mind the open and lightweight nature the software should have. Each decision not to follow one of the points should be explicit and written down for others to comprehend.
Development

The development of this new software should involve the community the software is being developed for, and development should happen publicly, preferably on an open platform. Ideally, the software should be developed by people who are part of the community and have a good connection to its members.
All the points listed in the philosophical aspects of the project should be kept in mind while developing this project.
Personally, I would like to develop such a project, so if you’re interested in developing or helping develop this, too, then make sure to reach out to me publicly and maybe we (and others) can figure something out.5
I sincerely hope the speedrunning community can be made more accessible and open, for the good of the individuals and the community as a whole.
Start contributing to this by writing a blog post about these issues yourself, by talking to fellow community members, or by starting to develop a project like this yourself.
I know that the speedrunning community is very much able to develop a project like this6, so let’s work together and create an alternative to the platform that disregards our wishes.
Other issues worth addressing

speedrun.com is not the only problem: other platforms used in the speedrunning community suffer from the same issues. Free software is rarely seen here; the main platforms in use are Discord, Twitch, and YouTube, all of which are proprietary, non-free platforms that do not respect the user.
I think, as a community and as individuals, we can do better, and we should. Let’s make the speedrunning community an open one, free of companies taking our rights and ignoring our voices.
https://www.speedrun.com/knowledgebase/about ↩︎
I am not a lawyer, I cannot verify that this is the case. Do your own research on this. ↩︎
At least I’m not aware of any. If you do know of alternative solutions, please let me know and I’ll add them here. ↩︎
I can’t and don’t want to dictate what license a project should use - I just think this is the most sensible option to choose. ↩︎
You can find my contact details at the bottom of this post. Use my public inbox to reach me publicly, or my private email if you prefer a private conversation. ↩︎
I’m especially thinking of OpenGOAL, a compiler for GOAL that makes it possible to compile “Jak and Daxter: The Precursor Legacy” for x86_64 which allows for playing the game on PC. ↩︎
Self-Hosting Soju
https://wiredspace.de/blog/self-hosting-soju/
Tue, 17 Jan 2023 21:06:00 +0100[email protected] (Thomas Böhler)https://wiredspace.de/blog/self-hosting-soju/Soju is an IRC bouncer that is easy to host and use. I was introduced to it from chat.sr.ht, Sourcehut’s hosted IRC bouncer available for paying customers.
Recently, Sourcehut had an outage. I didn’t mind this, it didn’t impact my workflow at the time, and it was also a fairly short outage that was addressed and resolved quickly, but it reminded me that other services do have down time from time to time, so I decided to start hosting Soju myself, on my own server.
The setup was refreshing compared to my experience with znc. Admittedly, I don’t remember any of the details, but I do remember the setup being painful1.
Since my server runs Alpine, the setup was easy. My server was still on v3.15 and Soju is only available in the testing repository on edge, so I first had to upgrade to edge. This only required changing the version number (or in this case, setting it to edge) in /etc/apk/repositories, followed by apk update and apk upgrade --available.
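For reference, the repository file after the switch might look like this (the CDN mirror is illustrative; keep whichever mirror you already use, and note the testing repository that Soju lives in):

```
# /etc/apk/repositories after switching from v3.15 to edge
https://dl-cdn.alpinelinux.org/alpine/edge/main
https://dl-cdn.alpinelinux.org/alpine/edge/community
https://dl-cdn.alpinelinux.org/alpine/edge/testing
```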
Installing Soju, then, was extremely easy, only requiring the installation of the soju, soju-openrc, and, optionally for documentation only, the soju-doc packages.
The configuration was a little harder, though. By default, Soju on Alpine looks for its sqlite database in /var/lib/soju/main.db2, something I was not aware of. The installation instructions only mention running sojuctl create-user username -admin, which is misleading with this exact installation procedure: I eventually figured out that calling it like that creates a soju.db in the current directory, which is not what I wanted.
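For illustration, the relevant part of /etc/soju/config could look roughly like this (the hostname and certificate paths are placeholders; the db line shows the Alpine default mentioned above, and soju(1) documents the full set of directives):

```
# /etc/soju/config (values are placeholders)
listen ircs://
hostname irc.example.org
tls /etc/ssl/soju/fullchain.pem /etc/ssl/soju/key.pem

# Alpine's packaged default database location. If sojuctl creates a
# soju.db in your current directory instead, it is presumably not
# reading this file; check whether it accepts a flag pointing at it.
db sqlite3 /var/lib/soju/main.db
```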
After figuring this out (as well as setting up a subdomain and creating the necessary keys for TLS), everything worked flawlessly.
User configuration (networks, etc.) works by messaging BouncerServ. Running network create with the appropriate parameters set up my networks, and joining channels worked properly. Use help to list all available commands and help <command> to get help for a specific command. The documentation for these commands is also available in soju(1).
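As an illustration, setting up a network from your IRC client looks roughly like this (the server address and network name are examples, not my actual setup):

```
/msg BouncerServ network create -addr ircs://irc.libera.chat:6697 -name libera
/msg BouncerServ network status
/msg BouncerServ help network create
```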
A small gotcha I ran into was that I was still logged in to all the networks on chat.sr.ht, which forced my new bouncer to use my nick with an _ appended. This was a non-issue, though; simply logging out of chat.sr.ht fixed it.
If you’re looking for an IRC bouncer to host, I can only recommend Soju; it’s extremely simple to use.
Edit 2023-12-28: On startup, soju loads, among other things, the certificates into memory. These will, hopefully, be renewed for you, but soju won’t be aware of the renewal.
Luckily, soju reloads the certificates when it receives a HUP signal. Now, whenever I renew my certificates, I simply send soju a HUP to reload them!
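A small sketch of how this can be automated, assuming certbot with a deploy hook and a soju process simply named soju (both are assumptions; adjust to your certificate tooling and init system):

```shell
#!/bin/sh
# Hypothetical certbot deploy hook, e.g.
# /etc/letsencrypt/renewal-hooks/deploy/soju.sh
# Tell a running soju to re-read its certificates after a renewal.
# pkill exits non-zero when no soju process is found; don't fail the
# hook in that case.
pkill -HUP -x soju || true
```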
Thanks to Max for informing me of this!
Hosting other Sourcehut services

In the future I’m looking to self-host a number of Sourcehut services myself. I have tried this before but ultimately failed at setting up the mail server, as I hadn’t researched it enough yet.3 It’s very much out of my comfort zone: I have never worked with mail servers before, and postfix simply looks scary to a newcomer, given the huge amount of configuration and the number of things you can get wrong.
Configuring postfix would also give me the option of self-hosting email entirely, provided I figure out how to make big tech accept my email, if that’s possible at all.4
Being able to self-host Sourcehut and link to my personal instance for my projects would be nice, and I hope I can manage to find some time in the future to set this up myself. For now, though, I’m happy with Sourcehut and will continue to host my projects there for the foreseeable future; at least until I’m done with exams, my bachelor thesis, and other things coming up.
Whether this was due to inexperience or a harder setup in general, I don’t remember ↩︎
This is configurable via the configuration file, located in /etc/soju/config by default. More information can be found in soju(1). ↩︎
If you’re reading this and have experience in setting up an email server, or even setting up a Sourcehut instance, help would be highly appreciated! Just shoot me an email. ↩︎
Even though I would love to self-host my email, not being able to communicate with what is probably most of the world because I’m getting blocked on the basis of nothing but “you are not a big, established company” is definitely a no-go. ↩︎
Pixel 6a with GrapheneOS
https://wiredspace.de/microblog/pixel-6a-grapheneos/
Mon, 28 Nov 2022 11:46:00 +0100[email protected] (Thomas Böhler)https://wiredspace.de/microblog/pixel-6a-grapheneos/As stated in a recent blog post about Using a OnePlus 5 in 2022, I had a few issues and decided to switch to a different, newer phone.
This new phone is the Google Pixel 6a, made by a company I despise, bought second-hand to minimise the money Google receives from my purchase. The reason for choosing a Google Pixel was that Pixels are the only phones GrapheneOS, a security-focused Android ROM, is available for.
While I have used LineageOS for a considerable amount of time, it’s not an Android ROM that I want to use. The reasons for this are simply that a lot of things break and I’m unable to easily recover from this. The most prominent example of this was when I was forced to use a year old build of the ROM without being able to update from within the phone. Updating would have cost me an entire evening I wasn’t ready to spend.
I will not go into more detail here, but this has just been my experience. If you’ve had a better one, that’s great!
Another reason for choosing GrapheneOS, besides not liking LineageOS, was its strong focus on security, which corresponds with my desire for privacy in one specific aspect: Google services, which are realistically almost required to use an Android phone properly these days, run sandboxed by default and have no special permissions on the phone. You, as the owner of your phone, get to choose what data these services may use, not Google.
Naturally, I revoked almost all the permissions that the Google Services wanted and refuse to log in with an account to minimise the amount of data collected on me.
Using my Pixel 6a with GrapheneOS has been great (well, except for the absolutely horrible fingerprint sensor), and I hope this phone will serve me for many years and save me from wasting another 300€.
rss-email 0.2.1
https://wiredspace.de/microblog/rss-email-0.2.1/
Sun, 20 Nov 2022 18:36:00 +0100[email protected] (Thomas Böhler)https://wiredspace.de/microblog/rss-email-0.2.1/First off, thanks to Hugo for submitting a patch implementing asynchronous fetching of RSS feeds for rss-email that sparked the bit of interest I needed to continue working on the project in the first place.
Thanks to this, I decided to prepare rss-email for a 0.2.0 release by replacing Diesel with sqlx, a crate that enables compile-time checked SQL queries. A complete ORM always felt like overkill for this project; rss-email was simply where I had decided to try Diesel to get familiar with it. Discovering that Diesel cannot be statically linked was enough of a push to replace it with sqlx.
Besides that, jman, who submitted a patch quite a while ago, replaced OpenSSL with rustls to reduce system dependencies.
In total, not a lot of changes have been made to rss-email, but they should be quite significant nonetheless:
- Replace OpenSSL with rustls
- Implement async fetching
- Replace Diesel with sqlx

These changes allow faster execution of rss-email and finally allow it to run on musl-based Linux distributions!
This also means I can finally run it on my own server running Alpine Linux.
As I introduced bugs where inserting an already existing entry into the database and encountering feeds without timestamps would each cause a panic, I promptly released 0.2.1, which fixes these mistakes.
The latter could easily have been avoided by not using unwrap in the code (something I relied on heavily while prototyping the project), and removing unnecessary calls to unwrap is a priority going forward: https://todo.sr.ht/~witcher/rss-email/13. Well, another lesson learned.
Thus far only RSS feeds work with rss-email, but implementing Atom support is planned.
If you feel like contributing, feel free to either close one of the existing tickets or send an E-Mail to the development list discussing your ideas.
Starting now, announcements for new releases will only be published on the dedicated mailing list in order to not spam my personal blog.
Struggling to find time
https://wiredspace.de/microblog/struggling-to-find-time/
Mon, 14 Nov 2022 20:30:23 +0100[email protected] (Thomas Böhler)https://wiredspace.de/microblog/struggling-to-find-time/I wanted to give a quick update on my current situation and why it is that I’m absent so much, both in FOSS and in private.
Between finishing my full-time Bachelor’s this semester and working a part-time job at the same time, I find it difficult to work on FOSS in my free time. This should clear up by the middle of next year at the latest, if not sooner.
If projects of mine seem abandoned, this is because I currently just can’t find time to work on them effectively.
As far as patches for those projects go, I will see to it that they’re applied without too much delay as much as I can.
Feel free to still contact me, though, for any reason whatsoever! It might just be that I’m a little slow with replying.
Using a OnePlus 5 in 2022
https://wiredspace.de/blog/using-a-oneplus-5-in-2022/
Sun, 13 Nov 2022 18:13:52 +0100[email protected] (Thomas Böhler)https://wiredspace.de/blog/using-a-oneplus-5-in-2022/The end of 2017 was once again time to get a new phone. OnePlus was loved back then for cheap phones whose hardware compared amazingly with the more expensive smartphones of that time. This has changed radically since then, and their phones now cost almost the same as other flagships.
I remember paying around 550€ for it and was very happy with my purchase until three years later, at the end of 2020, when I decided it was time to get a new phone.
A while earlier I had started using my phone for fewer things, trying to cut down on usage a lot. I wanted a phone I couldn’t mess around with1, which is where Apple comes into play.
Apple caught my eye for the “ecosystem” where everything is supposed to work together flawlessly, with the downside of being locked into the Apple ecosystem2, along with the lack of customizability.
Both the lack of customizability and the “it just works” philosophy made it sound like the ideal candidate for a new phone, so I decided to pick the cheapest one, the iPhone SE 2020, and buy that. It set me back around 450€.
As soon as it arrived, I found the first major issue: the battery is laughably tiny and the phone can’t even last a day of normal use. This would become one of the main reasons I decided to use my OnePlus 5 again a few months ago, along with something I had thought would be good to be without: customizability.
Lack of customizability doesn’t only mean that you can’t play around with your home screen to the degree Android allows; it also means that as soon as you want to escape the limitations of the Apple ecosystem, e.g. by handling data offline with an app, you run into issues.
For a while, I used Todoist, an overpriced service for tracking todo items, until I decided that I do not want to pay for a todo service and also don’t want my todo items to be on The Cloud(TM). Instead, I wanted to opt for one of the simplest todo applications out there: Todo.txt
Todo.txt stores all of your todo items in a text file which is easily parsable and can be edited by hand. This text file should then be synchronised through my Nextcloud instance.
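For the unfamiliar, here is what such a file typically looks like (the tasks are made up; the conventions follow the common todo.txt format: an optional (A) priority, creation dates, +project and @context tags, and a leading x plus completion date for finished items):

```
(A) 2022-11-01 Pay rent +finance @home
(B) Call the dentist @phone
2022-11-02 Back up the Nextcloud instance +server
x 2022-10-30 2022-10-29 Renew GPG key @computer
```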
I expected this to work without any issues, as it only involves the todo app reading from and writing to a text file that is seemingly stored on the phone. In reality, the file was synced with my Nextcloud instance automatically and exposed through the native Files app on iOS.
This is where the issues became really annoying. To get changes that had not been made on the device, I had to open the Nextcloud app, navigate to the file, and synchronise it manually. If I did not do this before opening the todo app, it would overwrite the remote file with the local, cached one. I imagine I don’t have to explain how frustrating this can be, especially when losing a large number of new todo items.
Needless to say, after around one and a half years of using my iPhone and the god-awful OS it runs, I had had enough and went back to my old OnePlus 5 running Android. I decided I would just install LineageOS with microG on it and call it a day. No more big customizations; this is a phone I want to use, not play around with.
And honestly… it worked much better than I expected, because what I expected was a slow phone struggling to run simple apps. That was not the case, and the only time it chugs is when running OsmAnd, a navigation app using OpenStreetMap, which is fine as I’m not interacting with it while driving.
It was good that I switched to Android as a while later (ca. end of October) I decided to contribute to OSM myself, hoping that one day it might be viewed as a viable alternative (which it already is in my opinion) to proprietary services like Google Maps or Apple Maps.
In order to use OpenStreetMap, though, I would obviously need GPS. This was quite a struggle on the first day, as the NLP location providers I was using didn’t seem able to locate me at all.
After this magically fixed itself within a day or two, I was ready to give mapping a shot. It worked quite well, but I won’t expand on mapping for OpenStreetMap further here.
While using my phone for day-to-day tasks, I noticed some other things that bothered me. One is the phone going to sleep and not waking up again, for reasons unknown to me. The solution has always been to restart the phone by holding down the power button until it forcefully resets.
Another thing that started bothering me is that when I receive calls, the callers can always hear themselves. The fix is to call them back, which adds friction to what should be a simple conversation channel.
Along with effectively not being able to accept phone calls, my signal was horrible compared to the iPhone: I couldn’t get any signal in places where I had already been struggling with the iPhone. Due to my living situation, I am quite often unreachable by normal phone call, which is obviously not ideal.
But this isn’t all I wanted to talk about. Another issue came up just earlier that’s so unbelievably frustrating that I’m thinking about throwing this phone in the bin. Since Android allows the user to install a third-party keyboard and I think the standard one is a little lacking, I installed a new keyboard to use.
Third-party keyboards aren’t usable when the phone has booted but isn’t unlocked yet, so Android falls back to the standard AOSP keyboard for unlocking the phone.
Here’s my issue: what happens if this fallback doesn’t work? That happened to me just now, and I am unable to unlock my phone. I cannot use my phone because of a bug that I already ran into several years ago with this phone. I can’t tell whether this is the phone’s fault, the LineageOS build, or the underlying AOSP base. Needless to say, my phone is currently unusable, and I might have to fall back to the iPhone for the time being, while I decide whether to buy a new phone.
Speaking of a new phone… what would I even buy? The Android phone market feels oversaturated, but there doesn’t seem to be a phone out there that is decent and affordable without selling a kidney.
I’m looking for a cheap Android phone that I can use to make calls and use messenger apps on - nothing more. No flip feature, no fancy lidar sensors on my camera, nothing. Just a phone that can do a little more than a dumb phone.
Since I’m pretty convinced I won’t find a phone like that, I’ve been thinking about buying a used flagship that is still fairly new. The first one that came to mind is the Google Pixel 6a, so I could run CalyxOS, a privacy-respecting and secure Android ROM that is only available on Pixel hardware due to security features that only Pixels support at the moment.
Even though I can find a Pixel 6a for a little more than 300€, I still think this is a bit much for a phone I use for barely anything anymore - but it’s not a terrible price.
No ROMs like LineageOS and no pointless hacking on the operating system on any level. I wanted my phone to be purely functional in nature. ↩︎
It’s usually fairly difficult to switch away from using Apple services because it’s natural for a user to use all of their services instead of services that are platform independent. ↩︎
Signal is getting worse
https://wiredspace.de/microblog/signal-is-getting-worse/
Mon, 07 Nov 2022 21:28:00 +0100[email protected] (Thomas Böhler)https://wiredspace.de/microblog/signal-is-getting-worse/Signal just recently released a new update adding stories, akin to the feature of the same name found in WhatsApp et al.
It sports the usual upsides of Signal: E2E encryption and privacy by default. It even allows you to disable the feature outright, something I’ve been wishing for forever with WhatsApp, which I still feel a little forced to use for now.1
Stories are not a feature that I need or want. The fact that you can disable them is nice, but it’s still clearly showing a direction Signal is going in, even though I can’t quite put my finger on a definition of this direction.
There have been a few changes made to Signal that I don’t agree with: adding Payments (cryptocurrency) and removing SMS support2.
Having an experience similar to iMessage and SMS on Apple devices, but on Android and with a messenger that actually respects the privacy of its users, was quite nice: try sending someone a Signal message, but fall back to SMS should they not have it.
Removing the SMS feature is quite sad, since it possibly made the switch easier for Android users who don’t care about private messengers3, as Signal was “more than just a messenger”, meaning more useful.4
In general, I don’t agree with the direction Signal is heading. This would be a scenario where I’d like to switch messengers, or at the very least change the app I’m using, but neither is possible: there’s no viable alternative messenger5, and Signal disallows third-party apps altogether. I’m stuck with it, whether I like it or not.
Funnily enough, this is a similar situation I find myself in with WhatsApp, although Signal is much less evil and so I have less of a desire to ditch it.
In the end, Signal is better than Messenger X, so it’s still the best option out there for now.
Ultimately, I want to ditch WhatsApp and have been wanting to do that for quite a long time. It might be time to pull the trigger soon. ↩︎
SMS support has not been removed as of writing this post, but has been announced. ↩︎
Commonly referred to as “normies”. ↩︎
I’m not arguing this is the case, but if it gets people off of worse messengers I don’t mind it. ↩︎
Matrix comes to mind, but it has a very low adoption rate, even in tech circles. ↩︎
Messenger X won't fix anything
https://wiredspace.de/blog/messenger-x-wont-fix-anything/
Sat, 10 Sep 2022 21:22:00 +0200[email protected] (Thomas Böhler)https://wiredspace.de/blog/messenger-x-wont-fix-anything/I’ve been annoyed by the state of smartphones lately, and one of the top contenders for “Most Annoying” is instant messaging. Currently, the only option for instant messaging outside of communities like tech (IRC) or gaming (Discord, which has a bunch of other problems) is an app.
At the top of this list are WhatsApp, iMessage, Signal, and Telegram, to name a few. What do all of them have in common? They require a phone number to sign up.
It’s not just about the phone number though: In Europe, you’re almost forced to use WhatsApp to be part of a social circle. It doesn’t matter if it’s just talking to someone quickly over a text, organising meetups or catching up with an old friend: it’s all done over WhatsApp. I’m sure this applies to other regions as well, just replace WhatsApp with iMessage, for example.
The situation has gotten a bit better with the recent adoption of Signal, but even though Signal is viewed as the saviour from the services mentioned above (and it is substantially better, for example due to being open-source and implementing an open standard), it really isn’t, because it doesn’t fix the fundamental mistake instant messengers have made over the past years: it requires you to have a phone (and a phone number) to use it.
Requiring a phone (number) to use a messaging service is… weird. It heavily ties the act of instant messaging to a phone – and simultaneously excludes people that don’t have smartphones from the massive amount of social interactions these messengers see.
But it doesn’t stop there. Signal is also centralized and disallows implementations other than the official clients. You’re not allowed to spin up your own server for decentralized communication; you’re tied to the official Signal servers, which makes them a single point of failure. In addition, because third-party clients are not allowed, you’re forced to accept whatever decisions the Signal developers make, be it that you can’t set a custom status message or that cryptocurrency is baked into the clients. You have no control over this and have to swallow bad decisions, just like with the other messengers.
How do we solve this issue, though? What’s the correct way to ensure that the next instant messaging client doesn’t suffer the same downsides?
Contrary to what a lot might believe, it’s not another app. Instead, it requires a proper protocol to be defined, which supports at least the following features:
- Not tied to a phone and/or a phone number
- Minimal, but feature-rich enough to see wide adoption
- Decentralized servers (possibly peer-to-peer)
- End-to-end encryption
- Allows third-party clients

These should be the most important ones, for the following reasons.
Supporting these features means the network cannot be censored, keeps the user (rather than a company) the focus of the app, allows a rich ecosystem of implementations with different looks and feels (as well as system requirements), and doesn’t require users to own specific hardware to interact with other people.
To my knowledge, Matrix has been the latest (popular) attempt at this, but it falls short in a few ways. Speaking as someone who doesn’t use the protocol much: encryption just seems too complicated, when it should be transparent and the user shouldn’t have to think about it. In addition, the only viable client I’ve seen is Element, a client heavily based on JavaScript, and specifically on Electron for the desktop app, which makes it feel sluggish, especially on older hardware.
Adding to this, the only server implementation I’ve seen is written in Python and is, supposedly, very taxing to run, especially on small servers.
Matrix, so far, has been a good try, but it has fallen a bit short on some things. But, I could be wrong, and I’m open to being told better!
The state of instant messaging is not great, but it can be greatly improved. It’s a big undertaking that needs multiple people to work on it, and even then, adoption is not guaranteed. It’s by no means a trivial task and takes a lot of work, but the outcome would undoubtedly be wonderful for everyone involved, creators of any kind as well as end users.
Personally, I’d love to be part of a team of people working on a better instant messaging experience, both in the technical and the user sense, so if you would too, please don’t hesitate to contact me.
rss-email 0.1.0
https://wiredspace.de/microblog/rss-email-0.1.0/
Mon, 15 Aug 2022 16:45:26 +0200[email protected] (Thomas Böhler)https://wiredspace.de/microblog/rss-email-0.1.0/I just released the first version of rss-email, 0.1.0, a little program I’ve written about before.
It solves my issue of not checking my newsreader often enough. Instead, I’ll get notifications of new posts via E-Mail.
It’s easily self hostable and only requires a cron entry as well as a simple config and url file to work.
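For illustration, such a cron entry could look like this (the schedule and install path are assumptions on my part; see the README for the actual invocation):

```
# crontab fragment: check feeds once an hour
0 * * * *  /usr/local/bin/rss-email
```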
Feel free to check it out! You can find the installation instructions in the README.
https://sr.ht/~witcher/rss-email/
Why did I choose SourceHut?
https://wiredspace.de/blog/why-sourcehut/
Thu, 11 Aug 2022 14:13:00 +0200[email protected] (Thomas Böhler)https://wiredspace.de/blog/why-sourcehut/About 1 1/2 months ago, I published a post on my microblog about GitHub Copilot and why I decided to leave GitHub for good, switching to SourceHut completely.
What I failed to explain in that post is why I decided to go with SourceHut specifically, and not any other git forge.
SourceHut is a 100% open source git forge that uses git with E-Mail and patches, the way it worked before GitHub et al. came along. It is currently in public alpha.
I’ve always hated the Pull Request workflow that forges like GitHub, GitLab, Gitea, etc. have, though I just took those as the standard and went on with my life. Little did I know, this is not how git was made to be used.
The Linux Kernel is developed with patches over mailing lists, which I knew about, but regarded as outdated, because surely everyone would choose the better option, right? As it turns out, popular doesn’t mean good, even though this is what most seem to assume.
I was properly introduced to mailing lists when I heard about SourceHut (although I don’t remember how I caught wind of it), with its slick, minimal, fast web interface that isn’t sluggish like sites where kilobytes of JavaScript have to be downloaded just to look at a README. It felt refreshing to comfortably browse a repository without feeling like I was on a social network, complete with stickers, achievements, likes (in the form of stars on repositories), etc.
Needless to say, I was instantly hooked. The modern web is bloated with sites that are several megabytes in size (yes, megabytes, for websites that display text and the occasional image), which cripples your experience surfing the web tremendously. Having to wait for multiple seconds for a page to load some text is disruptive and unnecessary. So seeing sites like SourceHut (or minimal personal blogs) puts a smile on my face and fills me with the hope that the internet might not be as horrible as I thought it was1.
This alone was enough to get me interested in SourceHut. But it only got better: easy hosting of E-Mail lists; collaboration via E-Mails and patches (which more and more started to feel like the right thing to do, especially since I learned to love E-Mail2); a sane CI system3; an issue tracker that is easy on the eyes; simple wiki functionality for repositories; and a whole lot more E-Mail. E-Mail is what drives SourceHut. You can submit patches, comment on tickets, etc., all through E-Mail. You almost don’t have to use the web interface at all!
One of the many things that SourceHut does outstandingly well is its build system. At the time of writing, it supports 12 different operating systems and Linux distributions, which is more than I’ve seen any other CI offer. It also offers SSH access to build VMs, making it easier to debug a failing build.
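As a taste of the build system, a manifest is a small YAML file checked into the repository; here is a hedged sketch (the image, package list, and repository URL are placeholders, not my actual manifest; the build.sr.ht documentation has the full schema):

```yaml
# .build.yml -- illustrative sr.ht build manifest
image: alpine/edge
packages:
  - hugo
sources:
  - https://git.sr.ht/~example/website
tasks:
  - build: |
      cd website
      hugo --minify
```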
For some, this might sound… awkward. But, I do urge you to give it all a try. E-Mail is beautiful if you give it a little time to grow on you. It’s the only major decentralised service that we still have in a world full of proprietary services with mandatory clients, so let’s not give it up.4
SourceHut is an oasis in the desert of the modern web, and I invite you to visit it sometime.
Note that SourceHut currently is free, but will require payment once it enters beta. Even today, a few services are only usable when paying for an account, but personally, I’ll gladly pay for a service like this.
So, what better time to try it out than now, while it is still free?
This doesn’t last for very long, though. ↩︎
E-Mail is a treat with a simple E-Mail etiquette. Newsletters by corporations ruined it for everyone with HTML and top-posting. ↩︎
Through YAML files with build tasks as embedded shell scripts. ↩︎
If not for SourceHut, do give E-Mail another chance with proper usage. ↩︎
GitHub Copilot
https://wiredspace.de/microblog/github_copilot/
Thu, 23 Jun 2022 16:44:40 +0200[email protected] (Thomas Böhler)https://wiredspace.de/microblog/github_copilot/Instead of trying to formulate my own opinion on GitHub Copilot, I’ll leave Drew DeVault’s blog post about it here, which is more detailed than I would have been: https://drewdevault.com/2022/06/23/Copilot-GPL-washing.html
With the launch of GitHub Copilot and the problems that it has, as outlined in the blog post, I decided to pull all my projects from my GitHub profile, leaving only forks and projects I’m a maintainer of behind. I will also remove the link to my GitHub profile from my website.
You can find all of my projects here: https://git.sr.ht/~witcher.
I’ll close this small post with an excerpt of Drew DeVault’s message to GitHub, found in his blog post:
You’ve invested in building a platform on top of which the open source revolution was built, and leveraging this platform for this move is a deep betrayal of the community’s trust.
Comment on Password Managers and Himitsu
https://wiredspace.de/microblog/password_managers_and_himitsu/
Mon, 20 Jun 2022 16:13:00 +0200[email protected] (Thomas Böhler)https://wiredspace.de/microblog/password_managers_and_himitsu/Recently, I published a blog post on password managers, saying I’m looking to switch again.
As it turns out, Drew DeVault (and contributors) has been developing a new “secret storage manager” called Himitsu, written in Hare, a new systems programming language also developed by Drew.
Himitsu seems very promising (kind of like a better pass?1) and I might give it a try, although I’ll have to see whether I’ll be able to properly use it on any non-Linux devices, like my smartphone2.
As in: more like pass than the other password managers I’m aware of. ↩︎
Currently on iOS, but desperately looking to change again as a lot of things just don’t work when you look over the Walled Garden(TM). ↩︎
Sourcehut Pages
https://wiredspace.de/microblog/sourcehut_pages/
Fri, 17 Jun 2022 20:03:19 +0200[email protected] (Thomas Böhler)https://wiredspace.de/microblog/sourcehut_pages/My website is now also available on Sourcehut pages! You can find it here: https://witcher.srht.site/
Since I’m building and publishing my website via the Sourcehut CI anyways, I figured I could just publish it to Sourcehut pages as well.
If, for some reason, my website on https://wiredspace.de/ is down, you can now find my content elsewhere, where it will be just as up-to-date as on the main website.
Please Use E-Mail
https://wiredspace.de/microblog/please_use_email/
Thu, 09 Jun 2022 22:11:00 +0200[email protected] (Thomas Böhler)https://wiredspace.de/microblog/please_use_email/As I mentioned in my post about communication channels, E-Mail is a low volume communication channel that I value a lot, which is why I want to remind and urge everyone to send me E-Mails instead of short instant messages.
Use communication channels the way they are intended and make them serve a purpose.
Send me a Signal message if you want to have a chat with me, but please send me an E-Mail if you need anything else. This is my preferred method of communication.
Instead of shoving email etiquette down the throats of my non-tech friends, I ask you to at least send a plain text E-Mail.
You can find my E-Mail address on the about page.
Password Managers
https://wiredspace.de/blog/password_managers/
Thu, 09 Jun 2022 21:57:00 +0200[email protected] (Thomas Böhler)https://wiredspace.de/blog/password_managers/I’m thinking about changing password managers yet again… Why? Because I’m not happy with my current one anymore.
As of now, the password managers I’ve been using (in order) are:
KeePassXC
pass
Bitwarden
When I used KeePass, at some point I felt like I needed to switch to a different one (although I don’t remember the specifics), which is why I moved to pass. I quickly noticed pass didn’t fit my use case properly. I wanted something with nice browser integration and seamless synchronization, which Bitwarden, the password manager I’m currently using, offered.
I trust all 3 of these solutions, but ideally I want to keep my data to myself and have the highest degree of customizability.
So far, Bitwarden has been the least appealing on paper, requiring my password store to be kept on their servers1 and offering next to no customizability, e.g. no nice custom client to use instead of their official one, which is an Electron client of all things. A nice tool to mitigate the downside of the Electron client is rbw, an open source command line client for Bitwarden written in Rust. It makes it possible to use Bitwarden like pass, which is the ultimate password manager experience for me (more on that later).
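For illustration, everyday use of rbw looks roughly like this. This is a rough sketch from memory of rbw’s subcommands; the account address and entry name are examples, and the exact setup steps may differ, so check `rbw help` before relying on it:

```shell
# Rough sketch of rbw usage; account and entry names are placeholders.
rbw config set email [email protected]   # point rbw at your Bitwarden account
rbw login                              # authenticate and cache the session
rbw sync                               # pull the latest vault state
rbw get example.com                    # decrypt and print an entry's password
```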
To mitigate these downsides I could use KeePass (or more specifically KeePassXC, a community maintained fork), which stores passwords locally in a file. It’s then up to the user to sync this with other devices.
This is where I stumbled when I first used it. I had to sync this password database manually, making it really clunky and, most notably, annoying to use. I feel like this is the reason I switched to pass for a while afterwards.
KeePassXC has a few nice features, like scanning the password database against HIBP or Browser Integration (which felt mediocre at best).
The ultimate password management option for me is pass, though, which stores all passwords (and accompanying information) in single text files encrypted with GPG. This makes it trivial to write scripts for use with pass, like passmenu, a script that allows easy copying of passwords through dmenu.
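To illustrate what makes pass so script-friendly, here is a sketch of everyday usage. The key ID and entry names are placeholders; see pass(1) for the full interface:

```shell
# Initialize the store against a GPG key, then manage entries.
pass init "MY-GPG-KEY-ID"
pass generate web/example.com 24   # create and store a 24-character password
pass show web/example.com          # decrypt and print the entry
pass -c web/example.com            # or copy it to the clipboard
```

Because every entry is just a GPG-encrypted file under ~/.password-store, tools like passmenu only need to list files and call pass show.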
The downside of pass is, for me, similar to KeePass: syncing across devices. This is usually mitigated by combining it with git, but that inevitably requires setting up a repo somewhere (either on a service like Sourcehut or on my own server), which exposes the “database” to the internet and makes me feel a little uneasy.
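For completeness, the git-based syncing works like this: pass forwards everything after `pass git` to git running inside the store, so syncing is plain git. The remote URL below is a placeholder:

```shell
# Set up git inside the password store and push it to a remote.
pass git init
pass git remote add origin [email protected]:user/password-store.git
pass git push -u origin master

# On another device, clone the store into place:
git clone [email protected]:user/password-store.git ~/.password-store
```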
By far the most important “feature” I need, though, is being able to access my passwords on my phone. All 3 options make this possible, but Bitwarden is by far the simplest option since syncing is seamless, with no setup required.
Ultimately, this is the feature that pushed me to Bitwarden in the first place, but I am rethinking this decision and feel like either pass or KeePassXC is the way to go forward.
Setting up my own Bitwarden instance, e.g. with vaultwarden, is a viable solution, but it negates the upside of having a nice and simple synchronization solution. Instead of setting up my own Bitwarden instance I might as well use either of the other 2 options listed. ↩︎
Communication channels and how to use them
https://wiredspace.de/blog/communication_channels_and_how_to_use_them/
Wed, 08 Jun 2022 23:18:10 +0200[email protected] (Thomas Böhler)https://wiredspace.de/blog/communication_channels_and_how_to_use_them/Disclaimer
At the time of writing I’m still a student, have only minor experience working in companies, and am in no way a trained project manager. I’m solely giving my thoughts on what has worked for me so far and describing the problems I’ve had.
If you’re reading this and are more qualified to talk about this or think I’m wrong (or right!) in some places, please reach out via E-Mail. You can find my contact info at the bottom of this post.
Communication Channels
Since my mandatory internship semester at uni, I’ve started gathering actual experience (not the theoretical, uni kind) in the field of software engineering, and I specifically want to talk about the organization and communication side of things, as there are a lot of problems there that remain unsolved for me, but that I’m thinking of tackling with some tools and techniques. These problems affected me at my internship and still affect me directly today, and they are driving me mad, hence this post to organize my thoughts and share them with you.
It starts with the internship I had from late 2021 to early 2022. The company I was working at is great and I’d wholeheartedly recommend them to acquaintances of mine, and they taught me a few things while I was there, but I ultimately left because I wasn’t interested in the direction their projects were going and wanted to pursue a slightly different route in my professional career.
Even though most of the projects went almost perfectly, as far as I could see and as colleagues told me, I had trouble with how communication worked in the project I was assigned to. And just to be clear, this isn’t an issue with that one company alone; it seems to be a perfectly normal thing in today’s professional environments, and I frankly just don’t understand how that is.
To stop beating around the bush, I’m talking about communication in different forms: volatile/persistent and high/low volume.
You might be wondering what I specifically mean, so I’ll give you an example.
The group project from uni uses Discord to communicate, although this isn’t about how Discord might as well be the devil itself. Instead I’ll ask the question: How is Discord typically used? Well, as a volatile1, high volume communication channel.
Another question: How is E-Mail typically used? As a persistent, low volume communication channel.
In my opinion, these are the 2 main forms of communication channels relevant for group projects.
I don’t have a problem with this. What I do have a problem with, though, is when only one of these channels is used. My group project at uni only uses the former, Discord. This is absolutely horrible when announcements have to be made or persistent information needs to be stored where everyone can easily reach it.
You might be wondering why an announcement channel wouldn’t simply be created for low volume communication. The answer for me is that Discord is an instant messaging service: you only use it if you want to message someone quickly. That’s it. No opening it randomly or keeping it open on a computer to receive messages every now and then.
With E-Mail, a low volume channel, this is different. I, and from what I can gather many other people, keep an E-Mail client open precisely because we don’t expect to be constantly flooded with messages.
What I’m trying to say is: If you specifically use a low volume channel for communication, people using this channel will expect a low volume of messages. With a high volume medium, the opposite is the case. People will use channels for what they’re intended for, so they’re more likely to read (and memorize) messages with high priority/importance in a low volume channel.
Being a project manager
Being a project manager is tough. I’m the project manager of the still ongoing, 2 semester long project for my uni course, and it’s hard to schedule everyone’s time so that deadlines are met.
What’s even more challenging is being unable to convince your team that something has a deadline, and to reinforce that deadline in the project plan, especially when you can’t exert any form of pressure on them, the way a company can when someone who frequently misses deadlines risks losing their job.
This has caused my plans to become… almost worthless, really. The Gantt chart I was supposed to create as a project manager had to be changed more times than I care to admit. And sure, projects never go as planned, but they shouldn’t fail this miserably either.
Now, what do I blame this on? First, not being able to exert the aforementioned pressure on team members. Second, lack of communication, which nicely ties into what I’ve been talking about in the previous section.
Communication is key. Not communicating with the project manager is tying their hands behind their back. I can’t change something I don’t know anything about. Haven’t implemented a feature in time? Let me know so I can plan the next weeks with that in mind. If I don’t know, I’ll plan differently and make promises about deadlines that will inevitably not be met.
But instead of ranting further, how could I have fixed this? What could I have possibly done differently?
Honestly, I don’t know. I’ve talked to friends and family, and no one could help me solve this issue. Snitching on team members isn’t an option since others won’t know (or don’t want to know, drama is spicy) the full story, making them think I’m just a huge dick.
Let’s look at it from a different angle: What went wrong? First off, I’d like to blame the communication channel, which in this case is Discord. Discord didn’t do anything wrong (this time); it was simply the wrong channel. I think some information was missed that a low volume channel would have caught. But please, if you think I’m wrong and have a different idea, feel free to lecture me about it. I’d love to know what I could have done differently.
Being a project member
But being a project manager isn’t the only hard job; being a project member can be hard as well. After all, it’s not only the project manager who communicates with others; project members also communicate with each other for a number of reasons.
Here, it’s very important to have a high volume, volatile communication channel. High volume should be self-explanatory: it’s not a channel where announcements or similar are posted, but one where members communicate and exchange ideas with each other. What’s interesting to you is most likely the volatile aspect. Volatility is crucial to me, as it keeps people from treating this channel, which is meant as a quick way to exchange ideas or problems, as a channel for documentation, or announcements for that matter.
This is something I encountered in my internship semester. There was never a channel for documentation or a channel for announcements, so everything, announcements included, was exchanged over one high volume, persistent communication channel. Predictably, project members asked the same questions over and over again, and information that should have been documented never was.
Being an intern and seeing this situation is tough, especially if you’re unsure whether your statements are correct. Out of fear of expressing a wrong statement2 I never spoke up, although I should have: both I and the project manager could have learned from that conversation, and the project might have gone in a better direction.
Conclusion
Most people don’t think about how they’re using communication channels, and I don’t blame them. It’s boring, and why would you have to think about it that much anyway, right?
But I encourage you to think about where you send your next message. If it’s an announcement, maybe choose E-Mail over your company’s Signal group. If it’s a short question instead, choose IRC over sending an E-Mail.
Technically, Discord is a persistent communication channel as it keeps a history of messages, but I view it as a volatile medium since the search functionality is utterly useless and finding messages on it might as well be impossible. ↩︎
Never be afraid to express a wrong statement. Doing this is the basis for being corrected and learning. ↩︎
New Git Signing Key
https://wiredspace.de/microblog/new_git_signing_key/
Wed, 06 Apr 2022 23:11:39 +0200[email protected] (Thomas Böhler)https://wiredspace.de/microblog/new_git_signing_key/I updated my public key to have a subkey for signing git commits. You can find my updated key on the about page and here:
PGP key
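For anyone wanting to do the same, the process looks roughly like this. This is a sketch run against a throwaway GNUPGHOME; the demo key, its parameters, and the one-year expiry are illustrative choices, not necessarily what I used:

```shell
# Sketch: add a signing-only subkey to a GPG key and point git at it.
set -e
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

# Create a certify-only primary key purely for demonstration.
gpg -q --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo <[email protected]>" ed25519 cert

# Find its fingerprint and add an Ed25519 signing subkey, valid one year.
fpr=$(gpg --list-keys --with-colons | awk -F: '/^fpr/ {print $10; exit}')
gpg -q --batch --pinentry-mode loopback --passphrase '' \
    --quick-add-key "$fpr" ed25519 sign 1y

# Tell git to sign commits with it (the trailing '!' pins the exact key).
git config --global user.signingkey "$fpr!"
git config --global commit.gpgsign true
```

After this, every `git commit` is signed automatically, and the new subkey travels with the existing public key when you re-export it.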
Discord and Alternatives
https://wiredspace.de/microblog/discord_and_alternatives/
Fri, 25 Mar 2022 00:00:19 +0100[email protected] (Thomas Böhler)https://wiredspace.de/microblog/discord_and_alternatives/Recently I got myself a Nextcloud instance, and it’s been absolutely comfy. I didn’t know how useful a cloud can be until I got one that’s not one of the proprietary ones with next to no configuration available.
Nextcloud has a lot of apps. Some of the most known apps to non-Nextcloud-users are probably the Contacts and Calendar apps which also support CardDAV and CalDAV, respectively. Some lesser known to the non-initiated are apps like Talk, a WebRTC powered (video-)chat app.
When I tried out Talk for the first time, I was a bit shocked at how well it worked and how good the quality is. Thanks to WebRTC, server traffic is kept at a minimum, too.
Recently, I also tried Jitsi again, and was once more shocked at how simple but also perfectly usable it is. For some reason I’ve recently been viewing Discord as the only “viable” chat platform out there that is of high quality, but it’s nice that I’ve been reminded that this isn’t the case at all. There are ready to use solutions out there, like Jitsi, that you can easily use for good quality video chatting.
bspwm Issues
https://wiredspace.de/microblog/bspwm_issues/
Sat, 12 Mar 2022 11:48:22 +0100[email protected] (Thomas Böhler)https://wiredspace.de/microblog/bspwm_issues/Around September 2021 I built a new PC. I decided this would be my first main machine running Linux, and Linux only1, so I also decided I should make it look a little pretty.
So, out with dwm and its default statusbar and in with bspwm and Polybar. I’ve been using this setup (mirrored on my ThinkPad E4702) since then, but lately I’ve been running into the issue described here.
It’s the kind of issue that makes me want to rip my hair out, since I have no clue why it appears and how to fix it.
Since it started appearing more often I’ve thought about switching back to dwm. I’m actually kind of missing it, really. There are some minor things in bspwm I find too complicated, and I miss the simple stack layout of dwm.
What I’ll miss in dwm is Polybar. Sure, I can still use it, but Polybar (without any patches) can’t show information about tags in dwm, which irritates me a bit (and was actually the primary reason I switched to bspwm in the first place).
The reason I want Polybar is its statusbar icons. There is a patch for dwm, but I’ve never managed to apply it.
Looks like it’s time to try again, as bspwm is (sadly) really getting on my nerves now.
I had to install Windows as well because of gaming, but Linux is still the primary way I use my Desktop. ↩︎
Back when I bought the E470 I didn’t know any better. I should have gotten a different model, and one without a GPU because what do I need a GPU in a Laptop for? ↩︎
Another Semester of Java
https://wiredspace.de/microblog/another_semester_of_java/
Fri, 11 Mar 2022 16:02:03 +0100[email protected] (Thomas Böhler)https://wiredspace.de/microblog/another_semester_of_java/Just when I thought the semesters full of horrible Java lectures stopped, another semester gets thrown my way with yet more lectures specifically about Java. And I bet it’s gonna be Java 8 again, none of the newer releases!
I understand that Java is still around, but it’s debatable whether
it even should still be around (as much as it is)
this much time and energy needs to be put into learning specifically Java at a university
There are so many languages around that are not Java. And don’t forget languages with other paradigms, like functional programming languages (like Scala, for instance, which supports OOP, FP, AND runs on the JVM!). Learning different programming languages in academics, especially considering different paradigms, would make much more sense than still trying to push Java and OOP in 2022. Alternatives exist for a reason!
I’m just hoping there won’t be another mandatory project where Java is the only language that’s allowed… At least expand it to JVM-based :)
Thankfully I’ll have some downtime from Java with my new job using Rust!
RSS-Email
https://wiredspace.de/microblog/rss-email/
Wed, 02 Mar 2022 18:42:57 +0100[email protected] (Thomas Böhler)https://wiredspace.de/microblog/rss-email/I started working on a thing that, for now, I’ve called rss-email. It’s supposed to check for new RSS posts and send them to a specified E-Mail address.
It’s in the very early development stages, and I’m currently struggling a bit to find enough time for it.
I’m using http://r-36.net/scm/zs/ as a reference.
https://sr.ht/~witcher/rss-email
Introducing my microblog, comments on my RSS feeds
https://wiredspace.de/blog/introducing_my_microblog_comments_on_my_rss_feeds/
Wed, 02 Mar 2022 17:51:14 +0100[email protected] (Thomas Böhler)https://wiredspace.de/blog/introducing_my_microblog_comments_on_my_rss_feeds/Microblog I got a place for microblogging on my website now. You can find it here.
Twitter is almost deprecated for me, I guess. I just need to set up a way to simply create posts for my microblog now. We’ll see how hard that will be…
This is published on my usual blog as a way to introduce the microblogging section, in case you missed it.
There is also a new link in the menu which you can use.
You can expect short, Twitter-like posts there (hence the name “microblog”) concerning whatever I feel like talking about that isn’t enough for an actual blog post and probably has little to no research behind it. It’s mostly to put my thoughts somewhere that isn’t a random text file on my local drive so they can rot publicly on the internet instead of in a private Discord1 channel.
As far as commenting on my microblog posts goes, you should do the same as on every blog post: send me an E-Mail to one of the addresses listed at the bottom of the post.
I will try using my website instead of Twitter now, since Twitter sucks. But, no promises, maybe this is all too tedious for me (for now).
RSS feeds
You might have noticed the new “This page’s RSS feed” link in the menu. This gives you the RSS link to whatever page you’re currently on, including all subpages. If you take my main page’s RSS link, you’ll get updates on all sections of my website, including blog posts and now also microblog posts.
If you just want one of them, go to the desired section and get the link there. That feed will then only include that section and its subpages.
For more information, read the documentation on Hugo RSS feeds.
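For the curious, the per-page feed link can be generated from a page’s output formats in a Hugo template. A minimal sketch follows; the markup is illustrative, not my actual theme code:

```html
<!-- Link to the RSS output of whatever page is currently being rendered. -->
{{ with .OutputFormats.Get "rss" }}
  <a href="{{ .RelPermalink }}">This page’s RSS feed</a>
{{ end }}
```

Hugo exposes each page’s alternative output formats (RSS among them) via `.OutputFormats`, so the same partial works on every section and subpage.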
I’ll say it as often as I have to: Fuck Discord. Don’t use it. ↩︎
Building a corne keyboard (crkbd)
https://wiredspace.de/blog/building-a-corne-keyboard/
Wed, 12 Jan 2022 22:51:30 +0100[email protected] (Thomas Böhler)https://wiredspace.de/blog/building-a-corne-keyboard/Keyboard journey
Recently, I got sick of my Anne Pro 2 keyboard, which is a 60% keyboard I’ve owned since the end of 2020. Previously I was using the Logitech G810, an all around horrible keyboard (as expected from Logitech products).1
Logitech G810
The Logitech G810 was my first mechanical keyboard after a few cheap membrane ones I found lying around. It was a truly horrible introduction to mechanical keyboards, but I still enjoyed it in the beginning because it was new.
This keyboard has custom key switches that try, horribly, to emulate the sound and feel of Cherry MX Browns, and due to the custom switches, it has a custom mounting mechanism for the keycaps. The keycaps are mounted via 4 tiny plastic legs that break off very easily. They break off so easily that I wasn’t able to clean the keyboard as much as I wanted to, since the keycaps cost quite a bit of money. In fact, buying the “Ctrl” key along with a few other modifier keys would have cost me around 20€, and it just so happened that I had to replace exactly that key.
Alas, I never did, so I was stuck with a keyboard that had a Ctrl key missing for about 2 years, if not longer.
The feel of the keyboard was also very… off. It didn’t feel nice, it didn’t sound nice, but luckily I only noticed how bad it actually was after switching to good switches.
Anne Pro 2
The Anne Pro 2 is the second mechanical keyboard I’ve owned, and the first one with “proper” switches. After talking to a few friends about what type of switch I should get, preferences and everything considered, I decided to buy it with Gateron Browns, and I don’t regret my decision. All in all, the Anne Pro 2 is an okay keyboard, although it has a few big shortcomings I’ll touch on in a bit.
It was also my first 60% keyboard, having owned only 100% keyboards before. I was afraid that this change might be a little too much, seeing as literally 40% of my keyboard would be gone, but it was definitely the right choice. Gaining the desk space from removing this many keys is welcome and improved my position while gaming in particular, as my left hand now doesn’t have to be so far away from my body in order to reach the WASD keys.
Thanks to the 60% keyboard I was also introduced to layers, even if they are fairly primitive on the Anne Pro 2. This taught me that you don’t have to have all your keys that you might use occasionally on your keyboard at all times, but that it’s perfectly fine to put these on a different layer so that they’re still there should you need them.
Now for the problems this keyboard has, starting with the biggest one: double inputs. This keyboard is notorious for them, and they drove me crazy. Not as crazy as they drove a friend2, but they were still annoying. I don’t think I would mind them as much now since I’m not playing osu! anymore, but they would still be annoying.
The other big problem, although it didn’t impact me much since I rarely used it, is the wireless feature. Over wireless, the double inputs at least tripled in frequency and some inputs weren’t registered at all.
Finally, I’d had enough of the double inputs and decided to look for a new keyboard.
Corne keyboard (crkbd)
After a while of looking at other pre-built keyboards, none of which I really liked, I decided to buy a split keyboard instead of a single unit. At first I had the ZSA Moonlander in sight, but I ultimately decided it was not what I wanted.
This got me to finally look into building a keyboard myself, something I tried to do before I got my Anne Pro 2 but gave up on, since I wasn’t able to find anything I liked that didn’t cost $100 in shipping (literally). Stumbling upon the Corne keyboard, abbreviated as crkbd, I instantly knew that this was the keyboard I wanted.
The crkbd is actually just an open-source PCB, found here. After ordering this, either from a reputable keyboard site or having it produced by a PCB manufacturer, you also need:
MX hotswap sockets
key switches
keycaps
microcontrollers
Optionally, you can also get backlight and underglow LEDs3, as well as LCD panels.
There are a few configurations you can have, but ultimately the LEDs are, as far as I know, not interchangeable with other models, and neither are the LCDs.
My configuration ended up looking like this:
Black MX CRKBD PCB
Kailh Hotswap Sockets
SK6812 Mini-E Switch LEDs
SK6812 Mini Underglow LEDs4
Elite-C Microcontrollers
SSD1306 OLED Display
Corne Cherry Acrylic Plate Case (Clear)
ErgoDox DSA Blank Keycaps
Gateron Browns
Assembled, the keyboard looks like this:
After assembling it in November of 2021 and using it for a few months, I can comfortably say this is by far the best keyboard I’ve owned. The split nature of the keyboard was a little difficult to adjust to for me, taking about 2 weeks to get fully used to it, but it is very easy on the shoulders and on the wrists.
The PCB is designed to not be row staggered, like the normal keyboards we use, but instead column staggered, keeping the keys in straight columns because your fingers move up and down, not sideways.
Not that it really matters, but my typing speed increased by 20 WPM5, from 120 WPM to 140 WPM, just by switching to the corne.
One of the big changes this keyboard introduced is its insanely small size. You might have noticed that, compared to the 60% Anne Pro 2, the number row is now also gone. A few other keys have also vanished, which means you’ll be forced to use layers to access those keys.
As with switching to the Anne Pro 2, I was scared of this being too much, but again, it was the right decision. It’s designed to have every key on the keyboard be in reach of the fingers when on the home row so they only ever have to move 1 key over.
Another new addition is the thumb cluster. Even though the corne has a small thumb cluster of only 3 keys per half, it’s incredibly useful and feels intuitive after a very short time. You can put often used keys here, like Space and Shift, but also layer keys for easy access to another layer.
Closing
If you are thinking about buying a new keyboard, consider a split, column staggered keyboard, especially if you type a lot. We are using an old and outdated keyboard design, stuck in the times when typewriters were the only keyboards we knew. They couldn’t be column staggered, as the hammers would sit in front of each other and block each other from hitting the paper, and they couldn’t be split, ruining the hands of every typist using them.
As far as my journey in keyboards goes, I’ll probably look into learning the Colemak DH keyboard layout next, further leaving the deprecated ways of typewriters behind.
I’ve owned (and still own some) Logitech products in the past, to be precise: G810, G502, Z313, Z625, and G432. I passionately hate all of them. ↩︎
The cool lad from https://rac-city.netlify.app/ ↩︎
NOTE: you’ll have to buy and solder both the backlight and underglow LEDs in order to have RGB work. Buying just the backlight LEDs, for instance, will not work as the backlight and underglow LEDs are in series, meaning if one type is missing the circuit won’t be complete. ↩︎
At the time of writing the underglow LEDs are not soldered to the PCB because I messed up the order, so now I don’t have any LEDs working. This is fine, though, as I had guessed incorrectly that I would be looking more at the keys. Now that there are no labels on the keycaps anyways, the LEDs don’t matter anymore. ↩︎
“WPM” stands for “Words Per Minute” ↩︎
Server cleanup, sourcehut, and general change
https://wiredspace.de/blog/cleanup-sourcehut-general/
Fri, 31 Dec 2021 00:21:31 +0100[email protected] (Thomas Böhler)https://wiredspace.de/blog/cleanup-sourcehut-general/Previous state of the server
I’ll be honest: my server was a mess. I had quite a few things running: a PrivateBin instance, ZNC for IRC, searx, discord_covid19, a lazy instance of a mail server, and some other things. For instance, I actually had no idea how ZNC was running: as a systemd service, in the background, in docker?
So, eventually, I thought to myself: You have to clean this mess up sometime, preferably soon.
And so I did. About a week ago I took down PrivateBin (no one was using it and it didn’t seem useful to me anymore), ZNC (I’m barely ever on IRC anyways), and the mail server (big tech were blocking me for no reason anyways), among other things.
Honestly, it feels so refreshing. The server had started feeling cramped: there was no structure, no rules about how to set anything up, not even something as simple as “everything will run as a systemd service”, and no installation directories or per-instance users were set up either. It was all just a mess of “I want this, let’s do it quickly so it’s running”, thinking I’d clean up later. Alas, I never did.
With the server now feeling fresh again, essentially the only thing left running being my website, I’ll look into hosting different things again.
One of those things is soju, an IRC bouncer that’s supposed to be “user-friendly”, and that’s the impression I get, too. I’ll have to evaluate whether this is something I need, seeing as I barely ever use IRC (as stated above), and since soju is also running as a service on sourcehut for me to use instead.
Moving to sourcehut
Speaking of sourcehut, I’ve been moving my git repositories from the Microsoft-owned GitHub to there, mostly because I still don’t feel like Microsoft is doing anything for the good of… anyone, but also just because it’s cool. Sourcehut feels fresh (maybe because it’s still in Alpha), minimalistic and, most importantly, avoids JavaScript. It feels fast and responsive, unlike GitHub.
Even better, sourcehut is completely open-source. This includes everything on the side, like the git instance as well as things like the build system.
But, one of the things I like most about sourcehut’s git philosophy is the choice by Drew DeVault to focus on E-Mail as a system for collaboration: There’s no need for accounts, you just submit a patch to an E-Mail address and you’ve contributed to your favourite project. No context switching for a pull request, no JavaScript to slow you down, just the good old E-Mail client of your choice and a little bit of what most people nowadays would almost certainly call git magic (read: patches in mbox format sent via E-Mail and applied via git am).
I like sourcehut so much that, even though I am a student and sourcehut is still in the free Alpha stage where no payment is required, I decided to pay for it. This is the kind of stuff we need to support.
This also opened up the possibility to use their build service. So I bit the bullet and decided to set up build instructions for my website, found here.
Never have I used automated builds for a project before and I gotta say, I’m very happy with the outcome. I just need to push my changes to the git repository, wait a few seconds, and my changes can be seen on my website. Just as I will with this post: After I’m done writing, I’ll simply push this to my repository and wait for the magic to happen.
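For reference, a build manifest for a Hugo site can be very small. The following is a hypothetical .build.yml; the image, repository URL, and task details are assumptions for illustration, not my actual manifest:

```yaml
image: alpine/latest
packages:
  - hugo
sources:
  - https://git.sr.ht/~someuser/website   # placeholder repository
tasks:
  - build: |
      cd website
      hugo --minify
  # A real manifest would add a deploy task, e.g. uploading public/
  # to the web server or to pages.
```

Every push then triggers a build on a fresh VM of the chosen image, which is exactly the push-and-wait workflow described above.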
Changes to the website Something else I’m planning on doing is changing the website a little… again. I’m not satisfied with the wiki section at all. This would need a rework and right now I’m planning to merge blog posts and wiki entries into one category as my wiki entries could qualify as blog posts. To still have the small wiki entries be their own thing I’ll introduce tags for all my posts and only list actual blog entries in my blog, with the ability to find wiki entries with tags. Or something like that… We’ll see.
As I’m in the process of changing my whole E-Mail setup to have a wiredspace.de domain, for the ability to easily switch providers in the future without much fuss, expect to find a new E-Mail address to contact me at, along with a new PGP key (duh).
Shoutout to a good friend You know who you are. To anyone else, feel free to take a peek at a friend’s posts. You can find his blog at https://rac-city.netlify.app/. He has been a huge motivation for a lot of stuff for me, including blog posts like this one. Thanks man, here’s to another year of being nerds.
Arch and thoughts on distro hopping
https://wiredspace.de/blog/arch_and_distro_hopping/
Sat, 02 Oct 2021 12:30:00 +0000[email protected] (Thomas Böhler)https://wiredspace.de/blog/arch_and_distro_hopping/“Arch is a hard distro to {install,maintain,use}” Elitist and meme culture There has been a sentence on the internet for a long time now that I very much hate to hear or see. It’s the neckbeard’s introduction to a Linux topic and the nerdy zoomer’s favourite meme.
If you haven’t figured it out from that description, it’s the computer science nerd’s favourite meme expression: “I use arch btw”. It is a sentence that, if used unironically, is supposed to convey the superior intellect and skill of the person running that specific Linux distro over other Linux users, and, if used ironically, is supposed to be the latest hilarious meme in the computer science community that has been around for decades.
Why is this sentence so widespread, known by everyone that has any interest in Linux and has used the internet for 5 minutes? Because there exists this assumption that Arch Linux must be a painstakingly difficult distro to install, use and maintain.
But is that correct?
It’s true that Arch can be intimidating to install, with all the command line stuff going on and no prior knowledge of the process. But is it really that bad?
First time? Me too I remember my first time installing Arch like it was yesterday: At first I was sweating profusely, scared of typing in rm -rf / by accident while I had the chroot active, but after a few minutes I was breathing heavily while clacking on my keyboard at 200wpm, just like the Linux nerd I always wanted to be. It turned out to not be such a difficult thing after all, having read the installation guide on the wonderful Arch Wiki. It’s honestly a fairly short read, and if you have any idea about anything on Linux you won’t be having that many issues. If you do run into any issues, the private search engine of your choice, or even the Arch Wiki itself, will be your best friend. Linux isn’t the undocumented, confusing mess of an operating system that Windows users make it out to be, not by any stretch. If you need help on something, you will find information on that (and please do make sure to look up on a topic instead of creating a new Reddit post right away, the information is most likely out there), especially when it’s Arch related.
After installing Arch for the first time, I decided to create my own “guide” (which as of now is a bit outdated), which is more like a series of commands that I need to run in order to install Arch. Now every install is just 10 minutes of thinking about the commands I need to type in and rebooting.
Usage After the “horrible” installing experience, which is actually quite beautiful since you can configure the system the way you want and are not limited by the graphical installer that other distros come with, it’s time for actually using the distro. No, you won’t have a graphical interface to use at first, but that’s quickly fixed by pulling and installing the window manager/desktop environment of your choice with the amazing pacman and rebooting.
To be completely honest, I don’t understand why so many people call Arch a “difficult distro to use”. It’s used like every other Linux distro that exists on the internet after you first installed it. Sure, the way you install packages changes if you were used to a graphical front end of a package manager, but I’m sure even those exist for pacman if you prefer a GUI over the terminal. After installing something you’ll just use it like you would on Ubuntu, Mint, PopOS, Fedora, or any other distro. The hard part is if you want to run something like osu! that isn’t natively available on Linux and needs a lot of configuring to ensure that your drivers are working and the audio latency isn’t unbearable. Everything else is just as easy as you know it.
The “horrible” experience of maintaining Arch Now, what about maintaining Arch? Surely that must hold some weight… right?
No, not if you ask me. I’ve been running Arch on and off for more than 5 years and not once did I actually have to fix something that was hard to do. If a package in the repos breaks anything then Arch usually tells you on their front page, and you can even automate the process of checking if anything is up by installing informant, which blocks the installation of packages if there is news about any of them on the Arch website. The fixes are then displayed right in the command line you were updating your system from anyway.
“But what about the AUR?”, I hear you ask. I’m not aware of a tool like informant that exists for the AUR and yes, sometimes things in the AUR mess up, but this is the same as with any other distro. Think of it like this: the AUR is a collection of packages that you would have to install manually if you weren’t on Arch. An AUR helper makes this process even easier by allowing you to use the AUR like you use the official repositories with pacman. If something screws up you either look at the GitHub issues for that project or the AUR page. If there’s nothing there, that sucks, but you’re now on your own, just like you would be on every other distro.
Conclusion My conclusion is: No, Arch is not hard to maintain. It’s debatable, but I would even go as far as to say it’s easier than on other distros, due to the news on the Arch website for packages in the repos that might break something, the Arch Wiki, which lists a lot of known issues for packages/programs on their respective pages, and the AUR pages of packages, where people post issues and the package maintainer can post solutions.
Distro hopping: An epidemic in the Linux bubble The time has come for the second topic of this post: distro hopping. For a while now there has been a culture around constantly choosing different distros and trying them out, called “distro hopping”. People use a distribution of Linux for a while, think they don’t like it and jump to another one, daily driving that one for some time, until changing again, and so the cycle continues.
I have never understood the point of this and always ask myself: “What benefit can you gain from distro hopping?”, but I always end up with the conclusion that there is no benefit in doing this. Here’s why.
What’s the difference between distros? There are the fundamental differences: the package manager and the init system. If you are not happy with either then yes, it makes sense to hop to another distro. But there are only so many combinations of these 2 fundamental things to try, and nowadays the init system, being systemd, mostly stays the same across distros. What changes is the package manager, but there is a finite number of those available and at some point there’s no way you haven’t tried every single one.
The other difference a distro can make is what packages are installed by default. I get leaving a distro because it has packages that you don’t want, but at some point you’re bound to wonder if you shouldn’t just install a minimal distro like Arch to just choose what packages you want to have installed when you first use it. Don’t like a certain desktop environment? That’s fine, just don’t install it. Want a different one than the one you are currently using? Sure, just install it and you’re fine. No need to go distro hopping because of a desktop environment.
Lastly, it’s also just very annoying to hop distros. This is obviously a subjective point I’m making here. I can only imagine going from Ubuntu to Arch to Fedora, the package manager changing each time and having to consult different wikis (the most horrible thing, as I’ve found that the Arch Wiki is essentially the only usable one, and it’s really good, too).
If there are points I missed, please enlighten me on what those are so I can finally understand this culture of distro hopping. As of now the reasoning behind this is beyond me and I think it’s a pure waste of your time.
Writing a Discord bot in Go
https://wiredspace.de/blog/discord_go_bot/
Tue, 08 Jun 2021 00:00:00 +0000[email protected] (Thomas Böhler)https://wiredspace.de/blog/discord_go_bot/ This post was written in 2021. I don't condone the use of Discord for any purpose whatsoever. Use open platforms that don't censor their user base at will.
The article remains online for archival purposes but should not be followed for ethical reasons. Free and open source alternatives that respect the user include, but are not limited to:
- Matrix
- Signal
- IRC
- XMPP

About a month ago I decided to get into Go a bit. It’s always kind of been an interesting programming language since it’s modern, simple and has quite powerful multi-threading capabilities, most of which I have yet to use. I was asked if I could program a Discord bot that would print the weekly Covid-19 incidence numbers in Germany and I thought that’s a great idea, so here we are.
You can find the source code for this bot here. I thought I’d share it since I put in a bit of work recently.
Prerequisites For this Discord bot I’ve used the Discord library discordgo, its “extension” dgc to structure my code better and, most important of all, the REST API rki-covid-api.
Since the API can be self-hosted easily with Docker, I decided to do exactly that. You can find this over at https://rkiapi.wiredspace.de/.
Writing the code The API The API is fairly easy to use. So far the only thing I’ve implemented is the “districts” endpoint, which is well structured.
The response is structured under a data field. Each district has its own AGS (“Amtlicher Gemeindeschlüssel”, essentially an ID for each district), which is how it’s identified. Besides that, the districts contain information about their name, population, weekIncidence, deaths, etc.
The one I’ll be focusing on is the weekIncidence field since this was what I originally built my bot around.
I ran into a bit of trouble deserializing the JSON you get from the API since I wasn’t familiar with the Go way of doing this. The problem I had was that the fields of the data response aren’t static; they are the AGS values returned by the API.
As it turns out this is easily handled. I declared the response I get as the following:
type DistrictResponseData struct {
	Data map[string]DistrictResponse `json:"data"`
	Meta []MetaResponse              `json:"meta"`
}

The Data field contains the districts, which are identified by the AGS. Simply mapping string to the struct for the district did the job.
Deserializing the object turned out to be a bit weird, but it’s fine overall:
var drd DistrictResponseData
// initialize a (hopefully) big enough map
// api contains about 410 districts
drd.Data = make(map[string]DistrictResponse, 410)

I initialize a struct for the response; I couldn’t call json.Unmarshal directly on that fresh struct, I needed to make the Data map first. This is the only way I’ve managed to get it working, maybe you can find another one that might be more elegant. This works though so I won’t complain.
After this I just query the API for a response and call err = json.Unmarshal(responseData, &drd) on the response body. This fills the drd variable with all the district data.
That’s all you should need to know about the API.
The Discord libraries discordgo The discordgo library is fairly easy to use. As with any other Go package you can find the documentation on https://pkg.go.dev/github.com/bwmarrin/discordgo.
To use this library you create a discordgo.Session that will handle all the interaction with the Discord servers.
For basic usage on this library I recommend having a look at the examples from their GitHub repo. They teach the basics well enough for use with the other Discord library I’m using.
dgc dgc is an extension of the discordgo library. It uses that one to offer more functionality and better usability, as I’ll show you in this section.
As usual, you can find the documentation on https://pkg.go.dev/github.com/Lukaesebrot/dgc.
With discordgo you need to register a handler and handle the incoming messages yourself. This includes argument parsing.
Obviously this gets very boring really quickly, so I started using dgc, which lets you define command handlers that get called for specific commands, and for which you can even set up aliases.
For basic usage, again, I recommend you to look at their examples. The basic.go example should be all you need for now.
Initializing this library is done via the dgc.Create() function. It takes a dgc.Router as an argument, which is initialized with the Prefixes, among other things.
Registering commands to this router is done via router.RegisterCmd(), which takes a dgc.Command as an argument. With a Command you can specify Name, Description, Usage, a Handler and more. The Handler will be a function with a Signature of func(*Ctx), meaning that it takes a context through which you will be able to send messages.
dgc provides a default help handler which you can register via router.RegisterDefaultHelpCommand(s, nil), where s is the discordgo.Session.
This help handler needs the reaction intent since the user will be able to flip through “pages” of the help output on Discord, which is done via reactions.
The intents I assigned the bot are the following:
discord.Identify.Intents = discordgo.IntentsGuildMessages | discordgo.IntentsGuildMessageReactions

This lets you reply to incoming messages and react to reactions.
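Intents are plain bit flags, so combining them is a bitwise OR. A stdlib-only illustration with stand-in constants (the actual values are defined by discordgo and Discord's gateway, not by this sketch):

```go
package main

import "fmt"

// Stand-in intent flags; discordgo defines its own constants for these.
const (
	IntentGuildMessages         = 1 << 9
	IntentGuildMessageReactions = 1 << 10
)

func main() {
	// Combine intents with a bitwise OR, just like in the line above.
	intents := IntentGuildMessages | IntentGuildMessageReactions
	// Check membership with a bitwise AND.
	fmt.Println(intents&IntentGuildMessages != 0)
	fmt.Println(intents&IntentGuildMessageReactions != 0)
}
```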
Sending messages is really easy. When one of the command handlers is being called they will have the Ctx available as a parameter. This Ctx presents you with 3 methods:
RespondText(string)
RespondEmbed(*discordgo.MessageEmbed)
RespondEmbedText(string, *discordgo.MessageEmbed)

These are fairly self-explanatory.
Creating an embedded message is pretty simple, too. You just create a discordgo.MessageEmbed struct and fill out its members. Not all members have to be assigned something:
embed := discordgo.MessageEmbed{
	Title:       "Removed districts",
	Timestamp:   time.Now().Format(time.RFC3339),
	Description: strings.Join(names, ", "),
}

This is an excerpt from my code. It defines a Title, a Timestamp and a Description for the embedded message. Note that the Timestamp needs to be in RFC3339 format. If that’s not the case you will get an error when sending the embed.
Sending it is as easy as doing ctx.RespondEmbed(&embed).
Sending messages can fail with an error, so you should check for it and log it somewhere.
Blog pagination support in blogc
https://wiredspace.de/blog/blogc_pagination/
Thu, 18 Feb 2021 00:00:00 +0000[email protected] (Thomas Böhler)https://wiredspace.de/blog/blogc_pagination/After taking quite a break on working on my website I decided to finally add pagination support for the blog on my website, which you are currently reading.
Since I set up my website with blogc it just made sense to keep using it, and even though I had quite a hard time figuring stuff out again, as the reference documentation is all there is to work with, I pulled it off.
As it is now, the blog shows 1 blog entry per page, which I might change in the future once I modify the CSS a bit to make distinguishing between posts easier. Currently the only way to distinguish between them is the newly added header for every post, which is in orange, the standard link color.
From anywhere on the site you can get to the blog by clicking the blog entry at the top of my page as usual, which will direct you to the newest entry of the blog. At the bottom of each blog site you can find some navigation, allowing you to navigate to the next and previous blog.
At the top of each entry you can see the title of the entry, which is a permanent link to the entry it describes. This is needed since you won’t be able to point to the same blog entry once new ones are uploaded or old ones are deleted. With the permanent link you can always access the blog entry as long as it’s still on the server.
Further things to implement are:
- an index of the blog showing each blog entry
- jumping to the first/last entry
- an RSS feed showing the date of publication for a blog entry
- adding some sort of box to each entry
Setting up mbsync
https://wiredspace.de/blog/mbsync/
Tue, 22 Dec 2020 22:24:00 +0100[email protected] (Thomas Böhler)https://wiredspace.de/blog/mbsync/Introduction to mbsync mbsync is an alternative to offlineimap. I recently decided to switch to mbsync because offlineimap’s development stopped and I started having problems with SSL/TLS that I wasn’t about to fix.
Setting up mbsync is easy but tedious, so I’ll show you what my setup looks like so you have a simpler time than I did.
Setting up mbsync Configuration file mbsync’s configuration file is located at ~/.mbsyncrc, but you can specify a different location via the -c flag when calling mbsync.
The configuration file contains every account to be synced. As far as I know there is no built-in way to have different files for different accounts.
I will share my configuration file here:
IMAPAccount {account_name}
Host {servers_hostname}
User {username}
PassCmd "gpg --no-tty --for-your-eyes-only -dq {location_of_encrypted_password}"
SSLVersion TLSv1.2

IMAPStore {account_name}-remote
Account {account_name}

MaildirStore {account_name}-local
Path ~/mail/{account_name}/
Inbox ~/mail/{account_name}/INBOX
SubFolders Verbatim

Channel {account_name}
Far :{account_name}-remote:
Near :{account_name}-local:
Patterns *
Create Both
Expunge Both
SyncState *

Replace {account_name} with an identifier of your choice. You should be able to figure out what hostname and username your E-Mail provider requires yourself.
The PassCmd option in this file makes it possible to not provide your password in clear text in the configuration file but get the password via a command; I get it via gpg. For more information how to set this up have a look at my wiki entry for offlineimap I wrote a while ago, specifically under the section gpg encrypted password file.
In the MaildirStore section you configure mbsync where to put the E-Mails it downloads locally. I decided to put it under ~/mail/{account_name}, but feel free to put it somewhere else.
Specifying SubFolders Verbatim tells mbsync how the paths to your mail folders should look; you shouldn’t need to change this, as it works, at least with neomutt.
The Channel section binds the remote and local stores together. mbsync used to use Master and Slave instead of Far and Near, but this is deprecated and it will notify you of this, should you use the old naming.
For more information on the configuration file make sure to read its man page; it lays out the structure of the config file, which should help with any confusion.
Systemd timer I decided to set up automatic fetching of mails via a systemd timer that calls a unit. This is the part that gave me a bit of trouble since the timer just wouldn’t run, but I’ve got it figured out now.
I’ll share my unit and timer file here.
It’s probably best to use user unit and timer files, so this is what I did. These can be found in ~/.config/systemd/user/.
Unit
[Unit]
Description=Refresh emails via mbsync
AssertPathExists=%h/.config/neomutt
AssertPathExists=%h/mail/
AssertPathExists=%h/.mbsyncrc
Wants=mbsync.timer

[Service]
Environment=XAUTHORITY=%h/.Xauthority
Environment=DISPLAY=:0
Type=oneshot
ExecStart=/usr/bin/mbsync -a
TimeoutSec=120

[Install]
WantedBy=default.target

This unit file sets a description and checks that the required files/folders exist, namely neomutt’s config folder, the folder that stores mail and the mbsync configuration file.
In the Service section I define environment variables needed for gpg to be able to display the pinentry dialogue to decrypt the passwords for the IMAP accounts. Without these variables gpg is unable to show the pinentry dialogue and will silently fail. Pay attention that you use a GUI pinentry, as one displayed on the terminal obviously won’t show up.
Setting the unit’s type to oneshot means the start-up job blocks until the command finishes executing. It also means systemd will report the unit as “activating” while it is running.
The unit starts mbsync with the -a flag, meaning it should sync all accounts listed in the config file. If you want a different behaviour, list the accounts you want to sync individually, using the same names you used in the configuration file, {account_name}.
Now, the timeout is important. For some reason I don’t understand even now, mbsync would freeze and never finish, which is the reason the timer failed to activate it again. I set the timeout to 120 seconds, or 2 minutes, because, for me and my accounts, this seems like a reasonable time to fetch mail in. You might need to change this, also depending on how often the timer calls the unit.
Timer
[Unit]
Description=Run mbsync to refresh mails every 5 minutes
Requires=mbsync.service

[Timer]
OnStartupSec=1m
OnUnitActiveSec=5m
Unit=mbsync.service

[Install]
WantedBy=timers.target

The timer file is similar to the unit file but has a Timer instead of a Service section.
In the Timer section I specified that the timer should call the unit file a minute after the user logged on, and then every 5 minutes. I give myself the minute to let the computer boot, get a proper network connection etc.
The Unit option tells systemd which unit the timer activates; its value should be the name of the unit file you saved.
The Install section defines only the WantedBy option with a value of timers.target, specifying, again, that this is a timer.
After setting up the unit and timer you should be able to start and enable the timer via systemctl --user daemon-reload, to reload the changes on disk (you will need to call this every time you change your unit and timer), and systemctl --user enable --now mbsync.timer.
You can verify the timer is running by calling systemctl --user list-timers, where your timer should be listed along with information on when it will next fire, when it last fired, etc.
Static Site Generation with blogc
https://wiredspace.de/blog/blogc/
Mon, 21 Dec 2020 00:00:00 +0000[email protected] (Thomas Böhler)https://wiredspace.de/blog/blogc/Introduction Up until now I’ve been working on my sites in pure HTML; the only tool I used had been ssg to convert the little “wiki” I put up on GitHub to HTML and use it on my website.
Creating an HTML file every time I wanted to post something on my website is not a viable workflow, though, which is why I was looking for a way to make my life easier. I decided to ask people on the webring thread on lainchan for advice on how to set up a proper workflow for web development, as I am completely new to this.
The nice guy from concealed.world helped me get some ideas, which ultimately led me to blogc, the static site generator which I’m going to talk about here.
Why I chose blogc Blogc is simple yet powerful. On their website they state it should be used with a tool like make. It’s not supposed to be a blogging engine but a compiler.
I like this approach, as I am a fan of the C programming language (blogc itself is written in C). To be perfectly honest, I don’t have all that much knowledge about make, but I want to learn, and this is part of the reason I’m making this website.
Starting out To start, I downloaded the test repo they have up on GitHub (https://github.com/blogc/blogc-example) so that I don’t have to figure out everything by myself, as this could also be seen as a piece of documentation.
From there, I started changing my own template.html I had to function as a blogc template. If you want to follow this a bit, have a look at my repo and compare it with the example repo blogc provides.
Template So far the template is really simple. As I said, I took the template.html I had and changed it to resemble a blogc template. The title is now set per page, copied from the template from the example repository. The links in the navbar are changed so it’s easier to change them, should I ever get another domain.
The main section is now filled with template things. I won’t go into detail, because the man page blogc-template goes into enough detail. It’s essentially just pasting the content of the source file in between the main tags.
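For illustration, a skeleton of what the relevant part of such a template could look like; this is a from-memory sketch, so check the blogc-template man page for the exact block names and syntax:

```html
<main>
  {% block entry %}
  <h2>{{ TITLE }}</h2>
  {{ CONTENT }}
  {% endblock %}
</main>
```

The source file's content replaces {{ CONTENT }} between the main tags, exactly as described above.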
Makefile Writing the Makefile was a difficult thing for me. Every time I have to write one I need to skim through the documentation again to get stuff done.
Trying to imagine how my site should function with software I’ve never used before is also hard to do.
As I said before, you can find everything, including the Makefile, on my GitHub page.
I also took some inspiration for my Makefile from the example repo. A lot of the variables are taken straight from their Makefile, with values changed, obviously. I removed everything I didn’t need, which included some variables and rules, and I was left with only one rule which I changed to my liking and used as a template for the rest. If you’re not familiar with Makefiles, they’re fairly easy to understand once you’ve figured out the syntax, which is admittedly a bit weird.
For now I list the files I use individually, so I need to add each new blog and wiki entry I create to the list manually. Depending on where I use the files I prepend the directory and append the file ending respectively.
Blogc has 2 modes for compiling:
- the standard entry mode
- listing mode

In entry mode, your entry and the variables defined in the source file are available as usual. In listing mode blogc takes every source file passed as an argument; listing blocks in your template are executed for every source file, and everything is compiled into one output file.
I compiled my homepage, about, the privacy policy, etc. in entry mode and, for now, will be compiling my blog posts in listing mode. The reasoning behind this is that I’m still trying to figure out how to generate a listing without including the full content, but I will be looking into this another time.
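As a rough sketch of what such rules can look like in a Makefile; the paths and variable names here are made up, and blogc's exact flags are documented in its man page:

```make
# Hypothetical excerpt: one rule per entry-mode page, one listing rule.
OUTPUT_DIR = output

# Entry mode: one source file compiled against the template.
$(OUTPUT_DIR)/about/index.html: content/about.txt templates/main.html
	mkdir -p $(dir $@)
	blogc -t templates/main.html -o $@ content/about.txt

# Listing mode (-l): all posts passed at once, compiled into one output file.
$(OUTPUT_DIR)/blog/index.html: $(wildcard content/blog/*.txt) templates/main.html
	mkdir -p $(dir $@)
	blogc -l -t templates/main.html -o $@ $(wildcard content/blog/*.txt)
```

This mirrors the compiler philosophy from above: make tracks the dependencies, blogc only translates sources into HTML.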
That’s about everything I can talk about. The content as well as the design on this site is largely the same as before, with just a few changes that are probably not even noticeable.
Setting up msmtp
https://wiredspace.de/blog/msmtp/
Thu, 21 May 2020 15:40:00 +0100[email protected] (Thomas Böhler)https://wiredspace.de/blog/msmtp/msmtp msmtp is a command-line SMTP client that reads the message to send from stdin.
configuration You need to first install msmtp with the package manager of your choice.
After installing, create a config file. msmtp looks for it in $XDG_CONFIG_HOME/msmtp. The config file simply needs to be called config.
Here’s a sample configuration file which you can edit:
defaults
auth on
tls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile /home/{user}/.config/msmtp/msmtp.log

account {account1}
host {smtp hostname}
port 465
from {from field}
user {smtp username}
passwordeval "gpg --quiet --for-your-eyes-only --no-tty --decrypt $XDG_CONFIG_HOME/msmtp/.msmtp-{account1}.gpg"
# only if your smtp server doesn't support STARTTLS
#tls_starttls off

# vim:filetype=msmtp

The last line exists to make vim highlight the configuration file with the right syntax. Feel free to remove it if you want.
Make sure you replace all occurrences of:
{user} - the username you are logged in with on your computer
{account1} - a string you have to reference later to select that profile
{smtp hostname} - the hostname of your smtp server
{smtp username} - the username you log in with on the given smtp server
{from field} - the address to put in the From header

usage Using msmtp is very simple. Supply the full message via stdin (headers, a blank line, then the body), set the account to use via the -a flag and give the recipient address at the very end. The subject is set with a Subject: header inside the message.
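A sketch of that, assuming the {account1} profile from the config above exists; the actual msmtp call is commented out since it needs a working account:

```shell
# Compose a minimal message: headers first, then a blank line, then the body.
msg="$(printf 'Subject: Hello from msmtp\n\nThis is the message body.\n')"
printf '%s\n' "$msg"

# Pipe it to msmtp, selecting the profile via -a and giving the recipient
# address as the final argument (needs a configured account to actually run):
#   printf '%s\n' "$msg" | msmtp -a account1 recipient@example.com
```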
My Arch Linux Setup
https://wiredspace.de/blog/arch-config/
Tue, 19 May 2020 09:00:00 +0100[email protected] (Thomas Böhler)https://wiredspace.de/blog/arch-config/This describes how to set up Arch Linux similarly to my setup. This is rarely up-to-date, so be mindful when you type in your commands on the command line, and be sure to check the Arch Linux installation guide or the Arch Wiki in general if you’re unsure about something.
Mirror List Location: /etc/pacman.d/mirrorlist
Select the mirror(s) you want to use for the package manager pacman. Delete every other entry or mark it as a comment to ensure that pacman is using the right mirror.
You can also install vim (if not already installed on the live disc) with pacman by typing pacman -Sy vim for more convenient text editing.
UEFI or BIOS? To check if your system is using EFI, run the command: ls /sys/firmware/efi
If the directory exists and contains files then your system is using EFI. If that is not the case you are probably using BIOS and this cheatsheet will probably not work to install Arch Linux on your machine.
Partitioning Attention: This process will wipe your entire harddrive!
Listing current drives and partitions To check which drives are installed on the system and what partitions there are these commands might be helpful:
lsblk
cat /proc/partitions
gdisk -l /dev/sdx

Partitioning your drives If your system is using EFI, use gdisk for better compatibility.
Make sure to memorize the partition numbers and which partition is for what purpose.
To partition your drives (your standard harddrive should be /dev/sda), utilize the command gdisk.
gdisk /dev/sdx where x is the drive letter (e.g. “a”)
Clear the parition table To clear the partition table type o in gdisk
Command (? for help): o
Create a new partition Create a new partition by typing n
Command (? for help): n
You will be asked what number your partition should have. This will be displayed as /dev/sdxY where x is the drive letter and Y is the partition number.
After this you will be asked about the first and the last sector of the partition. For the first sector, choose the default value.
For the last sector you can either specify an absolute sector number (e.g. 4196) or how big the partition should be. The latter is done by typing a + followed by the size. For gdisk to know whether you mean megabytes, gigabytes or something else, specify a letter at the end of the line (e.g. M for megabytes or G for gigabytes).
gdisk now asks for the type of your partition. This is where you want to be specific.
If you do not have an EFI partition (e.g. when you cleared your partition table) you have to create one. If your drive already has an EFI partition then you can skip this step.
The EFI partition should be 512 megabytes in size and has a hexcode of EF00.
You need at least one other partition for the filesystem to go.
- Create a new partition by typing n in the prompt
- Select the size of your partition (depending on whether you want to add a separate /home or swap partition)
- The standard Linux filesystem has a hexcode of 8300

If you want to add a separate /home partition to your drive, do the same as with the / filesystem.
To create a swap partition on your drive do the following:
- Create a new partition by typing n in the prompt
- Select the size of your partition (a swap partition should be at least as big as your RAM; the recommended size is 2x your RAM)
- The hexcode of swap is 8200

Finally, write the changes to the disk with w.
Make sure to check the drive with gdisk -l /dev/sdx!
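For reference, the whole interactive session can be written down as an answer sequence. The sketch below only prints the keystrokes for a hypothetical three-partition layout (512M EFI, 50G root, swap on the rest); the sizes and layout are assumptions, and this is meant for review, not for piping into gdisk on a disk you care about.

```shell
#!/bin/sh
# Prints the gdisk keystrokes for an assumed layout: 512M EFI (EF00),
# a 50G root partition (8300), and swap filling the rest (8200).
# Blank lines accept gdisk's default answer. Review only; do NOT pipe
# this into gdisk without checking it against your own layout.
gdisk_answers() {
  cat <<'EOF'
o
n
1

+512M
ef00
n
2

+50G
8300
n
3


8200
w
EOF
}
gdisk_answers
```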
Formatting the partitions
The mkfs command allows you to format a partition with a desired filesystem.
The EFI Partition uses vfat. To format it with vfat use mkfs.vfat /dev/sdxY.
The Linux filesystem uses ext4, so your / and /home partitions should be formatted with mkfs.ext4 /dev/sdxY.
To initialize your swap partition, use the mkswap /dev/sdxY command.
To enable your swap Partition, use the swapon /dev/sdxY command. To disable a swap Partition, use the swapoff /dev/sdxY command.
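The "2x your RAM" recommendation from the partitioning step is easy to compute from /proc/meminfo. A small sketch (the MemTotal value below is a made-up example; on a real system read it with grep MemTotal /proc/meminfo):

```shell
#!/bin/sh
# Sketch: recommended swap size (2x RAM) in MiB, from the MemTotal
# value (in kB) as found in /proc/meminfo.
recommended_swap_mib() {
  # $1: MemTotal in kB
  echo $(( $1 * 2 / 1024 ))
}
recommended_swap_mib 8000000   # hypothetical ~8 GB machine -> 15625
```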
Installing Arch Linux
Mounting
To mount a partition, use the mount /dev/sdxY /mount/point command.
To unmount a partition, use the umount /dev/sdxY command.
/
For convenience we mount the partition where the root filesystem should go to /mnt.
Make sure you select the right partition!
mount /dev/sdxY /mnt
/home
If you have a separate partition for your /home directories, create a directory in the already mounted root filesystem. This directory NEEDS to be called home.
mkdir /mnt/home
After the directory is created, mount the partition where your /home directories should go to the newly created folder.
mount /dev/sdxY /mnt/home
/boot
The EFI Partition goes here.
First create a directory in the mounted root filesystem called boot to mount the EFI Partition.
mkdir /mnt/boot
When the directory is created, mount the EFI Partition to the newly created folder.
mount /dev/sdxY /mnt/boot
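Putting the mounting steps together, the order matters: / first, then the directories inside it. A dry-run sketch with assumed partition numbers (sda1 = EFI, sda2 = /, sda3 = /home); it only prints the commands, so you can review them before running anything:

```shell
#!/bin/sh
# Dry run of the mount sequence; partition numbers are assumptions.
run() { echo "$@"; }   # swap `echo "$@"` for `"$@"` to actually execute
run mount /dev/sda2 /mnt
run mkdir /mnt/home
run mount /dev/sda3 /mnt/home
run mkdir /mnt/boot
run mount /dev/sda1 /mnt/boot
```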
pacstrap
To install the base system for Arch Linux, we use a tool called pacstrap, which is included on the live ISO.
Assuming the partition where your root filesystem should go is mounted at /mnt, do the following:
pacstrap /mnt base linux linux-firmware
This will install all necessary files to that partition.
systemd-boot
bootctl is part of the systemd suite.
To configure bootctl we have to enter the newly installed Arch Linux system. This is done via arch-chroot.
arch-chroot /mnt
Once you are in your Arch system, use the command bootctl install to install systemd-boot. It should detect everything it needs by itself.
When this is done, check your /boot directory for the necessary files.
If these are present, move on to the next step.
Now you have to configure the loader.conf found in /boot/loader.
Note: The Arch Linux installation you have entered does not contain vim as an editor. You have to install it manually using pacman -S vim.
First, empty loader.conf. For now you can copy this configuration file:
default arch
timeout 4
editor 0
Next you need to configure arch.conf, located in /boot/loader/entries.
Again, for now copy this configuration file:
title Arch Linux
linux /vmlinuz-linux
initrd /intel-ucode.img
initrd /initramfs-linux.img
options root=PARTUUID="YOUR DRIVE UUID" rw
The first initrd line is optional but should be used with an Intel CPU. It ensures your CPU microcode gets updated before the kernel starts.
You can install the microcode package by issuing the command sudo pacman -S intel-ucode.
To figure out what PARTUUID your drive has, type in this command:
blkid -s PARTUUID -o value /dev/sdxY
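To avoid typos, the options line can be assembled in the shell. The PARTUUID below is a made-up placeholder; on a real system substitute the output of the blkid command above:

```shell
#!/bin/sh
# Sketch: build the arch.conf options line from a PARTUUID.
# Real usage would be: partuuid=$(blkid -s PARTUUID -o value /dev/sdxY)
partuuid="1a2b3c4d-0002-0003-0004-000000000005"   # placeholder value
options_line="options root=PARTUUID=${partuuid} rw"
echo "$options_line"
```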
Configuring Arch Linux
fstab
fstab allows your PC to mount partitions on boot.
Generate an fstab file (run this outside the arch-chroot):
genfstab -U /mnt >> /mnt/etc/fstab
For fstab configuration, see fstab.
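For orientation, an fstab entry has six whitespace-separated fields: device, mount point, filesystem type, mount options, dump flag and fsck order. A sketch that prints one such line (the UUID is made up):

```shell
#!/bin/sh
# Sketch: print a single fstab entry; all values are examples.
print_fstab_entry() {
  # $1 device, $2 mount point, $3 fstype, $4 options, $5 dump, $6 fsck order
  printf '%s\t%s\t%s\t%s\t%s\t%s\n' "$@"
}
print_fstab_entry "UUID=deadbeef-dead-beef-dead-beefdeadbeef" / ext4 rw,relatime 0 1
```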
Time zone
Set the time zone:
ln -sf /usr/share/zoneinfo/Region/City /etc/localtime
Run hwclock to generate /etc/adjtime:
hwclock --systohc
This command assumes the hardware clock to be set to UTC.
Locale
Uncomment en_US.UTF-8 UTF-8 and other needed localizations in /etc/locale.gen, and generate them with:
locale-gen
Set the LANG variable in /etc/locale.conf accordingly, for example:
LANG=en_US.UTF-8
If you set the keyboard layout, make the changes persistent in /etc/vconsole.conf (this does not keep the keyboard layout for WMs or DEs):
KEYMAP=de-latin1
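The two one-line files from this section can be staged in a scratch directory and inspected before copying them to /etc. A sketch using the values from this guide:

```shell
#!/bin/sh
# Stage locale.conf and vconsole.conf in a temp dir for inspection;
# copy them to /etc (as root) once they look right.
dir=$(mktemp -d)
printf 'LANG=en_US.UTF-8\n' > "$dir/locale.conf"
printf 'KEYMAP=de-latin1\n' > "$dir/vconsole.conf"
cat "$dir/locale.conf" "$dir/vconsole.conf"
```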
Hostname
Create /etc/hostname:
myhostname
Add matching entries to /etc/hosts:
127.0.0.1	localhost
::1	localhost
127.0.1.1	myhostname.localdomain	myhostname
Network configuration
Here: netctl
Alternatives: wpa_supplicant
Copy the example file from /etc/netctl/examples/wireless-wpa to /etc/netctl/mywirelessnetwork and edit the file accordingly.
Root password
Set the root password:
passwd
The system is now ready for boot.
Locking the root account:
passwd -l root
Unlocking the root account:
sudo passwd -u root
User management
Add a new user:
useradd -m -g initial_group -G additional_groups -s login_shell username
For later convenience with sudo and xbacklight we set the initial_group to wheel and the additional_groups to video. Shells used can be /bin/bash or /usr/bin/zsh.
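Filled in with the values from this section, the command comes out as printed below. The username is a placeholder; the sketch only prints the command so you can review it before running it as root:

```shell
#!/bin/sh
# Sketch: assemble the useradd call; `alice` is a hypothetical user.
username=alice
initial_group=wheel
additional_groups=video
login_shell=/bin/bash
cmd="useradd -m -g $initial_group -G $additional_groups -s $login_shell $username"
echo "$cmd"
```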
Package manager
Here: yay
Alternatives: yaourt, trizen
Privilege escalation
To temporarily grant root privileges to a user, use sudo:
pacman -S sudo
To configure sudo to allow the just-added user to escalate privileges, uncomment the following line in /etc/sudoers (preferably editing it with visudo):
%wheel ALL=(ALL) ALL
Display server
Here: Xorg
Alternative: Wayland
To install Xorg:
sudo pacman -S xorg-server
Graphics driver
Here: Nouveau
Alternatives: Nvidia, AMD
To install the Nouveau drivers:
sudo pacman -S mesa
The Nouveau drivers should be loaded by default.
For Nvidia drivers, you first have to determine what graphics card you use by issuing the command lspci -k | grep -A 2 -E "(VGA|3D)".
… follow the guide linked above
NEEDS EDITING
Window manager
Here: i3-gaps
Alternatives: i3, bspwm, Awesome, HerbstluftWM
Installing i3-gaps:
yay -S i3-gaps
Sound
Here: ALSA, PulseAudio
ALSA is included in the Linux kernel and therefore already present.
To install PulseAudio:
sudo pacman -S pulseaudio pulseaudio-alsa
To install bluetooth support and an equalizer, use sudo pacman -S pulseaudio-bluetooth pulseaudio-equalizer respectively.
Time synchronisation
Here: systemd-timesyncd
systemd-timesyncd is part of the systemd suite, which is installed by default.
To enable systemd-timesyncd:
timedatectl set-ntp true
DNS Security
Here: DNSSEC, DNSCrypt
To install DNSSEC and DNSCrypt:
sudo pacman -S ldns
yay -S dnscrypt-proxy-go
NEEDS EDITING
Firewall
Here: Ufw
To install Ufw:
sudo pacman -S ufw
NEEDS EDITING
Mouse acceleration
Here: with Xorg
To disable mouse acceleration, follow this guide.
Improving performance
NEEDS EDITING
Solid State Drive (SSD)
NEEDS EDITING
Fonts
To install fonts, use pacman or the AUR.
To refresh the font-cache:
fc-cache -fv
To list all fonts:
fc-list
Monitor brightness
By default, non-root users are not allowed to change the brightness of the screen.
To change this, we add the user to the video group and allow this group to modify the file for the brightness.
The file is located in /etc/udev/rules.d/90-backlight.rules:
SUBSYSTEM=="backlight", ACTION=="add", RUN+="/bin/chgrp video %S%p/brightness", RUN+="/bin/chmod g+w %S%p/brightness"
Another option is using brightnessctl if xbacklight is not working. If all else fails, there is still the option to change the brightness manually (or put it in a script/write a program) with, for example: printf "<value>\n" > /sys/class/backlight/intel_backlight/brightness (the path to the brightness file might differ for you).
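When writing a raw value this way, the valid range is 0 up to whatever max_brightness contains, so a percentage has to be converted first. A sketch (the max value 937 is a made-up example; read yours from the max_brightness file next to the brightness file):

```shell
#!/bin/sh
# Sketch: convert a brightness percentage to a raw sysfs value.
brightness_for() {
  # $1: percent (0-100), $2: contents of max_brightness
  echo $(( $2 * $1 / 100 ))
}
brightness_for 50 937   # hypothetical panel with max_brightness 937 -> 468
```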
Display Manager
Here: None
Alternatives: lxdm, LightDM
To install lxdm or LightDM:
sudo pacman -S lxdm
sudo pacman -S lightdm
To enable a display manager on boot:
sudo systemctl enable displaymanager.service (replace displaymanager with the display manager you installed, e.g. lxdm or lightdm)
If you do not want to use a Display Manager, you can choose to execute startx on login in a terminal.
First, make sure your ~/.xserverrc is properly configured:
#!/bin/sh
exec /usr/bin/Xorg -nolisten tcp "$@" vt$XDG_VTNR
Second, if you are using bash add the following lines to your ~/.bash_profile, or if you are using zsh add them to your ~/.zprofile:
if [[ ! $DISPLAY && $XDG_VTNR -eq 1 ]]; then
  exec startx
fi
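The condition fires only on the first virtual terminal and only when no X server is running yet. A testable restatement that prints what it would do instead of exec'ing startx:

```shell
#!/bin/sh
# Sketch of the startx-on-login condition; prints instead of exec'ing.
maybe_startx() {
  # $1: value of $DISPLAY, $2: value of $XDG_VTNR
  if [ -z "$1" ] && [ "$2" -eq 1 ]; then
    echo "would exec startx"
  else
    echo "skip"
  fi
}
maybe_startx "" 1    # fresh login on tty1
maybe_startx ":0" 1  # X already running
```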
If startx does not start your window manager/desktop environment, look it up with the which command and pass the resulting path as a parameter to startx in your ~/.bash_profile or ~/.zprofile.
If you want to be automatically logged in on boot, edit the file /etc/systemd/system/[email protected]/override.conf:
[Service]
ExecStart=
ExecStart=-/usr/bin/agetty --autologin username --noclear %I $TERM
OfflineIMAP setup
https://wiredspace.de/blog/offlineimap/
Tue, 19 May 2020 09:00:00 +0100 [email protected] (Thomas Böhler)
offlineimap
offlineimap is a command-line utility that lets you sync a local mail repository with a remote one via the e-mail syncing protocol IMAP.
configuration
After installing offlineimap you will want to create a config file. By default it is located at ~/.offlineimaprc, with further configuration in ~/.offlineimap/.
The configuration syntax of offlineimap's config file is pretty simple. Here is a sample configuration file you can change to your liking:
[general]
# list of accounts to be synced, separated by a comma
accounts = {account1}
starttls = yes
ssl = yes
# Path to file with arbitrary Python code to be loaded
pythonfile = ~/.offlineimap/offlineimap.py

[Account {account1}]
localrepository = {account1}-local
remoterepository = {account1}-remote

[Repository {account1}-remote]
auth_mechanisms = LOGIN
type = IMAP
starttls = no
remoteuser = {username}
remotehost = {hostname}
# remote port should already be 993 by default
#remoteport = 993
sslcacertfile = /etc/ssl/certs/ca-certificates.crt
ssl_version = tls1_2
createfolders = False
# Decrypt and read the encrypted password
remotepasseval = get_pass_{account1}()

[Repository {account1}-local]
type = Maildir
localfolders = ~/mail/{account1}
Make sure to replace {account1} with the name of your account. This can be any identifier you want; it doesn't have to be the user-/hostname of your e-mail service.
The pythonfile variable specifies a Python file from which you can call functions. I set up a function to retrieve my password from a gpg-encrypted file. It is called in the remotepasseval variable, whose content is the function call.
A thing to keep in mind: the offlineimap configuration file does not evaluate environment variables, so things like $HOME and $XDG_CONFIG_HOME will not work.
Another thing is that you have to keep comments on a separate line. Having a comment on the same line as a variable/command makes that line invalid.
using gpg to get the password for a mailbox
gpg encrypted password file
As mentioned before, I use gpg to retrieve the password for an inbox. I will assume you are somewhat familiar with gpg and have a private key. If you are not familiar with gpg or do not have a private key, you can read up about it on the ArchWiki page for GPG.
First you will need to encrypt your password using gpg with the recipient set as yourself. The easiest way to do this is with this command: gpg --encrypt -o ~/.offlineimap/.offlineimap-{account1}.gpg -r {private-key-id} - (make sure to replace {account1} with the name of the account you specified above (this is to avoid confusion with multiple accounts), as well as replace {private-key-id} with the ID of your private gpg key). This will read the password from stdin, and since you are not passing anything to stdin it will wait for your input on the command line. This way you will not have to go into your history to delete the occurrence of your password in plaintext. You will also not have to write the password to a file beforehand and delete it afterwards (which is insecure, since the contents won't be overwritten with zeros). Type in your password and hit Ctrl-d to send an EOF, signaling gpg that the end of the input is reached. You will now have a file called .offlineimap-{account1}.gpg in your ~/.offlineimap/ directory.
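Spelled out with placeholders, the encrypt command comes out as printed below. The account name and key ID are hypothetical; the sketch only prints the command for review rather than running gpg:

```shell
#!/bin/sh
# Sketch: assemble the gpg encrypt command; both values are placeholders.
account=account1
keyid=0xDEADBEEF
cmd="gpg --encrypt -o ~/.offlineimap/.offlineimap-${account}.gpg -r ${keyid} -"
echo "$cmd"
```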
gpg-agent passphrase caching
To avoid having gpg-agent pop up every time you want to check your e-mail (if you're doing this in a script, for example), you can change a few things. gpg-agent caches passphrases for private keys. Its configuration file can be found in ~/.gnupg/gpg-agent.conf. Put these 2 lines in there:
default-cache-ttl 600
max-cache-ttl 999999
The first line, default-cache-ttl, tells gpg-agent how long to cache a passphrase after its last use. The second line, max-cache-ttl, tells gpg-agent how long to cache a passphrase in total, disregarding the default-cache-ttl. Both times are given in seconds. With this configuration gpg-agent will only ask for your passphrase if you haven't used your private key for 600 seconds (10 minutes).
python script
The python script you will need is taken from the ArchWiki page for offlineimap. It looks like this:
#!/usr/bin/env python2
from subprocess import check_output

def get_pass_{account1}():
    return check_output("gpg -dq ~/.offlineimap/.offlineimap-{account1}.gpg", shell=True).strip("\n")
Change {account1} to the account name you are using, to avoid confusion when using multiple mailboxes. If you have more than one account, add another function to the script with the other account name instead of {account1}. This way you can easily get the different passwords by calling the different functions in your .offlineimaprc.
further .offlineimaprc configuration
To use the newly created gpg encrypted password file and the script in offlineimap, you need to define the pythonfile and remotepasseval variables. I put my python script (named offlineimap.py) and my gpg encrypted password file in ~/.offlineimap/ for easy access. If you put yours somewhere else, make sure to set the paths in the script and gpg command accordingly. After defining pythonfile in the configuration's general section with the path to your python script, and remotepasseval in the account section for your account with the function that gets the password for that account, you should be all set.
using neomutt to read mail
If you want to use neomutt to read the mail offlineimap keeps synced between your local repository and the defined servers, you will have to do the following. Add this section to your .offlineimaprc:
[mbnames]
enabled = yes
filename = ~/.config/neomutt/mailboxes
header = "mailboxes "
peritem = "+%(accountname)s/%(foldername)s"
sep = " "
footer = "\n"
This makes sure your local repository is set up so neomutt can read its contents, since it won't be able to do that by default. To tell neomutt where to look for files, add this to your neomuttrc (or another file you have defined your account in):
# IMAP: offlineimap
set folder = "~/mail"
source $XDG_CONFIG_HOME/neomutt/mailboxes
set spoolfile = "+{account1}/INBOX"
set record = "+{account1}/Sent\ Items"
set postponed = "+{account1}/Drafts"
You should now be all set to browse your local mail repository with neomutt.