<![CDATA[Dave Jansen]]>https://davejansen.com/https://davejansen.com/favicon.pngDave Jansenhttps://davejansen.com/Ghost 5.130Tue, 17 Mar 2026 17:38:50 GMT60<![CDATA[Using Visual Studio Code Flatpak with 1Password for SSH and Git signing]]>https://davejansen.com/using-vscode-flatpak-with-1password-ssh-git-signing/644f12e13c41a200011c20bcMon, 01 Apr 2024 05:50:00 GMT

1Password is a fantastic tool and service that I love to use. Its Linux version has been getting a lot of love, and it even supports managing your SSH keys and handling Git signing.

I use this functionality all the time. While it unfortunately is not yet possible to install 1Password as a Flatpak and have it integrate with your system the way a system-installed version does, I at least found a way to make the Visual Studio Code (VSCode) Flatpak work with your system-installed 1Password.

💡
This guide acts as a companion to my other guide on how to set up Visual Studio Code with 1Password's SSH and Git signing functionality. The steps outlined in this guide are specific to getting the VSCode Flatpak version to work with 1Password, and work alongside what is explained in my other guide.

I will include the necessary Flatpak command-line interface commands in this guide that you can copy/paste. But if you prefer you can also use something like the Flatseal app to do it via a GUI.

In a nutshell, we need to:

  • Give VSCode access to the 1Password agent socket path, your .ssh directory, and your .bashrc (and optionally .bash_profile) files,
  • Ensure the necessary environment variables are set within VSCode's environment and integrated terminal, which also requires us to:
  • Create a wrapper bash shell script for VSCode to use instead of ssh directly

Let's walk through each step now.


1Password agent socket access

This one is a nice and easy step: simply give VSCode (read-only) access to the directory that contains the agent.sock file (~/.1password by default), and you're done:

❯ flatpak override --user com.visualstudio.code --filesystem=~/.1password:ro

Preparing your .bashrc file

There are two places where we'll want to ensure the SSH_AUTH_SOCK variable is set to point to 1Password's agent socket. In my testing I was unable to get VSCode to make use of this environment variable if it is set via Flatpak. So instead we'll rely on having this value be set by adding it to your .bashrc file.

💡
If you make use of a different shell within VSCode, ensure you adapt the following to whichever way your particular shell handles this. Many tend to include .bashrc files anyway, so it might just work as-is, but please refer to the documentation of your shell to make sure.

Using your favorite text editor, open up your ~/.bashrc file or create one if it does not already exist. In it, add the following:

export SSH_AUTH_SOCK="$HOME/.1password/agent.sock"  # note: ~ does not expand inside double quotes, so use $HOME

Let's also make sure that VSCode has access to the file by giving it specific permission. This is technically not a necessary step if you plan on keeping the default setting enabled that gives VSCode full access to your entire host system, but it might still be useful, even if just as a personal reminder of which files VSCode must have access to.

❯ flatpak override --user com.visualstudio.code --filesystem=~/.bashrc:ro

I'll list out default settings you can optionally override a bit further into the article.


Creating a wrapper script for ssh for VSCode

As mentioned before, the VSCode Flatpak does not adhere to environment variables set through Flatpak. This also means that if you wish to make use of the Remote SSH extension, it won't try to use your 1Password agent for authentication. We'll fix this by creating a very simple bash script wrapper for SSH.

Create a new file in your ~/.ssh directory and call it something like vscode-ssh, and give it the following contents:

💡
You can place this script anywhere you like, but I've chosen to save it within the ~/.ssh directory as it's related and a directory we'll be giving VSCode access to anyway.
#!/usr/bin/bash
SSH_AUTH_SOCK=~/.1password/agent.sock /usr/bin/ssh "$@"

As you can see, all this file really does is ensure SSH_AUTH_SOCK is configured to what we need it to be.
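The prefix-assignment form used here sets the variable for that one ssh invocation only, without it leaking into the rest of your shell session. If you want to convince yourself of that, here's a small self-contained demonstration that uses a throwaway stand-in script instead of the real ssh binary:

```shell
#!/usr/bin/env bash
# Demonstrates the prefix-assignment scoping the wrapper relies on.
# A temp script stands in for /usr/bin/ssh and simply prints the socket it sees.
fake_ssh=$(mktemp)
cat > "$fake_ssh" <<'EOF'
#!/usr/bin/env bash
echo "agent socket seen: $SSH_AUTH_SOCK"
EOF
chmod +x "$fake_ssh"

SSH_AUTH_SOCK=/tmp/agent.sock "$fake_ssh"   # the stand-in sees /tmp/agent.sock
rm -f "$fake_ssh"
```

The variable only exists for the duration of that single command, which is exactly what we want: VSCode's Remote SSH sessions get the 1Password socket without us having to change anything globally.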

Ensure the file has the execution flag set:

❯ chmod +x ~/.ssh/vscode-ssh

Now we'll also give VSCode access to the ~/.ssh directory:

❯ flatpak override --user com.visualstudio.code --filesystem=~/.ssh

We need to set the remote.SSH.path setting in VSCode to point to this wrapper script. You can use the settings GUI (File > Preferences > Settings) to set this value or modify the JSON directly if you prefer. Make sure to use an absolute path to this wrapper script, so it should end up looking something like this:

{
   "remote.SSH.path": "/var/home/davejansen/.ssh/vscode-ssh"
}
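If you're unsure what the absolute path is on your machine (on Fedora atomic desktops, for example, home directories live under /var/home rather than /home), you can print it from a terminal:

```shell
# Prints the absolute path to put into remote.SSH.path
echo "$HOME/.ssh/vscode-ssh"
```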

(Optional) Your global Git config file

If you make use of a global Git config file, you may also want to give VSCode specific permission to access this (and related) file(s):

❯ flatpak override --user com.visualstudio.code --filesystem=~/.gitconfig --filesystem=xdg-config/git
💡
In my above example I also add permission to read files in ~/.config/git, which might be useful in case you make use of an allowed_signers file, or have additional and/or conditional config files stored there. You can adapt this to your personal liking, of course.

(Optional) Disable or remove default settings

If you prefer to keep your Flatpak installed applications as sandboxed as possible, now might be a good time to remove some default settings the VSCode Flatpak comes with that are not necessary.

Remove default "host" access

If you prefer to limit filesystem access of your installed Flatpak applications only to what you specifically want them to have access to, it'll probably be nice to remove VSCode's default "host" access. If you have followed along and added the specific filesystem allow settings described in this guide, your 1Password integration should keep working.

❯ flatpak override --user com.visualstudio.code --nofilesystem=host

If you do this and work with projects/files locally on your system, you'll have to give specific access to where you store these projects. In my case I usually keep projects in ~/Projects, so I've added specific permission to access files here like so:

❯ flatpak override --user com.visualstudio.code --filesystem=~/Projects

Remove other potentially unneeded permissions

We can also remove the ssh-auth socket permission, as it is not used in this setup.

❯ flatpak override --user com.visualstudio.code --nosocket=ssh-auth
💡
While not directly related to this guide, depending on your project needs you could also remove the pulseaudio socket and device=all access flags if you wish.

Screenshot of Flatseal, showing all the filesystem overrides we have just made for Visual Studio Code.

Closing thoughts

And this should be it! You should now be able to use 1Password to authenticate connections through the Remote SSH extension, as well as any commands you run in the integrated terminal. Git commit signing should also work if you have set it up following my other guide.

While this isn't as straightforward as it could be, my hope is that in time the necessary portals will appear that make it possible for 1Password to bring their Flatpak version to feature parity with the system-installed variant.

While it does take a few steps, I am glad that it's actually possible to make this work, as that's one fewer system override I need to worry about in my atomic desktop environment.

I hope this is helpful to you, too!

Thank you.

]]>
<![CDATA[Publish to Netlify using Gitea or Forgejo Actions]]>Netlify's built-in auto-deploy support is quite nice, but is sadly limited only to hosted versions of GitHub, GitLab, Bitbucket, and Azure DevOps. Self-hosted support of some of these is further limited to their Enterprise plan only.

Fortunately with Gitea (and Forgejo) Actions we can still enjoy automatic deploys

]]>
https://davejansen.com/publish-to-netlify-using-gitea-actions/64a2547b2abf6c00010bfe7cSat, 09 Dec 2023 04:02:20 GMT

Netlify's built-in auto-deploy support is quite nice, but is sadly limited only to hosted versions of GitHub, GitLab, Bitbucket, and Azure DevOps. Self-hosted support of some of these is further limited to their Enterprise plan only.

Fortunately with Gitea (and Forgejo) Actions we can still enjoy automatic deploys to Netlify, by writing up a relatively simple Action. This should work whether you self-host Gitea or Forgejo, or make use of one of the hosted offerings like Codeberg or something your favorite community of choice has set up.

Let's get started.

💡
This guide assumes you already have a working Gitea or Forgejo environment with Actions enabled and at least one runner set up. If you've not yet done so, please follow Gitea's or Forgejo's documentation first.

I have prepared a basic action that you can pretty much copy-paste and use in your own Node-based projects. The biggest parts that may need adjusting are whatever build steps your particular website or app needs, and which specific Netlify deploy options you might want to set.

Let's take a look at the complete yaml workflow file, and then go over the important parts below.

name: 'Deploy to Netlify'

on:
  push:
    branches:
      - main

jobs:
  deploy:
    name: 'Deploy to Netlify'
    runs-on: ubuntu-latest # adjust to a label your runner provides
    steps:
      - uses: actions/checkout@v3
        name: Checkout
      - name: Build
        run: |
          npm --version && node --version
          npm ci --no-update-notifier
          npm run build
      - uses: https://github.com/nwtgck/actions-netlify@v2.0
        name: Deploy
        with:
          publish-dir: './dist'
          production-deploy: true
          github-token: ${{ secrets.GITHUB_TOKEN }}
          deploy-message: "Deployed from Gitea Action"
          enable-commit-comment: false
          enable-pull-request-comment: false
          overwrites-pull-request-comment: true
          enable-github-deployment: false
        env:
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
        timeout-minutes: 1

You can customize this file and add it to your project by placing it inside the .github/workflows directory of your repository. Gitea also supports using the .gitea/workflows directory, and Forgejo supports .forgejo/workflows.
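For a Gitea repository, for example, it could look like this (the workflow file name itself is arbitrary; deploy-netlify.yaml is just what I'm using here):

```shell
# From the root of your repository: create the directory Gitea scans for workflows
mkdir -p .gitea/workflows
# Now save the YAML above as, e.g., .gitea/workflows/deploy-netlify.yaml,
# then commit and push it to trigger the action.
```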

Let's go over some of the specifics now.


On

on:
  push:
    branches:
      - main

This example configuration is set to react whenever a new commit is pushed to the main branch. At the time of writing, the workflow_dispatch manual trigger option is not yet available. Once that functionality is made available, one might consider adding it here too.

As of Gitea version 1.21.0, the cron option is available too, a very powerful option for scheduled actions that really opens up what you can do with Actions.
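As a sketch of what that could look like (the syntax mirrors GitHub Actions; do check your Gitea version's documentation before relying on it), a scheduled trigger can simply be added alongside the push trigger:

```yaml
on:
  push:
    branches:
      - main
  schedule:
    # additionally deploy every day at 04:00 UTC,
    # e.g. to publish content that was scheduled ahead of time
    - cron: '0 4 * * *'
```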

Common Steps

  - uses: actions/checkout@v3
    name: Checkout

A fundamental step in almost every action, and mostly self-explanatory: this checks out the current repository so that you can work with your project's source code in subsequent steps.

  - name: Build
    run: |
      npm --version && node --version
      npm ci --no-update-notifier
      npm run build

An example build step for a NodeJS project. The first line simply prints out both npm and node versions, which can be useful when debugging Actions logs.

The second line runs the "clean install" variant of install. A full explanation of the differences is a bit outside the scope of this article, but the most important one is that clean-install never writes or updates package.json or package-lock.json, and instead errors out if what is defined in these files does not match.

The last line is a placeholder for whatever command you'd need to run to build your project.

The Netlify publish step

  - uses: https://github.com/nwtgck/actions-netlify@v2.0
    name: Deploy
    with:
      publish-dir: './dist'
      production-deploy: true
      github-token: ${{ secrets.GITHUB_TOKEN }}
      deploy-message: "Deployed from Gitea Action"
      enable-commit-comment: false
      enable-pull-request-comment: false
      overwrites-pull-request-comment: true
      enable-github-deployment: false
    env:
      NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
      NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
    timeout-minutes: 1

With great thanks to Ryo Ota for writing this action and making this all possible, let's go over the values we may want to set here to make deploys work.

Required values

  • publish-dir — The (relative) path to the directory that should be published to Netlify, usually the one containing the results of your build steps. E.g. ./dist.
  • NETLIFY_AUTH_TOKEN — Your Netlify personal access token. You can generate this from your Netlify user settings page.
  • NETLIFY_SITE_ID — The Site ID (UUID) Netlify has assigned to your site. This can be found by visiting the "Site configuration" view of your site on Netlify. It's listed right in the "Site information" section as "Site ID."

Additional settings

The above example has some additional settings defined. These can be fully customized to your liking, but keep in mind that this Action was originally written with GitHub in mind, so some functions might not work in a Gitea context. I recommend reading through the Action's README so you can decide which values work well for your specific needs.


Closing thoughts

While it takes a little more effort as compared to just using Netlify's dashboard to connect up with a GitHub repository, it's fortunately not all too challenging to get things working this way from your (self-hosted) Gitea or Forgejo instance.

There's a lot more you can do with Gitea Actions. Feature-wise it's very close to GitHub Actions, though not exactly the same. Some key differences are described here and are good to keep in mind when writing up your own Actions.

I hope this is helpful to some of you out there and can help you at least consider making the switch to using an option other than GitHub for your own, your clients', or employer's needs.

Thank you.

]]>
<![CDATA[Chuwi MiniBook X and Linux]]>Never did I feel this reluctant to write up my impressions of a device.

Chuwi is a China-based manufacturer of computers, primarily focused on smaller desktops (ie. the "NUC-like" category), along with laptops and tablets. Generally speaking they tend to focus more on affordable hardware rather than the

]]>
https://davejansen.com/chuwi-minibook-x/64e052217050aa0001301e9aSat, 19 Aug 2023 07:54:43 GMT

Never did I feel this reluctant to write up my impressions of a device.

Chuwi is a China-based manufacturer of computers, primarily focused on smaller desktops (ie. the "NUC-like" category), along with laptops and tablets. Generally speaking they tend to focus more on affordable hardware rather than the highest-end or premium options.

One of their products is called the MiniBook, an 8" ultra-compact laptop, not unlike the GPD Pocket 2 from a few years ago. I have a fascination with these types of devices that try to find the very limits of pocketability and usability, and after having sold the GPD Pocket 2 years ago I was curious to see how well Chuwi's option would work with Linux.

When I had decided to bite the bullet and place an order, the company quickly reached out to me to say that the product had, unfortunately, been discontinued and is no longer available. This was really unfortunate, but what can you do.

Its slightly larger brother, the MiniBook X, was still available. At 10.8" it definitely moved beyond literal pocketability, but it still looked like it would be small and very usable as a secondary (or tertiary) companion to your main work machine.


Specifications

One of the first things you'll notice when taking the MiniBook X out of its box is just how positively tiny a 10.8" device really is. The second thing you'll probably notice is its very nice display with punch-out camera. Quite an unusual sight on a laptop, and if you were to stick with Windows you'd soon realize why that is: your "My Computer" icon will hide nicely behind the camera.

Speaking of the display, it's a nice, crisp, 2560x1600 (16:10) display with vivid colors. I don't know how accurate its color representation may be, but it certainly looks nice.

Internals-wise we're working with a low-powered quad-core Intel Celeron N5100 with a TDP of 6 watts. The laptop is passively cooled; there are no moving parts to be found. More on heat later.

The device comes with 12GB of LPDDR4 RAM, a somewhat peculiar amount, along with a 512GB SATA SSD.

  • Model: Chuwi MiniBook X (Pre-2023 version)
  • CPU: Intel Celeron N5100, 4 cores @ 1.1GHz (2.8GHz boost)
  • RAM: 12GB LPDDR4X
  • Display: 10.5" 2560x1600 IPS
  • Graphics: Intel UHD Graphics
  • Storage: 512GB M.2 NVMe
  • Connectivity: WiFi 6 (802.11ax) + Bluetooth 5.1
  • Battery: 28.8WHrs

The body is made entirely of aluminium and feels very nice; there's no flex to be found. This is especially nice as the 360° "yoga" hinge is possibly a bit too sturdy, so you need to put some force into it if you want to flip the thing all the way around. On the bright side, that means the display will stay put wherever you place it, even at extreme angles.


The keyboard feels absolutely fantastic to type on. Even though it's smaller than full-size —a necessity at this physical size— I found it to be incredibly usable, allowing me to type at my full typing speeds with ease. It's one of the nicer laptop keyboards I've had the pleasure of using. Which, sadly, adds to why it was so difficult to write this up. More on this in a bit.

Lastly, the battery is perhaps the biggest downside of this unit; at just 26.6Wh capacity, even with such a low-powered CPU you can't really squeeze anything beyond 4-5 hours of usable time out of it, less if you need to push the hardware in any way.

They definitely could've fit a larger battery in the device, but sadly chose not to. Internally there are signs suggesting they might've had plans for optional add-ons, a 4G modem or secondary SSD, perhaps. These are left unpopulated though, so there's relatively a lot of empty space in there.

If this device had actual all-day battery life, it could've been a truly fantastic choice. Well, fantastic besides the one, even bigger flaw I'll get to in just a moment.


Linux

As you may or may not know, I don't use Windows at all. So whenever I get a new device, I always start by wiping Windows off of it and putting Linux on it. The past few years my primary choice for this has been Fedora, and so with the MiniBook X that's also what I chose.

The out of the box experience with the Live USB is actually great, with pretty much every single part working just as you'd expect. Only the display auto rotation when flipping the screen around wouldn't work, but this is mostly due to GNOME's decision to stick with what's technically right, rather than what's practically good.

While it's not a show-stopper if something isn't quite working right in the Live USB environment, it does give extra confidence if things work as well as they did with the MiniBook X. So I proceeded to wipe Windows 11 off of its internal SSD and install Fedora onto it.

A nice happenstance of the GNOME UI at 2x scale (I had a hard time reading the small text at 1x; this is definitely a retina-like display) is that the punch-out camera isn't really blocking anything important, just the "Activities" button in the top-left corner.

Fortunately for me, I always disable/hide that button using a GNOME extension anyway. Instead of fully hiding it, I used this extension to replace its contents with a few spaces. That shifted the rest of the menu UI to just next to the punch-out camera, making it all look very slick.


The problem

And here we get to the reason why I felt so reluctant to write this up. This device was pretty much exactly what I was looking for. Sure, it was a bit more expensive than I think is reasonable for this form factor, and the battery should've been larger, but these are issues that you can choose to live with.

What you cannot live with, however, is the keyboard not working right. Let me explain.

Under Windows this issue does not seem to show up much, or if it does, I suspect Chuwi's driver or perhaps just Windows handles it in a way that it doesn't really result in anything more than maybe a key momentarily feeling "sticky" or unresponsive.

Under Linux, however, the behavior is different, and results in at times a certain key getting "stuck" in a way that pressing it again may "unstick" it. Other times a key gets "stuck" in such a way that the laptop thinks you're hammering down on that one key and nothing you do can make it stop doing that — short of rebooting or putting your laptop to sleep.

I tried everything I could think of to see if this could be fixed: from trying different kernel versions, to booting with different kernel parameters, to even patching the kernel myself with different i8042 timeouts, just to see if anything could help. A thread on Chuwi's forum has other people reporting the same issue with varying degrees of impact.

I even opened up the device and applied interference-protecting tape between the battery and the keyboard ribbon cable, as one theory was that the heat coming off the battery could cause interference. While that might have helped a tiny bit, I was still able to reliably re-introduce the issue just by using the laptop for a bit.


I also attempted to run FreeBSD on the MiniBook X to see if it would yield any different results. This was somewhat challenging as the chipset the MiniBook X has is quite new, and at least at the time of my tests wasn't fully supported yet under FreeBSD. Regardless, with a bleeding edge release and external monitor I was able to try out the keyboard.

Surprisingly, with FreeBSD it did behave differently, but not in a way that's better; input was extremely slow. Almost like you're typing over a very flaky and slow SSH connection. You could type out a few words and it would only have registered the first maybe two or three characters, skipping over many others you entered too.

This, to me, felt like more proof that the issue was with the hardware implementation surrounding the keyboard. I don't know if it's the keyboard controller, or (just) with its pulse frequency, or something else in that general direction.

All I know is that so far no-one has been able to find a way to have the MiniBook X keyboard behave reliably, sadly.


I had reached out to the company and tried to get someone internally to help by looking into this. They did end up saying an engineer looked into it, but as the problem did not occur (or not as visibly, I suppose) under Windows, and that being the only operating system they officially support, it didn't really go anywhere.


The original box the MiniBook X comes in lists out some specs on its side. It also has two checkboxes, one for Windows, and one for Linux. I had hoped that they would be willing to spend a bit more effort on this to see if it was fixable with a firmware update or so, but at the time at least the company seemed busy figuring out some bigger challenges internally.

Fixing a compatibility issue with something they don't even officially support fell to the bottom of the list of priorities, and that's where it seems to have remained.


And this is where it stands today. I still have the MiniBook X in my bookcase, half hopeful that a solution will magically present itself one day. I've tried a few more times to see if more recent kernel releases would yield any different results, but sadly the problem persists.

This is actually a really fantastic device. It's nicely built, feels solid, is a dream to type on (from a hardware point of view, that is), and could be a solid choice for writers, casual browsers, or DevOps engineers and system administrators looking for a lightweight device to carry with them for remote-shelling into servers.

The fact that it all feels so nice makes the keyboard software issue sting so much more. It's so close, but the one issue it has makes the whole thing unusable.

2023 Refresh

Chuwi released not one but two 2023 refresh models of the MiniBook X. Both lose the punch-out camera and, speaking of the display, based on user reviews the new model's display is noticeably worse. The other differences are that the 2023 models have active cooling, and that one of them comes with an Intel N100 CPU, which should bring a nice performance boost compared to the N5100 models.

I do not have first-hand experience with either 2023 model. The contact I had within the company has since moved on to work elsewhere, and I've not seen any information on whether or not the 2023 model modified anything with how the keyboard works. I'm not willing to risk purchasing another device that might just not work. Not to mention getting a noticeably worse display does not seem very appealing to me, and its battery is still (too) small.

💡
Update: A reader emailed me and shared a link to a Reddit post about the 2023 model. A comment under that post does seem to suggest that a similar if not identical problem might exist on the 2023 model too. Please proceed with caution if you are considering one of these devices with the intention of using Linux on it.

Sadly, my attempts to work with the company to try to figure out and solve whatever is causing this keyboard issue failed. But maybe a smart person out there with an oscilloscope can figure out exactly what is causing this.

Wouldn't that be nice?

]]>
<![CDATA[Synology DS920+ Final Impressions]]>After almost two years of having used a Synology DS920+ as my NAS and light home server for a few services, I thought it would be good to take a look back at how it had treated me these past two years.

Those who may have read my first impressions

]]>
https://davejansen.com/synology-ds920-final-impressions/64756a103c41a200011c23d6Sat, 19 Aug 2023 05:18:32 GMT

After almost two years of having used a Synology DS920+ as my NAS and light home server for a few services, I thought it would be good to take a look back at how it had treated me these past two years.

Those who may have read my first impressions post might already have some idea where this is going, but to put the spoiler out of the way: I have replaced the Synology NAS with a NAS I have built myself, and very happily sold the DS920+. Read on if you're interested in some of the details that have ultimately led me to change course.


What was I looking for?

The list of what I was looking for when I opted to try out Synology's offering is still accurate. In short, it all boiled down to a desire to have something that "just works." For my day to day work I already spend enough time debugging and problem solving; the last thing I was looking for was yet another device I'd have to fiddle with to keep running as it should.

This is, in essence, exactly what Synology is offering. Their products rarely have specs worth writing home about, especially at their asking prices. But what they offer is a cohesive package that gets out of your way; it just works. In theory, at least.

Minor issues everywhere

From the get-go I was running into small nuisances or straight-up problems with the Synology box. Some of these would sort of solve themselves over time, like the device being barely usable while it spent its first month-plus indexing images and videos, with its status indicator offering no real insight into how much longer it might take or what it was actually doing.

Some of the other issues I ran into resulted in blog posts attempting to work around limitations Synology had kept in place. While neck-deep in all but reverse engineering proprietary Synology APIs, trying to figure out a way to get photos that exist to actually show up in Synology's UI, I wondered why I had to do any of this. I went the more expensive Synology route precisely to avoid having to deal with such things, after all, yet here I was.

Other problems still were ones I never really was able to solve. For example, over the two years I've had the Synology NAS suddenly disappear from my network for no reason. It was still running, and never reported any issues; it just suddenly disappeared. Usually it'd just as suddenly show up again 10-20 minutes later. What caused this or how it could be solved, I have no idea.

One thing I do know for sure: I've never had this happen with any other network-attached device or computer, and this is a perfect example of the kind of thing I didn't want to deal with, the kind that led me to go the Synology route in the first place.

Synology's proprietary layers get in the way

One of the key asks I had was that I'd be able to run Docker containers with ease. In Synology's defense, this is almost exactly what their solution offers. I did have to fiddle with permissions to work with it more easily through the command line interface, but that was a small hurdle. At least it came with docker-compose too, which is what I prefer to use for most containers I run.

I never ended up making full use of Synology's virtual machine functionality, as the hardware quite simply was too limiting. Hardware aside, while they use qemu and related tooling behind the scenes, they've added a proprietary layer on top that prevents you from configuring any settings not also exposed through their UI. This prevented me from being able to conveniently run a certain virtual machine, even though the device was actually capable of running it.

And herein lies one of the fundamental challenges you'll run into when working with Synology hardware and software: if your needs go even the slightest bit outside what Synology considers acceptable, you immediately run into arbitrary and artificial limitations. It either is "not possible" according to Synology, locked behind their more expensive products instead of whatever model you have picked, or simply something their proprietary layers get in the way of.

Hardware compatibility

Speaking of artificial limitations; Synology maintains a list of compatible hard drives. This is done under the guise of ensuring compatibility and system stability. Realistically speaking, this is of course also to encourage businesses to purchase Synology's own, more expensive offerings rather than the usually cheaper alternatives that, so long as you purchase the right tool for the job, would work just as well.

Another limitation artificially enforced by Synology on models like the DS920+ is being unable to use one or both of its NVMe SSD slots for storage; officially they may only be used as cache. Having solid state storage for things like Docker containers or VMs is actually quite useful, and arguably better than keeping these on traditional hard drives.

For almost the entire time I had the Synology NAS, I did set up storage on NVMe drives. I even upgraded from DSM 6.x to 7.x with this in place, and not once did having this set up cause any issues. I was able to have my Docker containers run off of solid state storage, keeping all those small reads and writes away from the hard drives.

Power consumption

One of the key selling points of something like the DS920+ is its low power consumption. The exact power consumption of course depends on what software you run on the device and with how many drives. But generally speaking, the benefit of having such underpowered hardware is that it doesn't really consume much, even at full tilt.

Even with the few Docker containers I had on it running off of solid state storage, the hard drives were never able to park, which would have reduced power consumption. Something was always poking those drives. Attempts to figure out what was causing it and how I could solve it yielded few useful results. I was only able to get them to park once I fully disabled most services on the NAS – kind of defeating the purpose of having the thing in the first place.

So in the two years I had it running, it always consumed around 60W. I know it could go down all the way to 11W, but as mentioned before, that was only possible by turning the entire thing effectively into a very expensive hard drive dock.

But that's what this is? It is a NAS, a network-attached storage device

For me, I tend to judge products relative to their price. By that I mean if something was particularly affordable, I can accept some minor woes more easily, as it's overall still a great deal.

The Synology DS920+ is positioned in a way that it cannot deliver what it (over-)promises. As a direct-attached storage device (DAS) that cannot do anything beyond storing and serving data over the network, this would be a fine offering at maybe $200-250.

As a NAS at almost $700, however, it's an underpowered device that cannot reasonably do all that it advertises. Whatever you run on it that you might depend on could become unresponsive in the blink of an eye if a random photo indexing job gets triggered.

Synology DS920+ Final Impressions

Reducing its responsibilities

Starting to feel somewhat desperate, I tried to reduce its responsibilities as much as I could. I had an older Dell Wyse thin client that I originally got for a different project that didn't pan out. So I installed Fedora Server on it, and migrated every single Docker container over to it. The new mini server, dubbed Wyseguy, mounted up necessary volumes from the Synology using NFS mounts.
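
For reference, an NFS mount like that in the mini server's /etc/fstab might look something like this. Note that the hostname, export path, mount point, and options below are illustrative placeholders, not my actual configuration:

```
# /etc/fstab on the mini server (placeholder names and paths)
synology.local:/volume1/docker  /mnt/docker  nfs  defaults,_netdev,noatime  0 0
```

The _netdev option tells the system this mount depends on the network being up, which helps avoid boot-time races on a client like this.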

I even disabled additional Synology features like photo indexing and whatnot, replacing these with solutions like PhotoPrism that also ran on the tiny, ultra-low-powered Wyseguy.

This meant the Synology was basically reduced to mostly running as a DAS, with the exception of Jellyfin, which I kept running on the Synology. Mind you, my home was 100% 1080p at the time, and Jellyfin never transcoded any content. It was always just passing everything straight through, so the overhead of running Jellyfin was very small.

It's kind of ridiculous to have spent $700 on a glorified hard drive dock.

It's kind of ridiculous to have spent $700 on a glorified hard drive dock, but at this point I was desperate to try to find a way to just have the damn thing work without causing me so many headaches.

This actually ended up working alright for a while. Granted, this added an extra 11W of power consumption in my network closet. But if it worked and would allow me to stop thinking about all this, so be it.

But it also further exemplified the issue; this tiny, fanless, old, ~6-9W thin client ran these few Docker containers better than the Synology did, without breaking a sweat. No sudden slowdowns. No sudden crashes. No device suddenly disappearing off of the network for no discernible reason.

And this was even with its stock 4GB of RAM (!). I did end up upgrading this just to give it a bit more breathing room.

Synology DS920+ Final Impressions

When it rains…

And then the Synology's power supply kicked the bucket.

Granted, this could happen to any device. And again, on its own, this really wouldn't be a huge deal. But the bucket was already overflowing at this point. Its power supply giving up the ghost now, after all that I had already gone through with this thing, forced the one realization I had been reluctant to fully accept: I cannot trust the Synology NAS.

I cannot trust the Synology NAS.

Ultimately, the problem wasn't (just) Synology or the DS920+ specifically. It was the expectations I had for it, how it failed to live up to what it advertised as being capable of. And how it was accompanied by all these other (minor) nuisances and issues.

My requirements, while easily matching with what Synology claims the DS920+ can do, were too much for their under-powered hardware and dumbed-down and proprietary software layers. I simply wasn't the right target audience for this product.

I simply wasn't the right target audience for this product.

I also know enough about computers and hardware to know that the asking price was way too high judged purely on the hardware. Nor could the price be justified by reliability, as the device has been anything but reliable for me these past two years.

It simply was a mistake, and it was time for me to just accept that and finally do something about it.

I did quickly purchase a new power supply just so I could get access to my data again. Along with that I started making plans for what would ultimately replace the Synology.

Synology was a reasonable choice for me to consider two years ago, but I know now that it's just not able to do what it advertises.


Synology DS920+ Final Impressions

I don't want this conclusion to read as a strong recommendation against buying Synology hardware. I am sure their offerings have a target audience. I'd just suggest you make absolutely certain you are in that target audience before biting the bullet. If you're not, you might end up like me and spend more time and effort on it than a custom-built NAS would have required.

I finished this write-up a few months after I had already sold the Synology. I'd like to follow this post up with what I ended up going with as my replacement for the Synology, in case this can be helpful to some of you out there.

I can already tell you that these past few months of using my custom built NAS have been so much more enjoyable than dealing with the Synology ever was. I should've done this two and a half years ago. Hindsight.

]]>
<![CDATA[Install RPM Fusion that automatically stays up-to-date alongside future Fedora Silverblue releases]]>I'd like to show a slightly Inception looking way you can install RPM Fusion in a way that you don't have to worry about needing to remove-and-reinstall it with every Fedora update.

The issue lies with the "normal" method of installing RPM Fusion being

]]>
https://davejansen.com/install-rpmfusion-automatic-updated-fedora-silverblue/6478108a0632360001532120Thu, 01 Jun 2023 03:40:45 GMT

I'd like to show a slightly Inception looking way you can install RPM Fusion in a way that you don't have to worry about needing to remove-and-reinstall it with every Fedora update.

The issue lies with the "normal" method of installing RPM Fusion being specific to the currently installed version of Fedora. We'll remove this specificity by relying on the repository packages the RPM Fusion repository itself actually provides.

First, let's install both RPM Fusion free and nonfree repositories using their recommended way, if you've not done so already:

❯ sudo rpm-ostree install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
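
If you're curious what the $(rpm -E %fedora) part does: it expands the %fedora RPM macro to your current release number, which is what makes those package URLs version-specific in the first place. Below is a rough, illustrative equivalent that reads the number from os-release instead; on Fedora itself, rpm -E %fedora remains the authoritative way:

```shell
# Roughly what $(rpm -E %fedora) resolves to: the OS release number.
# Illustrative only – on Fedora, `rpm -E %fedora` itself is authoritative.
[ -r /etc/os-release ] && . /etc/os-release
echo "${VERSION_ID:-unknown}"
```

On a Fedora 39 system, for example, both approaches would print 39.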

Next, reboot your system. This is a required step.

Now, after rebooting, we'll both remove the specific package version we just installed and install the "automatic version matching" versions available from the RPM Fusion repositories, all in one fell swoop:

❯ sudo rpm-ostree update --uninstall $(rpm -q rpmfusion-free-release) --uninstall $(rpm -q rpmfusion-nonfree-release) --install rpmfusion-free-release --install rpmfusion-nonfree-release

And that's it. From now on, your RPM Fusion repositories will correctly update alongside your Fedora updates. Pretty neat, right?

I hope this helps!

]]>
<![CDATA[Hardware-accelerated video playback in Fedora Silverblue]]>More recently, Fedora removed support for hardware-accelerated playback of H.264 and H.265 content. This was done due to the need to avoid licensing complications (and costs) that surround these codecs. While understandable, this is less desirable to those using Fedora and wanting smooth video playback, especially on more

]]>
https://davejansen.com/fedora-silverblue-hardware-accelerated-h264-video-playback/6474695f3c41a200011c2194Mon, 29 May 2023 11:36:38 GMT

More recently, Fedora removed support for hardware-accelerated playback of H.264 and H.265 content. This was done due to the need to avoid licensing complications (and costs) that surround these codecs. While understandable, this is less desirable to those using Fedora and wanting smooth video playback, especially on more limited hardware.

Option one: Flatpak Firefox

If you are not dependent on any extension that does not work with a Flatpak-installed Firefox (e.g. 1Password), this might be the very best route to take, as it's very easy to get up and running. Simply install Firefox through the Software application, or through the command line:

❯ flatpak install org.mozilla.firefox

Or if you prefer to specifically install the version provided by Flathub:

❯ flatpak install flathub org.mozilla.firefox

That's it, you'll now have Firefox with proper hardware acceleration support. If you'd like to remove the system-installed version, you can do this with an override like so:

❯ sudo rpm-ostree override remove firefox

Option two: OpenH264

One way to get back (somewhat limited) H.264 video playback functionality is by installing the mozilla-openh264 package, available from the fedora-cisco-openh264 repository. Cisco foots the bill for the legal side of using this codec. Note that this only adds support for H.264 video playback in Firefox; it does not enable hardware-accelerated H.264 playback system-wide.

The fedora-cisco-openh264 repository is included with Fedora, but disabled by default. We'll have to enable it first by editing /etc/yum.repos.d/fedora-cisco-openh264.repo using your favorite text editor, and setting enabled=1.
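
If you'd rather not open an editor, a one-liner with sed can flip that flag too. The snippet below demonstrates the substitution against a throwaway scratch file; on a real system you would target /etc/yum.repos.d/fedora-cisco-openh264.repo with sudo sed -i instead:

```shell
# Demonstrate the enabled=0 -> enabled=1 substitution on a throwaway copy.
repo=$(mktemp)
printf '[fedora-cisco-openh264]\nenabled=0\n' > "$repo"
sed -i 's/^enabled=0$/enabled=1/' "$repo"
grep '^enabled=' "$repo"   # prints: enabled=1
```

On the real file, that would be: sudo sed -i 's/^enabled=0$/enabled=1/' /etc/yum.repos.d/fedora-cisco-openh264.repo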

Once this is done, you can install it like so:

❯ sudo rpm-ostree install mozilla-openh264 gstreamer1-plugin-openh264
💡
If you're using Fedora 40, you now also have to remove a package called noopenh264. You can do this in one step like so:

❯ sudo rpm-ostree override remove noopenh264 --install openh264 --install gstreamer1-plugin-openh264 --install mozilla-openh264

Reboot once done. Then, open Firefox and enable the "OpenH264" plugin.

That should be it. Give it a try and see how that works.

Option three: Install full ffmpeg

If the above is not adequate for your needs, or if you just want to enjoy the full functionality offered by ffmpeg, we can replace the ffmpeg-free variant Fedora ships with these days with the full one provided by RPM Fusion.

If you've not done so already, please check out my RPM Fusion installation guide on how to install RPM Fusion in a way that works alongside future Fedora Silverblue releases.

Install RPM Fusion that automatically stays up-to-date alongside future Fedora Silverblue releases
I’d like to show a slightly Inception looking way you can install RPM Fusion in a way that you don’t have to worry about needing to remove-and-reinstall it with every Fedora update. The issue lies with the “normal” method of installing RPM Fusion being specific to the currently installed version
Hardware-accelerated video playback in Fedora Silverblue

Installing ffmpeg and mesa-va-drivers-freeworld

Alright, back to ffmpeg. We need to remove some included base packages and replace these with ffmpeg. In order to do this, we'll need to rely on rpm-ostree's override functionality, like so:

❯ sudo rpm-ostree override remove mesa-va-drivers libavcodec-free libavfilter-free libavformat-free libavutil-free libpostproc-free libswresample-free libswscale-free --install ffmpeg --install mesa-va-drivers-freeworld
💡
If you're using Fedora 40, make sure to also remove the noopenh264 package.

Reboot, and you should now be able to enjoy fully hardware-accelerated video in Firefox and other system-installed applications that may support it.

Closing thoughts

It'd certainly be nice if we didn't have to jump through these hoops to get hardware-accelerated video playback working (again), but the real issue is that these codecs, which basically became the de facto standard on the internet, are so patent-encumbered. We collectively should never have jumped on the H.264 bandwagon the way we did.

But, at the time, it offered all kinds of interesting benefits over the open alternatives available. Or at least that's what the big companies building GPUs and web browsers thought.

These days our hope is aimed directly at AV1, a codec that seems to offer even greater compression, playback quality and more, all while not being patent encumbered.

But until bigger players like YouTube fully adopt this codec, our choices as end-users remain limited. Fortunately we do have a few options available to us, so it's not all bad.

I hope this helps.

Enjoy!

]]>
<![CDATA[Delayed starting of Docker service until expected mounts are available]]>(Or for any other systemctl controlled service, for that matter)

If you've ever run into the situation where you reboot your computer (or virtual machine) and the Docker service kicked in a bit too fast, even before some of your fstab defined mounts are fully available, you might…

]]>
https://davejansen.com/systemctl-delay-start-docker-service-until-mounts-available/647482723c41a200011c2303Mon, 29 May 2023 11:09:58 GMT

(Or for any other systemctl controlled service, for that matter)

If you've ever run into the situation where you reboot your computer (or virtual machine) and the Docker service kicked in a bit too fast, even before some of your fstab-defined mounts were fully available, you might've noticed that some of your containers don't know how to deal with their expected data not being available right from the get-go. A restart of the containers usually fixes this, but doing that manually every time is rather tedious.

💡
2024-06-28 Update: A previous version of this guide mistakenly described adding RequiresMountsFor to the [System] section, which is incorrect as it belongs to the [Unit] section. Thank you very much to the kind reader who emailed to let me know about this mistake!

Fortunately there is a way to fix this by making the docker service wait to start until all desired mounts are ready and available.

This is achieved by adding a docker.service configuration override that adds a prerequisite. This override is added in such a way that subsequent Docker updates won't inadvertently undo it. Let's get started.

First, open up your favorite terminal and run the following command on the system that runs your docker service:

❯ sudo systemctl edit docker.service

Your favorite text editor (or whatever is defined in the $EDITOR environment variable, anyway) will automatically open, with the entire docker.service configuration listed out in a comment section. Reading through the comments, you'll see that your additions need to go between two comment markers near the top.

Add the following in-between these two comments (comments are shown below as a guide, please don't include them in your own config file), substituting the example /mnt/foo and /mnt/bar paths with a space delimited list of all mount paths you want to wait for before starting the docker service:

### Anything between here and the comment below will become the new contents of the file

[Unit]
RequiresMountsFor=/mnt/foo /mnt/bar

### Lines below this comment will be discarded

When that's done, save and close your text editor and that's it! Next time you reboot your system, the Docker service will wait to start until after the mount(s) you have defined here have become available.
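
For what it's worth, systemctl edit saves your additions as a drop-in file rather than modifying the real unit, which is why Docker updates won't undo it. On most systems the result lands in a path like the one below (the mount paths are the same examples as above):

```ini
# /etc/systemd/system/docker.service.d/override.conf
[Unit]
RequiresMountsFor=/mnt/foo /mnt/bar
```

You can confirm systemd picked it up with systemctl cat docker.service, which prints the unit together with any drop-ins.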

(Optional) Adding a timed delay

In some cases you may want to add an additional delay before starting a certain service. This is easily achieved by adding a sleep command with the desired number of seconds to wait before the actual start of the service, using ExecStartPre, like so:

[Service]
ExecStartPre=/bin/sleep 60

Make sure to use the full path to where the sleep command exists on your particular system. Common locations are /usr/bin/sleep and /bin/sleep, but this can vary by distribution. The easiest way to check is to run which sleep and see what it returns.
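
As an alternative to which, the POSIX built-in command -v does the same lookup and works in any shell:

```shell
# Print the full path of the sleep binary on this system.
command -v sleep
```

Whatever path this prints is the one to use in your ExecStartPre line.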

This can be used in combination with the mount wait. Normally this wouldn't be something you'd likely need, but it's a handy option to be aware of should you ever need it.


I hope this helps!

]]>
<![CDATA[TrueNAS Fix: Cannot mount 'directory': failed to create mountpoint]]>If you've at any point used TrueNAS' replication functionality to move or copy data from one dataset to another, you may have run into an issue later on where you're suddenly no longer able to create additional datasets in the root of the

]]>
https://davejansen.com/truenas-fix-cannot-mount-directory-failed-to-create-mountpoint/646192d23c41a200011c211dMon, 15 May 2023 02:17:37 GMT

If you've at any point used TrueNAS' replication functionality to move or copy data from one dataset to another, you may have run into an issue later on where you're suddenly no longer able to create additional datasets in the root of the destination pool.

This was a bit of a head scratcher for me, as there was no apparent setting or flag that could be causing this. But here is one way this might happen, and fortunately the fix is quite easy.

The issue is related to the default settings of replication tasks wanting to make the destination dataset read-only. Not only does this mark the destination dataset as read-only, it also seems to set the immutable file attribute on the pool's root dataset. The former is easy to fix, as TrueNAS has the ZFS read-only option exposed through its GUI, but the latter is not.

💡
This guide was written with TrueNAS Scale in mind. I am not sure if the same exact steps apply with TrueNAS Core too.

To check if this is the issue you are facing, SSH into your TrueNAS Scale machine (Or use the System Settings > Shell option through its Web UI), and run the following command:

❯ lsattr /mnt

If any of the pools show an i in the results, their immutable flag has indeed been set. Here's an example of what that might look like:

❯ lsattr /mnt
---------------------- /mnt/Acacia
---------------------- /mnt/Willow
----i----------------- /mnt/Cottonwood

Removing this flag is as easy as running the following command, substituting the mount path to your own pool of course:

❯ sudo chattr -i /mnt/Cottonwood

And that's it! You should now be able to create additional datasets in this pool, just like before. If the dataset that originally threw this error is still listed and throws errors when you try to select it, be sure to delete it first, then re-create it.

]]>
<![CDATA[Using 1Password with Visual Studio Code Remote SSH and Dev Containers on Linux]]>1Password is a great and powerful tool that helps you use better, more secure passwords for all the accounts, services, and tools you may use.

In October 2020, a beta version of 1Password for Linux was announced, which I was very excited about. 1Password was one of the few tools

]]>
https://davejansen.com/linux-1password-ssh-git-signing-vscode-dev-containers/63467c71ddc4ab000144ee78Thu, 08 Dec 2022 05:19:17 GMT

1Password is a great and powerful tool that helps you use better, more secure passwords for all the accounts, services, and tools you may use.

In October 2020, a beta version of 1Password for Linux was announced, which I was very excited about. 1Password was one of the few tools that I really missed after switching to Linux full-time, so this was music to my ears.

I was using 1Password X up until this point, their browser extension that worked fully independently but had a limited feature set. As soon as all the features I needed were in place in the beta (like creating/editing entries, which wasn't actually there initially), I switched over to their app, and I've been using it ever since.


In March of 2022, 1Password announced new features: SSH and Git support, built right into 1Password. This lets you add your SSH key(s) to 1Password, and configure 1Password as your authentication agent for both Git and SSH. With this you can easily sign your Git commits and log in to machines through SSH with the keys stored in your 1Password. What's more, these brand new features were also instantly available in their Linux client. How cool is that?

Using 1Password with Visual Studio Code Remote SSH and Dev Containers on Linux

Setting this up is very easy, as the 1Password app tells you which configuration files to modify, and what to add. It can even do this for you with the click of a button. It's very straightforward.

If you, like me, use Visual Studio Code with the Remote Development extensions, a few tweaks are needed, but after this you can use this functionality right from within your dev containers even and/or while using remote development servers.

Let's walk through what's needed to make this work.

💡
One important note; As of this writing, the Flatpak release of 1Password sadly does not support this new SSH/Git functionality. While you can store your SSH keys in 1Password, you cannot configure 1Password to be used as the authentication agent. If you want to use this functionality, you'll have to install 1Password directly.

Checklist

Let's make a note of what our current environment looks like:

  • Your Linux distro of choice is up-to-date and supported by 1Password,
  • Your system has Git version 2.35 or newer installed,
  • You have 1Password installed directly, not the Flatpak version,
  • You have Visual Studio Code installed directly, not the Flatpak version, along with the Remote Development extensions, and;
  • You have at least one SSH key added to your 1Password vault.

Alright, let's get started.

Configuring 1Password

In your 1Password application, click on the three dotted menu button (top-left, right next to where it shows your current account selection), and select "Settings..." (or ctrl+, by default). Click on the Developer tab, and enable Use the SSH agent, and Connect with 1Password CLI. Don't let 1Password add its configuration bits when it asks you to, we'll do this manually (and slightly differently) in the next step.

I also recommend enabling Unlock using system authentication service in the Security tab, which will let you unlock 1Password with your system account's password, or something like a fingerprint sensor if you have one that is configured correctly with your system.

Configuring Git

Next, open up a terminal window and run the following command:

❯ git config --global gpg.format ssh

If you are planning on always signing your commits, you can set that right away:

❯ git config --global commit.gpgsign true
💡
If you've not done this already, now might also be a good time to set your default name and email address, using git config --global user.name "Your Name" and git config --global user.email "[email protected]".

In order to have Git sign your commits with the correct key, we'll set this now, too. Be sure to have already added this key to where you host your repositories (e.g. Github, Gitlab, etc.).

Normally you might point to a key file stored on your system, but in our case we instead set it to the public key of the key you'll be using, as saved in your 1Password vault. The 1Password agent will use this and take care of the rest.

Using 1Password with Visual Studio Code Remote SSH and Dev Containers on Linux

Find your key in 1Password, copy its public key, and then run the following command with this public key pasted in:

❯ git config --global user.signingkey "your public key"

If all went well, your ~/.gitconfig file should now look a little something like this:

[user]
	name = Dave Jansen
	email = i.am at this domain
	signingkey = ssh-ed25519 xxxxxxxxxxxxxx

[gpg]
	format = ssh

[commit]
	gpgsign = true
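
One optional extra, in case you also want Git to be able to verify these SSH signatures locally (e.g. via git log --show-signature): Git supports an "allowed signers" file for this. The email address and public key below are placeholders; substitute your own:

```shell
# Sketch with a placeholder email and public key: register trusted signers
# so Git can verify SSH-signed commits locally.
mkdir -p ~/.ssh
printf '%s\n' '[email protected] ssh-ed25519 AAAAC3...yourpublickey' >> ~/.ssh/allowed_signers
git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
git config --global --get gpg.ssh.allowedSignersFile   # prints the stored path
```

Without this file configured, Git will tell you it has no way to check signature validity when showing signed commits.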

Configuring SSH

To have SSH use 1Password as the authentication agent, you'll have to point the IdentityAgent setting to 1Password. In my case I want to use the 1Password agent for multiple hosts I connect with, so I added the option to all (wildcard) hosts in my ~/.ssh/config file (creating this file if it doesn't already exist). You can limit this to only certain hosts if you prefer. Either way, it should look something like this:

Host *
	IdentityAgent ~/.1password/agent.sock
    
Host my-vscode-server-box
	ForwardAgent yes

Note that I have configured ForwardAgent for certain hosts. This is not necessary, but can be helpful as it allows you to make use of 1Password even when working on a remote (dev) machine. For example, I can sign commits and access other remote machines from the development virtual machine hosted on my TrueNAS box. Very handy.

Visual Studio Code has this enabled by default, too. Refer to remote.SSH.enableAgentForwarding.

💡
If you have previously configured your ~/.ssh/config file to have specific keys for specific hosts and want to start using 1Password for these, too, you'll have to update these references to point to the public key instead, and have the actual key stored in 1Password. Unfortunately it does not seem like the ssh configuration file supports writing out these public keys in-line, so you'll have to store these public keys on your system, and reference them here.

Configuring your system and Visual Studio Code

The default configuration for VSCode's remote containers is actually quite good, with its defaults already aligning with what we want to achieve.

We just need to set one environment variable on a system-wide level (or just for Visual Studio Code, if you prefer) called SSH_AUTH_SOCK, which should also point to 1Password.

How you set this environment variable depends on your specific environment. If you're using the (usually default) bash environment, you can add the necessary bits to your ~/.bash_profile file like so:

❯ echo 'export SSH_AUTH_SOCK="$HOME/.1password/agent.sock"' >> ~/.bash_profile

Or if you're using Fish, you can add it as follows:

❯ set -gx SSH_AUTH_SOCK $HOME/.1password/agent.sock

If you are using another environment, please refer to its documentation to find the recommended way to set environment variables. As shown above, you'll want to set the variable called SSH_AUTH_SOCK to have the value of $HOME/.1password/agent.sock.

With this in place, you might have to encourage a reload of environment variables, or just log out and back in to have these changes take effect.
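
A quick way to sanity-check the variable in a new terminal. The ssh-add line assumes 1Password is installed and unlocked, so it's left commented out here:

```shell
# Confirm the environment variable points at the 1Password agent socket.
export SSH_AUTH_SOCK="$HOME/.1password/agent.sock"
echo "$SSH_AUTH_SOCK"
# ssh-add -l   # with 1Password running, this should list your vault's SSH keys
```

If ssh-add -l lists the keys you added to 1Password, everything downstream (Git signing, VSCode remotes) should work too.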

Using 1Password with Visual Studio Code Remote SSH and Dev Containers on Linux

With all these changes in place, you should be able to use 1Password for signing Git commits (and SSH into remote servers) from within Visual Studio Code, even when using remote dev containers. Neat!


As you can probably tell, these configurations are not limited to just Visual Studio Code. You'll be able to sign Git commits via the command line too – or indeed from any GUI application that correctly supports the relatively new SSH signing format. I personally ended up adopting the GitLens Visual Studio Code extension into my workflow these days, which works quite nicely with this configuration.

I really like how powerful 1Password's SSH and Git signing functionality is. It lets me more easily log in to the servers I manage for clients, even if I'm not behind my main machine, as 1Password keeps everything nice and synchronized. And with the bits in place, I can do all this even within a dev container, which is really nice.

That's it for this one. If this ended up being useful for you, I would love to hear about it. If you have any recommendations for improvements, I'd love to hear that, too.

Thank you.

]]>
<![CDATA[Correctly set resolution, scale, orientation, and display placement at login with GNOME]]>If you have a multi display setup, make use of scaling, or have a display that defaults to its non-native resolution, you might have run into the situation where your GNOME login screen is not showing up the way you want it to.

While there is no built-in GUI option

]]>
https://davejansen.com/correct-resolution-scale-orientation-placement-at-login-with-gnome/6386e799853b5100015fe5a8Wed, 30 Nov 2022 06:04:07 GMT

If you have a multi display setup, make use of scaling, or have a display that defaults to its non-native resolution, you might have run into the situation where your GNOME login screen is not showing up the way you want it to.

While there is no built-in GUI option to do this, the solution is fortunately quite simple. You just need to copy a single configuration file.

💡
Mazhar Hussain has developed a very nice looking application called GDM Settings that lets you customize a whole bunch of settings, including applying your current user's display settings system-wide (Display > Apply current display settings). It's available as an AppImage through its website and as a Flatpak via Flathub.

Before you copy this file, make sure you set up your display(s) exactly how you want them by logging in to your account and opening up the Displays section in Settings. Configure everything the way you want it to be now before proceeding.

Correctly set resolution, scale, orientation, and display placement at login with GNOME

One note for those of you who might frequently connect and disconnect additional monitor(s): be sure to configure everything the way you like at every stage, ensuring your resolution, scaling, and placement are correct both when no additional displays are connected, as well as when they are. All these individual states are preserved in your monitor configuration, so it's good to do this now.


💡
A reader wrote in to let me know that the GDM configuration path may vary depending on your distribution. I've updated my guide to reflect this.

The default path for the GDM configuration directory is /var/lib/gdm/.config, but some distributions may use /var/lib/gdm3/.config instead. Please check what your distribution/configuration uses by checking if your system has the /var/lib/gdm or /var/lib/gdm3 directory.

Once you've established which path your particular system uses, you can now proceed with the command relevant to your particular system.

If your system uses the /var/lib/gdm directory

❯ sudo mkdir -p /var/lib/gdm/.config && sudo cp ~/.config/monitors.xml /var/lib/gdm/.config/monitors.xml

If your system uses the /var/lib/gdm3 directory

❯ sudo mkdir -p /var/lib/gdm3/.config && sudo cp ~/.config/monitors.xml /var/lib/gdm3/.config/monitors.xml

What this does: The first part creates the .config folder if it doesn't already exist. Then the second part copies your user's monitors.xml configuration file over to where your system looks for the system-wide default.

Correctly set resolution, scale, orientation, and display placement at login with GNOME

From now on your system will use whatever resolution(s), orientation, scaling, and display placement you have configured at the login screen, too. Nice, right?


While I'm not sure why this isn't something that's handled more gracefully out of the box, at least it's pretty easy to configure by yourself.

Enjoy!

]]>
<![CDATA[Enable webp image support on Fedora]]>I was surprised to find out just last week that webp images are basically universally supported by web browsers these days, which is something I somehow completely missed up until now. And so last week I went through my entire blog post history and manually replaced each photo and screenshot

]]>
https://davejansen.com/enable-webp-support-fedora-nautilus-preview/62b2d2c77cd7ee00013a2d38Wed, 29 Jun 2022 03:34:24 GMT

I was surprised to find out just last week that webp images are basically universally supported by web browsers these days, which is something I somehow completely missed up until now. And so last week I went through my entire blog post history and manually replaced each photo and screenshot with a webp optimized version (with the exception of a few animated GIFs as I didn't want to recreate those).

While doing this I realized that out of the box Fedora does not actually support showing webp image thumbnails in the file browser (Nautilus) nor does Image Viewer support viewing them.

Fortunately, the fix is very easy; you just need to install a single package:

❯ sudo dnf install webp-pixbuf-loader

Now just restart Nautilus, and that's it!

❯ nautilus -q

You should immediately find webp image thumbnails show up, and Nautilus' preview (ie. pressing space with a file selected) and Image Viewer both working.

Neat!


Fedora Silverblue

Unfortunately there is no straight-forward way to get webp images to fully work under Fedora Silverblue as of this moment.

While Nautilus comes with support for it out of the box as of Fedora Silverblue 37 (so thumbnails show up right), both Sushi (The Nautilus Previewer – what you see when pressing spacebar in Nautilus with a file selected) and Eye of GNOME (Image Viewer) do not.

The latter is supposed to have this built-in by now, but Fedora's build as of this writing (2023-04-05) does not seem to actually have this working. Flathub's release does work already, so you can remove Fedora's build and switch over to Flathub's:

❯ flatpak install --system --reinstall flathub org.gnome.eog

As of 2023-04-05, Sushi (Nautilus Previewer) does seem to now correctly support webp, so nothing has to be done there. Nice!

]]>
<![CDATA[Disable Pipewire Suspend on Idle to avoid audio pops, delays, and white noise]]>In case you're experiencing a sound pop whenever sound is played after a certain amount of time – or a little while after sound has played. Or perhaps you have more sensitive hearing or headphones/speakers and can notice white noise coming from your computer after a

]]>
https://davejansen.com/disable-wireplumber-pipewire-suspend-on-idle-pops-delays-noise/6277374d6adb9400017a6cfaWed, 22 Jun 2022 06:07:59 GMT

If you experience a sound pop whenever sound is played after a certain amount of time – or a little while after sound has played – or perhaps you have more sensitive hearing or headphones/speakers and notice white noise coming from your computer after a while, it might be worth disabling Pipewire's "Suspend on Idle" feature, which is enabled (and set to 5 seconds) by default.

While there's unfortunately no way of doing this through a GUI, it is fortunately not too challenging to do. Let's get started.

Note: This quick guide is written with the assumption that your system is already fully configured to use Pipewire and Wireplumber for audio. This is the default for Fedora (Since 35 I believe), and an increasing amount of other flavor distributions are switching over too. If your setup is not yet using Wireplumber, this guide might not be for you.

Configuring Wireplumber is mostly done through Lua scripts, so that's what we'll do here too. Let's start by copying an existing ALSA config script over to the right place, after which we can make our own modifications to it:

❯ sudo cp -a /usr/share/wireplumber/main.lua.d/50-alsa-config.lua /etc/wireplumber/main.lua.d/50-alsa-config.lua

When we open up the file we just copied over using your favorite text editor, you'll find something that looks like this (note: I have abbreviated parts of the file contents for legibility):

alsa_monitor.enabled = true

alsa_monitor.properties = {
  -- (...abbreviated for legibility...)
}

alsa_monitor.rules = {
  
  -- (...abbreviated for legibility...)
  
  {
    matches = {
      {
        -- Matches all sources.
        { "node.name", "matches", "alsa_input.*" },
      },
      {
        -- Matches all sinks.
        { "node.name", "matches", "alsa_output.*" },
      },
    },
    apply_properties = {
      --["node.nick"]              = "My Node",
      --["priority.driver"]        = 100,
      --["priority.session"]       = 100,
      --["node.pause-on-idle"]     = false,
      --["resample.quality"]       = 4,
      --["channelmix.normalize"]   = false,
      --["channelmix.mix-lfe"]     = false,
      --["audio.channels"]         = 2,
      --["audio.format"]           = "S16LE",
      --["audio.rate"]             = 44100,
      --["audio.allowed-rates"]    = "32000,96000"
      --["audio.position"]         = "FL,FR",
      --["api.alsa.period-size"]   = 1024,
      --["api.alsa.headroom"]      = 0,
      --["api.alsa.disable-mmap"]  = false,
      --["api.alsa.disable-batch"] = false,
      --["session.suspend-timeout-seconds"] = 5,  -- 0 disables suspend
    },
  },
}

Any line that starts with -- is commented out. Basically this configuration file we just copied over shows what all the default settings are in a way that you can easily un-comment-out the line you care about and modify its value to whatever you want it to be instead of its default.

At the very end of this file, you'll notice the session.suspend-timeout-seconds value that, by default, is set to 5 seconds. There's a note there that already mentions what we want to achieve, so let's un-comment that line and change its value to read 0 instead:

alsa_monitor.rules = {
  
  -- (...abbreviated for legibility...)
  
  {
    apply_properties = {
      -- (...abbreviated for legibility...)
      ["session.suspend-timeout-seconds"] = 0, -- default is 5
    },
  },
}
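If you prefer making this edit from the terminal, a sed substitution can un-comment that line and set it to 0 in one go (a sketch – `disable_suspend` is a made-up helper name, and you should double-check the file afterwards):

```shell
# Sketch: un-comment the suspend-timeout line and change its value to 0.
# Takes the path of the config file to edit as its only argument.
disable_suspend() {
  sed -i 's/--\(\["session.suspend-timeout-seconds"\]\) *= *5,/\1 = 0,/' "$1"
}
```

On a real system you'd run it (with sudo) against /etc/wireplumber/main.lua.d/50-alsa-config.lua.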

And that should do it. You can now restart the wireplumber service and it should immediately pick up on this new configuration:

❯ systemctl --user restart wireplumber

That's it! Not too bad, right? If you've been dealing with these kinds of audio issues, I hope this can help you solve those problems.

Thank you.

]]>
<![CDATA[Synology Surveillance Station Client on Linux]]>If you're using Synology Surveillance Station with security cameras that use H.265 and/or AAC encoding and are running Linux, you might have run into the issue of not being able to actually view your camera feeds in the Surveillance Station web view. Even if you have

]]>
https://davejansen.com/synology-surveillance-station-on-linux/627de4d56adb9400017a6d69Fri, 27 May 2022 09:59:49 GMT

If you're using Synology Surveillance Station with security cameras that use H.265 and/or AAC encoding and are running Linux, you might have run into the issue of not being able to actually view your camera feeds in the Surveillance Station web view. Even if you have configured your browser to fully support these codecs, Synology's web view just won't let you see them.

Synology Surveillance Station Client on Linux

Instead you are told to download their desktop app which, of course, is only available for Windows. Fortunately we have access to fantastic tools these days that let us fix these kinds of situations. We'll be using Bottles to install and use the Synology Surveillance Station Client application.

Before you proceed, please make sure you have Bottles installed and ready to use on your system.

Prerequisites

  • Bottles installed and ready to use on your computer
  • Synology Surveillance Station Client for Windows (64-bit, installer) installation exe downloaded (from here)
  • H.265 and/or AAC enabled on your NAS through the Advanced Media Extensions extension. This is only needed if your cameras actually use these codecs, of course.

Enabling H.265 and/or AAC Codecs on your NAS

With DSM 7.0 onward, Synology changed the way these codecs are being handled, likely due to them changing the way they want to pay for licenses for these codecs. So if your cameras use either or both of these codecs, we have to make sure they are enabled first.

Log in to your Synology NAS using its web UI, and open up Package Manager. In there, look for the Advanced Media Extensions extension. If you didn't already install it previously, do so now.

Synology Surveillance Station Client on Linux

Click the Open button and a pop-up will show, telling you which of these codecs are enabled. If either or both of them are listed as disabled, enable them now.

That's all we need to do here. You can log out of your NAS and proceed with the installation.


Synology Surveillance Station Client on Linux

Set up a bottle

In case you're not familiar with the bottles concept, each application (or game) you wish to use is installed in its own section, away from whatever other applications you might want to install. This way you can ensure each of these applications has the exact prerequisites it needs, without them potentially conflicting with other applications' specific requirements.

So, let's set up a bottle for the Surveillance Station application.

Synology Surveillance Station Client on Linux

In my case I have simply called it Synology, but you're free to call it whatever you like. Make sure to select the Application environment, then click Create in the top-right corner.

Synology Surveillance Station Client on Linux

Now with this new bottle created, click on it in the list and you'll be greeted with a sidebar showing several sections. Head on over to the Preferences section first, and disable the Use DXVK option. This is key, as the client application will otherwise fail to start correctly.

Synology Surveillance Station Client on Linux

Install the application

Now head back to the Details & Utilities section, and click the Run Executable button. A browse modal will show up. Use this to find the Surveillance Station Installation file you have downloaded previously. Make sure that it's the installer (the .exe version), not the "portable" variant.

Synology Surveillance Station Client on Linux

Walk through the installation steps as you normally would. Installation should be nice and quick.

Once the installation completes and you left the checkboxes at the end enabled, you'll now be greeted with the Synology Surveillance Center application login window.

Synology Surveillance Station Client on Linux

And that's all there is to it. You can now log in and use the app as normal, with full H.265 and AAC support.

The next time you want to launch this application, just start up Bottles and select the Synology bottle you have just created. There should now be a new option called Synology Surveillance Station Client under the Programs header. Click the play button on its right-hand side, and the app will launch straight away.

Synology Surveillance Station Client on Linux

Closing thoughts

It's a bit unfortunate that we can't just use the web version, but at least it's fairly straight-forward to install and use the desktop application using wonderful tools like Bottles, and underlying technologies like WINE that make it all possible.

I hope this helps you.

Thank you.

Synology Surveillance Station Client on Linux
]]>
<![CDATA[Use git patch to update a deployed project or website]]>Whether you're maintaining an older client website or you have a side project that you just can't afford to buy fancier services for, there will be times where you have to manually deploy an update. For those cases, git's built-in patch functionality might offer

]]>
https://davejansen.com/use-git-patch-to-update-deployed-project/6268b1ef6adb9400017a6c64Wed, 27 Apr 2022 03:31:33 GMT

Whether you're maintaining an older client website or you have a side project that you just can't afford to buy fancier services for, there will be times where you have to manually deploy an update. For those cases, git's built-in patch functionality might offer a fantastic benefit over manually replacing individual files. Let's take a look.

Git offers a really easy way to generate a patch file that contains any and all differences found between two branches. So when you need to make some changes to a site, simply create a branch and make the changes as usual. Then when the time comes to deploy, before you merge and delete this new branch, run the following command to generate a patch file:

❯ git diff main maintenance-branch-name > maintenance-branch.patch

Where main is the name of your main branch and maintenance-branch-name is, you guessed it, the name of the branch you made your changes in.

💡
While the example covers branch to branch comparisons, you can actually also compare branches to specific commits, or specific commits to other specific commits, by referencing their specific hash identifiers. Give it a try :).

In certain cases the project repository might also include other files that you don't need to deploy, like package.json or build artifacts or so. You can exclude any number of files or directories when creating a patch file. Here's an example of this:

❯ git diff main maintenance-branch-name -- . ':!*package.json' ':!*.vscode/*' ':!*.devcontainer/*' > maintenance-branch.patch

In this example I'm excluding package.json, anything inside the .vscode directory, as well as anything inside the .devcontainer directory. You can repeat this for each file or directory you wish to exclude.


Now that you have a patch file created, you can upload it to where the project is hosted and apply it. Use your favorite method of uploading, whether that's scp or even plain old ftp; just be sure to place the file in the same directory as the project itself.

On the server we don't actually need git to apply the patch. We'll instead use the patch command, which comes preinstalled on many Linux distros. If your server lacks this tool, check your specific flavor's repository for the relevant package. In many cases this package is simply called patch, but there might be some flavor specific variations here.

Once you have the file on the server and patch ready to go, go to the directory where the project is located using ssh. You can run the following command to do a test run, meaning it'll not actually make any changes to the files but might report back if something is wrong:

❯ patch -p1 --dry-run < maintenance-branch.patch

When you're confident everything looks OK, simply remove the --dry-run part of the above command to apply the changes for real:

❯ patch -p1 < maintenance-branch.patch

Do you have to revert these changes for whatever reason? That's super easy now too using the following command:

❯ patch -p1 -R < maintenance-branch.patch

Pretty neat, right?
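If you'd like to see the whole cycle in action before trying it on a real project, here's a self-contained sketch you can run in a throwaway directory (all names and file contents are made up for illustration; it assumes git 2.28+ and the patch utility are installed):

```shell
set -eu

# Work in a throwaway directory so nothing real is touched
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email demo@example.com
git config user.name Demo

# Baseline "deployed" state
echo "v1" > index.html
git add index.html
git commit -qm "initial"

# Make a change on a maintenance branch and export it as a patch
git switch -qc maintenance-branch-name
echo "v2" > index.html
git commit -qam "update"
git diff main maintenance-branch-name > maintenance-branch.patch

# "On the server": apply the patch against the old state, then revert it
git switch -q main
patch -p1 --dry-run < maintenance-branch.patch
patch -p1 < maintenance-branch.patch
grep -q v2 index.html   # change applied
patch -p1 -R < maintenance-branch.patch
grep -q v1 index.html   # change reverted
```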


It would of course be nice if there was an automated deployment system in place, but that isn't always feasible for all projects. Real life and all its limitations, and all that.

If you have to deal with this kind of situation sometimes, maybe give this method a try next time you have to update a project or website.

I hope it'll help.

Thank you.

]]>
<![CDATA[Configure Fedora with full root level snapshot support]]>https://davejansen.com/fedora-root-snapshot-support/617fc49b06eab1000148a80dMon, 18 Apr 2022 05:28:24 GMT

Whether you've been introduced to Fedora by watching recent YouTube videos on it, have known about it for some time and it's finally time to give it a real try, or you've used it in the past and are eager to get back to all that it offers, this guide might be helpful to you if you're looking to set up BTRFS snapshot support.

Fedora is a fantastic choice with a lot going for it. Its out-of-the-box experience is wonderful, with most everything just working. It's a really nice choice for people who want their machine to work and don't want to spend potentially large amounts of time tinkering and bug fixing.

With that said, Fedora's out of the box BTRFS configuration unfortunately leaves a bit to be desired. This is what this guide sets out to solve.

⚠️
2025-10-20: A kind reader has reached out to let me know that with the introduction of DNF5, the existing snapper dnf plugin no longer works. Automated pre- and/or post- snapshots will not work out of the box, unless an updated solution is created (reference).

As I have not used this particular setup myself in years now (I have fully moved over to Silverblue), I don't currently have plans to work on an updated version of this guide. Please keep this in mind when proceeding. Apologies for this.
💡
2023-05-29: This guide has been slightly updated with some additional notes about the /boot partition and the potential caveats of restoring (very) old snapshots whose kernel versions are no longer present on your system, and a mention that I am personally no longer using this setup as I have fully switched over to Fedora Silverblue.

Snapshots?

If you're less familiar with what snapshots are or how they can be used; they basically allow your system to make a moment-in-time capture of your drive or directory, allowing you to revert back to that state at a later point if needed. It does this without making a copy of everything, so unlike more traditional backup or clone type solutions, this won't actually fill up your drive super quick or take ages to complete.

BTRFS uses what it calls Copy-on-Write (delightfully abbreviated to CoW. Moo!), which means writes never overwrite existing data; instead a new, modified copy of the data block is created elsewhere, and the metadata is updated to point to that block. This is what BTRFS' snapshot functionality uses to allow these moments in time to be captured very quickly while, at least initially, taking up basically no additional space.

Installing Fedora Workstation

What we're going to walk through is compatible with all recent versions of Fedora. I have tested and used this with Fedora 34, 35, as well as Fedora 36 – which is in beta at the time of this writing.

This guide assumes you are planning on installing Fedora as the only OS to a drive. If you are planning on dual booting with another OS, please ensure you adjust the necessary steps to accommodate this. It might be useful to follow a guide on that specific subject if needed.

With a Fedora live USB stick created, boot up your system and select the install Fedora option when prompted. After selecting your region/language, you'll be greeted with the following screen.

Configure Fedora with full root level snapshot support

This might be one of the least intuitive parts of the Fedora installation, but they're apparently working on an updated installer that should remedy this. Regardless, unless you are planning on dual booting, the steps here are fortunately very straight-forward – UX oddities aside.

Click the Installation Destination option under the System header. On the following screen you'll be able to select which drive you'd like to install Fedora to. If you only have one drive, you might find that it is already selected. You can tell by the white-on-black check mark that appears on the drive.

Configure Fedora with full root level snapshot support

By default the installer will have also selected the "Automatic" storage configuration option. Unless you need to customize things for a dual-boot setup or so, this is the easiest route to take.

In case your drive already has another OS installed, you should tick the "I would like to make additional space available" checkbox. This will let you remove existing partitions, after which the Fedora installer's "automatic" mode can create the necessary partitions.

Once you're ready, click the "Done" button hidden away in the top-right corner of the installer, and then proceed with the installation by clicking the start installation button.

This will take a little bit, after which your system should automatically reboot and bring you to the account setup, and final, step of the Fedora installation process. Follow these final steps until you are greeted with Fedora's nice and clean desktop.

Configure Fedora with full root level snapshot support

Configuring BTRFS

With the installation done and your system booted into a nice and fresh Fedora installation, let's make the necessary changes to get our root-level snapshots working the way we want. We'll also install and configure a tool that automatically creates snapshots before and after dnf upgrades.

In case you haven't already, it might be a good idea to update your system first so that everything is fully up-to-date.

For these next few steps we'll mostly be using terminal commands, so let's open up Terminal now.

By default Fedora's installation configures two BTRFS sub-volumes; one for your root drive (/), the other for your home directory (/home). You can confirm this by running the following command:

❯ sudo btrfs subvolume list / | grep "level 5"

You'll see something like this as the result:

❯ sudo btrfs subvolume list / | grep "level 5"
ID 256 gen 36 top level 5 path home
ID 257 gen 36 top level 5 path root

Installing Snapper

Let's proceed. We'll install a utility called snapper, along with its dnf plugin. Run the following command to install both:

❯ sudo dnf install snapper python-dnf-plugin-snapper

With snapper installed, let's create and configure snapshots for the root partition first. Run the following command to do this:

❯ sudo snapper -c root create-config /

Creating a root-level .snapshots sub-volume

Creating this configuration also creates a snapshots sub-volume. However, this is created as a sub-volume directly under the root sub-volume, which we don't want as it complicates things when attempting to restore a root volume snapshot. By having it as its own root-level volume, anything we may want to do with/to our root volume will be completely separate from where our snapshots are stored.

You can see what I mean by listing out the BTRFS sub-volumes again. You'll see something like this:

❯ sudo btrfs subvolume list /
ID 256 gen 43 top level 5 path home
ID 257 gen 58 top level 5 path root
ID 258 gen 25 top level 257 path var/lib/machines
ID 259 gen 58 top level 257 path .snapshots

Note the .snapshots volume with its level of 257. To fix this, let's delete this automatically created sub-volume and create a new one that's at the same level as the root and home sub-volumes. First, let's delete the sub-volume:

❯ sudo btrfs subvolume delete /.snapshots

Now we can re-create it, but at the right level. As you noticed earlier, the default Fedora installation has two sub-volumes. Even though we look at / as the root volume, in this case it actually exists in a sub-volume of its own. The actual root of your drive isn't mounted directly, only the two sub-volumes are.

In order for us to create another sub-volume at this higher level, we need to temporarily mount the real root drive so we can run the appropriate commands from there.

There's several ways to do this. For this guide we'll use the drive's UUID. This is easily discoverable by listing out the contents of your system's fstab file, like so:

❯ cat /etc/fstab

The results will look something like this:

UUID=2271c46d-9093-4373-9b9f-4f4bac3f944f /                       btrfs   subvol=root,compress=zstd:1 0 0
UUID=a83793bc-31dc-4d79-b4b9-adadafdde13b /boot                   ext4    defaults        1 2
UUID=2271c46d-9093-4373-9b9f-4f4bac3f944f /home                   btrfs   subvol=home,compress=zstd:1 0 0

In my example –which is running inside a virtual machine– the main drive's UUID is 2271c46d-9093-4373-9b9f-4f4bac3f944f, you can see how both the / and /home sub-volumes reference this same UUID. Take a look at your own system and make note of the primary drive's UUID. We'll need it for the next step.
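As an aside, if you'd rather not read fstab, the findmnt tool (part of util-linux) can print just the UUID of the filesystem mounted at /:

```shell
# Print the UUID of the filesystem backing / (-n: no header, -o: output column).
# Note: this may print nothing for filesystems without a UUID, e.g. overlayfs.
findmnt -no UUID /
```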

Let's create a new empty directory where the snapshots will be mounted, as well as a temporary directory to which we can mount the drive, and then mount the actual drive to it:

❯ sudo mkdir /mnt/btrfs /.snapshots
❯ sudo mount /dev/disk/by-uuid/2271c46d-9093-4373-9b9f-4f4bac3f944f /mnt/btrfs

Substitute the UUID with your own.

Now with the actual drive mounted, let's cd into it and create the new root-level snapshot sub-volume:

❯ cd /mnt/btrfs
❯ sudo btrfs subvolume create snapshots

Let's confirm that everything looks alright by listing out all sub-volumes. The results should look something like this:

❯ sudo btrfs subvolume list / | grep "level 5"
ID 256 gen 129 top level 5 path home
ID 257 gen 132 top level 5 path root
ID 259 gen 132 top level 5 path snapshots

With this done, we can unmount the root drive again, clean up after ourselves, and continue with the final bits of configuration.

❯ cd ~
❯ sudo umount /mnt/btrfs
❯ sudo rmdir /mnt/btrfs

Using your favorite text editor with sudo permissions, open up /etc/fstab and let's add the new sub-volume. We do this by adding a new line to this file that references the same UUID we used just before, and references the newly created snapshots sub-volume.

The easiest way is to just duplicate the line already in your fstab file for the home sub-volume, changing the mount path as well as the subvol value. The end result should look something like this:

UUID=2271c46d-9093-4373-9b9f-4f4bac3f944f /                       btrfs   subvol=root,compress=zstd:1 0 0
UUID=a83793bc-31dc-4d79-b4b9-adadafdde13b /boot                   ext4    defaults        1 2
UUID=2271c46d-9093-4373-9b9f-4f4bac3f944f /home                   btrfs   subvol=home,compress=zstd:1 0 0
UUID=2271c46d-9093-4373-9b9f-4f4bac3f944f /.snapshots             btrfs   subvol=snapshots,compress=zstd:1 0 0

Save and close the file. We can now try to auto mount everything to make sure everything is working as it should by running:

❯ sudo systemctl daemon-reload
❯ sudo mount -a

(Optional) Creating additional sub-volumes as desired

While this article only specifically covers how to set up root-level snapshot support, it might be worth considering setting up additional sub-volumes while you're at it.

For example, if you use Docker, creating a sub-volume and mounting it to /var/lib/docker would prevent root snapshots from filling up with docker volume data and also ensure that if you do roll back your root to a previous state, you don't lose anything related to your docker containers when doing so.

If you'd like to do this, you can effectively follow the same steps listed above, but instead create a sub-volume named something like docker, and add a row to your fstab file that mounts it to the path mentioned above.
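For example, reusing this guide's example UUID (substitute your own), the extra fstab line for such a Docker sub-volume would look like this:

```
UUID=2271c46d-9093-4373-9b9f-4f4bac3f944f /var/lib/docker         btrfs   subvol=docker,compress=zstd:1 0 0
```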

If you already have existing running containers and volumes, you can temporarily disable Docker, move all contents of the btrfs folder elsewhere (e.g. sudo mv btrfs{,.bak}), create the sub-volume as described above, and after mounting it move everything from your temporary btrfs folder back into what is now a sub-volume.

Another example might be if you use Steam to install and play games. You could create a sub-volume specifically for the ~/.local/share/Steam directory (assuming you are using a system-installed Steam – the path can be different if you use the Flatpak version), ensuring that reverting to a previous snapshot of your home directory won't make you lose your already downloaded games and save files.

Updating Grub

By default Fedora configures grub to simply reference the top level as the default sub-volume. We need to change this to be able to support root sub-volume rollbacks. First, let's check what the current configuration says:

❯ sudo btrfs subvolume get-default /
ID 5 (FS_TREE)

Recall when listing out the BTRFS sub-volumes that we could see their respective IDs:

❯ sudo btrfs subvolume list / | grep "level 5"
ID 256 gen 129 top level 5 path home
ID 257 gen 132 top level 5 path root
ID 259 gen 132 top level 5 path snapshots

In my example's case, the root sub-volume has an ID of 257. Check your system's IDs and once you've found the correct one for your root sub-volume, update the default value with the following command:

❯ sudo btrfs subvolume set-default 257 /

Now when checking again, it should look something like this:

❯ sudo btrfs subvolume get-default /
ID 257 gen 143 top level 5 path root

Next we need to update the Grub configuration to not specifically reference the root sub-volume by name. Fedora comes with a utility called grubby by default which seems to be the Fedora way of doing this. We want to remove this reference by name so that the default value we have just configured can do its thing:

❯ sudo grubby --update-kernel=ALL --remove-args="rootflags=subvol=root"

With that done, we should now be ready to enjoy root-level snapshots with the ability to rollback.

Let's reboot now before we do anything else.

While the result of this guide is that you indeed have root-level snapshot support, thanks to a reader reaching out I realized that the /boot partition (and with it /boot/efi) is not included in these snapshots, as it is not formatted using BTRFS, and so installed kernel versions are not part of these root-level snapshots. This means that if you attempt to restore an older snapshot that expects a particular kernel version your system no longer has on the /boot partition, you won't be able to boot it up exactly as it existed when the snapshot was made.

While I'm sure there is an elegant way to use pre/post hooks to copy the contents of these partitions into (and back from) a backup directory that could then be part of BTRFS snapshots, that is a bit outside the scope of this guide. It is also more involved than I think is worth the effort, as it would only really serve to allow you to restore potentially (very) old snapshots that run kernel versions that have since been removed.

By default Fedora is set up to keep up to three kernel versions installed. This can be modified to your liking by changing the installonly_limit value in /etc/dnf/dnf.conf, but keep in mind that by default your /boot partition is only assigned about 1GB of space.
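As an illustration (the value 5 here is arbitrary – pick whatever suits your /boot partition's size), raising the limit means editing /etc/dnf/dnf.conf so the [main] section reads:

```
[main]
installonly_limit=5
```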

Configure Fedora with full root level snapshot support

Now every time you install, update, or remove something through dnf, snapshots are automatically created before and after these actions. This includes anything you might install/update/remove through the Software Center GUI application – though not when installing Flatpaks of course.

Here's an example of what my snapper ls results look like after installing 0 A.D.:

❯ sudo snapper ls
 # | Type   | Pre # | Date                            | User | Cleanup | Description              | Userdata
---+--------+-------+---------------------------------+------+---------+--------------------------+---------
0  | single |       |                                 | root |         | current                  |         
1  | pre    |       | Mon 18 Apr 2022 01:32:52 PM KST | root | number  | /usr/bin/dnf install 0ad |         
2  | post   |     1 | Mon 18 Apr 2022 01:33:07 PM KST | root | number  | /usr/bin/dnf install 0ad |         

You'll notice that each snapshot has an ID listed. If you ever need to roll back to a previous state, you can use that ID to pick the state to roll back to. For example, if I want to revert to the state just before installing 0 A.D., I could run the following:

❯ sudo snapper --ambit classic rollback 1

As snapshots are read-only, when rolling back snapper actually creates a new read-writeable snapshot based off of the snapshot you specified, and sets that as the new bootable sub-volume. You can see this by listing out the snapshots after running the above command. It should look something like this:

❯ sudo snapper ls
 # | Type   | Pre # | Date                            | User | Cleanup | Description              | Userdata     
---+--------+-------+---------------------------------+------+---------+--------------------------+--------------
0  | single |       |                                 | root |         | current                  |              
1  | pre    |       | Mon 18 Apr 2022 01:32:52 PM KST | root | number  | /usr/bin/dnf install 0ad |              
2  | post   |     1 | Mon 18 Apr 2022 01:33:07 PM KST | root | number  | /usr/bin/dnf install 0ad |              
3  | single |       | Mon 18 Apr 2022 01:38:52 PM KST | root | number  | rollback backup          | important=yes
4+ | single |       | Mon 18 Apr 2022 01:38:52 PM KST | root |         | writable copy of #1      |

Now when you reboot, your system should be back to exactly what it looked like before you made whatever changes you had made. I realize that my example of installing 0 A.D. isn't a particularly great use-case example, but you can imagine that this could be invaluable when installing something potentially unstable, or accidentally removing critical system level tools for example.

Adding snapshots for home, too

The way BTRFS snapshots work is that they do not include sub-volumes inside other sub-volumes when making snapshots, so your /home directory is not included in snapshots for /.

This is actually a good thing, as it allows us to configure our home directory snapshots separately and in exactly the way we want. What's more, it allows you to revert your system to an earlier snapshot without losing any files stored in your /home. Pretty neat, right?
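You can see this layout for yourself: on a default Fedora Workstation install with BTRFS, root and home are separate sub-volumes, which is why snapshots of `/` don't capture `/home`. The exact output will vary per system:

```shell
# List the BTRFS sub-volumes; home appears as its own sub-volume,
# so it is not included in snapshots of /
sudo btrfs subvolume list /
```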

Let's create another snapper config, this time for your home directory:

❯ sudo snapper -c home create-config /home

Let's add your user to the list of users allowed to manage this configuration, so you don't have to use sudo when interacting with your home snapshots. Here we also enable the SYNC_ACL option, which keeps file permissions in sync with whatever we configure through snapper for this particular configuration:

❯ sudo snapper -c home set-config SYNC_ACL=yes ALLOW_USERS=$USER

With that set, you should now be able to interact with snapper for your home directory without requiring sudo. Let's try creating a manual snapshot now:

❯ snapper -c home create --description "Hello, snapshot!"
❯ snapper -c home ls
 # | Type   | Pre # | Date                            | User       | Cleanup | Description      | Userdata
---+--------+-------+---------------------------------+------------+---------+------------------+---------
0  | single |       |                                 | root       |         | current          |         
1  | single |       | Mon 18 Apr 2022 02:07:50 PM KST | davejansen |         | Hello, snapshot! |         

Nice.

Scheduled snapshots

Depending on your personal preferences, you might want snapper to automatically create scheduled snapshots too. By default, a configuration has hourly timeline snapshots enabled, which we probably don't want for the root volume at least. Let's disable this.

Disabling hourly snapshots for root

Open /etc/snapper/configs/root with your favorite text editor and sudo or root privileges, look for the TIMELINE_CREATE value, and set this to no, so it'll look like this:

# create hourly snapshots
TIMELINE_CREATE="no" 

Save your changes and close the file.
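Alternatively, the same `set-config` command we used earlier for the home configuration works here too, avoiding the manual edit:

```shell
# Equivalent to setting TIMELINE_CREATE="no" in /etc/snapper/configs/root by hand
sudo snapper -c root set-config TIMELINE_CREATE=no
```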

Customizing scheduled home snapshots

For your home directory, keeping hourly snapshots can be quite useful, so sticking with this default is probably a good idea. There are several additional settings that control how many hourly, daily, weekly, monthly, and yearly snapshots snapper preserves. Keep in mind that the higher you set these numbers, the more space the snapshots will use over time. Here's one example of what this could look like:

# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="12"
TIMELINE_LIMIT_DAILY="7" 
TIMELINE_LIMIT_WEEKLY="2"
TIMELINE_LIMIT_MONTHLY="6" 
TIMELINE_LIMIT_YEARLY="1"

Adjust these to your liking and save the file. Snapper will automatically pick up these changes.
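As a rough upper bound, the limits in the example above add up as follows (ignoring TIMELINE_MIN_AGE and any overlap between the buckets):

```shell
# Rough upper bound on timeline snapshots kept with the example limits:
# 12 hourly + 7 daily + 2 weekly + 6 monthly + 1 yearly
echo $((12 + 7 + 2 + 6 + 1))  # → 28
```

Since BTRFS snapshots are copy-on-write, the actual disk usage depends on how much your data changes between snapshots, not on the snapshot count alone.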

Enabling scheduled snapshots and cleanup

In order for snapper to be able to run these scheduled tasks we need to enable the appropriate systemd timers:

❯ sudo systemctl enable --now snapper-timeline.timer
❯ sudo systemctl enable --now snapper-cleanup.timer

If you didn't enable any scheduled snapshots and just want the cleanup to happen automatically, you can enable only the second one.
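To confirm the timers are actually scheduled, you can list them; the exact output will vary per system, but both timers should appear with upcoming run times:

```shell
# Should list snapper-timeline.timer and snapper-cleanup.timer
systemctl list-timers 'snapper-*'
```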


Closing thoughts

We should now have a nice base Fedora setup with full snapshot and rollback support, even at the root level. While Fedora in general is a very stable experience – I've had absolutely no issues so far, it's such a pleasant experience! – there's always the possibility of a rogue tool, driver, or configuration causing a ruckus. Having this extra safety net is very nice for those kinds of cases.

When a new release of Fedora comes out, it's also nice to be able to upgrade your system and know that if anything breaks and either can't yet be fixed or you just don't have the time/interest to investigate, you're able to roll back to before the upgrade and continue with your working system, leaving that problem for another day.

I ran this exact configuration on both my main machine as well as my laptop, and it worked great. However, I am no longer using this particular setup, as I have fully moved over to Fedora Silverblue on all my systems, where you essentially get this out of the box, albeit implemented differently, of course.

If you are using Workstation and followed along with this guide, I do hope it has been helpful to you. I know it might look a bit daunting with all these commands to run, but my hope is that I've written it out in an easy-to-follow way, with mostly copy/paste-able commands.

I know that one day this guide will no longer be necessary, as Fedora's longer-term plan is to move completely over to a Silverblue-esque immutable file system approach. But for now, at least, you'll be able to enjoy Workstation with root-level snapshot support.

I hope you'll enjoy your Fedora system!

Thank you.

]]>