Deploy Arch Linux as an immutable, image-based OS on Hetzner Cloud VPS servers using bootc (bootable containers). Build your OS as an OCI container image, flash it to a Hetzner server, and manage upgrades atomically with built-in rollback.
Based on bootcrew/arch-bootc with systemd-boot.
- Immutable infrastructure - your server OS is defined in a Containerfile, versioned in git, and reproducible
- Atomic upgrades - `bootc upgrade` stages a new image; a reboot switches to it, and the previous image stays available as a rollback
- No manual package management - packages are baked into the image at build time
- Hetzner-ready - scripts handle UEFI boot, EFI partition setup, disk flashing from rescue mode, and network config
- OCI-native - the OS image is a standard container image stored in any OCI registry
- Arch Linux running as a bootable OCI container on Hetzner Cloud
- Automated disk image generation and flashing via rescue mode
- systemd-boot with automatic kernel and EFI sync
- Atomic OS upgrades with rollback via `bootc upgrade`
- k3s (lightweight Kubernetes) with a systemd service
- Tailscale VPN integration
- UFW firewall with cloud-safe defaults
- SSH key-only authentication, root login disabled
- Pre-configured dev environment (Go, Node.js, Neovim, tmux, zsh)
```
Containerfile.base     # bootcrew/arch-bootc base (compiles bootc from source)
Containerfile          # Your custom image (packages, users, dotfiles, SSH keys)
scripts/
  build.sh             # Build and push container image to registry
  generate-disk.sh     # Generate bootable .img and push via oras
  flash-disk.sh        # Flash disk from Hetzner rescue mode (handles EFI fix)
  verify-disk.sh       # Verify disk setup before rebooting
  post-boot.sh         # Post-boot verification and setup
```
Fork this repo and edit the Containerfile:
- Replace the SSH public key with yours
- Change the username from `bupd` to yours
- If you change the default username, update any matching paths/users in boot-time services such as `files/tmux-main.service`
- Update the git config
- Add/remove packages as needed
- Update or remove the dotfiles clone
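As an illustration, the customization points above might look roughly like this in your Containerfile. Everything here is a placeholder sketch (base image URL, username `myuser`, key, dotfiles repo), not the repo's actual file:

```dockerfile
# Hypothetical sketch of the customization points; adjust to your fork.
FROM registry.example.com/myuser/arch-bootc-base:latest

# Create your user (replace "myuser"); wheel grants sudo
RUN useradd -m -G wheel -s /usr/bin/zsh myuser

# Install your SSH public key (replace with your own)
RUN mkdir -p /home/myuser/.ssh && \
    echo "ssh-ed25519 AAAA... myuser@laptop" > /home/myuser/.ssh/authorized_keys && \
    chown -R myuser:myuser /home/myuser/.ssh && chmod 700 /home/myuser/.ssh

# Add or remove packages as needed
RUN pacman -Syu --noconfirm neovim tmux zsh

# Update or remove the dotfiles clone
RUN git clone https://github.com/myuser/dotfiles /home/myuser/dotfiles
```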
```shell
# One-liner: build base + custom image and push to registry
./scripts/build.sh <registry/repo> <username> <password>

# Example
./scripts/build.sh registry.example.com/myuser/bootc myuser mypassword

# Or store BOOTC_REGISTRY / BOOTC_USERNAME / BOOTC_PASSWORD in .env
cp .env.example .env
./scripts/build.sh
```

This takes 20-40 minutes (it compiles bootc from source with Rust).
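The `.env` file is plain shell variable assignments. A minimal sketch, with placeholder values (the variable names are the `BOOTC_*` ones the scripts read):

```shell
# Hypothetical .env contents; replace the values with your registry credentials
BOOTC_REGISTRY=registry.example.com/myuser/bootc
BOOTC_USERNAME=myuser
BOOTC_PASSWORD=mypassword
```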
```shell
# Generate disk image, compress, and push to registry via oras
./scripts/generate-disk.sh <registry/repo> <username> <password>

# Or reuse BOOTC_* values from .env
./scripts/generate-disk.sh
```

The compressed disk image (~1.7 GiB) is pushed as `<registry>:disk-latest`.
From the Hetzner Cloud Console:
- Enable rescue mode (Rescue tab, choose linux64)
- Power Cycle the server (Power tab)
- SSH into rescue: `ssh root@<server-ip>`
Then run the flash script:
```shell
curl -sL https://raw.githubusercontent.com/bupd/arch-bootc-hetzner/main/scripts/flash-disk.sh -o flash-disk.sh
chmod +x flash-disk.sh
./flash-disk.sh <registry/repo> <username> <password>
```

This script:

- Pulls the compressed disk image via oras
- Writes it to `/dev/sda` with dd
- Resizes the root partition to fill the disk
- Installs the systemd-boot EFI binary (fixes a known bootc issue)
- Copies the kernel, initramfs, and loader config to the ESP
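The write-then-verify core of the flash step can be simulated on plain files. This is a demonstration only (paths under `/tmp`, tiny sizes); the real script writes the pulled image to `/dev/sda`:

```shell
# Simulate flashing: dd an "image" file onto a larger "disk" file, then verify.
demo=/tmp/flash-demo
rm -rf "$demo" && mkdir -p "$demo"

# Fake disk image (1 MiB of random data) and a larger blank "disk" (4 MiB)
dd if=/dev/urandom of="$demo/disk.img" bs=1M count=1 2>/dev/null
dd if=/dev/zero    of="$demo/sda"      bs=1M count=4 2>/dev/null

# Write the image to the start of the "disk", as the flash step does for real
dd if="$demo/disk.img" of="$demo/sda" conv=notrunc 2>/dev/null

# Verify: the first 1 MiB of the disk must match the image byte-for-byte
cmp -n $((1024*1024)) "$demo/disk.img" "$demo/sda" && echo "flash verified"
```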
```shell
curl -sL https://raw.githubusercontent.com/bupd/arch-bootc-hetzner/main/scripts/verify-disk.sh -o verify-disk.sh
chmod +x verify-disk.sh
./verify-disk.sh
```

Checks: EFI bootloader, loader config, kernel/initramfs, UUID match, SSH keys, networkd, partition size.
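One of those checks, the UUID match, can be sketched as follows: the `root=UUID=` in the loader entry must equal the root filesystem's UUID. The data below is mocked; on a real system the filesystem UUID would come from something like `blkid`:

```shell
# Mock loader entry as it would appear on the ESP
mkdir -p /tmp/uuid-demo
cat > /tmp/uuid-demo/entry.conf <<'EOF'
title Arch Linux (bootc)
linux /vmlinuz
initrd /initramfs.img
options root=UUID=1234abcd-0000-4e4e-9999-cafecafecafe rw
EOF

# Mock of the UUID the root filesystem reports (real source: blkid)
root_uuid="1234abcd-0000-4e4e-9999-cafecafecafe"

# Extract root=UUID=... from the loader entry's options line
entry_uuid=$(sed -n 's/.*root=UUID=\([^ ]*\).*/\1/p' /tmp/uuid-demo/entry.conf)

if [ "$entry_uuid" = "$root_uuid" ]; then
  echo "UUID match"
else
  echo "UUID mismatch: $entry_uuid vs $root_uuid"
fi
```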
- Disable rescue mode (Hetzner Console, Rescue tab)
- Power Cycle (Power tab)
- Wait 1-2 minutes
```shell
ssh <your-user>@<server-ip>
./scripts/post-boot.sh
```

Then manually:

```shell
# Re-authenticate tailscale
sudo tailscale up

# Import GPG key (for git signing)
gpg --import your-private-key.asc

# Change hostname
sudo hostnamectl set-hostname <new-name>
```

After updating the Containerfile:
```shell
# Rebuild and push
./scripts/build.sh <registry/repo> <username> <password>

# On the server
sudo bootc upgrade
sudo reboot
```

The image now keeps the ESP in sync twice:

- on every successful boot (`bootc-sync-esp.service`) as a repair path
- on shutdown, after `ostree-finalize-staged.service` (`bootc-sync-esp-finalize.service`), so staged upgrades copy the correct loader state before the next boot
The previous image remains as a rollback entry in systemd-boot.
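A unit ordered that way might look roughly like the following sketch. This is not the repo's actual unit file, and the ordering trick (starting `Before=` the finalize unit so the `ExecStop=` runs after it at shutdown) is an assumption about how the sequencing could be achieved; systemd stops units in the reverse of their start order:

```ini
# bootc-sync-esp-finalize.service (sketch)
[Unit]
Description=Sync staged loader state to the ESP during shutdown
# Starting Before= ostree-finalize-staged.service means our ExecStop runs
# after its ExecStop (the finalize step) when units stop in reverse order.
Before=ostree-finalize-staged.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
# The real work happens at shutdown, after finalization has written /boot
ExecStop=/usr/local/bin/sync-esp.sh

[Install]
WantedBy=multi-user.target
```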
`bootc install to-disk --bootloader systemd` places the systemd-boot binary and loader config on the root partition, but UEFI firmware looks for them on the EFI System Partition (ESP). The `flash-disk.sh` script handles this automatically by copying `BOOTX64.EFI`, the loader entries, the kernel, and the initramfs to the ESP.
See: bootc-dev/bootc#865
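The fix amounts to mirroring the loader pieces from the root partition's `/boot` onto the ESP. A simplified sketch using mock directories under `/tmp` (the real script operates on mounted `/dev/sda` partitions):

```shell
# Mock layout: "root" holds what bootc wrote under /boot; "esp" is the ESP.
root=/tmp/esp-demo/root; esp=/tmp/esp-demo/esp
rm -rf /tmp/esp-demo
mkdir -p "$root/boot/EFI/systemd" "$root/boot/loader/entries" \
         "$esp/EFI/BOOT" "$esp/loader/entries"

# Pretend bootc left the loader pieces on the root partition
echo systemd-boot > "$root/boot/EFI/systemd/systemd-bootx64.efi"
echo kernel       > "$root/boot/vmlinuz"
echo initrd       > "$root/boot/initramfs.img"
printf 'default arch.conf\n' > "$root/boot/loader/loader.conf"
printf 'title Arch\n'        > "$root/boot/loader/entries/arch.conf"

# What the fix does: copy them to the ESP, where UEFI firmware looks
cp "$root/boot/EFI/systemd/systemd-bootx64.efi" "$esp/EFI/BOOT/BOOTX64.EFI"
cp "$root/boot/loader/loader.conf"              "$esp/loader/loader.conf"
cp "$root/boot/loader/entries/"*.conf           "$esp/loader/entries/"
cp "$root/boot/vmlinuz" "$root/boot/initramfs.img" "$esp/"
echo "ESP populated"
```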
OSTree staged deployments delay bootloader updates until shutdown, in `ostree-finalize-staged.service`. Syncing the ESP immediately after `bootc upgrade` is therefore too early and can leave the ESP with stale loader entries or the wrong default entry. This repo now syncs the ESP after OSTree finalization and preserves the exact `loader.conf` generated under `/boot` instead of guessing a default entry.
See:
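Preserving the generated default entry rather than guessing can be as simple as reading the `default` key out of the finalized loader.conf. A sketch against a mock file (the entry name here is invented):

```shell
mkdir -p /tmp/loader-demo
# Mock of the loader.conf that OSTree finalization leaves under /boot
cat > /tmp/loader-demo/loader.conf <<'EOF'
timeout 5
default ostree-1.conf
EOF

# Carry the exact default entry over to the ESP copy instead of guessing one
default_entry=$(awk '$1 == "default" { print $2 }' /tmp/loader-demo/loader.conf)
echo "default entry: $default_entry"
```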
If the machine drops into emergency mode with an error like `couldn't find specified OSTree root`, repair the ESP from Hetzner rescue mode instead of reflashing:
```shell
curl -sL https://raw.githubusercontent.com/bupd/arch-bootc-hetzner/main/scripts/repair-esp.sh -o repair-esp.sh
chmod +x repair-esp.sh
./repair-esp.sh /dev/sda
./verify-disk.sh /dev/sda
reboot
```

This rebuilds the ESP from the installed root filesystem's active `/boot/loader*` state.
bootc officially targets Fedora/CentOS. Arch support is community-maintained via bootcrew/arch-bootc. Key workarounds:
- Uses systemd-boot instead of GRUB (Arch GRUB lacks BLS support)
- Builds bootc from source (AUR package is incomplete)
- Relocates `/var` to `/usr/lib/sysimage` for pacman compatibility
- Hetzner Cloud VPS uses UEFI (not legacy BIOS)
- Default disk is `/dev/sda` (305 GiB QEMU HARDDISK)
- Rescue mode is Debian-based, with oras and zstd available via apt
- VNC console (Hetzner Console button) is the escape hatch if SSH breaks
- `qemu-guest-agent` is included for Hetzner integration (graceful shutdown, IP reporting)
base-devel, bind, bind-tools, buildah, btop, chromium, crane, curl, fastfetch, fd, fzf, gcc, git, github-cli, gnupg, go, gopls, grub, helm, htop, iproute2, jq, k3s, k9s, kubectl, lazygit, less, lsof, lua, luarocks, make, man-db, mosh, neovim, net-tools, nodejs, npm, openssh, podman-compose, podman-docker, python, qemu-full, qemu-guest-agent, ripgrep, rsync, skopeo, stow, sudo, tailscale, tmux, traceroute, tree, ufw, unzip, vim, wget, yq, zsh
sshd, systemd-networkd, systemd-resolved, systemd-timesyncd, tailscaled, qemu-guest-agent, ufw, ensure-mosh-firewall, ensure-homebrew, tmux-main, k3s, bootc-sync-esp
- Claude Code (`claude`)
- OpenAI Codex CLI (`codex`)
- GitHub CLI (`gh`)
- fastfetch, btop
- Homebrew (`brew`) via `/home/linuxbrew/.linuxbrew`
- `docker` is provided by `podman-docker`, and `/etc/containers/nodocker` is precreated to suppress Podman's compatibility banner
- `mosh-server` is installed in the image via the `mosh` package
- UFW allows UDP `60000:61000` for mosh sessions
- `ensure-mosh-firewall.service` re-applies the Mosh UFW rule at boot if it is missing from either the IPv4 or IPv6 rules
- `tmux-main.service` creates a detached `main` tmux session for `bupd` on every boot
- Standard Mosh usage is still `mosh <user>@<server-ip>` because `mosh-server` is launched per connection over SSH
- Upstream `mosh-server` is intentionally not run as a boot-persistent daemon because it exits if no client connects within 60 seconds
- After login, attach to the boot-created workspace with `tmux attach -t main`
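The check that `ensure-mosh-firewall.service` performs can be sketched against mock `ufw status` output; on a real system the service would inspect the actual `ufw status` and re-add the rule:

```shell
# Mock `ufw status` output with only the IPv4 mosh rule present
status='60000:61000/udp              ALLOW       Anywhere'

# The rule must exist for both IPv4 and IPv6; re-add it when either is missing
if ! echo "$status" | grep -q '60000:61000/udp.*(v6)'; then
  echo "IPv6 mosh rule missing - would run: ufw allow 60000:61000/udp"
fi
```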
- Homebrew is installed into `/home/linuxbrew/.linuxbrew` to keep it in the writable user-space layer
- `/etc/profile.d/homebrew.sh` exposes `brew` in login shells without relying on a user-specific shell rc file
- `ensure-homebrew.service` bootstraps Homebrew and applies the managed Brewfile on boot
- The managed Brewfile currently installs `opencode`
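A profile.d snippet for this typically amounts to one guarded `brew shellenv` call. A sketch of what `/etc/profile.d/homebrew.sh` might contain (the prefix path comes from this repo; the body is the common Homebrew idiom, not the repo's verbatim file):

```shell
# Sketch of /etc/profile.d/homebrew.sh: put brew on PATH for login shells.
BREW_PREFIX=/home/linuxbrew/.linuxbrew
if [ -x "$BREW_PREFIX/bin/brew" ]; then
  # `brew shellenv` prints the PATH/MANPATH/INFOPATH exports for this prefix
  eval "$("$BREW_PREFIX/bin/brew" shellenv)"
fi
```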
- Root login disabled
- Password auth disabled (key-only SSH)
- Wheel group with passwordless sudo
- UFW firewall enabled
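Expressed as configuration, these defaults correspond roughly to the following fragments. This is a sketch; the image's actual file names and contents may differ:

```
# /etc/ssh/sshd_config.d/hardening.conf (sketch)
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes

# /etc/sudoers.d/wheel (sketch)
%wheel ALL=(ALL:ALL) NOPASSWD: ALL
```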
- bootcrew/arch-bootc for the base Arch bootc image
- bootc for image-based Linux
- oras for OCI artifact distribution
- Yorick Peterse's blog post for inspiration