Michael Ablassmeier, https://abbbi.github.io//feed.xml

pbsindex - file backup index (2026-03-03, https://abbbi.github.io//pbsindex)

If you take backups using the proxmox-backup-client and wonder which backup includes a specific file, the only way to find out is to mount the backup and search for the files.

For regular file backups, the Proxmox Backup Server frontend provides a pcat1 file for download, whose binary format is somewhat undocumented but actually includes a listing of the files backed up.

A Proxmox Backup Server datastore stores the same pcat1 file as a blob index (.pcat1.didx). So to be able to tell which backup contains which files, one needs to:

1) Open the .pcat1.didx file and find out the required blobs (see the format documentation)

2) Reconstruct the .pcat1 file from the blobs

3) Parse the pcat1 file and output the directory listing.
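
As a rough illustration of step 1, the .didx entry table can be walked with a few lines of Python. This is only a sketch based on my reading of the PBS format documentation; the layout constants (a fixed-size 4096-byte header, then 40-byte entries of a little-endian u64 end offset plus a 32-byte chunk digest) are assumptions to verify against the documentation before relying on them:

```python
import struct

# Assumed layout (check the PBS format documentation):
# 4096-byte header (magic, uuid, ctime, checksum, padding), then
# 40-byte entries: little-endian u64 chunk end offset + 32-byte digest.
HEADER_SIZE = 4096
ENTRY_SIZE = 40

def didx_entries(data: bytes):
    """Yield (start, end, digest_hex) per chunk, like the pbsindex output."""
    table = data[HEADER_SIZE:]
    start = 0
    for off in range(0, len(table) - len(table) % ENTRY_SIZE, ENTRY_SIZE):
        (end,) = struct.unpack_from("<Q", table, off)
        digest = table[off + 8:off + ENTRY_SIZE].hex()
        yield start, end, digest
        start = end
```

Reconstructing the pcat1 file (step 2) then boils down to concatenating, in order, the chunk payloads the digests refer to.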

I’ve implemented this in pbsindex which lets you create a central file index for your backups by scanning a complete PBS datastore.

Let’s say you want a file listing for a specific backup; use:

 pbsindex --chunk-dir /backup/.chunks/ /backup/host/vm178/2026-03-02T10:47:57Z/catalog.pcat1.didx
 didx uuid=7e4086a9-4432-4184-a21f-0aeec2b2de93 ctime=2026-03-02T10:47:57Z chunks=2 total_size=1037386
 chunk[0] start=0 end=344652 size=344652 digest=af3851419f5e74fbb4d7ca6ac3bc7c5cbbdb7c03d3cb489d57742ea717972224
 chunk[1] start=344652 end=1037386 size=692734 digest=e400b13522df02641c2d9934c3880ae78ebb397c66f9b4cf3b931d309da1a7cc
 d ./usr.pxar.didx
 d ./usr.pxar.didx/bin
 l ./usr.pxar.didx/bin/Mail
 f ./usr.pxar.didx/bin/[ size=55720 mtime=2025-06-04T15:14:05Z
 f ./usr.pxar.didx/bin/aa-enabled size=18672 mtime=2025-04-10T15:06:25Z
 f ./usr.pxar.didx/bin/aa-exec size=18672 mtime=2025-04-10T15:06:25Z
 f ./usr.pxar.didx/bin/aa-features-abi size=18664 mtime=2025-04-10T15:06:25Z
 l ./usr.pxar.didx/bin/apropos

It also lets you scan a complete datastore for all existing .pcat1.didx files and store the directory listings in a SQLite database for easier searching.
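
The SQLite approach can be sketched in a few lines (a hypothetical schema for illustration, not necessarily the one pbsindex uses): one row per file, keyed by the snapshot it was found in, so "which backup contains this file" becomes a single query.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE files (
    snapshot TEXT,   -- e.g. host/vm178/2026-03-02T10:47:57Z
    path     TEXT,
    size     INTEGER
)""")
con.executemany(
    "INSERT INTO files VALUES (?, ?, ?)",
    [
        ("host/vm178/2026-03-02T10:47:57Z", "./usr.pxar.didx/bin/aa-exec", 18672),
        ("host/vm178/2026-03-02T10:47:57Z", "./usr.pxar.didx/bin/aa-enabled", 18672),
    ],
)
# "which backup contains this file?"
rows = con.execute(
    "SELECT snapshot FROM files WHERE path LIKE ?", ("%aa-exec",)
).fetchall()
```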

libvirt 11.10 VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN (2025-12-03, https://abbbi.github.io//libvirt11)

With libvirt 11.10, a new flag for the backup operation has been introduced: VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN.

According to the documentation “It instructs libvirt to avoid termination of the VM if the guest OS shuts down while the backup is still running. The VM is in that scenario reset and paused instead of terminated allowing the backup to finish. Once the backup finishes the VM process is terminated.”

Added support for this in virtnbdbackup 2.40.

building SLES 16 vagrant/libvirt images using guestfs tools (2025-11-19, https://abbbi.github.io//slevagrant)

SLES 16 has been released. In the past, SUSE offered ready-built vagrant images. Unfortunately, that’s not the case anymore; with more recent SLES 15 releases the official images were gone.

In the past, it was possible to clone existing projects on the openSUSE Build Service to build the images yourself, but I couldn’t find any templates for SLES 16.

Naturally, there are several ways to build images, and the tooling around it involves kiwi-ng, the openSUSE Build Service, packer recipes, etc. (existing packer recipes won’t work anymore, as YaST has been replaced by a new installer called Agama). All pretty complicated, …

So my current take on creating a vagrant image for SLE16 has been the following:

  • Spin up a QEMU virtual machine
  • Manually install the system, all in default except for one special setting: In the Network connection details, “Edit Binding settings” and set the Interface to not bind a particular MAC address or interface. This will make the system pick whatever network device naming scheme is applied during boot.
  • After installation has finished, shut down the machine.

Two guestfs tools, virt-sysprep and virt-customize, can now be used to modify the created qcow2 image:

  • run virt-sysprep on the image to wipe settings that might cause trouble:
 virt-sysprep -a sles16.qcow2
  • create a simple shell script that sets up all vagrant-related settings:
#!/bin/bash
useradd vagrant
mkdir -p /home/vagrant/.ssh/
chmod 0700 /home/vagrant/.ssh/
echo "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key" > /home/vagrant/.ssh/authorized_keys
chmod 0600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/
# apply recommended ssh settings for vagrant boxes
SSHD_CONFIG=/etc/ssh/sshd_config.d/99-vagrant.conf
if [[ ! -d "$(dirname ${SSHD_CONFIG})" ]]; then
    SSHD_CONFIG=/etc/ssh/sshd_config
    # prepend the settings, so that they take precedence
    echo -e "UseDNS no\nGSSAPIAuthentication no\n$(cat ${SSHD_CONFIG})" > ${SSHD_CONFIG}
else
    echo -e "UseDNS no\nGSSAPIAuthentication no" > ${SSHD_CONFIG}
fi
SUDOERS_LINE="vagrant ALL=(ALL) NOPASSWD: ALL"
if [ -d /etc/sudoers.d ]; then
    echo "$SUDOERS_LINE" >| /etc/sudoers.d/vagrant
    visudo -cf /etc/sudoers.d/vagrant
    chmod 0440 /etc/sudoers.d/vagrant
else
    echo "$SUDOERS_LINE" >> /etc/sudoers
    visudo -cf /etc/sudoers
fi
 
mkdir -p /vagrant
chown -R vagrant:vagrant /vagrant
systemctl enable sshd
  • use virt-customize to upload the script into the qcow image:
 virt-customize -a sle16.qcow2 --upload vagrant.sh:/tmp/vagrant.sh
  • execute the script via:
 virt-customize -a sle16.qcow2 --run-command "/tmp/vagrant.sh"

After this, use the create_box.sh script from the vagrant-libvirt project to create a box image:

https://github.com/vagrant-libvirt/vagrant-libvirt/blob/main/tools/create_box.sh

and add the image to your environment:

 create_box.sh sle16.qcow2 sle16.box
 vagrant box add --name my/sles16 sle16.box

The resulting box works well within my CI environment, as far as I can tell.

qmpbackup and proxmox 9 (2025-09-12, https://abbbi.github.io//pve9-qmpbackup)

The latest Proxmox release introduces a new QEMU machine version that seems to behave differently in how it addresses the virtual disk configuration.

Also, the regular “query-block” QMP command doesn’t list the created bitmaps as usual.

If the virtual machine version is set to “9.2+pve”, everything seems to work out of the box.

I’ve released version 0.50 with some small changes so it’s compatible with the newer machine versions.

Vagrant images for trixie (2025-09-08, https://abbbi.github.io//vagrant)

It’s no news that the vagrant license changed a while ago, which resulted in less motivation to maintain it in Debian (understandably).

Unfortunately this means there are currently no official vagrant images for Debian trixie, for reasons.

Of course there are various boxes floating around on HashiCorp’s Vagrant Cloud, but either they don’t fit my needs (too big) or I don’t consider them trustworthy enough…

Building the images using the existing toolset is quite straightforward. The required scripts are maintained in the Debian vagrant images repository.

With a few additional changes applied and following the instructions in the README, you can build the images yourself.

For me, the built images work as expected.

PVE 9.0 - Snapshots for LVM (2025-08-05, https://abbbi.github.io//pve9)

The new Proxmox release advertises a new feature for easier snapshot handling of virtual machines whose disks are stored on LVM volumes. I wondered: what’s the deal?

To be able to use the new feature, you need to enable a special flag for the LVM volume group. This example shows the general workflow for a fresh setup.

1) Create the volume group with the snapshot-as-volume-chain feature turned on:

 pvesm add lvm lvmthick --content images --vgname lvm --snapshot-as-volume-chain 1

2) From this point on, you can create virtual machines right away, BUT those virtual machines’ disks must use the QCOW image format for their disk volumes. If you use the RAW format, you still won’t be able to create snapshots.

 VMID=401
 qm create $VMID --name vm-lvmthick
 qm set $VMID -scsi1 lvmthick:2,format=qcow2

So, why would it make sense to format the LVM volume as QCOW?

Snapshots on LVM thick-provisioned devices are, as everybody knows, a very I/O intensive task. Alongside each snapshot, a special -cow device is created that tracks the changed block regions and the original block data for each change to the active volume. This wastes quite some space within your volume group for each snapshot.

Formatting the LVM volume with a QCOW image makes it possible to use the QCOW backing-image option for these devices; this is the way PVE 9 handles these kinds of snapshots.

Creating a snapshot looks like this:

 qm snapshot $VMID id
 snapshotting 'drive-scsi1' (lvmthick3:vm-401-disk-0.qcow2)
 Renamed "vm-401-disk-0.qcow2" to "snap_vm-401-disk-0_id.qcow2" in volume group "lvm"
 Rounding up size to full physical extent 1.00 GiB
 Logical volume "vm-401-disk-0.qcow2" created.
 Formatting '/dev/lvm/vm-401-disk-0.qcow2', fmt=qcow2 cluster_size=131072 extended_l2=on preallocation=metadata compression_type=zlib size=1073741824 backing_file=snap_vm-401-disk-0_id.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16

So it renames the currently active disk and creates another QCOW-formatted LVM volume, pointing it to the snapshot image using the backing_file option.
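
The backing_file mechanics above can be illustrated with a toy model (plain Python, not real qcow2 code): reads fall through to the backing image unless the overlay has its own copy of a cluster, so the renamed snapshot volume stays untouched while new guest writes land only in the active overlay.

```python
# Toy model of a qcow2 backing chain: an overlay image stores only
# clusters written after the snapshot; unallocated reads fall through.
class Image:
    def __init__(self, backing=None):
        self.clusters = {}      # cluster index -> data
        self.backing = backing  # the backing_file image, if any

    def write(self, idx, data):
        self.clusters[idx] = data  # writes never touch the backing image

    def read(self, idx):
        if idx in self.clusters:
            return self.clusters[idx]
        return self.backing.read(idx) if self.backing else b"\x00"

snap = Image()                  # the renamed snapshot volume
snap.clusters = {0: b"old"}
active = Image(backing=snap)    # new active volume, backing_file=snap
active.write(1, b"new")         # guest write after the snapshot
```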

Neat.

libvirt - incremental backups for raw devices (2025-07-31, https://abbbi.github.io//datafile)

Skimming through the latest libvirt releases, I found to my surprise that recent versions (>= v10.10.0) have added support for the QCOW data-file setting.

Usually, the incremental backup feature using bitmaps was limited to qcow2-based images, as there was no way to store the bitmaps persistently within raw devices. This basically ruled out proper incremental backups for directly attached LUNs, etc.

In the past, there were some discussions about how to implement this, mostly by using a separate metadata qcow image holding the bitmap information persistently.

These approaches have been discussed again lately, and the required features were implemented.

In order to use the feature, you need to configure the virtual machine and its disks in a special way:

Let’s assume you have a virtual machine that uses a raw device, /tmp/datafile.raw.

1) Create a qcow image (same size as the raw image):

 # point the data-file to a temporary file, as create will overwrite whatever it finds here
 qemu-img create -f qcow2 /tmp/metadata.qcow2 -o data_file=/tmp/TEMPFILE,data_file_raw=true ..
 rm -f /tmp/TEMPFILE

2) Now use the amend option to point the qcow image to the right raw device using the data-file option:

 qemu-img amend /tmp/metadata.qcow2 -o data_file=/tmp/datafile.raw,data_file_raw=true

3) Reconfigure the virtual machine configuration to look like this:

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none' io='native' discard='unmap'/>
      <source file='/tmp/metadata.qcow2'>
        <dataStore type='file'>
          <format type='raw'/>
          <source file='/tmp/datafile.raw'/>
        </dataStore>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>

Now it’s possible to create persistent checkpoints:

 virsh checkpoint-create-as vm6 --name test --diskspec vda,bitmap=test
 Domain checkpoint test created

and the persistent bitmap will be stored within the metadata image:

 qemu-img info  /tmp/tmp.16TRBzeeQn/vm6-sda.qcow2
 [..]
    bitmaps:
        [0]:
            flags:
                [0]: auto
            name: test
            granularity: 65536

Hoooray.

Added support for this in virtnbdbackup v2.33.

qmpbackup 0.46 - add image fleecing (2025-04-01, https://abbbi.github.io//fleece)

I’ve released qmpbackup 0.46, which now utilizes the image fleecing technique for backups.

Usually, during backup, QEMU uses a so-called copy-before-write filter: before a new guest write hits a block, the original data is copied to the backup target, and the guest write blocks until this operation has finished.

If the backup target is flaky or becomes unavailable during the backup operation, this can lead to high I/O wait times or even complete VM lockups.

To fix this, a so-called “fleecing” image is introduced during backup, used as a temporary cache for the data copied out on guest writes. This image can be placed on the same storage as the virtual machine disks, so it is independent of the backup target performance.

The documentation on which steps are required to get this going using the QEMU QMP protocol is, let’s say, lacking.

The following examples show the general functionality, but should be enhanced to use transactions where possible. All commands are in qmp-shell command format.

Lets start with a full backup:

# create a new bitmap
block-dirty-bitmap-add node=disk1 name=bitmap persistent=true
# add the fleece image to the virtual machine (same size as original disk required)
blockdev-add driver=qcow2 node-name=fleecie file={"driver":"file","filename":"/tmp/fleece.qcow2"}
# add the backup target file to the virtual machine
blockdev-add driver=qcow2 node-name=backup-target-file file={"driver":"file","filename":"/tmp/backup.qcow2"}
# enable the copy-before-writer for the first disk attached, utilizing the fleece image
blockdev-add driver=copy-before-write node-name=cbw file=disk1 target=fleecie
# "blockdev-replace": make the copy-before-writer filter the major device (use "query-block" to get path parameter value, qdev node)
qom-set path=/machine/unattached/device[20] property=drive value=cbw
# add the snapshot-access filter backing the copy-before-writer
blockdev-add driver=snapshot-access file=cbw node-name=snapshot-backup-source
# create a full backup
blockdev-backup device=snapshot-backup-source target=backup-target-file sync=full job-id=test

[ wait until block job finishes]

# remove the snapshot access filter from the virtual machine
blockdev-del node-name=snapshot-backup-source
# switch back to the regular disk
qom-set path=/machine/unattached/device[20] property=drive value=disk1
# remove the copy-before-writer
blockdev-del node-name=cbw
# remove the backup-target-file
blockdev-del node-name=backup-target-file
# detach the fleecing image
blockdev-del node-name=fleecie

After this process, the temporary fleecing image can be deleted or recreated. Now let’s go for an incremental backup:

# add the fleecing and backup target image, like before
blockdev-add driver=qcow2 node-name=fleecie file={"driver":"file","filename":"/tmp/fleece.qcow2"}
blockdev-add driver=qcow2 node-name=backup-target-file file={"driver":"file","filename":"/tmp/backup-incremental.qcow2"}
# add the copy-before-write filter, but utilize the bitmap created during full backup
blockdev-add driver=copy-before-write node-name=cbw file=disk1 target=fleecie bitmap={"node":"disk1","name":"bitmap"}
# switch device to the copy-before-write filter
qom-set path=/machine/unattached/device[20] property=drive value=cbw
# add the snapshot-access filter
blockdev-add driver=snapshot-access file=cbw node-name=snapshot-backup-source
# merge the bitmap created during full backup to the snapshot-access device so
# the backup operation can access it (better use a transaction here)
block-dirty-bitmap-add node=snapshot-backup-source name=bitmap
block-dirty-bitmap-merge node=snapshot-backup-source target=bitmap bitmaps=[{"node":"disk1","name":"bitmap"}]
# create incremental backup (better use a transaction here)
blockdev-backup device=snapshot-backup-source target=backup-target-file job-id=test sync=incremental bitmap=bitmap

 [ wait until backup has finished ]
 [ cleanup like before ]

# clear the dirty bitmap (better use a transaction here)
block-dirty-bitmap-clear node=disk1 name=bitmap
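
The role of the dirty bitmap in the sequence above can be illustrated with a toy model (plain Python, not QMP): guest writes mark clusters dirty, the incremental pass copies only those clusters, and the final bitmap clear starts the next backup cycle.

```python
# Toy model of dirty-bitmap based incremental backup (illustration only).
disk = {i: b"\x00" for i in range(8)}   # 8 toy clusters
bitmap = set()                          # indices of dirty clusters

def guest_write(idx, data):
    disk[idx] = data
    bitmap.add(idx)                     # the filter marks the cluster dirty

def incremental_backup(target):
    for idx in sorted(bitmap):
        target[idx] = disk[idx]         # sync=incremental: dirty clusters only
    bitmap.clear()                      # like block-dirty-bitmap-clear

guest_write(2, b"a")
guest_write(5, b"b")
backup = {}
incremental_backup(backup)              # copies clusters 2 and 5 only
```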

Or, use a simple reproducer by directly passing QMP commands via stdio:

#!/usr/bin/bash
qemu-img create -f raw disk 1M
qemu-img create -f raw fleece 1M
qemu-img create -f raw backup 1M
qemu-system-x86_64 -drive node-name=disk,file=disk,format=file -qmp stdio -nographic -nodefaults <<EOF
{"execute": "qmp_capabilities"}
{"execute": "block-dirty-bitmap-add", "arguments": {"node": "disk", "name": "bitmap"}}
{"execute": "blockdev-add", "arguments": {"node-name": "fleece", "driver": "file", "filename": "fleece"}}
{"execute": "blockdev-add", "arguments": {"node-name": "backup", "driver": "file", "filename": "backup"}}
{"execute": "blockdev-add", "arguments": {"node-name": "cbw", "driver": "copy-before-write", "file": "disk", "target": "fleece", "bitmap": {"node": "disk", "name": "bitmap"}}}
{"execute": "query-block"}
{"execute": "qom-set", "arguments": {"path": "/machine/unattached/device[4]", "property": "drive", "value": "cbw"}}
{"execute": "blockdev-add", "arguments": {"node-name": "snapshot", "driver": "snapshot-access", "file": "cbw"}}
{"execute": "block-dirty-bitmap-add", "arguments": {"node": "snapshot", "name": "tbitmap"}}
{"execute": "block-dirty-bitmap-merge", "arguments": {"node": "snapshot", "target": "tbitmap", "bitmaps": [{"node": "disk", "name": "bitmap"}]}}
[..]
{"execute": "quit"}
EOF
pbsav - scan backups on proxmox backup server via clamav (2025-03-01, https://abbbi.github.io//pbsav)

A little side project this weekend:

pbsav

A small utility to scan virtual machine backups on PBS via clamav.

proxmox backup nbdkit plugin round 2 (2025-02-28, https://abbbi.github.io//nbdkit2)

I re-implemented the proxmox backup nbdkit plugin in C.

It seems golang shared libraries don’t play well with programs that fork().

As a result, the plugin was only usable if nbdkit was run in foreground mode (-f), making it impossible to use nbdkit’s captive modes, which are quite useful. Lessons learned.

Here is the C version

]]>