Practical ZFS - Latest topics
https://discourse.practicalzfs.com/latest
Tue, 21 Apr 2026 14:30:40 +0000

Prevent syncoid from attempting mountpoint on target (Sanoid)

I use syncoid --no-privilege-elevation and occasionally I see cannot mount '/mnt/backup-tank/my-dataset': failed to create mountpoint: Permission denied in the output.

The user on the target indeed doesn’t have ZFS mount permissions. I think it shouldn’t need them because the target is a backup server and if I need anything from the backup I would only then mount it manually or just zfs send it back to the production server. Or am I missing something?

If it’s fine not to grant ZFS mount permission, how can I prevent syncoid from attempting to mount the dataset on the target? I believe syncoid looks at the source properties and tries to replicate them on the target, including whether the dataset is mounted, but in normal operation I only want the dataset mounted on the source; I don’t need it mounted on the target.
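One commonly suggested approach, sketched here under the assumption that your syncoid version supports --recvoptions (check syncoid --help), is to pass receive options through so the target never tries to mount at all:

```shell
# Sketch: -u tells `zfs receive` not to mount the received dataset, and
# canmount=noauto keeps it unmounted across reboots. The dataset and
# user@host names below are placeholders, not the actual layout.
syncoid --no-privilege-elevation \
        --recvoptions="u o canmount=noauto" \
        user@source:tank/my-dataset backup-tank/my-dataset
```

With canmount=noauto on the target you can still mount it manually on the backup server if you ever need to browse it.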

Thanks for reading!

5 posts - 2 participants

https://discourse.practicalzfs.com/t/prevent-syncoid-from-attempting-mountpoint-on-target/4458 Tue, 21 Apr 2026 14:30:40 +0000
Recommended upgrade cadence for sanoid / syncoid (Sanoid)

Beginner question: What upgrade cadence is recommended for sanoid and syncoid?

It looks from the release history like the “major” versions are those numbered 2.x.0; is it reasonable to wait to upgrade for a 2.x.0 release? Or is it better to keep up-to-date with the point releases in between?

My usage of sanoid is very basic: just regular generation of hourly, daily, and monthly snapshots, locally on my ZFS machine. (I have no friends or family with a ZFS machine with whom I could set up mutual off-site backup (unless I build them one, which may happen someday).)

Thanks.

3 posts - 2 participants

https://discourse.practicalzfs.com/t/recommended-upgrade-cadence-for-sanoid-syncoid/4457 Sat, 18 Apr 2026 15:47:22 +0000
ZFS Replication Compression options? (OpenZFS)

I’ll admit it. I have painted myself into a corner.

I’m currently using a Hetzner “Auction server” with two 4 TB disks hanging off of it as my backup destination, and it works great.

But I expect that my source datasets will grow through the years. Growth is the whole reason I switched to a “real” NAS: my 4 TB NAS was constantly filling up.

So my Hetzner server has ~500GB free, and I am on borrowed time.

With my old Synology, their “hyperbackup” tool would compress the source data down dramatically on the destination: the ~4 TB on the source, if memory serves, compressed down to less than 1 TB in Backblaze B2.

Are there any magic levers I can pull to achieve the same thing with ZFS? I already set the compression on the backup zpool to zstd.

root@rescue /backuppool # zpool list -v
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
backuppool  3.62T  3.07T   569G        -         -     6%    84%  1.00x    ONLINE  -
  mirror-0  3.62T  3.07T   569G        -         -     6%  84.7%      -    ONLINE
    sda     3.64T      -      -        -         -      -      -      -    ONLINE
    sdb     3.64T      -      -        -         -      -      -      -    ONLINE
root@rescue /backuppool #
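There aren’t many magic levers here: a zstd pool is already close to the ceiling for incompressible data, but two things sometimes help. A sketch (hostnames and dataset names are hypothetical), assuming the data is at least somewhat compressible:

```shell
# Raise the zstd level on the backup pool; higher levels trade CPU for
# ratio and only affect newly written blocks.
zfs set compression=zstd-9 backuppool

# Use a plain (non -c) send so the target recompresses blocks with its
# own compression setting; `zfs send -c` would instead preserve the
# source's existing (possibly weaker) compression on the wire and disk.
zfs send tank/data@snap | ssh backup-host zfs receive -u backuppool/data
```

Note that blocks already on the backup pool keep their old compression until rewritten, and media files (video, photos, archives) rarely compress much at any level. Synology-style dedup-plus-compression ratios usually come from deduplication, which is a separate decision in ZFS.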

8 posts - 3 participants

https://discourse.practicalzfs.com/t/zfs-replication-compression-options/4456 Thu, 16 Apr 2026 21:53:07 +0000
Best Way to Mix Different Model SATA SSDs of Same Capacity in Mirror Pool? (OpenZFS)

Hello,

I’ve got 4x 2TB enterprise SSDs that I want to put into a 2 vdev mirror pool. The pool will be given to Proxmox Backup Server as a storage target.

One pair is a set of Micron 5300 Pros, and the other is a pair of read-intensive Samsung PM883s. On paper, they’re both rated for the same (or nearly the same) sequential read/write, though that’s certainly not a reliable metric.

However, the IOPS are quite different.

So, the Samsungs are faster, but for Proxmox Backup Server’s chunk-comparison-based I/O, I’m not sure how much that actually matters.

Question (See, I was going somewhere with this!): Is there any value in mixing the drive types for each vdev? That is, using a Micron and a Samsung in each vdev, versus having a Samsung vdev and a Micron vdev.

My instinct is that mixing them would force both vdevs to perform at the Micron’s slower speeds.

I haven’t used a pool yet where whole vdevs have entirely different drive types in them, so I’m not sure what the impact of that would be.

Either arrangement would be sufficient. I’m just curious what to expect in terms of performance.
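Whichever arrangement is chosen, per-disk and per-vdev behavior is easy to observe once the pool is running, so the instinct about the slower drive dragging a vdev down can be verified rather than guessed at. A sketch (pool name hypothetical):

```shell
# -v breaks stats out per vdev and per disk, -l adds latency columns;
# the 5-second interval reports live activity instead of lifetime averages.
zpool iostat -vl tank 5
```

If the mixed vdevs show one disk consistently slower under the PBS workload, the pairing can be rearranged with a resilver rather than a rebuild.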

7 posts - 3 participants

https://discourse.practicalzfs.com/t/best-way-to-mix-different-model-sata-ssds-of-same-capacity-in-mirror-pool/4453 Sun, 12 Apr 2026 04:15:13 +0000
Managing syncoid snapshots with multiple destinations (Sanoid)

Good evening,
I have syncoid backups going to more than one destination, and as a consequence I see

could not find any snapshots to destroy; check snapshot names.
WARNING:   zfs destroy 'pool/data/set'@syncoid_host_timestamp;  zfs destroy 'pool/data/set'@syncoid_host_timestamp failed: 256 at /sbin/syncoid line 1380.

Or worse yet

CRITICAL ERROR: Target pool/data/set exists but has no snapshots matching with pool/data/set!

I dug into that a bit and uncovered the issue. In short, the syncoid run to the second target removed the snapshots created when sending to the first target. Two options help:

  • --identifier=EXTRA can be used to disambiguate the snapshots between the two destinations. But snapshots for the “other” target accumulate and are not otherwise managed.
  • --no-stream causes the snapshots for the “other” target to not be transported.
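The first option can be sketched like this (host and dataset names are hypothetical); each destination gets its own identifier, which becomes part of the sync-snapshot name, so one run never matches and therefore never prunes the other run’s snapshots:

```shell
# Distinct identifiers keep the two destinations' snapshot bookkeeping
# separate; each run only manages snapshots bearing its own identifier.
syncoid --identifier=siteA pool/data/set root@backupA:pool/data/set
syncoid --identifier=siteB pool/data/set root@backupB:pool/data/set
```

The accumulation problem remains: each identifier’s older sync snapshots on the source still need pruning, either by hand or from a periodic job.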

I explored this at https://github.com/HankB/Fun-with-ZFS/tree/main/syncoid-snapshots where I’ve written scripts that create file-based pools to try these things out.

Feel free to review this and point out possible problems, errors, or better ways to manage this. I should probably test this between two hosts and not just between pools on one host, since host names are included in the snapshot names and so might matter.

Thanks!

4 posts - 2 participants

https://discourse.practicalzfs.com/t/managing-syncoid-snapshots-with-multiple-destinations/4452 Fri, 10 Apr 2026 02:19:47 +0000
[Proxmox/TrueNAS] Confusing Myself re: Avoiding Non-Aligned Block Sizes in ZFS for QEMU Storage; Sanity Check? (Proxmox)

Hello,

I’ve got what feels like a simple-but-long-winded n00b question that I’m hoping @mercenary_sysadmin or someone else might have some insight on.

Really quick, this is my setup for QEMU-based VM disk images in Proxmox:

  1. Pool ashift: 12
  2. Volblocksize for general-purpose VM virtual disk: 64k
  3. Typical VM drive size: 32 or 64 GiB; 256 GiB (Windows 11).

Question: I assumed that as long as I used ashift=12 and volblocksize=64k, I didn’t have to worry about my QEMU zvols getting out of block alignment (is that the right way to phrase it?). Is that not correct?

How do I verify my block alignment is correct after the fact, on zvols I’m using for VMs?

What got me thinking about this?
I’m watching the development of the new iSCSI and NVME-over-TCP storage plugin for Proxmox that allows using zVol-backed iSCSI/NVME-over-TCP as shared storage for Proxmox nodes:

I’m interested in this, as I have a Proxmox node that doesn’t have a ton of internal storage and would really like to use ZFS over iSCSI to store its VM virtual disks.

I just saw a bugfix that caught my attention. Fix VM migration size mismatch by WarlockSyno · Pull Request #28 · truenas/truenas-proxmox-plugin · GitHub

What’s happening:

When Proxmox allocates the target disk on TrueNAS, alloc_image rounds the requested size up to the next multiple of zvol_blocksize. The problem is QEMU’s block mirror checks that source and target block devices are the exact same size. If the source disk isn’t already aligned to your configured blocksize (16K, 128K, etc.), the target ends up a few KiB larger and QEMU bails immediately.

In this specific case the VM’s disk was 419430856 KiB, divisible by 8K but not 16K or 128K. Both TrueNAS storages had larger blocksizes configured, so both targets came out bigger than the source by 8-57 KiB depending on the storage.

Why the rounding exists:

It was added to pre-align zvol sizes to the configured blocksize for ZFS efficiency. The logic makes sense for fresh disk creation, but it breaks migrations from storage backends that don’t share the same alignment.

The proposed fix:

Instead of rounding $bytes up, step the volblocksize down by halves until it evenly divides the requested size. Since every byte count is divisible by 512, this always terminates. A perfectly-sized zvol gets created with the full configured blocksize (no behavior change for normal disk creation); a misaligned one gets a smaller blocksize that fits exactly.

I understand the problem (zvol volblocksize mismatch) and the solution, even if I would prefer to be warned about the issue and told to fix it myself rather than have an automated algorithm transparently change the volblocksize without telling me.

Fixing the underlying problem (if it exists) seems better than forcing a volblocksize change?

But what I don’t understand is this:

If the source disk isn’t already aligned to your configured blocksize (16K, 128K, etc.), the target ends up a few KiB larger and QEMU bails immediately.

That implies that I need to select specific virtual disk sizes that are divisible by my chosen volblocksize to avoid misalignment. That’s not a constraint I’ve been aware of up to now. Am I misinterpreting something? If not, what’s the best way to determine which disk size multiples go with a specific blocksize?

Is it as simple as just dividing the total number of KiB allocated to a VM’s virtual disk by the volblocksize? And if so, how do I determine that total number of KiB for zvols on a thin-provisioned pool?
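Essentially yes: the check is just size modulo volblocksize. A sketch using the 419430856 KiB disk size from the PR discussion (the loop and variable names are illustrative):

```shell
# Check whether a disk size (in KiB) divides evenly by each candidate
# volblocksize; a nonzero remainder means a target zvol rounded up to
# that blocksize would come out larger than the source disk.
size_kib=419430856
for bs_kib in 8 16 64 128; do
  rem=$(( size_kib % bs_kib ))
  if [ "$rem" -eq 0 ]; then
    echo "${bs_kib}K: aligned"
  else
    echo "${bs_kib}K: misaligned by ${rem} KiB"
  fi
done
```

For an existing zvol, `zfs get -p volsize,volblocksize pool/vm-disk` prints exact byte values to feed into the same arithmetic; thin provisioning doesn’t change volsize, only whether the space is reserved up front.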

6 posts - 2 participants

https://discourse.practicalzfs.com/t/proxmox-truenas-confusing-myself-re-avoiding-non-aligned-block-sizes-in-zfs-for-qemu-storage-sanity-check/4451 Wed, 08 Apr 2026 23:22:26 +0000
Join Klara's Upcoming Webinar: Cost-Efficient Storage on the New TrueNAS with Enhanced Fast Dedup (TrueNAS)

ZFS deduplication has traditionally come with a pretty strong warning: know your workload, or don’t enable it at all.

With Fast Dedup in newer TrueNAS releases, that guidance may be shifting.

We’re hosting a webinar that takes a closer look at what’s changed and where it actually makes sense to use it:

  • How Fast Dedup differs from legacy ZFS dedup
  • Interactions with compression, block cloning (BRT), and snapshots
  • Real-world workloads with meaningful dedup potential (VMs, VDI, CI/CD, backups, etc.)
  • Whether this actually changes the RAM/performance trade-offs that made dedup risky before

April 29, 2026
8 AM PDT | 11 AM EDT

Featuring Allan Jude and Andrew Fengler from Klara Inc., with special guest Chris Peredun from TrueNAS.

Curious to hear from people on PracticalZFS—are you starting to reconsider dedup with these newer approaches, or still avoiding this feature?

1 post - 1 participant

https://discourse.practicalzfs.com/t/join-klaras-upcoming-webinar-cost-efficient-storage-on-the-new-truenas-with-enhanced-fast-dedup/4450 Wed, 08 Apr 2026 15:57:32 +0000
Can this pool be saved? (OpenZFS)

TL;DR - No. The pool could not be saved in the face of both drives malfunctioning. However, the contents have been restored from recent (previous-day) backups. Details below for your viewing pleasure.

Not a rhetorical question, unfortunately. A couple months ago one of the drives in the mirror started playing up. When I looked into warranty, it was one day past the purchase date. I contacted WD and they provided an RMA number (props to them!). Before I sent it back, I put it in another host and ran diskroaster (https://github.com/favoritelotus/diskroaster/) on it, and it performed without error. I put it back in, added it back to the mirror, and watched it resilver and scrub without any issue. I concluded I had a bad cable connection and didn’t return it.

Weeks later (and while I was out of town) it stopped responding to SATA commands. On my return, I revived the RMA (which had expired by several days), but before I could pull the drive, the other drive in the mirror started developing reallocated/pending sectors at an alarming rate. The situation was:

hbarta@oak:~$ zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub repaired 0B in 06:24:10 with 0 errors on Fri Mar 13 02:27:30 2026
  scan: resilvered (mirror-0) 4.29T in 10:48:18 with 0 errors on Thu Mar 12 20:03:20 2026
config:

        NAME                        STATE     READ WRITE CKSUM
        tank                        DEGRADED     0     0     0
          mirror-0                  DEGRADED     0     0     0
            wwn-0x5000cca278d16d38  FAULTED     71   167   538  too many errors
            wwn-0x5000cca291ea5db6  ONLINE       0     0     0

errors: No known data errors
hbarta@oak:~$ 

The first drive on the list is the one with reallocated sectors and the second one is the one that occasionally goes AWOL.

The situation progressed to:

root@oak:/home/hbarta/Programming/Ansible/Pi# zpool status tank -v
  pool: tank
 state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
  scan: resilvered 20.2G in 00:03:44 with 0 errors on Fri Apr  3 13:16:29 2026
config:

        NAME                        STATE     READ WRITE CKSUM
        tank                        DEGRADED     0     0     0
          mirror-0                  DEGRADED     6    40     0
            wwn-0x5000cca278d16d38  FAULTED     65   144   193  too many errors
            wwn-0x5000cca291ea5db6  ONLINE       3    44     0

errors: List of errors unavailable: pool I/O is currently suspended
root@oak:/home/hbarta/Programming/Ansible/Pi# 

After a couple reboots ZFS is recovering beyond my expectations:

root@oak:~# zpool status tank -v
  pool: tank
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: resilvered 6.07G in 00:38:49 with 0 errors on Mon Apr  6 15:28:41 2026
config:

        NAME                        STATE     READ WRITE CKSUM
        tank                        DEGRADED     0     0     0
          mirror-0                  DEGRADED     0     0     0
            wwn-0x5000cca278d16d38  DEGRADED     5     0     0  too many errors
            wwn-0x5000cca291ea5db6  ONLINE       0     0     0

errors: No known data errors
root@oak:~# 

At present I have another pool on this host that is a copy of tank. I’ve stopped the processes that use tank (an unexpected advantage of dockerized services) and plan to perform one more backup of tank to drago_standin, export tank, rename drago_standin to tank, and proceed as if everything is normal. Once everything is confirmed working, I’ll probably bring up a spare host with sufficient drive capacity to make another copy of tank.
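The export/rename step described above can be sketched as follows (pool names from the post; the syncoid catch-up line is illustrative, any final replication method works):

```shell
# Final catch-up copy, then swap names. `zpool import <old> <new>`
# imports a pool under a new name, which is how the rename happens.
syncoid -r tank drago_standin
zpool export tank
zpool export drago_standin
zpool import drago_standin tank
```

Exporting the failing tank first avoids any chance of both pools being imported under the same name.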

This is more excitement than I really want on a Monday morning.

Note: The first drive that was playing up is the second one in the status.

4 posts - 3 participants

https://discourse.practicalzfs.com/t/can-this-pool-be-saved/4449 Mon, 06 Apr 2026 21:20:33 +0000
My resilverings are kind of slow (OpenZFS)

To be honest, it was actually a lot quicker than what zpool status said. :slight_smile:

Not that it really matters that the resilvering time is incorrectly calculated/displayed, but it is worrying that the filesystem has bugs of any kind.
(Ubuntu 22.04.5 LTS, zfs-2.2.2-0ubuntu9, zfs-kmod-2.2.2-0ubuntu9.4)

I didn’t really have a question for the crowd here, just wanted to share the status message.

1 post - 1 participant

https://discourse.practicalzfs.com/t/my-resilverings-are-kind-of-slow/4448 Thu, 02 Apr 2026 21:31:57 +0000
Migrate data to a new ZFS dataset with different properties (OpenZFS)

Hello

How can I migrate all the data from an existing dataset to a new dataset in a new pool?
I want only the data to be transferred.
All properties of the newly created dataset should be used, so the properties should not be transferred!

Is this possible with ZFS send/receive?
I only want the data, just as if I were using rsync.
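Yes, within limits; a sketch with hypothetical pool/dataset names. A plain, non-recursive `zfs send` without `-p` or `-R` does not include dataset properties, so the receiving dataset keeps whatever it inherits from its new parent:

```shell
zfs snapshot oldpool/data@migrate
# -u avoids mounting the received dataset before you've checked it over
zfs send oldpool/data@migrate | zfs receive -u newpool/data

# If truly only the files matter (no snapshot carried over), file-level
# copying works too, exactly as with rsync:
rsync -aHAX /oldpool/data/ /newpool/data/
```

One caveat: a send stream preserves the source’s existing block sizes, so if recordsize is among the properties you want changed for existing data, a file-level copy like rsync is the surer route.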

Regards

4 posts - 3 participants

https://discourse.practicalzfs.com/t/migrate-data-to-a-new-zfs-dataset-with-diffrent-properties/4447 Wed, 01 Apr 2026 19:25:20 +0000
TRIM on SSD pools (OpenZFS)

What’s the recommended way to set this up?

I sort of foolishly assumed that trim was enabled automatically.

According to this, running zpool trim via a systemd timer is preferred to setting the autotrim property on the pool itself.
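The two approaches look roughly like this (pool name hypothetical; whether your distro already ships a periodic trim timer or cron job varies, so check before adding one):

```shell
# One-shot manual trim; this is the command a periodic timer would invoke.
zpool trim tank
zpool status -t tank            # -t shows per-device trim progress/state

# The alternative: let ZFS issue trims continuously as blocks are freed.
zpool get autotrim tank
zpool set autotrim=on tank
```

The usual argument for the periodic approach is that batched trims keep the small, constant overhead of autotrim out of the normal write path.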

4 posts - 4 participants

https://discourse.practicalzfs.com/t/trim-on-ssd-pools/4441 Thu, 26 Mar 2026 00:25:54 +0000
Is It Worth Using ZVOLs [Performance] (Offtopic Chat)

Hello everyone,

Is it viable to use ZVOLs in production environments?

In my virtual lab tests, I’ve observed significantly lower performance compared to datasheet expectations, regardless of pool configuration or block size. While overall tuning does improve performance, it still doesn’t come close to what the datasheets suggest.

All tests were conducted on FreeBSD 15 with OpenZFS (zfs-2.4.0-rc4-FreeBSD, same version in kmod). From what I’ve researched, this seems to be a widespread issue in OpenZFS, regardless of version or operating system. I’ve also come across mentions that performance was slightly better in older OpenZFS versions.

The threads I’ve reviewed on this topic include:

Up to this point, everything relates to OpenZFS. To broaden the comparison, I decided to test Oracle ZFS on Solaris. I set up a lab using Solaris 11.4 on x86 (although I understand SPARC would be the ideal platform, I unfortunately don’t have access to that hardware). To my surprise, ZVOL performance on Solaris is very similar to what I observed with OpenZFS. This leads me to think the issue may not be specific to the implementation (OpenZFS vs. Oracle ZFS), but rather something inherent to ZVOLs themselves.

Allow me a brief aside: although Solaris is a declining operating system, I think it’s still useful as a point of comparison. I also found it interesting that it supports high availability for ZFS through Oracle Solaris Cluster 4.4, which seems relevant for production environments.

Additionally, Oracle offers a storage solution called ZFS Appliance. I’ve tested its OVA, and it appears to be a highly sophisticated system with LUN support. It’s hard to imagine that a product at that level would suffer from the same ZVOL performance limitations seen in Solaris or OpenZFS.

For this reason, I’d be very interested to hear from anyone who has worked with a real SPARC-based setup and can share their experience, especially regarding ZVOL performance.
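For anyone wanting to reproduce comparisons like these, it helps to benchmark the zvol at its own block size so the zvol code path is what’s being measured. A sketch (pool name, sizes, and fio parameters are illustrative, not the poster’s actual test):

```shell
# Create a test zvol with an explicit volblocksize...
zfs create -V 10G -o volblocksize=16k tank/benchvol

# ...and drive random writes at that same block size. posixaio works on
# both FreeBSD and Linux; adjust ioengine/iodepth to match the workload
# you actually care about.
fio --name=zvol-randwrite --filename=/dev/zvol/tank/benchvol \
    --rw=randwrite --bs=16k --iodepth=16 --numjobs=4 \
    --ioengine=posixaio --direct=1 --runtime=60 --time_based --group_reporting
```

Running the same fio job against a file on a dataset with recordsize=16k gives a useful baseline for how much overhead the zvol path itself adds.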

Thanks in advance.

20 posts - 5 participants

https://discourse.practicalzfs.com/t/is-it-worth-using-zvols-performance/4437 Mon, 23 Mar 2026 02:35:16 +0000
ZFS pool hotspare replace hung (OpenZFS)

A disk in my RAIDZ2 pool failed/is failing. It tried to replace itself with a hotspare (ata-ST18000NT001-3NF101_ZVTDW28R) but the operation seems to have stalled. I would like to remove the hot spare and manually run the replace command. I can’t remove the disk because zfs says

cannot remove ata-ST18000NT001-3NF101_ZVTDW28R: Pool busy; removal may already be in progress

Thanks for the help.

zpool status output

      NAME                                          STATE     READ WRITE CKSUM
      storage                                       DEGRADED     0     0     0
        raidz2-0                                    ONLINE       0     0     0
          ata-WDC_WD60EFRX-68L0BN1_WD-WX52D30AVEY4  ONLINE       0     0     0
          ata-WDC_WD60EFRX-68L0BN1_WD-WX52D30AVNF8  ONLINE       0     0     0
          ata-ST6000VN0033-2EE110_ZADAG8SB          ONLINE       0     0     0
          ata-WDC_WD60EFRX-68L0BN1_WD-WX52D30AVA82  ONLINE       0     0     0
          ata-ST6000VN0033-2EE110_ZADAGFXS          ONLINE       0     0     0
          ata-ST6000VN0033-2EE110_ZADAKVHQ          ONLINE       0     0     0
        raidz2-1                                    DEGRADED     0     0     0
          ata-ST16000NM001G-2KK103_ZL21FTG0         ONLINE       0     0     0
          ata-ST16000NM001G-2KK103_ZL27RLMD         ONLINE       0     0     0
          spare-2                                   UNAVAIL     68   109    78  insufficient replicas
            ata-ST16000NM001G-2KK103_ZL28BWBD       FAULTED     25     0     0  too many errors
            ata-ST18000NT001-3NF101_ZVTDW28R        REMOVED      0     0     0
          ata-ST16000NM001G-2KK103_ZL28CETE         ONLINE       0     0     0
          ata-ST16000NE000-3UN101_ZVTEF7N0          ONLINE       0     0     0
          ata-ST16000NM001G-2KK103_ZL28JJWZ         ONLINE       0     0     0
        raidz2-2                                    ONLINE       0     0     0
          ata-ST18000NT001-3NF101_ZVTDVCF3          ONLINE       0     0     0
          ata-ST18000NT001-3NF101_ZVTDVDNR          ONLINE       0     0     0
          ata-ST18000NT001-3NF101_ZVTDVE2F          ONLINE       0     0     0
          ata-ST18000NT001-3NF101_ZVTDW28Q          ONLINE       0     0     0
          ata-ST20000NM002C-3X6103_ZXA0H7AZ         ONLINE       0     0     0
          ata-ST18000NT001-3NF101_ZVTDW2A9          ONLINE       0     0     0
        raidz2-3                                    ONLINE       0     0     0
          ata-ST8000NM0055-1RM112_ZA15VQEE          ONLINE       0     0     0
          ata-ST8000NM0055-1RM112_ZA161H3H          ONLINE       0     0     0
          ata-ST8000NM0055-1RM112_ZA161SGY          ONLINE       0     0     0
          ata-ST8000NM0055-1RM112_ZA1629FS          ONLINE       0     0     0
          ata-ST8000NM0055-1RM112_ZA162R80          ONLINE       0     0     0
          ata-ST8000VN004-2M2101_WSD80GKX           ONLINE       0     0     0
      spares
        ata-ST18000NT001-3NF101_ZVTDW28R            INUSE     currently in use
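For what it’s worth, an in-use spare is normally released with zpool detach rather than zpool remove; a sketch using the device names from the status above (the replacement disk path is a placeholder):

```shell
# Detach the spare from the spare-2 grouping; a detached hot spare
# returns to the pool's spares list rather than leaving the pool.
zpool detach storage ata-ST18000NT001-3NF101_ZVTDW28R

# Then run the replacement explicitly against the faulted disk:
zpool replace storage ata-ST16000NM001G-2KK103_ZL28BWBD \
    /dev/disk/by-id/NEW-DISK-ID
```

If detach also reports the pool busy, the stalled resilver may need to finish or be interrupted first; `zpool status` will show whether a resilver is still in progress.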

3 posts - 2 participants

https://discourse.practicalzfs.com/t/zfs-pool-hotspare-replace-hung/4436 Mon, 23 Mar 2026 02:34:58 +0000
Optimal network transport for zfs-send/zfs-receive? (OpenZFS)

Can anyone point me to guidelines or best practices for configuring ZFS send/receive to maximize speed over dedicated fiber between servers? The 10Gbps NICs are running near rated speed with 9k mtu (according to iperf3), but I suspect that SSH is not the quickest pipe, at least in default configuration.

  • Would something like socat be better?
  • Or, an ssh that supports the “none” cipher?
  • Enable/Disable compression in ssh and/or zfs-send?
  • Worth messing around with kernel or NIC-driver tuning options?

Before I run a bunch of experiments I thought I’d ask – doubtless others have figured this out. :thinking:

Other details: all systems are currently FreeBSD 15 RELEASE. NICs are an assortment of Intel and Broadcom. Plenty of RAM, but CPUs of varying power.
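A common pattern on trusted dedicated links is to keep ssh for control and move the bulk stream over mbuffer (or netcat/socat). A sketch, with hostname and port hypothetical; start the receiver first:

```shell
# Receiving host: listen on a TCP port, buffer, and feed zfs receive.
mbuffer -I 9090 -s 128k -m 1G | zfs receive -u tank/backup

# Sending host: -L allows large blocks, -c sends blocks already
# compressed on disk, avoiding a decompress/recompress round trip.
zfs send -Lc tank/data@snap | mbuffer -s 128k -m 1G -O receiver.example:9090
```

This removes both ssh cipher overhead and pipe stalls (mbuffer smooths the bursty send stream), at the cost of an unauthenticated, unencrypted data channel, which is usually acceptable on a dedicated point-to-point fiber.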

15 posts - 5 participants

https://discourse.practicalzfs.com/t/optimal-network-transport-for-zfs-send-zfs-receive/4431 Mon, 16 Mar 2026 23:08:24 +0000
Basic setup note for proxmox and ZFS (Proxmox)

I would like to see a basic suggestion for a setup with Proxmox and ZFS. A description of routines would also be great. I find myself writing a note, but I am sure someone has already done a better job of that.

doc/basic-server-infrastructure.md · Åpent norsk industriverksted · GitLab

Do you know any good descriptions for a similar architecture?

1 post - 1 participant

https://discourse.practicalzfs.com/t/basic-setup-note-for-proxmox-and-zfs/4429 Sat, 14 Mar 2026 23:57:12 +0000
Zarcstat bug (maybe) (OpenZFS)

On my system (OpenZFS 2.4.1), zarcstat in continuous scroll mode (e.g. zarcstat 1 for one update per second) is oblivious to changes in the number of terminal lines (i.e. stty size). It’s meant to re-print the column header just as the old header scrolls out of view.

The output of stty size is accurate and picks up changes on-the-fly. So I reckon my terminal app + session are OK here.

If this is truly a bug I figure something so obvious would’ve been caught a long time ago? Anyway, can someone please try zarcstat 1 with a very short terminal window and let me know if the header reprints as intended? Trying to not annoy OpenZFS devs unless there’s a genuine problem.

If you DO reproduce this bug, can you try again with a copy of /usr/bin/zarcstat where lines 466+467 are edited from:

data = fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ, '1234')
sz = struct.unpack('hh', data)

…to…

data = fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ, b'\x00' * 8)
sz = struct.unpack('hhhh', data)

I think the script’s except Exception: pass may be silently covering for a technique that worked in older Python.

Thanks!

3 posts - 2 participants

https://discourse.practicalzfs.com/t/zarcstat-bug-maybe/4426 Fri, 13 Mar 2026 03:24:16 +0000
Config and Workflow for Distributed Video Editors (OpenZFS)

A friend asked me for some thoughts on a goal and I’m curious if any of you have input too.

He and three other video editors all work for one org. Right now, when they do a shoot, one person ‘owns’ the project. They take all the data home, do the edit, and publish the video. It’s educational stuff that is posted to their platform. They do not back up their data, there is no central archive, and collaboration is pretty much impossible. Needing a previous clip, for instance, means you have to figure out who did a given video and hope they can find the asset you need.

Not ideal.

This all started when he said to me last night ‘I need two Petabytes of storage’.

As we began talking, the conversation turned more to a discussion about workflows and goals. The storage is the easy part; making it work for them is a much bigger challenge.

I think the dumb simple is they keep doing what they are doing now, but copy their assets and completed projects to a backup server. This would be an improvement but still leaves a fair bit of risk and inflexibility.

He’s a ‘Do it once’ sort of guy and while they do NOT need 2PB today, he doesn’t want to have to think about this again for 10 years. Their current data is more like 500TB, and they create about 100TB/year. I think a chassis with room to expand is probably a stronger starting place and we can add new pools each year, or whatever, to keep up with expansion. No one is expected to be editing off of whatever the ‘central host’ is. And, I feel like buying that much storage to sit idle is a waste at this stage. I’d rather have a plan for meeting their needs without locking them into dozens of disks that will go EOL before they get much data on them.

I started to wonder if it might be wise to look at one central unit with all the data, and then each Editor then having a smaller unit with a subset of working data. They edit in a mix of NLEs, but perhaps they could create a dataset per project?

The goal would be that an editor makes a dataset, loads the data in and gets to work. The ZFS box at home, perhaps small TrueNAS Minis or something, would then replicate that up to the main system. The next piece though would be allowing an editor to ‘check out’ a project from the main system and have it replicate down to their unit …

I know of no tooling to do that smoothly. Replication in one direction? Sure. Easy. But it’s this idea of taking a project local from the main host that gives me pause. These are video editors: smart people, to be sure, but not nerds.

I don’t expect I’ll have any success getting them to craft custom and ever changing sanoid.conf files..

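The “check out” flow can be approximated with plain syncoid as long as exactly one side changes between syncs; a sketch with hypothetical host and dataset names, not a conflict-safe tool:

```shell
# Editor checks a project out: pull the project dataset down from the
# central archive to the local working box.
syncoid --no-sync-snap central:tank/projects/projA local/projects/projA

# ...edit locally, then check it back in by replicating the other way.
# Discipline (or wrapper tooling) must ensure the central copy stays
# untouched while the project is checked out, or the incremental fails.
syncoid --no-sync-snap local/projects/projA central:tank/projects/projA
```

Wrapping those two commands in a `checkout`/`checkin` script, with a lock file or ZFS user property on the central dataset marking who holds a project, is about the minimum needed to make this safe for non-nerds.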
Other ideas? Am I missing the obvious? (I hope so…)

5 posts - 2 participants

https://discourse.practicalzfs.com/t/config-and-workflow-for-distributed-video-editors/4425 Thu, 12 Mar 2026 19:37:29 +0000
Media Server/Jellyfin (OpenZFS)

Hello

I’m very much a beginner when it comes to servers, Linux, and ZFS, so please bear with me.

I have a server I built back in 2020 and it’s worked well (enough), but I wanted to get a more robust understanding of my setup, see where improvements can be made, or figure out which things will be issues that I have to be aware of.

Current Setup:
i7-9700k
RTX 2060
64GB 2666 (non-ECC) (I thought I had 128GB, but looking at the specs of my mobo it says it caps out at 64GB; I’m at work currently, so going from memory)
LSI 9300-16i
4TB (x8) WD Enterprise (media)
512GB nvme (OS)
1000W PSU

OS info:
Ubuntu (I don’t remember if I’m using 22 or 24)
ZFS RAIDZ1
Jellyfin

I don’t do much else on the server, and I don’t use containers (I haven’t learned much about them and haven’t felt they would be needed, yet). I’ve rebuilt the server more than once, either due to running a command that broke something or running an update that somehow corrupted the kernel, when I couldn’t figure out how to get everything running smoothly again. I say that to say, re-imaging hasn’t been a slow process (outside of re-doing my Jellyfin config setup).

Questions I have:

  1. I don’t run my server 24/7, only when I want to watch something. Is it hurting my drives/server that I don’t have it on all day? Or the fact that I turn it on and off more than once a week?
  2. I know that I would benefit from ECC RAM, but would I benefit greatly from upgrading my CPU or RAM speed, or from moving to ECC, for my use case?
  3. Are there any commands worth looking into that improve the quality of the server or ZFS? I have set up my compression a certain way (again, I’m at work, so I don’t fully remember how to verbalize that at the moment).
  4. Is it worth creating a scratch disk for my server? (I have extra drives, including SSDs, plus free SATA ports as well as ports on the PCIe card.)
  5. Should I think about improving my GPU? At the moment, I live alone, so I never stream to more than 1 device at a time. For what I have, is the only benefit for the streams? It seems to do 4K DV with DTS-HD Master Audio just fine on certain films.
  6. Are there more inexpensive GPUs worth looking at if the 2060 ever shits the bed or I need to repurpose it for anything? My main concern is streaming 4K DV and being able to handle whatever audio channels each movie/TV show has (I only use 5.1 systems at the most). I have gotten conflicting information on what minimum GPU is needed to do all of that with no issues.
  7. Is my 1000W PSU overkill for everything I have? Is anything else in my build overkill for my use case?

11 posts - 3 participants

Read full topic

]]>
https://discourse.practicalzfs.com/t/media-server-jellyfin/4423 Tue, 10 Mar 2026 17:19:37 +0000 No No No discourse.practicalzfs.com-topic-4423 Media Server/Jellyfin
Does ZBM need to be "installed?" [SOLVED] OpenZFS I’ve downloaded the pre-compiled ZBM .efi to my EFI system partition, used efibootmgr to poke an entry into NVRAM, and successfully booted into it.

I can also boot into rEFInd which auto-detects said .efi and can chainload it, passing commandline arguments defined in /boot/refind_linux.conf. No boot stanzas in loader/entries needed.

Either way it sees my pool. I presume I can zpool set <property> to give ZBM whatever else it might need (I’m not there yet – I’m early in the process of moving an existing Fedora install onto ZFS).

So why would I want/need to “install” ZBM? I love this project, but it needs a cheat-sheet for ADD-sufferers. The docs are good, but a single page to tie it all together is missing; they have step-by-step instructions for new builds but nothing for brownfield conversions.


2nd part of this question: What’s actually required on the kernel command line to boot into a ZFS-on-root Linux system? Right now I have root=ZFS=tank/ROOT/fedora boot=zfs and all my datasets are canmount=noauto to lay the groundwork for a multi-distro ZfsBootMenu future. Does the above root= directive tell the system to go ahead and mount this pool anyway?

Does my pool need property bootfs set to tank/ROOT/fedora or should it remain blank? Seems counterproductive if I want to boot a distro in a different dataset.
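For reference, the property changes I’m experimenting with look like this — a sketch only, using the pool/dataset names from my setup, not a verified recipe:

```shell
# Optional default boot environment; ZBM still lists every BE it finds
zpool set bootfs=tank/ROOT/fedora tank

# Per-BE kernel command line, read by ZFSBootMenu instead of refind_linux.conf
zfs set org.zfsbootmenu:commandline="rw quiet" tank/ROOT/fedora

# canmount is not inherited, so set it on every boot environment explicitly
zfs set canmount=noauto tank/ROOT/fedora
```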

I get to choose between zfs-mount.service and having zfs-mount-generator read /etc/zfs/zfs-list.cache? Why would I want the latter?

fstab will become obsolete when I’m finished?

I have so many questions about this “zero administration” filesystem…

10 posts - 4 participants

Read full topic

]]>
https://discourse.practicalzfs.com/t/does-zbm-need-to-be-installed-solved/4419 Sun, 08 Mar 2026 01:41:45 +0000 No No No discourse.practicalzfs.com-topic-4419 Does ZBM need to be "installed?" [SOLVED]
Syncoid permissions on remote Sanoid I am sure that I am doing something simple wrong.

I set up a new backup server and moved two HDDs which were in a ZFS mirror on another computer to the new one. Then I imported them and upgraded the pool (since the version of Ubuntu is different). Everything looks fine. So far no errors on the scrub.

I am trying to use syncoid to send the latest snapshots from my main server to the backup but am getting a permissions error:

Here is the command I am using on my production server:

/usr/sbin/syncoid --no-sync-snap --no-privilege-elevation --create-bookmark -r rpool [email protected]:backup/encrypt/rpool

Here is the error:

Sending incremental rpool/data/encrypt_lxc/vm-112-disk-0@autosnap_2025-10-07_10:30:00_monthly ... autosnap_2026-03-07_13:00:04_hourly (~ 55.0 GB):
cannot receive incremental stream: permission denied
mbuffer: error: outputThread: error writing to <stdout> at offset 0x250000: Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe 

I followed the steps from here. That is supposed to be done on the remote host, right?
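For completeness, the delegation I expect to need on the receiving side looks something like this (user and dataset names from my command above; the exact permission list is my assumption, not something I’ve verified):

```shell
# On the backup host, as root: delegate what an unprivileged `zfs receive` needs
zfs allow backupuser create,mount,receive,rollback backup/encrypt/rpool

# Double-check what is actually delegated
zfs allow backup/encrypt/rpool
```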

What am I doing wrong?

6 posts - 2 participants

Read full topic

]]>
https://discourse.practicalzfs.com/t/syncoid-permissions-on-remote/4416 Sat, 07 Mar 2026 14:09:32 +0000 No No No discourse.practicalzfs.com-topic-4416 Syncoid permissions on remote
One pool, multiples datasets OpenZFS Hi all, currently I have one disk with one pool but multiple datasets for 3 distros, so I have 7 mountpoints: 3 for roots, 3 for homes, and one shared.
Currently the 3 homes and the shared dataset are mounted in legacy mode, but I had read this in the ZFSBootMenu guide:
"It is important to set the property canmount=noauto on any file systems with mountpoint=/ (that is, on any additional boot environments you create). Without this property, the OS will attempt to automount all ZFS file systems and fail when multiple file systems attempt to mount at /; this will prevent your system from booting. Automatic mounting of / is not required because the root file system is explicitly mounted in the boot process.

Also note that, unlike many ZFS properties, canmount is not inheritable. Therefore, setting canmount=noauto on zroot/ROOT is not sufficient, as any subsequent boot environments you create will default to canmount=on. It is necessary to explicitly set the canmount=noauto on every boot environment you create."

So I’m thinking: if I set the homes to canmount=noauto, can I use zfs mount for them, bypassing fstab entirely?
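If I understand right, that would look something like this (dataset names invented for illustration):

```shell
# Keep the home dataset out of automatic mounting
zfs set canmount=noauto rpool/home/debian
zfs set mountpoint=/home rpool/home/debian

# Then mount it explicitly when that distro boots; no fstab entry needed
zfs mount rpool/home/debian
```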

3 posts - 3 participants

Read full topic

]]>
https://discourse.practicalzfs.com/t/one-pool-multiples-datasets/4414 Sat, 07 Mar 2026 04:34:23 +0000 No No No discourse.practicalzfs.com-topic-4414 One pool, multiples datasets
Homelab NAS build and architecture questions Proxmox Hello!

Big fan of the forum, the LNL family podcasts, ZFS, and Linux in general. Even after 4 years of homelabbing and learning Linux, I still feel like such a noob, and I’m terribly indecisive and insecure about making long-term architecture decisions for myself, so I wanted to run my plan by the brilliant minds here in the forum, if that’s okay.

My background:

I have been slowly leveling up my Linux/ZFS knowledge and hardware over the past 4 years. I started with an old Mac mini running Proxmox with OWC cages attached via Thunderbolt, then moved to an Intel NUC with an OWC cage attached via USB (not ideal, I know). I currently use sanoid/syncoid, and datasets are replicated to backup drives locally and offsite. I’ve run into very few issues, so I must be at least somewhat competent even though my hardware is not ideal.

New build

I have finally invested in a proper NAS build so I can both expand as I reach 80% used with my current setup and properly attach drives directly via SATA. I have 6 12TB drives on the way. I’m leaning towards putting them all into a raidz2 pool and using Proxmox to manage ZFS and my LXC containers (mostly local/offsite media streaming, but a few websites, pihole, &c., standard homelab stuff) since I’m rather comfortable and experienced with it. But my shiny-object syndrome is making me want to take this migration opportunity to learn something new; I know I could go Ubuntu/Debian with QEMU and virt-manager or whatever, but I’m hesitant to make such a big switch for a ‘production’ server.
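For concreteness, the layout I’m leaning toward would be created roughly like this (device paths are placeholders):

```shell
# Six 12TB drives in one raidz2 vdev: usable capacity of four drives, any two can fail
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
```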

Questions

How would you architect/lay out (6) 12TB drives for a new NAS homelab? Are L2ARC and SLOG even necessary for my use case? I’m not as worried about performance as I am about finding a middle ground between maximizing redundancy and capacity.

If I decide to go this route, can you transfer Proxmox LXC backups to virt-manager painlessly? The containers are all backed up as ZFS datasets, I believe; Proxmox does so much I don’t even realize.

What else should I be considering for this sort of upgrade?

Sorry for being such a noob, I think I’m just looking for some reassurance in sticking with what I am familiar with and guidance with my intended pool layout. I also wanted to finally participate in the forum rather than talk to an inept chatbot.

Huge TIA to the community, I wouldn’t know anything at all without everyone here being so generous in sharing their valuable knowledge.

5 posts - 2 participants

Read full topic

]]>
https://discourse.practicalzfs.com/t/homelab-nas-build-and-architecture-questions/4412 Mon, 02 Mar 2026 18:34:14 +0000 No No No discourse.practicalzfs.com-topic-4412 Homelab NAS build and architecture questions
Secure your spot for our upcoming webinar "Open-Source Storage for European Sovereignty: How Entersekt Chose ZFS" OpenZFS

Data sovereignty isn’t just a policy discussion; it’s an architectural one.

Open-source storage is a practical foundation for meeting European data sovereignty requirements in production environments with strict security and compliance needs.
Allan Jude, Klara Co-founder and Head of Solutions Architecture, is joined by Eirik Øverby, COO at Entersekt, to discuss why Entersekt chose ZFS as the foundation for its storage platform and how that decision supports EU data sovereignty, security, and operational control.

Learn:

  • Why Entersekt selected ZFS for its EU sovereign databases
  • How open-source storage fosters control, compliance, and transparency
  • Key architectural and operational decisions
  • Lessons learned from running ZFS in a security-sensitive environment

Join us live: Open-Source Storage for European Sovereignty: How Entersekt Chose ZFS - Klara Systems

1 post - 1 participant

Read full topic

]]>
https://discourse.practicalzfs.com/t/secure-your-spot-for-our-upcoming-webinar-open-source-storage-for-european-sovereignty-how-entersekt-chose-zfs/4411 Mon, 02 Mar 2026 17:27:02 +0000 No No No discourse.practicalzfs.com-topic-4411 Secure your spot for our upcoming webinar "Open-Source Storage for European Sovereignty: How Entersekt Chose ZFS"
Post Fangtooth upgrade, can't destroy invalid duplicate boot-pool [SOLVED] TrueNAS I just performed a TrueNAS upgrade via the GUI from 24.10.2.4 (EE) to 25.04.2.6 (FT).

When it rebooted after the upgrade, I’m now getting an error when it tries to import “boot-pool”

It says there is more than one pool named boot-pool and to mount the correct one by ID and then exit. /sbin/zpool import shows both boot-pools (and the data pool). One boot-pool is the proper one and the other looks like so:

  pool: boot-pool
    id: 1885521082526155599
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
config:

        boot-pool    UNAVAIL  insufficient replicas
          nvme0n1p4  UNAVAIL  invalid label

It is indeed invalid, as the partition it’s pointing to is the swap partition. I’m now stuck. I can’t rename it with zpool import because it’s invalid and won’t import, and I can’t zpool destroy it using the ID. Obviously I don’t want to destroy it by name because the real boot-pool would likely go poof.
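(For anyone landing here later: since the phantom pool only exists as a leftover label on the swap partition, one approach is to clear the label on that device directly. This is an assumption about this setup, not a tested fix; confirm the partition really is the one carrying the bogus label first.)

```shell
# Confirm which device carries the stale ZFS label before touching anything
zpool import

# Wipe the leftover ZFS label from the swap partition only
zpool labelclear -f /dev/nvme0n1p4
```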

I could really use some suggestions.

Thanks in advance
-Pyrroc

2 posts - 1 participant

Read full topic

]]>
https://discourse.practicalzfs.com/t/post-fangtooth-upgrade-cant-destroy-invalid-duplicate-boot-pool-solved/4410 Mon, 02 Mar 2026 03:10:16 +0000 No No No discourse.practicalzfs.com-topic-4410 Post Fangtooth upgrade, can't destroy invalid duplicate boot-pool [SOLVED]
zfs snapshots on Ubuntu Server OpenZFS I’m trying to set up a new host in my homelab, and I’d like to have the ZFS snapshot functionality available. However, I can’t seem to find anything on setting up a server distro with ZFS, which makes me wonder if there’s a reason I can’t find anything.

Is it worthwhile, or am I wasting my time?
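To be concrete, what I’m after is the kind of scheduled snapshotting sanoid does; my understanding is that a minimal /etc/sanoid/sanoid.conf would look something like this (the dataset name is a placeholder):

```
[tank/data]
        use_template = production

[template_production]
        frequently = 0
        hourly = 36
        daily = 30
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes
```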

5 posts - 2 participants

Read full topic

]]>
https://discourse.practicalzfs.com/t/zfs-snapshots-on-ubuntu-server/4408 Sat, 28 Feb 2026 19:56:52 +0000 No No No discourse.practicalzfs.com-topic-4408 zfs snapshots on Ubuntu Server
Migrating VMs to new root OpenZFS If I am importing my KVM virtual machines onto a new root installation, are the snapshots that were taken by Sanoid under the old installation still good? I am restarting regular snapshots; I just wasn’t sure if older snapshots were worth keeping or not.

2 posts - 2 participants

Read full topic

]]>
https://discourse.practicalzfs.com/t/migrating-vms-to-new-root/4403 Tue, 24 Feb 2026 17:30:32 +0000 No No No discourse.practicalzfs.com-topic-4403 Migrating VMs to new root
OpenTofu For Managing LibVirt VM deployment Offtopic Chat Hello All,

Looking for some recommendations and insights.

I would like to deploy copies of an existing template libvirt VM, which has been previously provisioned with RHEL and all needed applications. I am looking to automate this process by doing the following:

  1. copy the template vm to a new vm-name
  2. change the hostname of the copied Linux VM
  3. change the static IP address(static IP is a requirement)
  4. launch the newly provisioned linux-vm

Is OpenTofu a good choice for doing this or is there a simpler approach?
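For comparison, the manual path for steps 1–4 with the stock libvirt tooling would be roughly this (VM names are placeholders; virt-customize comes from the libguestfs tools, and the connection name in the nmcli command is an assumption about the guest):

```shell
# 1. Clone the template's disk and domain definition
virt-clone --original rhel-template --name web01 --auto-clone

# 2 & 3. Set hostname and static IP inside the clone (VM must be powered off)
virt-customize -d web01 --hostname web01 \
  --firstboot-command 'nmcli con mod "System eth0" ipv4.addresses 192.168.1.50/24 ipv4.method manual'

# 4. Launch the newly provisioned VM
virsh start web01
```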

Any ideas or suggestions, would be very much appreciated.

Thanks!

5 posts - 2 participants

Read full topic

]]>
https://discourse.practicalzfs.com/t/opentofu-for-managing-libvirt-vm-deployment/4402 Tue, 24 Feb 2026 13:56:35 +0000 No No No discourse.practicalzfs.com-topic-4402 OpenTofu For Managing LibVirt VM deployment
New ZFS User Replication Blues TrueNAS I have the latest version of TrueNAS SCALE as my client. On the server / replication destination it’s Debian Bookworm:

root@rescue ~ # lsb_release -a && zfs --version
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 12 (bookworm)
Release: 12
Codename: bookworm
zfs-2.4.0-1
zfs-kmod-2.4.0-1

My initial replication moved over 2 of my smaller-ish datasets just fine, but when it hit the third, which is much larger, it’s just been sitting there for days, so I grew suspicious. When I ran an strace on the zfs recv I see sitting there, I got:

root@rescue /backuppool/received-backups # strace -p 222489
strace: Process 222489 attached
ioctl(4, ZFS_IOC_RECV_NEW, 0x7ffc1ed0f450) = -1 EBADE (Invalid exchange)
ioctl(3, ZFS_IOC_OBJSET_STATS, 0x7ffc1ed13750) = 0
write(2, “cannot receive resume stream: ch”…, 512) = 512
rmdir(“/backuppool/received-backups/homes”) = -1 EROFS (Read-only file system)
readlink(“/proc/self/ns/user”, “user:[4026531837]”, 127) = 17
ioctl(3, ZFS_IOC_OBJSET_STATS, 0x7ffc1ed11c40) = 0
close(3) = 0
close(4) = 0
exit_group(1) = ?
+++ exited with 1 +++

This seems … Not so good :slight_smile: Does anyone have any thoughts on what I should do and how I can/should recover?

Thanks in advance!

Update: Maybe it’s actually working OK? I took another look…

root@rescue ~ # ps -deaf | grep zfs
root 222676 222667 0 21:59 ? 00:00:00 sh -c PATH=$PATH:/usr/local/sbin:/usr/sbin:/sbin zfs recv -s -F -x mountpoint -x sharesmb -x sharenfs backuppool/received-backups/homes
root 222679 222676 0 21:59 ? 00:00:37 zfs recv -s -F -x mountpoint -x sharesmb -x sharenfs backuppool/received-backups/homes
root 225476 224218 0 23:07 pts/1 00:00:00 grep zfs
root@rescue ~ #
and if I strace that first zfs recv process I see a TON of happy-looking writes:

(Tiny sample)

root@rescue ~ # ps -deaf | grep zfs
root 222676 222667 0 21:59 ? 00:00:00 sh -c PATH=$PATH:/usr/local/sbin:/usr/sbin:/sbin zfs recv -s -F -x mountpoint -x sharesmb -x sharenfs backuppool/received-backups/homes
root 222679 222676 0 21:59 ? 00:00:37 zfs recv -s -F -x mountpoint -x sharesmb -x sharenfs backuppool/received-backups/homes
root 225476 224218 0 23:07 pts/1 00:00:00 grep zfs
root@rescue ~ #

So maybe it’s working as advertised and TrueNAS’s replication GUI just doesn’t update when it’s in the middle of a long operation?
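In case it helps anyone else diagnosing the same thing, two read-only checks (dataset name taken from the ps output above):

```shell
# A non-empty value means an interrupted receive that can be resumed via `zfs send -t <token>`
zfs get -H -o value receive_resume_token backuppool/received-backups/homes

# Watch space usage grow to confirm the receive is actually making progress
zfs get -Hp -o value used backuppool/received-backups/homes
```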

9 posts - 2 participants

Read full topic

]]>
https://discourse.practicalzfs.com/t/new-zfs-user-replication-blues/4398 Mon, 23 Feb 2026 21:16:06 +0000 No No No discourse.practicalzfs.com-topic-4398 New ZFS User Replication Blues
Reading metrics from old snapshots Sanoid Hi

I would like to build some understanding around snapshots. Sanoid/syncoid seems to work fine. For example, subvol-109-disk-0 seems to be replicated from prod1 to pc12 as expected.

root@prod1:/srv/subvol-109-disk-0# ls
bin boot dev etc home lib lib64 media mnt moodle-backup opt proc root run sbin srv sys tmp usr var

havard@pc12:/zfs/prod1/subvol-109-disk-0$ ls
bin boot dev etc home lib lib64 media mnt moodle-backup opt proc root run sbin srv sys tmp usr var

We have access to the latest snapshot. That is cool. Also, it seems to be possible to look at older snapshots. That is very cool. Like this:

root@pc12:/zfs/prod1/subvol-109-disk-0/.zfs/snapshot/syncoid_ls3_2024-12-12:19:10:10-GMT00:00# ls
bin boot dev etc home lib lib64 media mnt moodle-backup opt proc root run sbin srv sys tmp usr var

So we have a boatload of snapshots on PC12. Now, I would like to read out some metrics from these snapshots. For example:

  • when did changes occur in a folder
  • when was a file changed
  • when was a file deleted
  • how many files were added to a folder

Do you know any good tools for tasks like this?
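I haven’t found a dedicated tool either, but because every snapshot shows up as a read-only directory under .zfs/snapshot, a plain filesystem diff between two of them gets surprisingly far. A rough sketch (the snapshot paths in the usage comment are placeholders):

```shell
# List the files in each of two snapshot directories, then diff the listings:
# lines starting with ">" were added, "<" were deleted between the snapshots.
snap_diff() {
  old="$1"; new="$2"
  ( cd "$old" && find . -type f | sort ) > /tmp/snap_old.$$
  ( cd "$new" && find . -type f | sort ) > /tmp/snap_new.$$
  diff /tmp/snap_old.$$ /tmp/snap_new.$$
  rm -f /tmp/snap_old.$$ /tmp/snap_new.$$
}

# e.g. snap_diff /zfs/prod1/subvol-109-disk-0/.zfs/snapshot/OLDER \
#                /zfs/prod1/subvol-109-disk-0/.zfs/snapshot/NEWER
```

File modification times survive inside snapshots too, so the same approach with find’s -newer against a marker file can narrow down when a particular change happened.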

4 posts - 3 participants

Read full topic

]]>
https://discourse.practicalzfs.com/t/reading-metrics-from-old-snapshots/4397 Sun, 22 Feb 2026 22:03:46 +0000 No No No discourse.practicalzfs.com-topic-4397 Reading metrics from old snapshots
Syncoid Issue: cannot restore to snapshot_18:00:03_hourly: destination already exists Sanoid Hey everyone

I’ve been setting up syncoid on a fresh install of proxmox 9.1.4 (zfs-2.3.4-pve1) with Sanoid/Syncoid (2.2.0) and ran into a “destination already exists” error.
The server has two exos 16 TB HDDs, with one having the dataset tank and the other tank-backup. The rationale behind it is to take snapshots to roll back if necessary, replicating to the second HDD as internal backup if the first harddisk fails, as well as to two additional 16 TB exos on the backup server for remote backup.

├── homeserver
│   ├── HDD1
│   │   └── tank
│   └── HDD2
│       └── tank-backup (internal backup)
│
└── backupserver (1Gbit connection)
    ├── HDD1
    │   └── backupserver-tank-backup-1 (external backup)
    └── HDD2
        └── backupserver-tank-backup-2 (external backup)

This issue so far arises only for the internal backup. The external backup is done via a user that has zfs allow send,hold, which has worked so far, whereas the internal backup is done as root. Sanoid is configured as shown below, with pruning being the same for tank and tank-backup and autosnap deactivated for tank-backup. All datasets are encrypted; tank has the key loaded, tank-backup does not.

I’m not quite sure where to pinpoint the issue to. I tried so far:

  • explicitly giving root receive,create,destroy,rollback,hold,release for the datasets in question
  • setting tank-backup to readonly
  • adjusting the syncoid.service to take a flock in case multiple concurrent runs were the issue
  • staggering the timing relative to sanoid snapshotting
  • the --no-privilege-elevation flag (in unison with explicitly zfs allowing what the user can do, in this case everything)

The manual solution is destroying the destination snapshot that causes the error, resetting the service, and manually running it.
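One check that should pinpoint the offending snapshot (my assumption about the mechanism, not confirmed): compare GUIDs, since a snapshot with the same name but a different GUID on each side is a different snapshot, and zfs recv refuses to overwrite it:

```shell
# If these two GUIDs differ, the destination snapshot was created independently
# (e.g. by sanoid autosnap on the backup dataset) and blocks the incremental recv
zfs get -H -o value guid tank/backups/pbs-config@autosnap_2026-02-19_18:00:03_hourly
zfs get -H -o value guid tank-backup/backups/pbs-config@autosnap_2026-02-19_18:00:03_hourly
```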

The only difference I have not yet tried on the main host compared to the backup server is adding --recvoptions=“u”. There are in total four datasets that are synced this way; the one that has caused the fewest problems is tank/appdata/photos, which serves as storage for a photo ownCloud instance (and thus sees regular changes). The others are for Immich and ownCloud data; both services have not yet been set up.

Thanks a lot for any inputs or suggestions!

### zfs datasets in question
tank/backups/pbs-config            14.1M  9.44T  8.80M  /tank/backups/pbs-config
tank/backups/pve-config            30.9M  9.44T  23.7M  /tank/backups/pve-config
tank-backup/backups/pbs-config     13.6M  9.44T  8.77M  none
tank-backup/backups/pve-config     30.7M  9.44T  23.7M  none
### systemctl status syncoid-config-backups.service

× syncoid-config-backups.service - Syncoid replicate config backup datasets
     Loaded: loaded (/etc/systemd/system/syncoid-config-backups.service; static)
     Active: failed (Result: exit-code) since Fri 2026-02-20 07:41:35 CET; 37s ago
 Invocation: 3297e23ba82b457894d81c0d4953fea7
TriggeredBy: ● syncoid-config-backups.timer
    Process: 967546 ExecStart=/usr/sbin/syncoid --no-sync-snap --create-bookmark --sendoptions=w --delete-target-snapshots tank/backups/pve-config tank-backup/backups/pve>
    Process: 967800 ExecStart=/usr/sbin/syncoid --no-sync-snap --create-bookmark --sendoptions=w --delete-target-snapshots tank/backups/pbs-config tank-backup/backups/pbs>
   Main PID: 967800 (code=exited, status=2)
   Mem peak: 18M
        CPU: 945ms

Feb 20 07:41:34 homeserver syncoid[967635]: mbuffer: warning: HOME environment variable not set - unable to find defaults file
Feb 20 07:41:35 homeserver syncoid[967546]:  zfs destroy 'tank-backup/backups/pve-config'@autosnap_2026-02-20_06:00:15_hourly failed: could not find any snapshots to dest>
Feb 20 07:41:35 homeserver syncoid[967800]: NEWEST SNAPSHOT: autosnap_2026-02-20_06:00:15_hourly
Feb 20 07:41:35 homeserver syncoid[967800]: Sending incremental tank/backups/pbs-config@autosnap_2026-02-19_17:00:09_hourly ... autosnap_2026-02-20_06:00:15_hourly (~ 72 >
Feb 20 07:41:35 homeserver syncoid[967832]: mbuffer: warning: HOME environment variable not set - unable to find defaults file
Feb 20 07:41:35 homeserver syncoid[967829]: cannot restore to tank-backup/backups/pbs-config@autosnap_2026-02-19_18:00:03_hourly: destination already exists
Feb 20 07:41:35 homeserver syncoid[967800]: CRITICAL ERROR:  zfs send -w  -I 'tank/backups/pbs-config'@'autosnap_2026-02-19_17:00:09_hourly' 'tank/backups/pbs-config'@'au>
Feb 20 07:41:35 homeserver systemd[1]: syncoid-config-backups.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Feb 20 07:41:35 homeserver systemd[1]: syncoid-config-backups.service: Failed with result 'exit-code'.
Feb 20 07:41:35 homeserver systemd[1]: Failed to start syncoid-config-backups.service - Syncoid replicate config backup datasets.
### cat syncoid-config-backups.service 

[Unit]
Description=Syncoid replicate config backup datasets

[Service]
Type=oneshot
ExecStart=/usr/sbin/syncoid --no-sync-snap --create-bookmark --sendoptions="w" --delete-target-snapshots tank/backups/pve-config tank-backup/backups/pve-config
ExecStart=/usr/sbin/syncoid --no-sync-snap --create-bookmark --sendoptions="w" --delete-target-snapshots tank/backups/pbs-config tank-backup/backups/pbs-config
### cat syncoid-config-backups.timer 

[Unit]
Description=Daily syncoid replication for config backups

[Timer]
OnCalendar=*-*-* 01:20:00
Persistent=true
RandomizedDelaySec=120
Unit=syncoid-config-backups.service

[Install]
WantedBy=timers.target
### Sanoid.conf excerpts (runs every 15 min)
[template_config]
        autosnap = yes
        autoprune = yes
        hourly = 0
        daily = 60
        weekly = 16
        monthly = 12
        yearly = 2

[template_backupconfig]
        autosnap = no
        autoprune = yes
        hourly = 0
        daily = 60
        weekly = 16
        monthly = 12
        yearly = 2

[tank/backups/pve-config]
        use_template = template_config

[tank/backups/pbs-config]
        use_template = template_config

[tank-backup/backups/pve-config]
        use_template = template_backupconfig

[tank-backup/backups/pbs-config]
        use_template = template_backupconfig
# snapshots present at the time of error, truncated

=== SOURCE === -> tank for dataset tank/backups/pbs-config
(truncated)
tank/backups/pbs-config@autosnap_2026-02-19_16:00:02_hourly
tank/backups/pbs-config@autosnap_2026-02-19_17:00:09_hourly
tank/backups/pbs-config@autosnap_2026-02-19_18:00:03_hourly
tank/backups/pbs-config@autosnap_2026-02-19_19:00:15_hourly
tank/backups/pbs-config@autosnap_2026-02-19_20:00:14_hourly
tank/backups/pbs-config@autosnap_2026-02-19_21:00:16_hourly
tank/backups/pbs-config@autosnap_2026-02-19_22:00:03_hourly
tank/backups/pbs-config@autosnap_2026-02-19_23:00:16_hourly
tank/backups/pbs-config@autosnap_2026-02-20_00:00:22_daily
tank/backups/pbs-config@autosnap_2026-02-20_00:00:22_hourly
tank/backups/pbs-config@autosnap_2026-02-20_01:00:01_hourly
tank/backups/pbs-config@autosnap_2026-02-20_02:00:01_hourly
tank/backups/pbs-config@autosnap_2026-02-20_03:00:00_hourly
tank/backups/pbs-config@autosnap_2026-02-20_04:00:01_hourly
tank/backups/pbs-config@autosnap_2026-02-20_05:00:01_hourly
tank/backups/pbs-config@autosnap_2026-02-20_06:00:15_hourly
=== DEST === -> tank-backup for dataset tank/backups/pbs-config
(truncated)
tank-backup/backups/pbs-config@autosnap_2026-02-19_16:00:02_hourly
tank-backup/backups/pbs-config@autosnap_2026-02-19_17:00:09_hourly
tank-backup/backups/pbs-config@autosnap_2026-02-19_18:00:03_hourly <== Seems to be the issue?
tank-backup/backups/pbs-config@autosnap_2026-02-19_19:00:14_hourly
tank-backup/backups/pbs-config@autosnap_2026-02-19_20:00:14_hourly
tank-backup/backups/pbs-config@autosnap_2026-02-19_21:00:16_hourly
tank-backup/backups/pbs-config@autosnap_2026-02-19_22:00:03_hourly
tank-backup/backups/pbs-config@autosnap_2026-02-19_23:00:16_hourly
tank-backup/backups/pbs-config@autosnap_2026-02-20_00:00:23_daily
tank-backup/backups/pbs-config@autosnap_2026-02-20_00:00:23_hourly
tank-backup/backups/pbs-config@autosnap_2026-02-20_01:00:01_hourly
tank-backup/backups/pbs-config@autosnap_2026-02-20_02:00:01_hourly
tank-backup/backups/pbs-config@autosnap_2026-02-20_03:00:01_hourly
tank-backup/backups/pbs-config@autosnap_2026-02-20_04:00:00_hourly
tank-backup/backups/pbs-config@autosnap_2026-02-20_05:00:01_hourly
tank-backup/backups/pbs-config@autosnap_2026-02-20_06:00:16_hourly
=== SOURCE === -> tank for dataset tank/backups/pve-config
(truncated)
tank/backups/pve-config@autosnap_2026-02-19_16:00:02_hourly
tank/backups/pve-config@autosnap_2026-02-19_17:00:09_hourly
tank/backups/pve-config@autosnap_2026-02-19_18:00:02_hourly
tank/backups/pve-config@autosnap_2026-02-19_19:00:14_hourly
tank/backups/pve-config@autosnap_2026-02-19_20:00:15_hourly
tank/backups/pve-config@autosnap_2026-02-19_21:00:15_hourly
tank/backups/pve-config@autosnap_2026-02-19_22:00:02_hourly
tank/backups/pve-config@autosnap_2026-02-19_23:00:16_hourly
tank/backups/pve-config@autosnap_2026-02-20_00:00:22_daily
tank/backups/pve-config@autosnap_2026-02-20_00:00:22_hourly
tank/backups/pve-config@autosnap_2026-02-20_01:00:01_hourly
tank/backups/pve-config@autosnap_2026-02-20_02:00:00_hourly
tank/backups/pve-config@autosnap_2026-02-20_03:00:01_hourly
tank/backups/pve-config@autosnap_2026-02-20_04:00:01_hourly
tank/backups/pve-config@autosnap_2026-02-20_05:00:00_hourly
tank/backups/pve-config@autosnap_2026-02-20_06:00:16_hourly
=== DEST === -> tank-backup for dataset tank/backups/pve-config
(truncated)
tank-backup/backups/pve-config@autosnap_2026-02-19_16:00:02_hourly
tank-backup/backups/pve-config@autosnap_2026-02-19_17:00:09_hourly
tank-backup/backups/pve-config@autosnap_2026-02-19_18:00:02_hourly
tank-backup/backups/pve-config@autosnap_2026-02-19_19:00:14_hourly
tank-backup/backups/pve-config@autosnap_2026-02-19_20:00:15_hourly
tank-backup/backups/pve-config@autosnap_2026-02-19_21:00:15_hourly
tank-backup/backups/pve-config@autosnap_2026-02-19_22:00:02_hourly
tank-backup/backups/pve-config@autosnap_2026-02-19_23:00:16_hourly
tank-backup/backups/pve-config@autosnap_2026-02-20_00:00:22_daily
tank-backup/backups/pve-config@autosnap_2026-02-20_00:00:22_hourly
tank-backup/backups/pve-config@autosnap_2026-02-20_01:00:01_hourly
tank-backup/backups/pve-config@autosnap_2026-02-20_02:00:00_hourly
tank-backup/backups/pve-config@autosnap_2026-02-20_03:00:01_hourly
tank-backup/backups/pve-config@autosnap_2026-02-20_04:00:01_hourly
tank-backup/backups/pve-config@autosnap_2026-02-20_05:00:00_hourly
tank-backup/backups/pve-config@autosnap_2026-02-20_06:00:16_hourly

7 posts - 3 participants

Read full topic

]]>
https://discourse.practicalzfs.com/t/syncoid-issue-cannot-restore-to-snapshot-1803-hourly-destination-already-exists/4394 Fri, 20 Feb 2026 13:19:47 +0000 No No No discourse.practicalzfs.com-topic-4394 Syncoid Issue: cannot restore to snapshot_18:00:03_hourly: destination already exists