ZFS send and ZFS receive do not only send your files
Posted on 2026-03-06 21:15:00 from Vincent in got
It's important to understand that ZFS is not only a filesystem. It's also a set of parameters, and those parameters are sent together with the files. One of the main parameters is the mountpoint. Indeed, we must avoid having several datasets with the same mountpoint: /var, /home, ... This blog post explains some precautions to take before sending datasets from several machines to a central machine.

Backing up a FreeBSD machine to a NAS using ZFS send/receive
The goal
The idea is simple: replicate ZFS datasets from a FreeBSD machine to a NAS using zfs send piped into zfs receive over SSH. The NAS holds the backup pool (in my case naspool), and the source machines each get their own subtree under something like naspool/nas/machines/<hostname>.
While the concept is straightforward, there are a few pitfalls that can turn a routine backup operation into a bad day. This post documents what I learned the hard way.
The mountpoint trap
When you do a zfs send | zfs receive, the receiving end does not just get the data: it also gets all the ZFS properties that were set on the source datasets, including mountpoint. This means that if the source machine had rpool/var/log mounted at /var/log, the received dataset on the NAS will also try to mount itself at /var/log. If the NAS already has its own dataset at that path, you end up with two datasets fighting over the same mountpoint. The last one to mount wins, and the other silently disappears from view.
This is exactly the kind of problem that is invisible until something breaks.
The fix is to always force mountpoint=none at receive time, so the backup datasets never auto-mount on the NAS:
zfs send rpool@snapshot | ssh nas "zfs receive -o mountpoint=none -u naspool/nas/machines/myhostname"
The -o mountpoint=none overrides whatever mountpoint property came in the stream. The -u flag tells ZFS not to mount the received datasets at all during the operation. Together they ensure the backup lands quietly on the NAS without interfering with anything.
Initial full backup
Before you can do incrementals, you need to send a full snapshot. On the source machine, create a snapshot and send it:
# On the source machine
zfs snapshot -r rpool@backup_1
zfs send -R rpool@backup_1 | ssh nas "zfs receive -o mountpoint=none -u naspool/nas/machines/myhostname"
The -R flag sends a replication stream: the named snapshot, all child datasets with their snapshots, and the properties set on them, which is exactly why the mountpoint override matters.
Note that the parent path on the NAS must already exist before receiving:
# On the NAS, if needed
zfs create -o mountpoint=none naspool/nas
zfs create -o mountpoint=none naspool/nas/machines
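The initial backup above can be wrapped in a small script. This is a minimal sketch, not the author's actual tooling: the pool name, NAS hostname and destination path are assumptions to adapt, and the script only prints the commands (a dry run) so nothing runs by accident.

```shell
#!/bin/sh
# Sketch of the initial full backup. Hypothetical names: adjust
# SRC_POOL, NAS_HOST and DEST to your own environment.
SRC_POOL="rpool"
NAS_HOST="nas"
DEST="naspool/nas/machines/$(hostname -s)"
SNAP="backup_1"

# Build the remote receive command: -o mountpoint=none overrides the
# mountpoint coming in the stream, -u skips mounting during receive.
recv_cmd() {
    printf 'zfs receive -o mountpoint=none -u %s' "$1"
}

# Dry run: print the pipeline instead of executing it.
echo "zfs snapshot -r ${SRC_POOL}@${SNAP}"
echo "zfs send -R ${SRC_POOL}@${SNAP} | ssh ${NAS_HOST} \"$(recv_cmd "$DEST")\""
```

Running it as is just shows the two commands from this post; removing the echo wrappers turns the dry run into a real backup.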
Incremental backups
Once the initial backup exists, subsequent sends only need to transfer what changed:
# On the source machine
zfs snapshot -r rpool@backup_2
zfs send -R -i rpool@backup_1 rpool@backup_2 | ssh nas "zfs receive -x mountpoint naspool/nas/machines/myhostname"
Note the subtle difference: for incremental receives, use -x mountpoint instead of -o mountpoint=none. The -x flag tells the receiver to ignore the mountpoint property from the stream and keep whatever is already set locally. Since the initial receive already set mountpoint=none, this preserves that setting across all future incrementals.
For extra safety, we could even add -x canmount and -x readonly.
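The incremental invocation can likewise be sketched as a small helper that builds the pipeline. The NAS hostname and dataset paths are placeholders, and the helper only prints the command so it can be reviewed before running.

```shell
#!/bin/sh
# Hypothetical helper: print the incremental send/receive pipeline
# for a dataset between two snapshots. Nothing is executed.
incr_pipeline() {
    # $1 = source dataset, $2 = old snapshot, $3 = new snapshot,
    # $4 = destination dataset on the NAS
    printf 'zfs send -R -i %s@%s %s@%s | ssh nas "zfs receive -x mountpoint -x canmount -x readonly %s"\n' \
        "$1" "$2" "$1" "$3" "$4"
}

incr_pipeline rpool backup_1 backup_2 naspool/nas/machines/myhostname
```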
The received property problem
One tricky aspect of ZFS is that properties arriving via zfs receive are stored with a source of received, not local. This matters because received properties behave like local ones — they are not overridden by parent inheritance. So if you try to fix a rogue mountpoint by setting mountpoint=none on a parent dataset, child datasets with received mountpoints will ignore it entirely.
You can spot this with:
zfs get -r mountpoint naspool/nas/machines/myhostname
If you see received in the SOURCE column, those datasets will mount themselves regardless of what the parent says. The only reliable fix is to use -r to set the property recursively and locally on every dataset in the subtree, after force-unmounting anything that got mounted:
zfs unmount -f naspool/nas/machines/myhostname/var/log
# ... repeat for each mounted dataset
zfs set -r mountpoint=none naspool/nas/machines/myhostname
This is why getting the receive command right from the start saves a lot of pain later.
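Finding the datasets that still carry a received mountpoint can be automated. A minimal sketch, assuming the tab-separated output of zfs get -H (name, property, value, source); the canned input at the bottom stands in for a real pool so the filter can be tried anywhere.

```shell
#!/bin/sh
# Keep only the rows of `zfs get -H` output whose SOURCE column is
# "received"; those are the datasets that will ignore the parent.
list_received() {
    awk -F'\t' '$4 == "received" { print $1 }'
}

# Real use would be (hypothetical path):
#   zfs get -H -r mountpoint naspool/nas/machines/myhostname | list_received

# Demo on canned output, so no ZFS pool is needed:
printf '%s\t%s\t%s\t%s\n' \
    naspool/nas/machines/myhostname mountpoint none local \
    naspool/nas/machines/myhostname/var mountpoint /var received |
    list_received
```

Any dataset the filter prints is one to fix with zfs set -r before it causes a mountpoint conflict.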
Accessing backup data
Since backup datasets have mountpoint=none, they will never auto-mount. If you need to browse the contents of a backup — to restore a file, for example — mount it temporarily to a safe path:
zfs set mountpoint=/mnt/restore naspool/nas/machines/myhostname/var/log
ls /mnt/restore
# ... do what you need ...
zfs set mountpoint=none naspool/nas/machines/myhostname/var/log
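The mount/browse/unmount dance is easy to abandon halfway through; a tiny helper keeps the two zfs set calls paired. This is a sketch with a hypothetical dataset path, printing the commands rather than running them.

```shell
#!/bin/sh
# Print the paired commands to mount a backup dataset at a temporary
# path and put it back to mountpoint=none afterwards. Drop the echo
# wrappers to execute for real.
with_tmp_mount() {
    dataset="$1"
    mnt="$2"
    echo "zfs set mountpoint=${mnt} ${dataset}"
    echo "ls ${mnt}"
    echo "zfs set mountpoint=none ${dataset}"
}

with_tmp_mount naspool/nas/machines/myhostname/var/log /mnt/restore
```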
Summary: the safe receive command
For an initial full backup:
zfs send -R rpool@snapshot | ssh nas "zfs receive -o mountpoint=none -u naspool/nas/machines/myhostname"
For incremental backups:
zfs send -R -i rpool@snap_old rpool@snap_new | ssh nas "zfs receive -x mountpoint naspool/nas/machines/myhostname"
These two commands, used consistently, will keep your backup pool clean, your NAS stable, and your mounts free of conflicts.
One last thing: keep a remote backup of your backup
If you manage backups on a NAS, consider also replicating critical data to a second remote machine. On the day you accidentally destroy a dataset while trying to fix a mountpoint conflict, you will be very glad you did.