zfs-mount-generator — generate systemd mount units for ZFS filesystems
zfs-mount-generator is a systemd.generator(7) that generates native
systemd.mount(5) units for configured ZFS datasets.
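For illustration, a unit generated for a hypothetical dataset tank/home mounted at /home might look roughly like the following sketch; the exact contents depend on the dataset's properties and on build-time paths, so treat it only as an outline:

    # /run/systemd/generator/home.mount (dataset and mountpoint are hypothetical)
    [Unit]
    Documentation=man:zfs-mount-generator(8)
    # Added only when the pool is not imported at generation time:
    Wants=zfs-import.target
    After=zfs-import.target

    [Mount]
    Where=/home
    What=tank/home
    Type=zfs
    Options=defaults,zfsutil

Which units are generated, and with which dependencies, is controlled by the following dataset properties: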
- mountpoint=
  Skipped if legacy or none.
- canmount=
  Skipped if off. Skipped if only noauto datasets exist for a given
  mountpoint and there's more than one. Datasets with yes take precedence
  over ones with noauto for the same mountpoint. Sets logical noauto flag
  if noauto. Encryption roots always generate zfs-load-key@root.service,
  even if off.
- atime=, relatime=, devices=, exec=, readonly=, setuid=, nbmand=
  Used to generate mount options equivalent to zfs mount.
- encroot=, keylocation=
  If the dataset is an encryption root, its mount unit will bind to
  zfs-load-key@root.service, with additional dependencies determined by the
  key location. The key-loading service also uses the same Wants=, After=,
  Requires=, and RequiresMountsFor= as the mount unit.
- org.openzfs.systemd:requires=path[ path]…
  Sets Requires= for the mount- and key-loading unit.
- org.openzfs.systemd:requires-mounts-for=path[ path]…
  Sets RequiresMountsFor= for the mount- and key-loading unit.
- org.openzfs.systemd:before=unit[ unit]…
  Sets Before= for the mount unit.
- org.openzfs.systemd:after=unit[ unit]…
  Sets After= for the mount unit.
- org.openzfs.systemd:wanted-by=unit[ unit]…
  Sets logical noauto flag (see below). If not none, sets WantedBy= for the
  mount unit.
- org.openzfs.systemd:required-by=unit[ unit]…
  Sets logical noauto flag (see below). If not none, sets RequiredBy= for the
  mount unit.
- org.openzfs.systemd:nofail=(unset)|on|off
  Waxes or wanes the strength of the mount unit's default reverse
  dependencies; see below.
- org.openzfs.systemd:ignore=on|off
  Skip if on. Defaults to off.
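The org.openzfs.systemd: properties above are ordinary ZFS user properties and are set with zfs set. For example, to order a hypothetical dataset tank/www before a (likewise hypothetical) nginx.service and have that service pull the mount in:

    # zfs set org.openzfs.systemd:before=nginx.service tank/www
    # zfs set org.openzfs.systemd:wanted-by=nginx.service tank/www

Running zfs inherit on such a property restores the inherited (usually unset) value, and the corresponding directive disappears from the unit the next time it is generated.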
Additionally, unless the pool the dataset resides on is imported at
generation time, both units gain Wants=zfs-import.target and
After=zfs-import.target.
Additionally, unless the logical noauto flag is set, the mount unit gains a
reverse-dependency on local-fs.target, whose strength is controlled by the
org.openzfs.systemd:nofail= property described above.
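Once the units exist, the dependencies described above can be inspected with systemctl; for example, for a hypothetical home.mount:

    $ systemctl show -p Wants,After,Before,WantedBy,RequiredBy home.mount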
Because ZFS pools may not be available very early in the boot process,
information on ZFS mountpoints must be stored separately. The output of

    zfs list -Ho name,⟨every property above in order⟩

for datasets that should be mounted by systemd should be kept at
/usr/local/etc/zfs/zfs-list.cache/poolname, and, if writeable, will be kept
synchronized for the entire pool by the history_event-zfs-list-cacher.sh
ZEDLET, if enabled (see zed(8)).
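Since zfs list -H emits tab-separated values without headers, the cache file is plain text with one dataset per line and can be inspected directly:

    $ cat /usr/local/etc/zfs/zfs-list.cache/poolname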
If the ZFS_DEBUG environment variable is nonzero (or unset and /proc/cmdline
contains "debug"), the generator prints summary accounting information at
the end.
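For example, when the generator is run by hand as in the examples below, debug output can be forced regardless of /proc/cmdline (generator path as shipped by the installation):

    $ ZFS_DEBUG=1 /zfs-mount-generator /tmp/zfs-mount-generator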
To begin, enable tracking for the pool:

    # touch /usr/local/etc/zfs/zfs-list.cache/poolname
Then enable the tracking ZEDLET:

    # ln -s /usr/local/libexec/zfs/zed.d/history_event-zfs-list-cacher.sh /usr/local/etc/zfs/zed.d
    # systemctl enable zfs-zed.service
    # systemctl restart zfs-zed.service
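If in doubt, verify that the symlink is in place and that the daemon is running:

    $ ls -l /usr/local/etc/zfs/zed.d/history_event-zfs-list-cacher.sh
    $ systemctl is-active zfs-zed.service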
If no history event is in the queue, inject one to ensure the ZEDLET runs to
refresh the cache file by setting a monitored property somewhere on the pool:

    # zfs set relatime=off poolname/dset
    # zfs inherit relatime poolname/dset
To test the generator output:

    $ mkdir /tmp/zfs-mount-generator
    $ /zfs-mount-generator /tmp/zfs-mount-generator
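The output directory then contains the generated units and any accompanying symlink directories; for a hypothetical /home mountpoint:

    $ ls /tmp/zfs-mount-generator
    $ cat /tmp/zfs-mount-generator/home.mount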
If the generated units are satisfactory, instruct systemd to re-run all
generators:

    # systemctl daemon-reload
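The regenerated units then behave like any other mount units and can be checked with the usual tools (unit name hypothetical):

    $ systemctl list-units --type=mount
    $ systemctl status home.mount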