Introduces a trim timer similar to the fstrim service.
According to zpool(8), for consumer hardware periodic manual TRIM
is preferred over the automatic TRIM that ZFS implements.
The period of one week is based on the recommendation of fstrim.
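A sketch of what that looks like in configuration.nix, assuming the
options land under services.zfs.trim (the interval is a systemd
calendar expression):

    services.zfs.trim = {
      enable = true;
      interval = "weekly";
    };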
Same as zfsUnstable for the moment.
We still keep the zfsUnstable expression, as we will likely
need it again in the near future.
Also remove spl, since it is no longer needed.
fixes #41838
At the moment it works fine for "file://" keys, but does not work for
dataPools with "prompt" keys, because the passphrase cannot be entered
(yet).
Users were confused that the error message pointed at config.networking.hostId, and indeed setting that did nothing to fix their problem.
Update the error message to name the option they should actually set.
Currently, ecryptfs support is coupled to `security.pam.enableEcryptfs`, but one
might want to use ecryptfs without enabling the PAM functionality. This commit
splits it out into a `boot.supportedFilesystems` switch.
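For example, to get ecryptfs filesystem support without the PAM
integration:

    boot.supportedFilesystems = [ "ecryptfs" ];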
Currently the `rpc-gssd.service` has a `ConditionPathExists` clause that can
never be met, because it's looking for stateful data inside `/nix/store`.
`auth-rpcgss-module.service` also only starts if this file exists.
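A sketch of one possible fix, overriding the condition to point at real
stateful data; the keytab path here is an assumption, not necessarily
what the actual fix uses:

    # hypothetical override; the real change may use a different path
    systemd.services.rpc-gssd.unitConfig.ConditionPathExists = "/etc/krb5.keytab";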
Fixes NixOS/nixpkgs#29509.
Until now NixOS only delivered the latest ZFS release. This release is often not
compatible with the latest mainline kernel. Therefore an unstable variant is
added, which might be based on testing releases or git revisions.
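Opting in is a one-liner, assuming the option is named
boot.zfs.enableUnstable:

    boot.zfs.enableUnstable = true;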
fixes #21359
The most complex problems came from dealing with switches that were
reverted in the meantime (gcc5, gmp6, ncurses6).
It's likely that Darwin is (still) nontrivially broken.
If you power off your machine frequently, you could miss the execution
of some snapshots.
This is more troublesome the more infrequently the snapshots are
triggered. For example, monthly snapshots only execute at exactly
midnight on the first day of the month. If you only have your
machine powered on at that time with probability 50%, then half the
snapshots won't be triggered.
This means that if you wanted to keep 3 monthly snapshots, then instead
of keeping 3 months' worth of snapshotted data as you expected, you would
end up with snapshots spanning back 6 months.
Adding the "Persistent = yes" option to auto-snapshot timer units makes
a missed snapshot execute when booting up the machine.
Otherwise, in certain cases, snapshots of infrequently-modified
filesystems can be kept for a much longer time than the user would
normally expect, and cause a large amount of extra disk space to be
consumed.
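As a sketch, the setting on one of the timer units looks like this (the
unit name is illustrative; the module applies it to every auto-snapshot
timer):

    systemd.timers."zfs-snapshot-daily".timerConfig.Persistent = "yes";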
Also added a flag to snapshot filesystems in parallel by default.
I've also added a configuration option for zfs-auto-snapshot flags, so
that the user can override them.
For example, the user may want to append --utc to the list of default
options, so that the snapshot names don't cause name conflicts or
apparent time reversals due to daylight saving time or timezone changes.
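For instance, assuming the default flags are "-k -p":

    services.zfs.autoSnapshot = {
      enable = true;
      flags = "-k -p --utc";  # keep the defaults, add --utc
    };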
The old boot.spl.hostid option was not working correctly due to an
upstream bug.
Instead, now we will create the /etc/hostid file so that all applications
(including the ZFS kernel modules, ZFS user-space applications, and other
unrelated programs) pick up the same system-wide host id. Note that glibc
(and, by extension, the `hostid` program) also respects the host id configured in
/etc/hostid, if it exists.
The hostid option is now mandatory when using ZFS because otherwise, ZFS will
require you to force-import your ZFS pools if you want to use them, which is
undesirable because it disables some of the checks that ZFS does to make sure it
is safe to import a ZFS pool.
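For example, with a random 32-bit identifier written as 8 hex characters:

    networking.hostId = "8425e349";  # any random 32-bit value, as 8 hex digits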
The /etc/hostid file must also exist when booting the initrd, before the SPL
kernel module is loaded, so that ZFS picks up the hostid correctly.
The complexity in creating the /etc/hostid file is due to having to
write the host ID as a 32-bit binary value, taking into account the
endianness of the machine, while using only shell commands and/or simple
utilities (to avoid exploding the size of the initrd).
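A minimal sketch of the byte-swapping trick, not the module's exact
code; it assumes a little-endian machine, the host id above, and uses
boot.initrd.postDeviceCommands purely for illustration:

    boot.initrd.postDeviceCommands = ''
      # write 0x8425e349 as four little-endian bytes, using only printf
      hi=8425e349
      printf "\x''${hi:6:2}\x''${hi:4:2}\x''${hi:2:2}\x''${hi:0:2}" > /etc/hostid
    '';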
It turns out that the upstream systemd services that import ZFS pools contain
serious bugs. The first major problem is that importing pools fails if there
are no pools to import. The second major problem is that if a pool ends up in
/etc/zfs/zpool.cache but it disappears from the system (e.g. if you
reboot but during the reboot you unplug your ZFS-formatted USB pen drive),
then the import service will always fail and it will be impossible to get rid
of the pool from the cache (unless you manually delete the cache).
Also, the upstream service would always import all available ZFS pools every
boot, which may not be what is desired in some cases.
This commit solves these problems in the following ways:
1. Ignore /etc/zfs/zpool.cache. This seems to be a major source of
issues, and also does not play well with NixOS's philosophy of
reproducible configurations. Instead, on every boot NixOS will try to import
the set of pools that are specified in its configuration. This is also the
direction that upstream is moving towards.
2. Instead of trying to import all ZFS pools, only import those that are
actually necessary. NixOS will automatically determine these from the
config.fileSystems.* option. Also, the user can import any additional
pools every boot by adding them to the config.boot.zfs.extraPools
option, but this is only necessary if their filesystems are not
specified in config.fileSystems.* (see the example after this list).
3. Added options to configure if ZFS should force-import ZFS pools. This may
currently be necessary, especially if your pools have not been correctly
imported with a proper host id configuration (which is probably true for 99% of
current NixOS ZFS users). Once host id configuration becomes mandatory when
using ZFS in NixOS and we are sure that most users have updated their
configurations and rebooted at least once, we should disable force-import by
default. Probably, this shouldn't be done before the next stable release.
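For example ("backup" is a hypothetical pool that has no entry in
config.fileSystems):

    boot.zfs.extraPools = [ "backup" ];
    boot.zfs.forceImportRoot = false;  # once the host id setup is known-good
    boot.zfs.forceImportAll = false;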
WARNING: This commit may change the order in which your non-ZFS vs ZFS
filesystems are mounted. To avoid this problem (now or in the future)
it is recommended that you set the 'mountpoint' property of your ZFS
filesystems to 'legacy', and that you manage them using
config.fileSystems, just like any other non-ZFS filesystem is usually
managed in NixOS.
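For example, a dataset with its 'mountpoint' property set to 'legacy',
managed like any other filesystem (names are illustrative):

    fileSystems."/data" = {
      device = "tank/data";
      fsType = "zfs";
    };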
Also remove the custom ZFS services from NixOS. This makes NixOS more aligned with
upstream.
More importantly, it prepares the way for NixOS to use ZED (the ZFS event
daemon). This service will automatically be enabled but it is not possible to
configure it via configuration.nix yet.
I'm not using JFS, but this is mainly to make jfsutils available if you
have defined a JFS filesystem in your configuration.
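For example, a fileSystems entry like the following (device path
illustrative) is enough to pull in jfsutils:

    fileSystems."/mnt/data" = {
      device = "/dev/disk/by-label/jfsdata";
      fsType = "jfs";
    };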
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This reverts commit 469f22d717, reversing
changes made to 0078bc5d8f.
Conflicts:
nixos/modules/installer/tools/nixos-generate-config.pl
nixos/modules/system/boot/loader/grub/install-grub.pl
nixos/release.nix
nixos/tests/installer.nix
When resolving the conflicts, I tried to keep the code that appeared safe.
Upstream has not been tagging new versions for a long time, but we need
compatibility with newer kernels. The 0.6.2 versions already have a bunch of
backported compatibility patches, but 3.14 kernels need even more.
Also, the git versions have fixed a bunch of crashes and other bugs, so perhaps
we should just bite the bullet and use recent git versions (as upstream
sometimes recommends when people run into bugs).
This adds a new "boot.zfs.useGit" boolean option, so that a user can
easily opt into using the git versions.
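For example:

    boot.zfs.useGit = true;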
Using pkgs.lib on the spine of module evaluation is problematic
because the pkgs argument depends on the result of module
evaluation. To prevent an infinite recursion, pkgs and some of the
modules are evaluated twice, which is inefficient. Using ‘with lib’
prevents this problem.
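Sketched, the change in a module looks like this:

    # before: with pkgs.lib; ... forces pkgs to be evaluated during module evaluation
    # after: lib is passed to the module independently of pkgs
    { config, lib, pkgs, ... }:

    with lib;

    {
      # module body unchanged
    }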
(systemd service descriptions, that is, not the service descriptions in "man
configuration.nix".)
Capitalizing each word in the description seems to be the accepted
standard.
Also shorten these descriptions:
* "Munin node, the agent process" => "Munin Node"
* "Planet Venus, an awesome ‘river of news’ feed reader" => "Planet Venus Feed Reader"