We need to keep passthru.filesInstalledToEtc and passthru.defaultBlacklistedPlugins in sync with the package contents, so let's add a test to enforce that.
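A minimal sketch of what such a test could look like, assuming the package in question is fwupd and checking just one direction (every declared file must exist in the output); this is illustrative, not the actual test:

{ lib, runCommand, fwupd }:

# Hypothetical check: fail the build if any file declared in
# passthru.filesInstalledToEtc is missing from the package's /etc.
runCommand "fwupd-etc-files-in-sync" { } ''
  for f in ${lib.concatStringsSep " " fwupd.filesInstalledToEtc}; do
    test -e "${fwupd}/etc/$f" || { echo "etc/$f not installed"; exit 1; }
  done
  touch "$out"
''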
Since cd1dedac67 systemd-networkd has its
netlink socket created via a systemd.socket unit. One might think that
this doesn't make much sense, since networkd is just going to create its
own socket on startup anyway. The difference here is that we get
configuration-time control over things like socket buffer sizes, instead
of compile-time constants.
For larger setups where networkd has to create a lot of (virtual)
devices, the current default buffer size of 128MB is not enough.
A good example is a machine with >100 virtual interfaces (e.g.,
wireguard tunnels, VLANs, …) that all have to be brought up during
startup. The receive buffer usage will spike due to all the messages
generated by the new interfaces. Eventually some of the messages will be
dropped since there is not enough (permitted) buffer space available.
By having networkd start with a netlink socket created by systemd, we
can configure the `ReceiveBufferSize` parameter in the socket options
without recompiling networkd.
Since the actual memory requirements depend on hardware, timing, exact
configurations, etc., it isn't currently possible to infer a good default
from within the NixOS module system. Administrators are advised to
monitor the logs of systemd-networkd for `rtnl: kernel receive buffer
overrun` spam and increase the configured buffer size as required.
Note: Increasing the ReceiveBufferSize doesn't allocate any memory by
itself. It just raises the upper bound on the kernel side; the actual
allocation depends on the number of messages that are queued on the
kernel side of the netlink socket.
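For illustration, a minimal NixOS sketch along these lines, using the `ReceiveBuffer=` socket directive (the systemd.socket(5) knob for SO_RCVBUF); the 256M value is an arbitrary example, not a recommendation:

{
  systemd.sockets.systemd-networkd.socketConfig = {
    # Raise the kernel-side upper bound for the netlink receive
    # buffer; memory is only allocated as messages actually queue up.
    ReceiveBuffer = "256M";
  };
}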
udev gained native support for handling FIDO security tokens, so we no
longer need a module that only added the now-obsolete udev rules.
Fixes: https://github.com/NixOS/nixpkgs/issues/76482
Upstream has this alias too, so that dbus activation works.
What I don't fully understand is why this would ever be useful, given
that this unit is already started early in boot, even before dbus is up.
But let's keep the behaviour similar to upstream and then raise these
questions with upstream.
With this, systemd buffers netlink messages from the kernel in early
boot and passes them on to networkd for processing once it has started.
This makes sure no routing messages are missed.
Also adds an alias so that dbus can activate this unit; upstream has
this too. This makes dbus socket activation for it work.
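In module terms the alias boils down to roughly this sketch (the alias name follows upstream's D-Bus service, org.freedesktop.network1; treat the exact wiring as an assumption):

{
  # Let D-Bus activation requests for org.freedesktop.network1 start
  # the networkd unit by resolving the dbus-* alias to it.
  systemd.services.systemd-networkd.aliases =
    [ "dbus-org.freedesktop.network1.service" ];
}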
When `systemd-resolved` is restarted, DNS lookups would otherwise become
unavailable. You're supposed to use D-Bus socket activation to buffer
resolved requests, so that restarts happen without downtime.
This makes it possible to only start IPFS when needed: a user's
IPFS daemon only starts when they actually use it.
A few important warnings though:
- This probably shouldn't be mixed with services.ipfs.autoMount,
  since the /ipfs and /ipns mount points aren't activated like this.
- ipfs.socket assumes that you are using ports 5001 and 8080 for the
  API and gateway, respectively (see the sketch after this list). We
  could do some parsing to figure out what is in apiAddress and
  gatewayAddress, but that's kind of difficult given the nonstandard
  address format.
- Apparently this doesn't work with the --api commands used in the tests.
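The assumed socket definition boils down to roughly this sketch (illustrative only; the real unit in the module may differ in details):

{
  systemd.sockets.ipfs = {
    wantedBy = [ "sockets.target" ];
    # Hard-coded to the default API and gateway ports, as noted above;
    # apiAddress and gatewayAddress are not parsed.
    listenStreams = [ "127.0.0.1:5001" "127.0.0.1:8080" ];
  };
}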
Of course, you can always have IPFS start automatically at boot with
startWhenNeeded = false, or just run 'systemctl start ipfs.service'.
Tested with the following test (modified from tests/ipfs.nix):
import ./make-test-python.nix ({ pkgs, ... }: {
  name = "ipfs";

  nodes.machine = { ... }: {
    services.ipfs = {
      enable = true;
      startWhenNeeded = true;
    };
  };

  testScript = ''
    start_all()

    machine.wait_until_succeeds("ipfs id")
    ipfs_hash = machine.succeed("echo fnord | ipfs add | awk '{ print $2 }'")
    machine.succeed(f"ipfs cat /ipfs/{ipfs_hash.strip()} | grep fnord")
  '';
})
Fixes #90145
Update nixos/modules/services/network-filesystems/ipfs.nix
Co-authored-by: Florian Klink <flokli@flokli.de>
Previously we had three services for different config flavors. This is
confusing because only one instance of IPFS can run on a given host/port
combination at once. So move everything into ipfs.service, which contains
the configuration specified in services.ipfs.
Also remove the env wrapper and just use systemd's environment
configuration.
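For instance, the unit can carry the environment directly instead of going through a wrapper script; a sketch (the repo path is an assumption, normally it would come from the module's dataDir option):

{
  # IPFS_PATH is the environment variable the IPFS daemon reads to
  # locate its repository; set it on the unit instead of in a wrapper.
  systemd.services.ipfs.environment.IPFS_PATH = "/var/lib/ipfs";
}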