- Actually use the zfsSupport option
- Add documentation URI to lxd.service
- Add lxd.socket to enable socket activation (see the sketch after this list)
- Add proper dependencies and remove systemd-udev-settle from lxd.service
- Set up /var/lib/lxc/rootfs using systemd.tmpfiles
- Configure safe start and shutdown of lxd.service
- Configure restart on failures of lxd.service
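As a rough illustration of the socket-activation piece, a minimal sketch in NixOS terms (the socket path and unit settings here are assumptions, not the module's exact definition):

```
# Hedged sketch: socket-activated LXD. /var/lib/lxd/unix.socket is LXD's
# conventional socket path; the unit settings are illustrative.
systemd.sockets.lxd = {
  description = "LXD UNIX socket";
  wantedBy = [ "sockets.target" ];
  socketConfig = {
    ListenStream = "/var/lib/lxd/unix.socket";
    SocketMode = "0660";
    SocketGroup = "lxd";
  };
};
```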
Launching a container with a private network requires creating a
dedicated networking interface for it; the name of that interface is
derived from the container name itself - e.g. a container named `foo`
gets attached to an interface named `ve-foo`.
An interface name can span up to IFNAMSIZ characters, which means that a
container name must contain at most IFNAMSIZ - 3 - 1 = 11 characters;
it's a limit that we validate using a build-time assertion.
This limit was relaxed with Linux 5.8, which allows an interface to
carry a so-called altname; an altname can be much longer while still
being treated as a first-class interface name.
Since altnames have been supported natively by systemd for a while now,
due diligence on our side ends with dropping the name assertion on
newer kernels.
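For illustration, a container name that exceeds the old cap, accepted only on Linux >= 5.8 (a hypothetical configuration; the name and addresses are made up):

```
# Hypothetical: "my-long-container" exceeds the old 11-character cap, so
# this only evaluates on kernels >= 5.8, where the veth gets an altname.
containers.my-long-container = {
  privateNetwork = true;
  hostAddress = "10.231.136.1";
  localAddress = "10.231.136.2";
  config = { ... }: { };
};
```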
This commit closes #38509.
See also: systemd/systemd#14467, systemd/systemd#17220,
https://lwn.net/Articles/794289/
Ensure that the Hyper-V keyboard driver is available in the early
stages of the boot process. This allows the user to enter a disk
encryption passphrase or repair a boot problem in an interactive
shell.
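A minimal sketch of the idea, assuming the in-tree kernel module name `hyperv_keyboard`:

```
# Make the Hyper-V keyboard available in stage 1 (initrd), so the
# passphrase prompt and the rescue shell accept keyboard input.
boot.initrd.availableKernelModules = [ "hyperv_keyboard" ];
```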
We now set the hooks dir correctly if the OCI hook is enabled. CRI-O
supports this specific hook from v1.20.0.
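For context, a hedged sketch of enabling this on the NixOS side (option paths as they existed around this change; treat them as assumptions):

```
# With the fix, enabling the seccomp BPF OCI hook makes the generated
# crio.conf point its hooks_dir at the hook's directory.
virtualisation.cri-o.enable = true;
virtualisation.containers.ociSeccompBpfHook.enable = true;
```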
Signed-off-by: Sascha Grunert <mail@saschagrunert.de>
Reintroduce the `fetch-ssh-keys` service so that GCE images that work
with NixOps can once again be built. Also, reformat the code a bit.
The service was removed in 88570538b3,
likely due to a comment saying it should be removed. It was still
needed for images to work with NixOps, however, and should probably
have been replaced or rewritten rather than removed.
The socketActivation option was removed, but socket activation was
later added back without an option to disable it. The description now
reflects that socket activation is used unconditionally in the current
setup.
Since the introduction of option `containers.<name>.pkgs`, the
`nixpkgs.*` options (including `nixpkgs.pkgs`, `nixpkgs.config`, ...) were always
ignored in container configs, which broke existing containers.
This was due to `containers.<name>.pkgs` having two separate effects:
(1) It sets the source for the modules that are used to evaluate the container.
(2) It sets the `pkgs` arg (`_module.args.pkgs`) that is used inside the container
modules.
This happens even when the default value of `containers.<name>.pkgs` is unchanged, in which
case the container `pkgs` arg is set to the pkgs of the host system.
Previously, the `pkgs` arg was determined by the `containers.<name>.config.nixpkgs.*` options.
This commit reverts the breaking change (2) while adding a backwards-compatible way to achieve (1).
It removes option `pkgs` and adds option `nixpkgs`, which implements (1).
Existing users of `pkgs` are informed by an error message to use option
`nixpkgs`, or to achieve only (2) by setting option `containers.<name>.config.nixpkgs.pkgs`.
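A hedged sketch of the two mechanisms after this change (the path and system are illustrative):

```
containers.example = {
  # (1) Source of the modules used to evaluate the container:
  nixpkgs = /path/to/nixpkgs;
  config = { ... }: {
    # (2) The pkgs arg (_module.args.pkgs) used inside the container:
    nixpkgs.pkgs = import /path/to/nixpkgs { system = "x86_64-linux"; };
  };
};
```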
It's been 8.5 years since NixOS used mingetty, but the option was
never renamed (despite the file defining the module being renamed in
9f5051b76c ("Rename mingetty module to agetty")).
I've chosen to rename it to services.getty here, rather than
services.agetty, because getty is implementation-neutral and also the
name of the unit that is generated.
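Illustratively, existing configurations migrate like this (`autologinUser` is just one example of an option under the renamed namespace):

```
# Before the rename:
# services.mingetty.autologinUser = "alice";
# After:
services.getty.autologinUser = "alice";
```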
... build-vm-with-bootloader" for EFI systems
This reverts commit 20257280d9, reversing
changes made to 926a1b2094.
It broke nixosTests.installer.simpleUefiSystemdBoot
and right now the channel is lagging behind by two weeks.
`nixos-rebuild build-vm-with-bootloader` currently fails with the
default NixOS EFI configuration:
$ cat >configuration.nix <<EOF
{
  fileSystems."/".device = "/dev/sda1";
  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;
}
EOF
$ nixos-rebuild build-vm-with-bootloader -I nixos-config=$PWD/configuration.nix -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/nixos-20.09.tar.gz
[...]
insmod: ERROR: could not insert module /nix/store/1ibmgfr13r8b6xyn4f0wj115819f359c-linux-5.4.83/lib/modules/5.4.83/kernel/fs/efivarfs/efivarfs.ko.xz: No such device
mount: /sys/firmware/efi/efivars: mount point does not exist.
[ 1.908328] reboot: Power down
builder for '/nix/store/dx2ycclyknvibrskwmii42sgyalagjxa-nixos-boot-disk.drv' failed with exit code 32
[...]
Fix it by setting virtualisation.useEFIBoot = true in qemu-vm.nix when
EFI is needed, and remove the now-unneeded configuration in
./nixos/tests/systemd-boot.nix, since it's handled globally.
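A minimal sketch of the idea (the exact wiring in qemu-vm.nix may differ; the condition below is an assumption):

```
# Sketch: enable EFI in the built VM whenever the configuration itself
# needs EFI to boot. Condition and mkDefault usage are illustrative.
virtualisation.useEFIBoot = lib.mkDefault (
  config.boot.loader.systemd-boot.enable
  || config.boot.loader.efi.canTouchEfiVariables
);
```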
Before:
* release-20.03: successful build, unsuccessful run
* release-20.09 (and master): unsuccessful build
After:
* Successful build and run.
Fixes https://github.com/NixOS/nixpkgs/issues/107255
Fixes an issue where the `containers.<name>.extraVeths.<name>`
configuration was not always applied.
When configuring `containers.<name>.extraVeths.<name>` and not
configuring one of `containers.<name>.localAddress`, `.localAddress6`,
`.hostAddress`, `.hostAddress6` or `.hostBridge`, the veth was created,
but otherwise no configuration (i.e. no IP) was applied.
nixos-container always configures the primary veth (when `.localAddress`
or `.hostAddress` is set) to be the container's default gateway, so
this fix is required to create a veth in containers that use a different
default gateway.
To test this patch, configure the following container and check whether
the addresses are applied:
```
containers.testveth = {
  extraVeths.testveth = {
    hostAddress = "192.168.13.2";
    localAddress = "192.168.13.1";
  };
  config = { ... }: { };
};
```
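With the fix in place, `nixos-container root-login testveth` followed by `ip a` should show 192.168.13.1 on the container's `testveth` interface (and 192.168.13.2 on the host side).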