Introduce an AWS EC2 AMI which supports aarch64 and x86_64 with a ZFS
root.
This uses `make-zfs-image`, which means two EBS volumes are needed
inside EC2: one for boot, one for root. It should not matter which
is identified as `xvda` and which as `xvdb`, though I have always
uploaded `boot` as `xvda`.
This allows the automatically sized image to keep attempting to build
on Hydra, which should give us a good indication of when we've got
this fixed.
The `platform` field is pointless nesting: it's just stuff that happens
to be defined together, and that should be an implementation detail.
This instead makes `linux-kernel` and `gcc` top-level fields in platform
configs. They join `rustc` there (all are optional), which was put at the
top level rather than in `platform` in anticipation of a change like this.
`linux-kernel.arch` in particular also becomes `linuxArch`, to match the
other `*Arch`es.
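For illustration, a platform config after this change might look roughly
like the sketch below; the concrete values are made up for the example and
are not copied from nixpkgs:
```
{
  linux-kernel = {
    name = "rpi";
    baseConfig = "bcm2835_defconfig";
  };
  gcc = {
    arch = "armv6";
    fpu = "vfp";
  };
  # rustc is also accepted at this level; all three fields are optional.
}
```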
The next step after this is to combine the *specific* machines from
`lib.systems.platforms` with `lib.systems.examples`, keeping just the
"multiplatform" ones for defaulting.
This, paired with the previous commit, ensures the channel won't be held
back from a kernel upgrade by a non-building SD image, while still
having a new-kernel variant available.
This will make the list much easier to re-use, e.g. for `nixosTests`.
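As a hedged illustration of that kind of re-use (the `passthru.tests`
convention and the `nixosTests` attribute are assumed here, not introduced
by this change), a package could pull in the test that exercises it:
```
passthru.tests = { inherit (nixosTests) opensmtpd; };
```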
The drawback is that this approach makes the
```
nix-build release.nix -A tests.opensmtpd.x86_64-linux
```
command about twice as slow (3s to 6s): it now has to evaluate `nixpkgs`
once for each architecture, instead of having a hardcoded list of tests
that allowed saying “ok, just evaluate for x86_64-linux”.
On the other hand, complete evaluation of `release.nix` should be much
faster because we no longer import `nixpkgs` for each test: testing with
the following command went from 30s to 18s, and that's just for a few
tests.
```
time nix-instantiate --eval --strict nixos/release.nix -A tests.nat
```
I initially wanted to test on the whole of `release.nix`, but there are
too many broken tests and it takes too long to evaluate them all,
especially given that the current implementation already breaks some
setups. Since developers can just `nix-build nixos/tests/my-test.nix`, it
sounds like an overall win.
This module makes it possible to preload Docker images in a VM in order
to reduce IO from file copies. It should only be used in testing
environments, when the test requires several Docker images, such as in
Kubernetes tests. In that case,
`virtualisation.dockerPreloader.images` can replace the
`services.kubernetes.kubelet.seedDockerImages` option.
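A minimal sketch of the intended usage in a test machine configuration;
the image built here is purely illustrative:
```
virtualisation.dockerPreloader.images = [
  (pkgs.dockerTools.buildImage {
    name = "redis";
    tag = "latest";
    contents = [ pkgs.redis ];
  })
];
```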
The idea is to populate the /var/lib/docker directory by mounting QCOW
files (we use QCOW files to avoid permission issues) that contain the
images. For each image specified in
`config.virtualisation.dockerPreloader.images`:
1. The image is loaded by Docker in a VM
2. The resulting /var/lib/docker is written to a QCOW file
This set of QCOW files can then be used to populate /var/lib/docker:
1. Each QCOW is mounted in the VM
2. Symlinks are created from these mount points to /var/lib/docker
3. A /var/lib/docker/image/overlay2/repositories.json file is generated
4. The docker daemon is started.
* journald: forward messages to syslog by default if a syslog implementation is installed (see the sketch after this list)
* added a test to ensure rsyslog is receiving messages when expected
* added rsyslogd tests to release.nix
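A hedged sketch of the kind of configuration this is aimed at: with a
syslog daemon such as rsyslog enabled, forwarding should now happen by
default; the explicit journald setting below is shown only to make the
underlying `ForwardToSyslog` knob visible and is not required:
```
{
  services.rsyslogd.enable = true;
  # Equivalent to ForwardToSyslog=yes in journald.conf:
  services.journald.extraConfig = "ForwardToSyslog=yes";
}
```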
Because when I see `config.system.build.manual.manual` after I've
forgotten what it means, I ask "Why do I need that second `.manual` there
again?". That doesn't happen with `config.system.build.manual.manualHTML`.