At one point in my configuration I had:
  boot.kernel.sysctl = {
    # https://unix.stackexchange.com/questions/13019/description-of-kernel-printk-values
    "kernel.printk" = "4 4 1 7";
  };
which triggered:
error: The unique option `boot.kernel.sysctl.kernel.printk' is defined multiple times, in `/home/teto/dotfiles/nixpkgs/mptcp-unstable.nix' and `/home/teto/nixpkgs/nixos/modules/system/boot/kernel.nix'.
(use ‘--show-trace’ to show detailed location information)
Traceback (most recent call last):
  File "/home/teto/nixops/scripts/nixops", line 984, in <module>
    args.op()
  File "/home/teto/nixops/scripts/nixops", line 406, in op_deploy
    max_concurrent_activate=args.max_concurrent_activate)
  File "/home/teto/nixops/nixops/deployment.py", line 1045, in deploy
    self.run_with_notify('deploy', lambda: self._deploy(**kwargs))
  File "/home/teto/nixops/nixops/deployment.py", line 1034, in run_with_notify
    f()
  File "/home/teto/nixops/nixops/deployment.py", line 1045, in <lambda>
    self.run_with_notify('deploy', lambda: self._deploy(**kwargs))
  File "/home/teto/nixops/nixops/deployment.py", line 985, in _deploy
    self.configs_path = self.build_configs(dry_run=dry_run, repair=repair, include=include, exclude=exclude)
  File "/home/teto/nixops/nixops/deployment.py", line 653, in build_configs
    raise Exception("unable to build all machine configurations")
Exception: unable to build all machine configurations
This simple addition allows it to be overridden.
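A minimal sketch of the override from a user configuration, assuming the module's own value is now set with lib.mkDefault so that a user definition (or lib.mkForce, shown here for emphasis) takes precedence:
  { lib, ... }:

  {
    boot.kernel.sysctl = {
      # lib.mkForce replaces the value defined by the NixOS kernel module
      # instead of clashing with it.
      "kernel.printk" = lib.mkForce "4 4 1 7";
    };
  }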
* the keyboard modules in all-hardware.nix are already defaults of
  boot.initrd.availableKernelModules
* IDE modules, hid_lenovo_tpkbd and scsi_wait_scan have been removed
  because they're no longer available
* i8042 was a duplicate (see a few lines above)
For some reason, between Linux 4.4.19 and 4.4.20, the atkbd and libps2
kernel modules lost their dependency on i8042 in modules.dep, causing
i8042 not to be included in the initrd. This breaks the keyboard in the
initrd, in turn breaking LUKS.
This only happens on the 16.03 branch; on 16.09, it appears i8042 is
pulled into the initrd anyway (through some other dependency,
presumably). But let's include it explicitly.
http://hydra.nixos.org/build/40468431
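A hedged sketch of what pulling it in explicitly can look like; whether the change lands in a profile such as all-hardware.nix or in a user configuration is an assumption here:
  boot.initrd.availableKernelModules = [
    "i8042"   # PS/2 controller; needed so the keyboard works for LUKS passphrase entry
    "atkbd"   # AT keyboard driver that used to pull in i8042 via modules.dep
  ];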
This makes it easy to specify kernel patches:
  boot.kernelPatches = [ pkgs.kernelPatches.ubuntu_fan_4_4 ];
To make the `boot.kernelPatches` option possible, this also makes it
easy to extend and/or modify the kernel packages within a linuxPackages
set. For example:
  pkgs.linuxPackages.extend (self: super: {
    kernel = super.kernel.override {
      kernelPatches = super.kernel.kernelPatches ++ [
        pkgs.kernelPatches.ubuntu_fan_4_4
      ];
    };
  });
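Entries of boot.kernelPatches are plain attribute sets of the same shape as pkgs.kernelPatches.*, so a local patch can be sketched roughly as follows (the name and path are hypothetical):
  boot.kernelPatches = [
    {
      name = "my-local-fix";                  # hypothetical patch name
      patch = ./patches/my-local-fix.patch;   # hypothetical path to the patch file
    }
  ];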
Closes #15095
- add missing types in module definitions
- add missing 'defaultText' in module definitions
- wrap example with 'literalExample' where necessary in module definitions
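As a reference, a small made-up option showing the pattern these clean-ups apply:
  { lib, pkgs, ... }:

  with lib;

  {
    options.services.example.package = mkOption {
      type = types.package;                    # missing type added
      default = pkgs.hello;
      defaultText = "pkgs.hello";              # missing defaultText added
      example = literalExample "pkgs.hello";   # example wrapped where necessary
      description = "Package used by the (hypothetical) example service.";
    };
  }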
This hopefully fixes intermittent initrd failures where udevd cannot
create a Unix domain socket:
machine# running udev...
machine# error getting socket: Address family not supported by protocol
machine# error initializing udev control socket
machine# error getting socket: Address family not supported by protocol
The "unix" kernel module is supposed to be loaded automatically, and
clearly that works most of the time, but maybe there is a race
somewhere. In any case, no sane person would run a kernel without Unix
domain sockets, so we may as well make it built-in.
http://hydra.nixos.org/build/30001448
... because we make it built-in by default.
I can't imagine anyone wanting to purge this module from their system,
so let's keep it simple, at least for now.
mmc_block and sdhci_acpi are both necessary for a Bay Trail Chromebook with an
internal eMMC drive. The sdhci_acpi module is detectable, but I cannot figure
out a way to check whether the mmc_block module is needed just by looking at
/sys/.
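For illustration, a hedged sketch of the hardware-configuration.nix fragment such a machine needs:
  boot.initrd.availableKernelModules = [
    "sdhci_acpi"   # detectable from the hardware
    "mmc_block"    # needed for the internal eMMC drive, but not detectable via /sys
  ];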
If you define a unit, and either systemd or a package in
systemd.packages already provides that unit, then we now generate a
file /etc/systemd/system/<unit>.d/overrides.conf. This makes it
possible to use upstream units, while allowing them to be customised
from the NixOS configuration. For instance, the module nix-daemon.nix
now uses the units provided by the Nix package. And all unit
definitions that duplicated upstream systemd units are finally gone.
This makes the baseUnit option unnecessary, so I've removed it.
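A hedged illustration of how this looks from a configuration, assuming the package ships the upstream unit (the Nice setting is just an example of a customisation):
  systemd.packages = [ pkgs.nix ];                     # upstream nix-daemon.service comes from the package
  systemd.services.nix-daemon.serviceConfig.Nice = 5;  # ends up in .../nix-daemon.service.d/overrides.conf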
This creates static device nodes such as /dev/fuse or
/dev/snd/seq. The kernel modules for these devices will be loaded on
demand when the device node is opened.
Using pkgs.lib on the spine of module evaluation is problematic
because the pkgs argument depends on the result of module
evaluation. To prevent an infinite recursion, pkgs and some of the
modules are evaluated twice, which is inefficient. Using ‘with lib’
prevents this problem.
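Concretely, a module header along these lines takes lib from the module arguments instead of reaching through pkgs:
  { config, lib, pkgs, ... }:

  with lib;

  {
    # option declarations and config below can use lib functions
    # without forcing an evaluation of pkgs
  }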
We used to have the configuration of the kernel available in a
somewhat convenient place (/run/booted-system/kernel-modules/config)
but that has disappeared. So instead just make /proc/config.gz
available. It only eats a few kilobytes.
You can now say:
  systemd.containers.foo.config =
    { services.openssh.enable = true;
      services.openssh.ports = [ 2022 ];
      users.extraUsers.root.openssh.authorizedKeys.keys = [ "ssh-dss ..." ];
    };
which defines a NixOS instance with the given configuration running
inside a lightweight container.
You can also manage the configuration of the container independently
from the host:
  systemd.containers.foo.path = "/nix/var/nix/profiles/containers/foo";
where "path" is a NixOS system profile. It can be created/updated by
doing:
  $ nix-env --set -p /nix/var/nix/profiles/containers/foo \
      -f '<nixos>' -A system -I nixos-config=foo.nix
The container configuration (foo.nix) should define
  boot.isContainer = true;
to optimise away the building of a kernel and initrd. This is done
automatically when using the "config" route.
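A hedged sketch of such a foo.nix, combining the settings from the example above with the isContainer flag:
  { config, pkgs, ... }:

  {
    boot.isContainer = true;
    services.openssh.enable = true;
    services.openssh.ports = [ 2022 ];
    users.extraUsers.root.openssh.authorizedKeys.keys = [ "ssh-dss ..." ];
  }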
On the host, a lightweight container appears as the service
"container-<name>.service". The container is like a regular NixOS
(virtual) machine, except that it doesn't have its own kernel. It has
its own root file system (by default /var/lib/containers/<name>), but
shares the Nix store of the host (as a read-only bind mount). It also
has access to the network devices of the host.
Currently, if the configuration of the container changes, running
"nixos-rebuild switch" on the host will cause the container to be
rebooted. In the future we may want to send some message to the
container so that it can activate the new container configuration
without rebooting.
Containers are not perfectly isolated yet. In particular, the host's
/sys/fs/cgroup is mounted (writable!) in the guest.