PR #1366
The previous windowManager.xmonad option only started xmonad and
didn't make GHC available. It assumed that the user had a GHC with
access to the xmonad package in their PATH when using xmonad.
Xmonad in Nix is now patched to accept the XMONAD_{GHC,XMESSAGE}
environment variables, which define the paths to ghc and xmessage
respectively.
These are set automatically when using xmonad through
windowManager.xmonad.
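Concretely, the environment ends up looking roughly like this (a
sketch; the store paths are illustrative placeholders, not real
outputs):

  XMONAD_GHC=/nix/store/...-ghc/bin/ghc
  XMONAD_XMESSAGE=/nix/store/...-xmessage/bin/xmessage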
My (or, more specifically, @aristidb's and my) changes make it possible to use
Xmonad without adding GHC to any profile. This is useful if you want
to add a different GHC to your profile.
This commit introduces some options (illustrated below):
- xmonad.haskellPackages: Controls which Haskell package set and GHC
  are used to (re)build Xmonad
- xmonad.extraPackages: Function returning a list of additional
  packages to make available to GHC when rebuilding Xmonad
- xmonad.enableContribExtras: Boolean option to build xmonadContrib
  and xmonadExtras.
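For example, a configuration.nix fragment using these options might
look like this (a sketch; the full services.xserver.windowManager
prefix and the mtl package are illustrative assumptions):

  services.xserver.windowManager.xmonad = {
    enable = true;
    enableContribExtras = true;
    # extraPackages maps the chosen Haskell package set to a list of
    # extra packages made available to GHC for the Xmonad rebuild:
    extraPackages = haskellPackages: [ haskellPackages.mtl ];
  };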
Signed-off-by: Moritz Ulrich <moritz@tarn-vedra.de>
If you want minidlna to accept connections from the rest of the world, please
add

  networking.firewall.allowedTCPPorts = [ 8200 ];
  networking.firewall.allowedUDPPorts = [ 1900 ];

to /etc/nixos/configuration.nix.
See <http://lists.science.uu.nl/pipermail/nix-dev/2013-November/011997.html>
for the discussion that led to this.
If you want x11vnc to receive TCP connections from the rest of the world,
please add

  networking.firewall.allowedTCPPorts = [ 5900 ];

to /etc/nixos/configuration.nix.
See <http://lists.science.uu.nl/pipermail/nix-dev/2013-November/011997.html>
for the discussion that led to this.
If you want CUPS to receive UDP printer announcements from the rest of the
world, please add

  networking.firewall.allowedUDPPorts = [ 631 ];

to /etc/nixos/configuration.nix.
See <http://lists.science.uu.nl/pipermail/nix-dev/2013-November/011997.html>
for the discussion that led to this.
ntopng is a high-speed web-based traffic analysis and flow collection
tool. Enable it by adding this to configuration.nix:
  services.ntopng.enable = true;

Open a browser at http://localhost:3000 and log in with the default
username/password: admin/admin.
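If you also want to reach the web interface from other machines, the same
firewall note as for the services above applies; based on the default port
mentioned:

  networking.firewall.allowedTCPPorts = [ 3000 ];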
Postgres was taking a long time to shut down. This is because we were
sending SIGINT to all processes, apparently confusing the autovacuum
launcher. Instead it should only be sent to the main process (which
takes care of shutting down the others).
The downside is that systemd will also send the final SIGKILL only to
the main process, so other processes in the cgroup may be left behind.
There should be an option for this...
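In systemd terms, the change described above corresponds roughly to the
following unit settings (a sketch, not necessarily the exact form used in
the module):

  systemd.services.postgresql.serviceConfig = {
    KillSignal = "SIGINT"; # delivered to the main process only...
    KillMode = "process";  # ...so the final SIGKILL also hits only the main process
  };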
libvirtd puts the full path of the emulator binary in the machine config
file. But this path can unfortunately be garbage collected while still
being used by the virtual machine. Then this happens:
Error starting domain: Cannot check QEMU binary /nix/store/z5c2xzk9x0pj6x511w0w4gy9xl5wljxy-qemu-1.5.2-x86-only/bin/qemu-kvm: No such file or directory
Fix by updating the emulator path on each service startup to something
valid (re-scanning $PATH).
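A sketch of what such a startup fixup could look like (the service name,
the XML location /var/lib/libvirt/qemu and the sed pattern are
assumptions):

  systemd.services.libvirtd.preStart = ''
    # Point <emulator> at whatever qemu-kvm is currently on $PATH.
    for f in /var/lib/libvirt/qemu/*.xml; do
      sed -i "s|<emulator>.*</emulator>|<emulator>$(command -v qemu-kvm)</emulator>|" "$f"
    done
  '';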
You can now say:
  systemd.containers.foo.config =
    { services.openssh.enable = true;
      services.openssh.ports = [ 2022 ];
      users.extraUsers.root.openssh.authorizedKeys.keys = [ "ssh-dss ..." ];
    };
which defines a NixOS instance with the given configuration running
inside a lightweight container.
You can also manage the configuration of the container independently
of the host:
  systemd.containers.foo.path = "/nix/var/nix/profiles/containers/foo";
where "path" is a NixOS system profile. It can be created/updated by
doing:
  $ nix-env --set -p /nix/var/nix/profiles/containers/foo \
      -f '<nixos>' -A system -I nixos-config=foo.nix
The container configuration (foo.nix) should define
  boot.isContainer = true;
to optimise away the building of a kernel and initrd. This is done
automatically when using the "config" route.
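For instance, a minimal foo.nix matching the container example above could
look like this (the openssh settings are just the ones from the config
snippet earlier):

  { config, pkgs, ... }:

  { boot.isContainer = true;
    services.openssh.enable = true;
    services.openssh.ports = [ 2022 ];
  }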
On the host, a lightweight container appears as the service
"container-<name>.service". The container is like a regular NixOS
(virtual) machine, except that it doesn't have its own kernel. It has
its own root file system (by default /var/lib/containers/<name>), but
shares the Nix store of the host (as a read-only bind mount). It also
has access to the network devices of the host.
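The container can therefore be controlled like any other unit; for the
"foo" example above:

  $ systemctl start container-foo.service
  $ systemctl stop container-foo.service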
Currently, if the configuration of the container changes, running
"nixos-rebuild switch" on the host will cause the container to be
rebooted. In the future we may want to send some message to the
container so that it can activate the new container configuration
without rebooting.
Containers are not perfectly isolated yet. In particular, the host's
/sys/fs/cgroup is mounted (writable!) in the guest.
The attribute ‘config.systemd.services.<service-name>.runner’
generates a script that runs the service outside of systemd. This is
useful for testing, and also allows NixOS services to be used outside
of NixOS. For instance, given a configuration file foo.nix:
  { config, pkgs, ... }:

  { services.postgresql.enable = true;
    services.postgresql.package = pkgs.postgresql92;
    services.postgresql.dataDir = "/tmp/postgres";
  }
you can build and run PostgreSQL as follows:
  $ nix-build -A config.systemd.services.postgresql.runner -I nixos-config=./foo.nix
  $ ./result
This will run the service's ExecStartPre, ExecStart, ExecStartPost and
ExecStopPost commands in an appropriate environment. It doesn't work
well yet for "forking" services, since it can't track the main
process. It also doesn't work for services that assume they're always
executed by root.