By default the Jenkins server runs as the user "jenkins", which can be configured
via the users.jenkins.* options. If a different user is requested by changing
services.jenkins.user, then none of the users.jenkins options apply.
This patch does not include Jenkins slave configuration; some config options
will probably change when that is implemented.
Aspects like the user and environment are typically identical between slave and
master, whereas the service configs differ. The design is for users.jenkins to
cover the shared aspects, while services.jenkins and services.jenkins-slave
cover the master- and slave-specific aspects, respectively.
Another option would be to place everything under services.jenkins and have a config that selects
master vs slave.
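For illustration, the intended split might look like this (a sketch only; the
individual option names under users.jenkins are assumptions, and
services.jenkins-slave does not exist yet):
  users.jenkins.home = "/var/lib/jenkins";  # shared between master and slave
  services.jenkins.enable = true;           # master-specific
  # services.jenkins-slave.enable = true;   # slave-specific, once implemented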
* Bump bumblebee to 3.2.1
* Remove config.patch - options it added can be passed to ./configure now
* Remove the provided xorg.conf
The provided xorg.conf was causing problems for some users,
and Bumblebee provides its own default configuration anyway.
* Make secondary X11 log to /var/log/X.bumblebee.log
* Add a module for bumblebee
Includes a configuration option for the threshold beneath which to refill
the entropy pool. It defaults to 1024 bits, as this is the number used in
the existing service files of other distros I looked at.
With kmscon, it is now possible to have a system without X that still
needs the mesa setup in /run/opengl-driver
Signed-off-by: Shea Levy <shea@shealevy.com>
This required some changes to systemd unit handling:
* Add an option to specify that a unit is just a symlink (sketched below)
* Allow specified units to overwrite systemd-provided ones
* Have gettys.target require autovt@1.service instead of getty@1.service
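For example, the symlink option might be used roughly like this (a sketch; the
option name linkTarget is an assumption based on the description above, and the
kmsconvt@.service path is illustrative):
  # Hypothetical: make autovt@.service a symlink to kmscon's unit template.
  systemd.units."autovt@.service".linkTarget =
    "${pkgs.kmscon}/lib/systemd/system/kmsconvt@.service";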
Signed-off-by: Shea Levy <shea@shealevy.com>
ntopng is a high-speed web-based traffic analysis and flow collection
tool. Enable it by adding this to configuration.nix:
  services.ntopng.enable = true;
Open a browser at http://localhost:3000 and log in with the default
username/password: admin/admin.
You can now say:
  systemd.containers.foo.config =
    { services.openssh.enable = true;
      services.openssh.ports = [ 2022 ];
      users.extraUsers.root.openssh.authorizedKeys.keys = [ "ssh-dss ..." ];
    };
which defines a NixOS instance with the given configuration running
inside a lightweight container.
You can also manage the configuration of the container independently
from the host:
  systemd.containers.foo.path = "/nix/var/nix/profiles/containers/foo";
where "path" is a NixOS system profile. It can be created/updated by
doing:
  $ nix-env --set -p /nix/var/nix/profiles/containers/foo \
      -f '<nixos>' -A system -I nixos-config=foo.nix
The container configuration (foo.nix) should define
  boot.isContainer = true;
to optimise away the building of a kernel and initrd. This is done
automatically when using the "config" route.
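Putting this together, a minimal foo.nix for the container above could look
like this, using only options already mentioned:
  { config, pkgs, ... }:
  { boot.isContainer = true;
    services.openssh.enable = true;
    services.openssh.ports = [ 2022 ];
  }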
On the host, a lightweight container appears as the service
"container-<name>.service". The container is like a regular NixOS
(virtual) machine, except that it doesn't have its own kernel. It has
its own root file system (by default /var/lib/containers/<name>), but
shares the Nix store of the host (as a read-only bind mount). It also
has access to the network devices of the host.
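Since the container shows up as an ordinary unit on the host, it can be
managed with the usual systemctl commands, for example:
  $ systemctl status container-foo.service
  $ systemctl restart container-foo.service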
Currently, if the configuration of the container changes, running
"nixos-rebuild switch" on the host will cause the container to be
rebooted. In the future we may want to send some message to the
container so that it can activate the new container configuration
without rebooting.
Containers are not perfectly isolated yet. In particular, the host's
/sys/fs/cgroup is mounted (writable!) in the guest.
The attribute ‘config.systemd.services.<service-name>.runner’
generates a script that runs the service outside of systemd. This is
useful for testing, and also allows NixOS services to be used outside
of NixOS. For instance, given a configuration file foo.nix:
  { config, pkgs, ... }:
  { services.postgresql.enable = true;
    services.postgresql.package = pkgs.postgresql92;
    services.postgresql.dataDir = "/tmp/postgres";
  }
you can build and run PostgreSQL as follows:
  $ nix-build -A config.systemd.services.postgresql.runner -I nixos-config=./foo.nix
  $ ./result
This will run the service's ExecStartPre, ExecStart, ExecStartPost and
ExecStopPost commands in an appropriate environment. It doesn't work
well yet for "forking" services, since it can't track the main
process. It also doesn't work for services that assume they're always
executed by root.