Previously, depending on the environment and the type of interface that
was created, the configured IPs of an interface wouldn't be applied on a
nixos-rebuild switch; they would only take effect after a reboot.
This patch ensures that the network-addresses service is started
either via the network-link service or if the networking target is
activated (i.e. on system activation).
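A minimal sketch of the pattern, for illustration only; the unit names, address and iproute invocation below are made up and are not the module's actual internals:

```nix
{ pkgs, ... }:

{
  # Illustrative only: the address unit is wanted both by its link unit
  # and by the networking target, so it is also started when the target
  # is activated (i.e. on system activation), not only at boot.
  systemd.services."network-addresses-eth0" = {
    wantedBy = [ "network-link-eth0.service" "network.target" ];
    before = [ "network.target" ];
    serviceConfig.Type = "oneshot";
    script = "${pkgs.iproute}/bin/ip addr replace 192.0.2.10/24 dev eth0";
  };
}
```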
Fixes #28474 and #16230.
* gnome3: only maintain single GNOME 3 package set
GNOME 3 was split into 3.10 and 3.12 in #2694. Unfortunately, we barely have the resources
to update a single version of GNOME. Maintaining multiple versions just does not make sense.
Additionally, it makes viewing history using most Git tools bothersome.
This commit renames `pkgs/desktops/gnome-3/3.24` to `pkgs/desktops/gnome-3`, removes
the config variable for choosing the package set (`environment.gnome3.packageSet`), updates
the hint in the maintainer script, and removes the `gnome3_24` derivation from `all-packages.nix`.
Closes: #29329
* maintainers/scripts/gnome: Use fixed GNOME 3 directory
Since we now allow only a single GNOME 3 package set, specifying
the working directory is not necessary.
This commit sets the directory to `pkgs/desktops/gnome-3`.
- add flannel support
- remove the deprecated authorizationRBACSuperAdmin option
- rename the deprecated portalNet option to serviceClusterIpRange
- add a nodeIp option for kubelet
- kubelet: add br_netfilter to kernelModules
- enable the firewall by default
- enable dns by default on nodes and on the master
- disable iptables for docker by default on nodes
- dns: restart on failure
- update tests
and other minor changes (see the configuration sketch below)
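As a rough illustration, a configuration touching some of these options might look like the sketch below; the exact attribute paths are assumptions based on this change list, not verified against the module:

```nix
{
  services.kubernetes = {
    # assumed option paths, for illustration only
    flannel.enable = true;                             # new flannel support
    apiserver.serviceClusterIpRange = "10.10.0.0/24";  # replaces the deprecated portalNet
    kubelet.nodeIp = "192.168.1.10";                   # new nodeIp option
  };
}
```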
1. The chmod 400 with the preset cookie prevented restarts, as
on the second boot it would fail to write to the cookie. Oops.
2. As far as I can tell, sasl logs were disabled because of the
following error:
{error,{cannot_log_to_tty,sasl_report_tty_h,not_installed}}
Not because we actually wanted to disable them. This meant the
management plugin wasn't usable due to a bug set to be fixed in
3.7.0.
Add another option for debugging instead. Lots of users have been
complaining about this default behaviour.
This patch also cleans up the EFI bootloader entries in the ISO.
This has been broken nearly all the time due to the patches needed to
iproute2 not being compatible with the newer versions we have been
shipping. As long as Ubuntu does not manage to upstream these changes
so that they are maintained with iproute2, and we don't have a maintainer
updating these patches for new iproute2 versions, it is not feasible to
keep this available.
This reverts commit 670b4e29ad. The change
added in this commit was controversial when it was originally suggested
in https://github.com/NixOS/nixpkgs/pull/29205. Then that PR was closed
and a new one opened, https://github.com/NixOS/nixpkgs/pull/29503,
effectively circumventing the review process. I don't agree with this
modification. Adding an option 'resolveLocalQueries' to tell the locally
running name server that it should resolve local DNS queries feels
outright nuts. I agree that the current state is unsatisfactory and that
it should be improved, but this is not the right way.
(cherry picked from commit 23a021d12e)
Ensure that modules required by all declared fileSystems are explicitly
loaded. A little ugly but fixes the deferred mount test.
See also https://github.com/NixOS/nixpkgs/issues/29019
This includes fuse-common (fusePackages.fuse_3.common) as recommended by
upstream. But while fuse(2) and fuse3 would normally depend on
fuse-common, we can't do that in nixpkgs as long as fuse-common is just
another output of the fuse3 multiple-output derivation (i.e. this
would result in a circular dependency). To avoid building fuse3 twice, I
decided it would be best to copy the shared files (i.e. the ones
provided by fuse(2) and fuse3) from fuse-common to fuse (version 2) and
avoid collision warnings by defining priorities. Now it should be
possible to install an arbitrary combination of "fuse", "fuse3", and
"fuse-common" without getting any collision warnings. The end result
should be the same and all changes should be backwards compatible
(assuming that mount.fuse from fuse3 is backwards compatible as stated
by upstream [0] - if not this might break some /etc/fstab definitions
but that should be very unlikely).
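For example, a configuration like this sketch (package attribute names as used above) should now evaluate without collision warnings:

```nix
{ pkgs, ... }:

{
  # fuse (2), fuse3 and fuse-common can be combined freely; the shared
  # files are deduplicated via the priorities described above.
  environment.systemPackages = with pkgs; [
    fuse
    fuse3
    fusePackages.fuse_3.common  # i.e. "fuse-common"
  ];
}
```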
My tests with sshfs (version 2 and 3) didn't show any problems.
See #28409 for some additional information.
[0]: https://github.com/libfuse/libfuse/releases/tag/fuse-3.0.0
The getty@.service unit already has an ExecStart, so we cannot simply set a new
one to override it, or we will get this error:
systemd[1]: getty@tty1.service: Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. Refusing.
Instead "reset" ExecStart by setting it to empty which is the systemd way of
doing it.
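In a NixOS module this looks roughly like the following sketch; the agetty command line is illustrative, not necessarily the exact one used:

```nix
{ pkgs, ... }:

{
  # The empty first entry clears the ExecStart already defined for
  # getty@.service; the second entry then becomes the only one.
  systemd.services."getty@".serviceConfig.ExecStart = [
    ""
    "@${pkgs.utillinux}/sbin/agetty agetty --login-program ${pkgs.shadow}/bin/login %I $TERM"
  ];
}
```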
Currently the `rpc-gssd.service` has a `ConditionPathExists` clause that can
never be met, because it's looking for stateful data inside `/nix/store`.
`auth-rpcgss-module.service` also only starts if this file exists.
Fixes NixOS/nixpkgs#29509.
When the user specifies the networking.nameservers setting in the
configuration file, it must take precedence over automatically
derived settings.
The culprit was services.bind, which set the resolver to
127.0.0.1 and ignored the nameservers setting.
This patch adds a flag to services.bind to override the nameserver
to localhost. It defaults to true. Setting it to false prevents the
services.bind and dnsmasq.resolveLocalQueries settings from
overriding the user's settings.
Also, when the user specifies a domain to search, it must be set in
the resolver configuration, even if the user does not specify any
nameservers.
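A hedged example of the intended behaviour; the bind option name mirrors the dnsmasq one mentioned above and is an assumption:

```nix
{
  # Run a local BIND, but keep the statically configured nameservers
  # and search domain instead of pointing resolv.conf at 127.0.0.1.
  services.bind.enable = true;
  services.bind.resolveLocalQueries = false;  # assumed option name; defaults to true
  networking.nameservers = [ "192.0.2.53" ];
  networking.search = [ "example.org" ];
}
```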
(cherry picked from commit 670b4e29ad)
This commit was accidentally merged to 17.09 but was intended for
master. This is the cherry-pick to master.
Previously, services depending on network-online.target would wait until
dhcpcd timed out if it was enabled while a static network address
configuration was used. Setting the default gateway statically is enough
for the networking to be considered online.
This also adjusts the relevant networking tests to wait for
network-online.target instead of just network.target.
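For instance, a static configuration like the following sketch (option names as in current NixOS, used here for illustration) should now reach network-online.target without waiting for a dhcpcd timeout:

```nix
{
  networking.useDHCP = false;
  networking.interfaces.eth0.ipv4.addresses = [
    { address = "192.0.2.10"; prefixLength = 24; }
  ];
  # Setting the default gateway statically is what marks the network as online.
  networking.defaultGateway = "192.0.2.1";
}
```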
If neither database.password nor database.passwordFile was provided,
the module would try, and fail, to coerce null to a string.
This fixes the situation where there is no password for the database.
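A sketch of the kind of guard involved; the helper below is hypothetical module code, with `cfg` standing in for the service's configuration:

```nix
{ lib, cfg }:

# Hypothetical helper: only render the password part of the connection
# settings when a password was actually provided, instead of coercing
# null to a string.
lib.optionalString (cfg.database.password != null)
  "password = ${cfg.database.password}"
```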
Resolves #27950
This option was introduced in 7904499542,
but it didn't check whether mailUser and mailGroup are null, which they
are by default.
Now the user is only created if createMailUser is set in conjunction
with mailUser, and the group only if mailGroup is set as well.
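Conceptually the guard now looks like this sketch (not the literal module code):

```nix
{ config, lib, ... }:

let
  cfg = config.services.dovecot2;
in {
  config = lib.mkIf cfg.enable {
    # Only create the mail user/group when explicitly requested and the
    # corresponding names are non-null.
    users.users = lib.optionalAttrs (cfg.createMailUser && cfg.mailUser != null) {
      ${cfg.mailUser} = {
        isSystemUser = true;
        group = lib.mkIf (cfg.mailGroup != null) cfg.mailGroup;
      };
    };
    users.groups = lib.optionalAttrs (cfg.createMailUser && cfg.mailGroup != null) {
      ${cfg.mailGroup} = { };
    };
  };
}
```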
I've added a NixOS VM test so that we can verify whether dovecot works
without any additional options set, so it serves as a regression test
for issue #29466 and other issues that might come up with future changes
to the Dovecot service.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Fixes: #29466
Cc: @qknight, @abbradar, @ixmatus, @siddharthist
The systemd.unit(5) discussion of wantedBy and requiredBy is in the
[Install] section, and is thus focused on stateful 'systemctl enable'.
So, clarify that in NixOS, wantedBy & requiredBy are still what most
users want, and are not to be confused with enabled.
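For example, the declarative way to pull a custom unit into the boot target is still the following (the service name is made up):

```nix
{
  # wantedBy replaces a stateful "systemctl enable my-backup.service".
  systemd.services.my-backup = {
    wantedBy = [ "multi-user.target" ];
    script = "echo backing up";
  };
}
```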
Arguably, a broken linux-latest should not block a release. Also, booting
the kernel and basic sanity checking are implicitly exercised by every other
VM test.
For various reasons, big Nix attrsets look ugly in the generated manual
page[1]. Use literalExample to fix it.
[1] Quotes around attribute names are lost, newlines inside multi-line
strings are shown as '\n' and attrs written on multiple lines are joined
into one.
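For instance, wrapping an option example in literalExample keeps the attrset exactly as written (a sketch; the option itself is made up):

```nix
{ lib, ... }:

{
  options.services.foo.settings = lib.mkOption {
    type = lib.types.attrs;
    default = { };
    # Rendered verbatim in the manual instead of being re-serialized.
    example = lib.literalExample ''
      {
        "listen.owner" = "nginx";
        pm = "dynamic";
      }
    '';
    description = "Settings for the hypothetical foo service.";
  };
}
```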
Quoting from @FRidh:
Note overridePythonAttrs exists since 17.09. It overrides the call to
buildPythonPackage.
While it's not strictly necessary to do this, because postPatch ends up
in drvAttrs anyway, it's probably better to use overridePythonAttrs so
we don't run into problems when the underlying implementation of
buildPythonPackage changes.
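A sketch of the pattern; the package and the postPatch contents are placeholders:

```nix
# Overrides the attributes passed to buildPythonPackage (rather than the
# underlying mkDerivation call); somePythonPackage is a placeholder.
somePythonPackage.overridePythonAttrs (oldAttrs: {
  postPatch = (oldAttrs.postPatch or "") + ''
    substituteInPlace setup.py --replace "broken-dependency" ""
  '';
})
```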
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Boot fails when a keyfile is configured for all encrypted filesystems
and no other luks devices are configured. This is because luks support is only
enabled in the initrd when boot.initrd.luks.devices has entries. When a
filesystem has a keyfile configured though, it is set up by a custom
command, not by boot.initrd.luks.
This commit adds an internal config flag to enable luks support in the
initrd file, even if there are no luks devices configured.
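The previously failing kind of configuration looks roughly like this sketch; the option names follow the fileSystems.<name>.encrypted support and the device/key paths are placeholders:

```nix
{
  # All encrypted devices are unlocked via a keyfile, so
  # boot.initrd.luks.devices stays empty...
  fileSystems."/data" = {
    device = "/dev/mapper/data";
    encrypted = {
      enable = true;
      blkDev = "/dev/sda2";
      label = "data";
      keyFile = "/mnt-root/root/data.key";
    };
  };
  # ...which previously meant luks support was missing from the initrd.
}
```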
Since 67651d80bc the requests package now
depends on certifi, which in turn provides the CA root certificates that
we need to replace.
It might also be a good idea to actually patch certifi with our version
of cacert by default so that if we want to override and/or add something
we only need to do it once.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @fpletz, @k0ral, @FRidh
The enableSSL option has been deprecated in
a912a6a291, so we switch to using onlySSL.
I've also explicitly disabled enableACME, because this is the default
and we don't actually want to have ACME enabled for a host which runs an
actual ACME server.
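The resulting vhost definition looks roughly like this sketch; the host name and certificate paths are placeholders:

```nix
{
  services.nginx.virtualHosts."acme-v01.api.letsencrypt.org" = {
    onlySSL = true;      # replaces the deprecated enableSSL
    enableACME = false;  # explicit, even though it is the default
    sslCertificate = "/var/lib/boulder/snakeoil.pem";
    sslCertificateKey = "/var/lib/boulder/snakeoil.key";
  };
}
```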
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The test here is pretty basic and only tests nginx, but it should get us
started to write tests for different webservers and different ACME
implementations.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
These modules implement a way to test ACME based on a test instance of
Letsencrypt's Boulder service. The service implementation is in
letsencrypt.nix and the second module (resolver.nix) is a support-module
for the former, but can also be used for tests not involving ACME.
The second module provides a DNS server which hosts a root zone
containing all the zones and /etc/hosts entries (except loopback) in the
entire test network, so this can be very useful for other modules that
need DNS resolution.
Originally, I wrote these modules for the Headcounter deployment, but
I've refactored them a bit to be generally useful to NixOS users. The
original implementation can be found here:
https://github.com/headcounter/deployment/tree/89e7feafb/modules/testing
Quoting parts from the commit message of the initial implementation of
the Letsencrypt module in headcounter/deployment@95dfb31110:
This module is going to be used for tests where we need to
impersonate an ACME service such as the one from Letsencrypt within
VM tests, which is the reason why this module is a bit ugly (I only
care if it's working not if it's beautiful).
While the module isn't used anywhere, it will serve as a pluggable
module for testing whether ACME works properly to fetch certificates
and also as a replacement for our snakeoil certificate generator.
Also quoting parts of the commit where I have refactored the same module
in headcounter/deployment@85fa481b34:
Now we have a fully pluggable module which automatically discovers
in which network it's used via the nodes attribute.
The test environment of Boulder used "dns-test-srv", which is a fake
DNS server that's resolving almost everything to 127.0.0.1. On our
setup this is not useful, so instead we're now running a local BIND
name server which has a fake root zone and uses the mentioned node
attribute to automatically discover other zones in the network of
machines and generate delegations from the root zone to the
respective zones with the primaryIPAddress of the node.
...
We want to use real letsencrypt.org FQDNs here, so we can't get away
with the snakeoil test certificates from the upstream project but
now roll our own.
This not only has the benefit that we can easily pass the snakeoil
certificate to other nodes, but we can (and do) also use it for an
nginx proxy that's now serving HTTPS for the Boulder web front end.
The Headcounter deployment tests simulate a production scenario
with real IPs and nameservers, so they don't need to rely on
networking.extraHosts. However, in this implementation we don't
necessarily want to do that, so I've added auto-discovery of
networking.extraHosts in the resolver module.
Another change here is that the letsencrypt module now falls back to
using a local resolver; the Headcounter implementation, on the other hand,
always required adding an extra test node that serves as a resolver.
I could have squashed both modules into the final ACME test, but that
would make it not very reusable, so that's the main reason why I put
these modules in tests/common.
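A hedged sketch of how a test might pull these modules in; the node names and exact test layout are illustrative:

```nix
# Inside a NixOS VM test definition under nixos/tests.
{
  nodes = {
    letsencrypt = { ... }: {
      imports = [ ./common/letsencrypt.nix ];
    };
    dnsserver = { ... }: {
      imports = [ ./common/resolver.nix ];
    };
  };
}
```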
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
10k staging builds are not yet finished on Hydra (mostly darwin),
but we now have 20k jobs rebuilding directly on master, so we would
never get to merge this way...
It doesn't look good when the initial admin user is named
"<hash>-gitolite-admin" and the key stored as
"<hash>-gitolite-admin.pub". Instead, make it simply "gitolite-admin"
and "gitolite-admin.pub".
* prometheus-collectd-exporter service: init module
Supports collectd's JSON protocol and, optionally,
its binary protocol.
* nixos/prometheus-collectd-exporter: submodule is not needed for collectdBinary
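A heavily hedged sketch of enabling the exporter; only collectdBinary itself is named above, so the full attribute paths and fields here are assumptions:

```nix
{
  services.prometheus.collectdExporter = {  # assumed attribute path
    enable = true;
    # Optional listener for collectd's binary network protocol.
    collectdBinary = {
      enable = true;
      listenAddress = "0.0.0.0";
      port = 25826;  # collectd's default network port
    };
  };
}
```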
There are currently two ways to build an OpenStack image. This just picks
the best of both, to keep only one!
- The image is resizable
- Cloud-init is enabled
- Password authentication is disabled by default
- Uses the same layer as the other image builders (ec2, gce...)
- sysctl is new and never succeeded on i686-linux
> cannot stat /proc/sys/net/core/bpf_jit_enable: No such file or directory
- testing plasma5 on i686 would defeat part of the reason why we ended
i686 support (lots of stuff being built on Hydra)
Update physlock to a more current version which supports PAM and
systemd-logind. Amongst others, this should work now with the slim
login manager without any additional configuration, because it does
not rely on the utmp mechanism anymore.
The section was strange to read, as the initial example already used
`listOf', which is mentioned in the very first paragraph. Then a later
subsection introduces `listOf' and gives the exact same example
once again.
This file was removed in 6f0b538044, but sufficient care was not taken
to remove all references to it. Without this change, trying to
rebuild nixos fails.