All options within geoclue.conf[0] have been made configurable.
Additionally, we can now specify whether or not GeoClue
should ask the agent to authorize an application like so:
```
services.geoclue2.appConfig."redshift" = {
  isAllowed = true;
  isSystem = true;
};
```
[0]: https://gitlab.freedesktop.org/geoclue/geoclue/blob/2.5.2/data/geoclue.conf.in
Co-authored-by: worldofpeace <worldofpeace@protonmail.ch>
This was a testing oversight that came from #61009 -- I forgot to test
the new traceFormat option with older server versions while I was
working on FDB 6.1.
Since trace_format is only available in 6.1+, emitting it
unconditionally caused older versions of the database to fail to start,
reporting an error. We simply gate it behind a version check instead,
and assert the format is always XML on older versions. This avoids the
case where the user has an old version, changes traceFormat willingly,
and then is confused by why it didn't work.
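A rough sketch of the shape of that gate, assuming the module exposes the server package and the trace format as `cfg.package` and `cfg.traceFormat` (names here are illustrative, not necessarily the module's actual ones):
```
# only emit trace_format when the server is new enough to understand it
configLine = lib.optionalString (lib.versionAtLeast cfg.package.version "6.1")
  "trace_format = ${cfg.traceFormat}";

# refuse configurations that change the format on an old server
assertions = [
  { assertion = lib.versionOlder cfg.package.version "6.1" -> cfg.traceFormat == "xml";
    message = "trace_format is only configurable with FoundationDB 6.1 or newer";
  }
];
```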
As reported by @TimothyKlim in the comments on commit
c55b9236f0. See
c55b9236f0 (r33566132)
Signed-off-by: Austin Seipp <aseipp@pobox.com>
* Don't use `literalExample`; raw Nix values can be specified directly
as an option example, which also provides syntax highlighting in the
manual.
* Escape shell args for `extraOptions`: e.g. the `-n` option might be
problematic, as a longer notification command could otherwise be misinterpreted.
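A minimal sketch of the quoting fix, assuming `extraOptions` is a list of strings that ends up on the daemon's command line (the binary name is a placeholder):
```
serviceConfig.ExecStart =
  "${cfg.package}/bin/some-daemon ${lib.escapeShellArgs cfg.extraOptions}";
```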
The module installs `zmap` globally and links the config files to
`/etc/zmap`, the default location of config files for zmap.
The package provides sensible defaults; custom configs can
be created like this:
```
{ lib, ... }:
{
  environment.etc."zmap/blacklist.conf" = lib.mkForce {
    text = ''
      # custom zmap blacklist
      0.0.0.0/0
    '';
  };
}
```
Quite a bit of fixing was needed to get this to work.
Changes in VirtualBox and additions:
- VirtualBox is no longer officially supported on 32-bit hosts so i686-linux is removed from platforms
for VirtualBox and the extension pack. 32-bit additions still work.
- There was a refactoring of kernel module makefiles and two resulting bugs affected us which had to be patched.
These bugs were reported to the bug tracker (see comments near patches).
- The Qt5X11Extras makefile patch broke. Fixed it to apply again, making the libraries logic simpler
and more correct (it just uses a different base path instead of always linking to Qt5X11Extras).
- Added a patch to remove "test1" and "test2" kernel messages due to forgotten debugging code.
- virtualbox-host NixOS module: the VirtualBoxVM executable should be setuid, not VirtualBox.
This matches how the official installer sets it up.
- Additions: replaced a for loop for installing kernel modules with just a "make install",
which seems to work without any of the things done in the previous code.
- Additions: The package defined buildCommand, which meant the standard phases did not run
(including RUNPATH stripping in fixupPhase), and the installPhase it defined was never
executed. Fixed this by refactoring to use phases. Had to set dontStrip, otherwise
binaries were broken by stripping. The libdbus path has to be added later, in fixupPhase,
because it is loaded via dlopen rather than linked directly.
- Additions: Added zlib and libc to patchelf, otherwise runtime library errors result from some binaries.
For some reason the missing libc only manifested itself for mount.vboxsf when included in the initrd.
Changes in nixos/tests/virtualbox:
- Update the simple-gui test to send the right keys to start the VM. With VirtualBox 5
it was enough to just send "return", but with 6 the Tools thing may be selected by
default. Send "home" to reliably select Tools, "down" to move to the VM and "return"
to start it.
- Disable the VirtualBox UART by default because it causes a crash due to a regression
in VirtualBox (specific to software virtualization and serial port usage). It can
still be enabled using an option but there is an assert that KVM nested virtualization
is enabled, which works around the problem (see below).
- Add an option to enable nested KVM virtualization, allowing VirtualBox to use hardware
virtualization. This works around the UART problem and also allows using 64-bit
guests, but requires a kernel module parameter.
- Add an option to run 64-bit guests. Tested that the tests pass with that. As mentioned
this requires KVM nested virtualization.
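For reference, enabling nested KVM on the host generally boils down to a kernel module parameter along the following lines (sketch for an Intel host; the test option added here may wire this up differently):
```
# allow KVM guests (such as the test VM) to use hardware virtualization themselves
boot.extraModprobeConfig = "options kvm_intel nested=1";
```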
Currently, this uses the somewhat crude method of setting LD_PRELOAD in the
system environment. This works, but should be considered a stepping stone to
a more robust solution.
following up #59148
I forgot the default case for architectures which have no lesser siblings
whose code they can run ("westmere" or any of AMD's).
I was pointed towards a small syntax error in the `nixpkgs.overlays`
documentation. There was a trailing semicolon after the overlay
function.
I also aligned the code a bit better so opening and closing brackets can
be visually matched much better (IMO).
https://humdi.net/vnstat/CHANGES
* enable tests
* add hardening options from upstream's
example service
* fix "documentation" setting in service:
either needs to be `unitConfig.Documentation`
(uppercase) or lowercase but not within unitConfig.
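That is, either of the following works, while a lowercase `documentation` key inside `unitConfig` does not (the man page reference here is just an example):
```
# generic passthrough into the [Unit] section (uppercase key)
systemd.services.vnstat.unitConfig.Documentation = "man:vnstatd(8)";

# or the typed NixOS option (lowercase, outside unitConfig)
systemd.services.vnstat.documentation = [ "man:vnstatd(8)" ];
```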
Previously, if you, for example, set
services.xserver.displayManager.sddm.enable, but forgot to set
services.xserver.enable, you would get an error message that looked like
this:
error: attribute 'display-manager' missing
That was not particularly helpful.
Using assertions, we can make this message much better.
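Roughly along these lines (a sketch of the idea; the actual message wording may differ):
```
assertions = [
  { assertion = config.services.xserver.displayManager.sddm.enable
      -> config.services.xserver.enable;
    message = "SDDM requires services.xserver.enable to be true";
  }
];
```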
The type of ZNC's config option specifies that a configuration like
config.User.paul = null;
should be valid, which is useful for clearing/disabling property sets
like Users and Networks. However until now the config generator
implementation didn't actually cover null values, meaning you'd get an
error like
error: value is null while a set was expected, at /foo.nix:29:10
This fixes the implementation to correctly allow clearing of property
sets.
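As a hedged usage sketch of what this enables (the option path, user, and network names are assumed for illustration):
```
services.znc.config = {
  User.paul = null;                     # drop the "paul" user block entirely
  User.alice.Network.freenode = null;   # drop just one network of another user
};
```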
The kubeconfig provided to the kubernetes-control-plane-online.service
is invalid. However, the apiserver /healthz endpoint can be accessed without auth, so it's
simpler to just use curl for that.
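Something along these lines (port and TLS flags here are illustrative, not necessarily what the unit ends up using):
```
# probe the apiserver health endpoint directly; /healthz needs no credentials
ExecStart = "${pkgs.curl}/bin/curl -sSf -k https://localhost:6443/healthz";
```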
The two directories KDB and PTree do not exist before the SKS DB is
built for the first time. If /var/db/sks is empty and the module is
enabled via "services.sks.enable = true;" the following error will
occur:
...-unit-script-sks-db-pre-start[xxx]:
ln: failed to create symbolic link 'KDB/DB_CONFIG': No such file or directory
To avoid this, both links have to be created after the DB is built.
Note: Creating the directories manually might be better but the initial
build might be skipped as a result:
unit-script-sks-db-pre-start[xxxxx]: KeyDB directory already exists. Exiting.
unit-script-sks-db-pre-start[xxxxx]: PTree directory already exists. Exiting.
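In other words, the pre-start logic should roughly take this shape (a sketch only; `dbConfig` stands in for the actual DB_CONFIG file):
```
preStart = ''
  # build KDB and PTree first, then place the DB_CONFIG symlinks inside them
  ${pkgs.sks}/bin/sks build
  ln -sf ${dbConfig} KDB/DB_CONFIG
  ln -sf ${dbConfig} PTree/DB_CONFIG
'';
```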
This change was only a temporary workaround and isn't required anymore,
since /etc/systemd/system/system.slice should not be present on any
recent NixOS system (which makes this change a no-op).
This reverts commit 7098b0fcdf.
This change will load all configuration files from /etc, to make it easy
to override them, but fall back to /nix/store/.../etc/sway/config to make
Sway work out of the box with the default configuration on non-NixOS
systems.
Unfortunately the changes in ab5dcc7068
introduced a typo (took me a while to spot that...) that broke the
whole module (or at least the sks-db systemd unit).
The systemd unit was failing with the following error message:
...-unit-script-sks-db-pre-start[xxx]: KDB/DB_CONFIG exists but is not a symlink.
The build error was introduced by 56dcc319cf.
Using a <simplesect/> within a <para/> is not allowed and subsequently
fails to validate while building the manual.
So instead, I moved the <simplesect/> further down and outside of the
<para/> to fix this.
Signed-off-by: aszlig <aszlig@nix.build>
Cc: @aaronjanse, @Lassulus, @danbst
The default config of i3 provides a key binding to reload, so changes
take effect immediately:
```
bindsym $mod+Shift+c reload
```
Unfortunately the current module uses the store path of the `configFile`
option directly. So when I change the config in NixOS, a new store path is
created, but the running i3 process continues to use the old one, so
currently a restart of i3 is required.
This change links the config to `/etc/i3/config` and alters the X
startup script accordingly so after each rebuild, the config can be
reloaded.
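A sketch of the /etc wiring (the exact plumbing inside the module may differ):
```
# expose the generated config at a stable path so a running i3 can reload it
environment.etc."i3/config".source = cfg.configFile;
```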
This allows configuring IP addresses on a tinc interface using
networking.interfaces."tinc.${n}".ipv[46].addresses.
Previously, this would fail with timeouts, because of the dependency
chain
tinc.${netname}.service
--after--> network.target
--after--> network-addresses-tinc.${n}.service (and network-link-…)
--after--> sys-subsystem-net-devices-tinc.${n}.device
But the network interface doesn't exist until tinc creates it! So
systemd waits in vain for the interface to appear, and by then the
network-addresses-* and network-link-* units have failed. This leads
to the network link not being brought up and the network addresses not
being assigned, which in turn stops tinc from actually working.
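For example, a configuration like the following now works (network name and addresses are illustrative):
```
networking.interfaces."tinc.mynet".ipv4.addresses = [
  { address = "10.1.2.3"; prefixLength = 24; }
];
```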
Cross-compilation of `btrfs-tools` is broken, and this usually-needless dependency of every system closure on `btrfs-tools` prevents cross-compilation of whole system closures.
Ideally, private keys never leave the host they're generated on - like
SSH. Setting generatePrivateKeyFile to true causes the private key to be
generated automatically.
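Assuming this refers to the WireGuard interface options, usage looks roughly like this (interface name and key path are illustrative):
```
networking.wireguard.interfaces.wg0 = {
  generatePrivateKeyFile = true;
  privateKeyFile = "/var/lib/wireguard/wg0.key";
};
```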
Some ACME clients do not generate full.pem, which is simply fullchain.pem
plus the certificate key (key.pem), and full.pem is not necessary for
verifying OCSP staples anyway.
I have a nixops network where I deploy containers using the `container`
backend which uses `nixos-container` internally to deploy several
containers to a certain host.
During that time I removed containers and added new ones, and while trying
to deploy those to a different host I realized that it isn't guaranteed
that each container gets the same IP address, which is a problem as some
parts of the deployment need to know which container is using which IP
(e.g. to configure port forwarding on the host).
With this change you can specify the container's IP like this (and don't
have to use the arbitrarily chosen 10.233.0.0/16 subnet):
```
$ nixos-container create test --config-file test-container.nix \
    --local-address 10.235.1.2 --host-address 10.235.1.1
```
This is an implementation of wireguard support using wg-quick config
generation.
This seems preferable to the existing wireguard support because
it handles many more routing and resolvconf edge cases than the
current wireguard support.
It also includes work-arounds to make key files work.
This has one quirk:
We need to set reverse path checking in the firewall to false because
it interferes with the way wg-quick sets up its routing.
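In NixOS option terms that corresponds to something like the following (a sketch, assuming the standard firewall option is what's meant):
```
# relax reverse path filtering so wg-quick's policy routing isn't dropped
networking.firewall.checkReversePath = false;
```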
This is to make sure that we get different ETag values whenever we
switch to a different store path that has the same file contents.
I've checked this against the old behaviour without the patch and it
fails as expected.
Signed-off-by: aszlig <aszlig@nix.build>
Copy-pasted from iso-image.nix.
Besides the simplification, it should use `pkgs.buildPackages.squashfsTools`, because it is used in `nativeBuildInputs`, instead of the incorrect `pkgs.squashfsTools`, which was forced by `import`.