Add new option declarations to control what information is published
by the avahi daemon. The default values favor the user's privacy over
the system's connectivity.
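For reference, a hedged sketch of how a user could opt back in to
publishing; the exact option names under services.avahi.publish are
assumptions and may differ from the ones added here:

  # Sketch only: opt in to announcing this host on the local network.
  services.avahi = {
    enable = true;
    publish = {
      enable = true;        # master switch for publishing anything at all
      addresses = true;     # announce this host's addresses
      userServices = true;  # announce services registered by local users
    };
  };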
The kernel default for `link_power_management_policy` is `"max_performance"`.
Commit f169f60575 set the NixOS default to `"min_power"`.
This issue (https://github.com/NixOS/nixpkgs/issues/11276) details my long
journey to discover this after several file system failures incorrectly
attributed to `TRIM` and `NCQ` settings.
I think we should use the kernel default of `"max_performance"` to ensure
the best experience for new users with SSDs and to conform to the defaults
of the kernel and other distros.
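Until that happens, users can pin the policy themselves; a hedged
sketch, assuming the option is still called powerManagement.scsiLinkPolicy:

  # Sketch only: match the kernel default explicitly; option name is an assumption.
  powerManagement.scsiLinkPolicy = "max_performance";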
The three KDE package sets now have circular dependencies between them,
so they can only be built if they are merged into a single package set
during evaluation.
- If xserver.tty and/or display are set to null, don't specify them, or
  the -logfile argument, in xserverArgs (see the sketch after this list).
- For lightdm, the default tty and display are set to null and are
  determined at runtime from the arguments passed. This is necessary
  because we run multiple X servers, so they can't all be on the same
  display.
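A hedged sketch of the configuration this enables, assuming the options
live at services.xserver.tty and services.xserver.display:

  # Sketch only: let the display manager pick tty and display at runtime.
  services.xserver.tty = null;
  services.xserver.display = null;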
This reverts commit 02b568414d.
With a5bc11f and 6353f58 in place, we really don't need this anymore.
After running about 500 VM tests on my Hydra, it still didn't improve
things very much.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
As @domenkozar noted in #10828, cache=writeback seems to do more harm
than good:
https://github.com/NixOS/nixpkgs/issues/10828#issuecomment-164426821
He has tested it using the openstack NixOS tests and found that
cache=none significantly improves startup performance.
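For illustration only, the drive flags built by the qemu-vm module now
end up looking roughly like the sketch below; the variable name and the
exact fields are assumptions:

  # Sketch only: the drive flag as the module might assemble it.
  driveFlag = "-drive file=$NIX_DISK_IMAGE,if=virtio,cache=none,werror=report";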
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This seems to be the root cause of the random page allocation failures,
and @wizeman did a very good job of not only finding the root problem
but also giving a detailed explanation of it in #10828.
Here is an excerpt:
The problem here is that the kernel is trying to allocate a contiguous
section of 2^7=128 pages, which is 512 KB. This is way too much:
kernel pages tend to get fragmented over time and kernel developers
often go to great lengths to try allocating at most only 1 contiguous
page at a time whenever they can.
From the error message, it looks like the culprit is unionfs, but this
is misleading: unionfs is the name of the userspace process that was
running when the system ran out of memory, but it wasn't unionfs who
was allocating the memory: it was the kernel; specifically it was the
v9fs_dir_readdir_dotl() function, which is the code for handling the
readdir() function in the 9p filesystem (the filesystem that is used
to share a directory structure between a qemu host and its VM).
If you look at the code, here's what it's doing at the moment it tries
to allocate memory:
buflen = fid->clnt->msize - P9_IOHDRSZ;
rdir = v9fs_alloc_rdir_buf(file, buflen);
If you look into v9fs_alloc_rdir_buf(), you will see that it will try
to allocate a contiguous buffer of memory (using kzalloc(), which is a
wrapper around kmalloc()) of size buflen + 8 bytes or so.
So in reality, this code actually allocates a buffer of size
proportional to fid->clnt->msize. What is this msize? If you follow
the definition of the structures, you will see that it's the
negotiated buffer transfer size between 9p client and 9p server. On
the client side, it can be controlled with the msize mount option.
What this all means is that, the reason for running out of memory is
that the code (which we can't easily change) tries to allocate a
contiguous buffer of size more or less equal to "negotiated 9p
protocol buffer size", which seems to be way too big (in our NixOS
tests, at least).
After that initial finding, @lethalman tested the gnome3 gdm test
without setting the msize parameter at all and it seems to have resolved
the problem.
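For reference, a hedged sketch of what an explicit msize on a 9p mount
would look like in a NixOS fileSystems entry; the value is only
illustrative and the options syntax may differ by NixOS version:

  # Sketch only: a 9p mount with an explicit msize; the value is illustrative.
  fileSystems."/nix/store" = {
    device = "store";
    fsType = "9p";
    options = [ "trans=virtio" "version=9p2000.L" "msize=262144" ];
  };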
The reason why I'm committing this without testing against all of the
NixOS VM tests is basically that I think we can only get better, not
worse, than the current state.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
- Add new service for `clamd`, the ClamAV daemon.
- Replace the old upstart "jobs" section with systemd.services
- Remove unnecessary config options.
- Use `mkEnableOption`
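A hedged usage sketch, assuming the module lands under services.clamav
(option names are assumptions):

  # Sketch only: enable the ClamAV daemon and the database updater.
  services.clamav = {
    daemon.enable = true;
    updater.enable = true;
  };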
We hit page allocation failures a lot, at random, in VM tests; in the
case of my own Hydra it's mostly the installer tests. The reason for
this is that once the memory of the VM gets heavily fragmented, the
kernel is unable to allocate new contiguous pages.
Setting vm.min_free_kbytes to 16384 (16 MB) forces the kernel to keep
a minimum of 16 MB of memory free.
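In NixOS terms this boils down to a single sysctl, roughly as in this
sketch (where exactly it is set for the VM tests is not shown here):

  # Sketch only: keep at least 16 MB (16384 kB) of memory free at all times.
  boot.kernel.sysctl."vm.min_free_kbytes" = 16384;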
I've done some testing across repeated runs of the installer tests with
and without vm.min_free_kbytes set. Across 30 test runs for each
setting, all of the runs with the option set passed, while 14 runs
without that sysctl option triggered page allocation failures.
Sure, running 30 tests is not a guarantee that 16 MB is enough, but
we'll see how it turns out in the long run across all VM tests.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Adds a new service module for shairport-sync. Tested with a local and
a remote PulseAudio server. The service needs to run as a user in the
pulse group to access PulseAudio.
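A hedged sketch of enabling it (the option names are assumptions):

  # Sketch only: enable shairport-sync alongside PulseAudio.
  services.shairport-sync.enable = true;
  hardware.pulseaudio.enable = true;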
These options allow setting the start and stop scripts for the display
manager. Making them configurable is necessary to support some hardware
configurations. Upstream ships empty scripts by default anyway.
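A hedged sketch, assuming the options end up as setupScript/stopScript
on the sddm module; both the module and the option names here are
assumptions, shown only to illustrate the idea:

  services.xserver.displayManager.sddm.setupScript = ''
    # runs before the greeter starts, e.g. to reconfigure outputs
    xrandr --auto
  '';
  services.xserver.displayManager.sddm.stopScript = ''
    # runs when the display manager stops the X server
  '';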
This new service invokes `simp_le` for a defined set of certs on a regular
basis via a systemd timer. `simp_le` is smart enough to handle account
registration, domain validation and renewal on its own. The only thing
required is an existing HTTP server that serves the path
`/.well-known/acme-challenge` from the directory given by the cert's
webroot parameter.
Example:
  services.simp_le.certs."foo.example.com" = {
    webroot = "/var/www/challenges";
    extraDomains = [ "www.example.com" ];
    email = "foo@example.com";
    validMin = 2592000;
    renewInterval = "weekly";
  };
Example Nginx vhost:
  services.nginx.appendConfig = ''
    http {
      server {
        server_name _;
        listen 80;
        listen [::]:80;
        location /.well-known/acme-challenge {
          root /var/www/challenges;
        }
        location / {
          return 301 https://$host$request_uri;
        }
      }
    }
  '';