This commit splits the `buildPythonPackage` into multiple setup hooks.
Generally, Python packages are built from source to wheels using `setuptools`.
The wheels are then installed with `pip`. Tests were often called with
`python setup.py test` but this is less common nowadays. Most projects
now use a different entry point for running tests, typically `pytest`
or `nosetests`.
Since the wheel format was introduced, more tools have been built to generate
wheels, e.g. `flit`. Now that PEP 517, which defines a build-system-independent
format (`pyproject.toml`), has been provisionally accepted, `pip` can use that
format to invoke the correct build system.
In the past I've added support for PEP 517 (`pyproject`) to the Python
builder, resulting in a now rather large builder. Furthermore, it was not possible
to reuse components elsewhere. Therefore, the builder is now split into multiple
setup hooks.
The `setuptoolsCheckHook` is now included by default, but in time it should
be removed from `buildPythonPackage` to make it easier to use another hook
(currently one has to pass in `dontUseSetuptoolsCheck`).
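For illustration, a minimal sketch of opting out of the default check hook and
running `pytest` instead; the package name, hash and test invocation here are
hypothetical:
```nix
{ lib, buildPythonPackage, fetchPypi, pytest }:

buildPythonPackage rec {
  pname = "example";        # hypothetical package
  version = "1.0";
  src = fetchPypi {
    inherit pname version;
    sha256 = lib.fakeSha256;  # placeholder
  };

  # Skip the default setuptoolsCheckHook ...
  dontUseSetuptoolsCheck = true;

  # ... and run the test suite with pytest instead.
  checkInputs = [ pytest ];
  checkPhase = ''
    pytest
  '';
}
```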
This is a new package that provides a shell hook to make it easy to
declare manpages and shell completions in a manner that doesn't require
remembering where to actually install them. Basic usage looks like
{ stdenv, installShellFiles, ... }:
stdenv.mkDerivation {
  # ...
  nativeBuildInputs = [ installShellFiles ];
  postInstall = ''
    installManPage doc/foobar.1
    installShellCompletion --bash share/completions/foobar.bash
    installShellCompletion --fish share/completions/foobar.fish
    installShellCompletion --zsh share/completions/_foobar
  '';
  # ...
}
See source comments for more details on the functions.
This setup hook modifies a Perl script so that any "-I" flags in its shebang
line are rewritten into a "use lib ..." statement on the next line. This gets
around a limitation in Darwin, which will not properly handle a script whose
shebang line exceeds 511 characters.
This system type was previously broken but is now fixed.
Add it here to showcase the common task of launching a fully-fledged Android
system with an included app store.
New release available:
https://www.citrix.com/downloads/workspace-app/linux/workspace-app-for-linux-latest.html
Apart from the new version, the following things changed:
* Updated the docs as all notes about `citrix_receiver` also apply to
`citrix_workspace`. Also added a deprecation warning about the
upcoming removal.
* Removed the `libidn_134` override as neither `citrix_workspace_19_3_0`
nor `citrix_workspace_19_6_0` require this library anymore according
to `readelf -d ./result/opt/citrix-icaclient/wfica` (in contrast to
`citrix_receiver_13_10_0`).
* Added myself as maintainer as well.
Motivation: There is a thriving plugin ecosystem for Kakoune now,
and it is nice to add these to our Nix configurations. This was modeled
on neovim's plugins.
parinfer-rust is usable both standalone and as a Kakoune plugin,
so the plugin file inherits the same definition as pkgs.
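A rough usage sketch; the exact override interface is an assumption of this
example, modeled on how vim/neovim plugins are passed:
```nix
# Hypothetical: wrap Kakoune with plugins from pkgs.kakounePlugins,
# assuming an override that accepts a list of plugin derivations.
kakoune.override {
  plugins = with pkgs.kakounePlugins; [ parinfer-rust ];
}
```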
I'll make PRs for other plugins if this gets accepted.
[Here](https://github.com/eraserhd/nixpkgs/tree/kak-ansi)'s a tested
branch for the `kak-ansi` plugin.
* manual: rename to users and contributors manual, add some user notes that should be there but don't fit in any chapter
* manual: move the package notes that are completely usage-related to the upper user notes section
* manual: link to package-specific development notes from user notes
With remote builds, the sandbox can't be accessed by `cntr` as it is on
a different machine. I decided to put this into an extra `note` block as it
admittedly took me too much time to figure this out.
There was a bunch of stuff in the cross section that hadn't had any
attention in a while. I might need to slim it down later, but this is
good for now.
$(shell ...) looks a little sketchy, since it will be run no matter what.
There are also problems building the manual on Darwin, so hopefully this
fixes them.
The function buildGoModule builds Go programs managed with Go modules. It builds
a Go module through a two-phase build:
- An intermediate fetcher derivation. This derivation will be used to
fetch all of the dependencies of the Go module.
- A final derivation will use the output of the intermediate derivation
to build the binaries and produce the final output.
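A hedged sketch of what a package using it could look like; the attribute for
pinning the fetcher derivation is assumed to be `modSha256` here, and the
names and hashes are placeholders:
```nix
{ lib, buildGoModule, fetchFromGitHub }:

buildGoModule rec {
  pname = "example-tool";   # hypothetical package
  version = "0.1.0";

  src = fetchFromGitHub {
    owner = "example";
    repo = "example-tool";
    rev = "v${version}";
    sha256 = lib.fakeSha256;  # placeholder
  };

  # Pins the intermediate fetcher derivation that downloads the module dependencies.
  modSha256 = lib.fakeSha256;  # placeholder
}
```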
Older hardware in particular doesn't support AVX instructions. DLib is
still functional there, but significantly slower[1].
By setting `avxInstructions` to false, DLib will be compiled without
this feature.
[1] http://dlib.net/compile.html
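A sketch of how that could be used from an override (the flag name is taken
from the description above):
```nix
# Build dlib without AVX for older CPUs; slower, but still functional.
dlib.override {
  avxInstructions = false;
}
```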
Whenever we create scripts that are installed to $out, we must use runtimeShell
in order to get the shell that can be executed on the machine we create the
package for. This is relevant for cross-compiling. The only use case for
stdenv.shell is scripts that are executed as part of the build system.
Usages in checkPhase are borderline; however, to decrease the likelihood
of people copying the wrong examples, I decided to use runtimeShell as well.
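For illustration, a minimal sketch of a script installed to $out whose shebang
therefore has to use runtimeShell (the script name is hypothetical):
```nix
{ runtimeShell, writeScriptBin }:

# The shebang points at the shell of the platform the package is built for,
# not the shell of the build machine.
writeScriptBin "hello-runtime" ''
  #!${runtimeShell}
  echo "hello"
''
```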
The appimageTools attrset contains utilities for packaging AppImages without
resorting to appimage-run, as was done/attempted in #49370 and #53156.
This has the advantage of allowing for per-package environment changes,
and it extracts into the store instead of the user's home directory.
The package list was extracted into appimageTools to prevent
duplication.
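A hedged usage sketch, assuming a `wrapType2` helper with an `extraPkgs`
argument and `lib`/`fetchurl` in scope; the name, URL and hash are placeholders:
```nix
# Extracts the AppImage into the store instead of $HOME and allows
# per-package additions to the runtime environment via extraPkgs.
appimageTools.wrapType2 {
  name = "some-app";
  src = fetchurl {
    url = "https://example.org/SomeApp.AppImage";   # placeholder
    sha256 = lib.fakeSha256;                        # placeholder
  };
  extraPkgs = pkgs: [ pkgs.libsecret ];
}
```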
Since #53055 was merged the Makefile for the manual could not be run
correctly as the generated function documentation was included, but
not actually generated.
This adds the necessary generation step by first building the XML file
containing function locations and preserving its store path in a
variable, which is then used both for linking of the locations file
and as a build input for the function docs generator.
This fixes #55014
Comments on conflicts:
- llvm: d6f401e1 vs. 469ecc70 - docs for 6 and 7 say the default is
to build all targets, so we should be fine
- some pypi hashes: they were equivalent, just base16 vs. base32
This is useful when running tools like NixOps or nix-review
on workstations where the upload to the builder is significantly
slower than downloading the source on the builder itself.
We can't run the checkPhase when build != host, so we may as well make
the checkInputs native.
This significantly improves the situation of Python packages when enabling
strictDeps.
Currently the manual scales to the viewport of the browser.
This leads to an unreadable layout, and I found myself
reading the XML source instead.
The optimal width would be around 50 characters per line.
Since the manual also contains code listings, I relaxed
this limit a bit, towards 70 characters per line.
Modifies the build process of the manual to invoke nixdoc
automatically to generate XML files with function documentation.
Currently documentation is present for five of the files in `lib/`.
To add another file to the generated docs, both
`doc/functions/library.xml` and `doc/lib-function-docs.nix` must be
updated.
Since Intel's default openmp implementation is available in the same src
tarball, we can just include it in the package. This means that `mkl` now "just
works" without any environment variables, fragile setup-hooks, or forced
propagation.
Since the openmp implementation is only needed at runtime (and for test cases),
users can substitute a different one if they prefer by exporting it with
`LD_PRELOAD`, which is how Intel recommends handling this. If they do not do so,
`libiomp.so` lives next to `libmkl_rt.so` and thus will be in the RPATH as a
sane default.
Since this still comes from the same src tarball, we can ship it without losing
the fixed-output derivation; likewise, since Hydra is not building or caching
these, shipping these proprietary packages costs no bandwidth for the nix
community.
To make updating large attribute sets faster, the update scripts
are now run in parallel.
Please note the following changes in semantics:
- The string passed to updateScript needs to be a path to an executable file.
- The updateScript can also be a list: the tail elements will then be passed
to the head as command line arguments.
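For example (a sketch; the script path and arguments are hypothetical):
```nix
# The head of the list is the executable, the remaining elements are
# passed to it as command line arguments.
passthru.updateScript = [ ./update.sh "some-package" "--commit" ];
```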
Encourage putting container elements on their own lines to minimize
diffs and merge conflicts and to make re-ordering easier.
Nix doesn't suffer the restrictions of other languages where commas are
used to separate list items.
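For example, with hypothetical inputs, the encouraged style looks like:
```nix
buildInputs = [
  libfoo
  libbar
  libbaz
];
```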
First of all, this makes the existing documentation a bit more clear on
what autoPatchelfHook is all about, because after discussing with
@svanderburg - who wrote a similar implementation - the rationale about
autoPatchelfHook wasn't very clear in the documentation.
I also added the recent changes around being able to use autoPatchelf
manually and the new --no-recurse flag.
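A hedged sketch of the manual usage, assuming `autoPatchelf` accepts directory
arguments and the `--no-recurse` flag as described; the path is hypothetical:
```nix
stdenv.mkDerivation {
  # ...
  nativeBuildInputs = [ autoPatchelfHook ];

  # Disable the automatic fixup and patch only one directory manually,
  # without recursing into its subdirectories.
  dontAutoPatchelf = true;
  postFixup = ''
    autoPatchelf --no-recurse $out/lib
  '';
}
```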
Signed-off-by: aszlig <aszlig@nix.build>
It's incorrect (preferLocalBuild does not prevent uploading to binary
caches) and is not a stdenv attribute (it's already documented in the
Nix manual).
Python 3.4 will receive its final patch release in March 2019 and there won't
be any releases anymore after that, so also not during NixOS 2019.03.
Python 3.4 is not used anymore in Nixpkgs. In any case, migrating code from
3.4 to 3.5+ is trivial.
This commit renames the pythondaemon module to match its module name, github
name, and pypi name, which makes it easier to find and reference. In order to
avoid breaking any external users, I've left an alias with a deprecation warning.
Rationale
---------
Currently, tests are hard to discover. For instance, someone updating
`dovecot` might not notice that the interaction of `dovecot` with
`opensmtpd` is handled in the `opensmtpd.nix` test.
And even for someone updating `opensmtpd`, it requires manual work to go
check in `nixos/tests` whether there is actually a test, especially
given not so many packages in `nixpkgs` have tests and this is thus most
of the time useless.
Finally, for the reviewer, it is much easier to check that the “Tested
via one or more NixOS test(s)” box has been checked if the modified file
already includes the list of relevant tests.
Implementation
--------------
Currently, this commit only adds the metadata in the package. Each
element of the `meta.tests` attribute is a derivation that, when it
builds successfully, means the test has passed (i.e. following the same
convention as NixOS tests).
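A hedged sketch of what this could look like in a package expression; the
attribute layout and the `nixosTests` reference are assumptions of this example:
```nix
# In the dovecot (or opensmtpd) expression: point at the NixOS test that
# exercises the interaction, so it is discoverable from the package itself.
meta.tests = {
  opensmtpd = nixosTests.opensmtpd;
};
```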
Future Work
-----------
In the future, the tools could be made aware of this `meta.tests`
attribute, and for instance a `--with-tests` could be added to
`nix-build` so that it also builds all the tests. Or a `--without-tests`
to build without all the tests. @Profpatsch described such systems in
his NixCon talk.
Another thing that would help in the future would be the possibility to
reasonably easily have cross-derivation nix tests without the whole
NixOS VM stack. @7c6f434c already proposed such a system.
This RFC currently handles none of these concerns. Only the addition of
`meta.tests` as metadata to be used by maintainers to remember to run
relevant tests.
The `name` arg of `vim_configurable.customize` determines not only
the package name but also the name of the command/executable to be
called.
In my opinion this is not documented properly and finding that out took
me several hours.
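For example (a sketch; the chosen name ends up as both the derivation name and
the command you run):
```nix
# Installs a wrapper command called `my-vim`, not `vim`.
vim_configurable.customize {
  name = "my-vim";
  vimrcConfig.customRC = ''
    set number
  '';
}
```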
Allows for adding Perl libraries in the same way as for Python. Doesn't
really need to be a function, since there's only one perlPackages in
nixpkgs, but I went for consistency with the python plugin.
This touches up a handful of places in the python documentation to try to make
the current best-practices more obvious. In particular, I often find the
function signatures (what to pass, what not to pass) confusing and have added
them to the docs.
Also updated the metas to be more consistent with the most frequently used
modern style.
Hydra passes the full revision in to the input, which we pass through.
If we don't get this, we try to get it from other sources, or default to
master, which should have the definition in a close-ish location.
All published docs should have the URL resolve properly; only local
hackers will have the link break.
Create a many-layered Docker Image.
Implements much less than buildImage:
- Doesn't support specific uids/gids
- Doesn't support running commands after building
- Doesn't require qemu
- Doesn't create mutable copies of the files in the path
- Doesn't support parent images
If you want those features, I recommend using buildLayeredImage as an
input to buildImage.
Notably, it does support:
- Caching low level, common paths based on a graph traversal
algorithm, see referencesByPopularity in
0a80233487993256e811f566b1c80a40394c03d6
- Configurable number of layers. If you're not using AUFS or not
extending the image, you can specify a larger number of layers at
build time:
    pkgs.dockerTools.buildLayeredImage {
      name = "hello";
      maxLayers = 128;
      config.Cmd = [ "${pkgs.gitFull}/bin/git" ];
    };
- Parallelized creation of the layers, improving build speed.
- The contents of the image includes the closure of the configuration,
so you don't have to specify paths in contents and config.
With buildImage, paths referred to by the config were not included
automatically in the image. Thus, if you wanted to call Git, you
had to specify it twice:
    pkgs.dockerTools.buildImage {
      name = "hello";
      contents = [ pkgs.gitFull ];
      config.Cmd = [ "${pkgs.gitFull}/bin/git" ];
    };
buildLayeredImage on the other hand includes the runtime closure of
the config when calculating the contents of the image:
    pkgs.dockerTools.buildLayeredImage {
      name = "hello";
      config.Cmd = [ "${pkgs.gitFull}/bin/git" ];
    };
Minor Problems
- If any of the store paths change, every layer will be rebuilt in
  the nix-build. However, because the layers are bit-for-bit
  reproducible, when these images are loaded into Docker they will
  match existing layers and not be imported or uploaded twice.
Common Questions
- Aren't Docker layers ordered?
No. People who have used a Dockerfile before assume Docker's
Layers are inherently ordered. However, this is not true -- Docker
layers are content-addressable and are not explicitly layered until
they are composed into an Image.
- What happens if I have more than maxLayers of store paths?
The first (maxLayers-2) most "popular" paths will have their own
individual layers, then layer #(maxLayers-1) will contain all the
remaining "unpopular" paths, and finally layer #(maxLayers) will
contain the Image configuration.
The `overrideScope` bound by `makeScope` (via special `callPackage`)
took an override in the form `super: self: { … }`. But this is
dangerously close to the `self: super: { … }` form used by *everything*
else, even other definitions of `overrideScope`! Since that
implementation did not even share any code either until I changed it
recently in 3cf43547f4, this inconsistency
is almost certainly an oversight and not intentional.
Unfortunately, just as the inconsistency is hard to debug if one just
assumes the conventional order, any sudden fix would break existing
overrides in the same hard-to-debug way. So instead of changing the
definition a new `overrideScope'` with the conventional order is added,
and old `overrideScope` deprecated with a warning saying to use
`overrideScope'` instead. That will hopefully get people to stop using
`overrideScope`, freeing our hand to change or remove it in the future.
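A minimal sketch of the conventional order against a throwaway scope (the
scope and package here are hypothetical, with `lib` and `pkgs` assumed in scope):
```nix
let
  myScope = lib.makeScope pkgs.newScope (self: {
    foo = self.callPackage ./foo.nix { };
  });

  # overrideScope' takes the conventional `self: super:` argument order.
  patched = myScope.overrideScope' (self: super: {
    foo = super.foo.overrideAttrs (old: { doCheck = false; });
  });
in
patched.foo
```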
For technical reasons, we cannot easily add a warning to top-level
definitions, so 2a6e4ae49a and
e51f736076 reverted the deprecation. But
we can still remove mention of the would-be deprecated definitions to
steer people towards using the preferred alternatives.
Because dates are an impurity, by default buildImage will use a static
date of one second past the UNIX Epoch. This can be a bit frustrating
when listing docker images in the CLI:
$ docker image list
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
hello        latest   08c791c7846e   48 years ago   25.2MB
If you want to trade the purity for a better user experience, you can
set created to now.
pkgs.dockerTools.buildImage {
  name = "hello";
  tag = "latest";
  created = "now";
  contents = pkgs.hello;
  config.Cmd = [ "/bin/hello" ];
}
and now the Docker CLI will display a reasonable date and sort the
images as expected:
$ docker image list
REPOSITORY   TAG      IMAGE ID       CREATED              SIZE
hello        latest   de2bf4786de6   About a minute ago   25.2MB
This package provides a completion input method for faster typing.
See https://mike-fabian.github.io/ibus-typing-booster
Detailed instructions on how to activate this IBus engine on your desktop
can be found in the upstream docs: https://mike-fabian.github.io/ibus-typing-booster/documentation.html
A simple VM with the Gnome3 desktop and `ibus' activated looks like
this:
```nix
{
  emojipicker = { pkgs, ... }: {
    services.xserver = {
      enable = true;
      desktopManager.gnome3.enable = true;
      desktopManager.xterm.enable = false;
    };
    users.extraUsers.vm = {
      password = "vm";
      isNormalUser = true;
    };
    i18n.inputMethod.ibus.engines = [
      pkgs.ibus-engines.typing-booster
    ];
    i18n.inputMethod.enabled = "ibus";
    virtualisation.memorySize = 2048;
  };
}
```
Fixes #38721
A new python script has been added to replace the aged viml-based
updater. The new updater has the following advantages:
- use rss feeds to check for updates quicker
- parallel downloads & better caching
- uses proper override mechanism instead of text substitution
- update generated files in-place instead of having to insert updated plugins manually
Automatically reading `dependencies` from the plugins directory has
not been re-implemented.
This has mostly been used by Mark Weber's plugins, which seem to
no longer receive regular updates.
This could be implemented in future as required.
This aims to make the `weechat` package even more configurable. It
allows specifying scripts and commands using the `configure` function
inside a `weechat.override` expression.
The package can be configured like this:
```
with import <nixpkgs> { };

weechat.override {
  configure = { availablePlugins, ... }: {
    plugins = builtins.attrValues availablePlugins;
    init = ''
      /set foo bar
      /server add freenode chat.freenode.org
    '';
    scripts = [ "/path/to/script.py" ];
  };
}
```
All commands are passed to `weechat --run-command "/set foo bar;/server ..."`.
The `plugins' attribute is not necessarily required anymore: if it's
sufficient to add `init' commands, `plugins' will default to
`builtins.attrValues availablePlugins'.
Additionally the result contains `weechat` and `weechat-headless`
(introduced in WeeChat 2.1) now.
I don't know when we can/should remove them, but this at least gets
people to stop using them. The preferred alternatives also date back to
17.09, so writing forward-compatible code without extra conditions is
easy.
Beginning with these as they are the least controversial.