Pull #89453 introduced a bug in the documentation that is preventing the
hydra build for nixpkgs-unstable from finishing. I have added the
additional option introduced in that patch (runVend for go modules) and
added the callout tag so that the documentation can build again.
In #89806 it has been reported that the final package is missing a lot
of features like support for the self-service GUI and the
config-management.
While working on supporting those components in the Nix-package, I
decided to refactor the package to simplify the entire setup.
This patch changes the following things:
* Binaries and libraries are patched using the `autoPatchelfHook` to
avoid having unneeded libraries linked (e.g. some programs use gtk2,
others use gtk3).
* Moved source-declarations into their own file.
* Wrapped `configmgr` and `selfservice` and added those to `$out/bin`.
* Don't mention the old `citrix_receiver` packages in the manual anymore,
  since those packages were removed in 19.09 and are EOL anyway.
Closes #89806
/build/doc/manual-full.xml:12764:35: error: ID "build-phase" has already been defined
/build/doc/manual-full.xml:9029:33: error: first occurrence of ID "build-phase"
We no longer need it for most use cases so I am making it experimental.
I have something in mind where it might be useful in the future (customizing commit messages)
but for now, it would only confuse people.
Instead of having the updateScript support returning a JSON object,
it should be sufficient to specify attrPath in passthru.updateScript.
It is much easier to use.
The former is now considered experimental.
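For illustration, a minimal sketch of the simpler form (reusing pname for the
attribute path here is just an example, mirroring the snippet below):

passthru.updateScript = {
  command = [ ../../update.sh pname ];
  attrPath = pname;
};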
Update scripts can now declare features using
passthru.updateScript = {
  command = [ ../../update.sh pname ];
  supportedFeatures = [ "commit" ];
};
A `commit` feature means that when the update script finishes successfully,
it will print a JSON list like the following:
[
  {
    "attrPath": "volume_key",
    "oldVersion": "0.3.11",
    "newVersion": "0.3.12",
    "files": [
      "/path/to/nixpkgs/pkgs/development/libraries/volume-key/default.nix"
    ]
  }
]
and the data from that output will be used when update.nix is run with
--argstr commit true to create commits.
We will create a new git worktree for each thread in the pool and run the update
script there. Then we will commit the change and cherry-pick it in the main repo,
releasing the worktree for the next change.
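As a hedged usage sketch, running it from a nixpkgs checkout might look like
this (the script path and the package argument are assumptions beyond what is
stated above):

nix-shell maintainers/scripts/update.nix --argstr package volume_key --argstr commit true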
This adds the `validatePkgConfig` hook, which can be used to validate
pkg-config files in the output(s). Currently, this will just run
`pkg-config --validate` on all `.pc` files, capturing errors such as
the issue that was fixed in #87789.
The hook could be extended in the future with more fine-grained
checks.
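A minimal sketch of how a package might opt in to the hook (the derivation
itself is hypothetical):

{ stdenv, validatePkgConfig, ... }:

stdenv.mkDerivation {
  # ...
  # Runs `pkg-config --validate` on all .pc files in the outputs.
  nativeBuildInputs = [ validatePkgConfig ];
  # ...
}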
Based on some feedback in #87094 and discussion with @fridh, this re-organizes
the onboarding tutorial in the Nixpkgs manual's python section, so that we start
with the simplest, most ad-hoc examples and work our way up. This progresses
from:
1. How to create a temporary python env at the command line, then
2. How to create a specific python env for a single script, then
3. How to create a specific python env for a project in a shell.nix, then
4. How to install a specific python env globally on the system or in a user profile.
Additionally, I've tried to standardize on some of the "best practice" ways of
doing things:
1. Instead of saying that this command style is "supported but strongly
   discouraged", I've just deleted it to avoid confusion.
Bad: nix-shell -p python38Packages.numpy python38Packages.toolz
Good: nix-shell -p 'python38.withPackages(ps: with ps; [ numpy toolz ])'
2. In the portion where we show how to add stuff to the user's
`XDG_CONFIG_HOME`, use overlays instead of `config.nix`. The former can do
everything the latter can do, but is also much more generic and powerful,
because it can compose with other files, compose with other envs, compose
with overlays that do things like swap whether tensorflow and pytorch are
built against the openblas/mkl/cuda stacks, and so on. The user is eventually going to
see the overlay, so to avoid confusion let's standardize on it.
An overlay by any other name would function just as well, but we generally use
`self: super:` for the regular overlays, and `python-self: python-super:` for the
Python-specific ones.
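For example, a hedged sketch of such an overlay (the toolz override is purely
illustrative, not part of the actual doc change):

self: super: {
  python38 = super.python38.override {
    # packageOverrides extends the Python package set inside python38
    packageOverrides = python-self: python-super: {
      toolz = python-super.toolz.overridePythonAttrs (old: {
        doCheck = false;
      });
    };
  };
}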
Since the introduction of php.unwrapped there's no real need for the
phpXXbase attributes, so let's remove them to lessen potential
confusion and clutter. Also update the docs to make it clear how to
get hold of an unwrapped PHP if needed.
Rework withExtensions / buildEnv to handle currently enabled
extensions better and make them compatible with override. They now
accept a function with the named arguments `enabled` and `all`, where
`enabled` is a list of currently enabled extensions and `all` is the set
of all extensions. This gives us several nice properties:
- You always get the right version of the list of currently enabled
extensions
- Invocations chain
- It works well with overridden PHP packages - you always get the
correct versions of extensions
As a contrived example of what's possible, you can add ImageMagick,
then override the version and disable fpm, then disable cgi, and
lastly remove the zip extension like this:
{ pkgs ? (import <nixpkgs>) {} }:
with pkgs;
let
  phpWithImagick = php74.withExtensions ({ all, enabled }: enabled ++ [ all.imagick ]);

  phpWithImagickWithoutFpm743 = phpWithImagick.override {
    version = "7.4.3";
    sha256 = "wVF7pJV4+y3MZMc6Ptx21PxQfEp6xjmYFYTMfTtMbRQ=";
    fpmSupport = false;
  };

  phpWithImagickWithoutFpmZip743 = phpWithImagickWithoutFpm743.withExtensions (
    { enabled, all }:
      lib.filter (e: e != all.zip) enabled);

  phpWithImagickWithoutFpmZipCgi743 = phpWithImagickWithoutFpmZip743.override {
    cgiSupport = false;
  };
in
  phpWithImagickWithoutFpmZipCgi743
Only very few people followed the strict policy in the last 5 years. The
maintainers accept backports without a stated reason when it's obvious, so I
updated the policy to reflect that.
* ghcHEAD: bump to 8.11.20200403
* ghcHead: reduce diff vs. 8.10.1
dontAddExtraLibs was removed by accident (IMO) in ea19a8ed1e
* ghcHEAD: add ability to use system libffi
- enable nixpkgs' libffi
- minimise diffs against 8.10.1
- remove patching
* remove configure warning about --with-curses-includes
configure: WARNING: unrecognized options: --with-curses-includes
This provides a means to build a PHP package based on a list of
extensions from another.
For example, to generate a package with all default extensions
enabled, except opcache, but with ImageMagick:
php.withExtensions (e:
  (lib.filter (e: e != php.extensions.opcache) php.enabledExtensions)
  ++ [ e.imagick ])
So now we have only packages for human interaction in php.packages and
only extensions in php.extensions. With this, php.packages.exts has been
merged into the same attribute set as all the other extensions to
make it flat and nice.
The nextcloud module has been updated to reflect this change, as has
the documentation.
- Use git.Repo(ROOT, search_parent_directories=True) to find nixpkgs
repo.
- Don't commit overrides.nix.
- Remove "-a" short argument.
- Remove "--commit" flag and commit by default.
- Improve help/error messages.
- Favor closure pattern over classes. Use a closure to wrap the update
  function with state rather than a callable class.
- Break the NixpkgsRepo class into functions.
- Optional None-type arguments
- Remove repo checks from update.py. Git is too flexible and permits too
many workflows for my attempt to replace documentation with code to work.
My goal would be to separate the `--add` functionality from the update
functionality in the near term and then there will be no reason for this
usage to create commits anyway.
This reverts commit 5e8545e723.
It breaks eval:
attribute 'rev' missing, at /var/lib/ofborg/checkout/repo/38dca4e3aa6bca43ea96d2fcc04e8229/mr-est/eval-0-gleber.ewr1.nix.ci/pkgs/top-level/make-tarball.nix:106:39
I didn't try to pinpoint the exact commit, but we started getting:
> The extension smart is not supported for docbook
Reading the pandoc docs, I can't see what use "smart" could be to us
when writing the in-between docbook (to be converted to html).
https://pandoc.org/MANUAL.html#extension-smart
Previously, we would assert that the lockfiles are consistent during the
unpackPhase, but if the pkg has a patch for the lockfile itself then we must
wait until the patchPhase is complete to check.
This also removes an implicit dependency on the src attribute coming from
`fetchzip` / `fetchFromGitHub`, which happens to name the source directory
"source". Now we glob for it, so different fetchers will work consistently.
This has several advantages:
1. It takes up less space on disk in-between builds in the nix store.
2. It uses less space in the binary cache for vendor derivation packages.
3. It uses less network traffic downloading from the binary cache.
4. It plays nicely with hashed mirrors like tarballs.nixos.org, which only
substitute --flat hashes on single files (not recursive directory hashes).
5. It's consistent with how simple `fetchurl` src derivations work.
6. It provides a stronger abstraction between input src-package and output
package, e.g., it's harder to accidentally depend on the src derivation at
runtime by referencing something like `${src}/etc/index.html`. Likewise, in
the store it's harder to get confused with something that is just there as a
build-time dependency vs. a runtime dependency, since the build-time
src dependencies are tarred up.
Disadvantages are:
1. It takes slightly longer to untar at the start of a build.
As currently implemented, this attaches the compacted vendor.tar.gz feature as a
rider on `verifyCargoDeps`, since both of them are relatively newly implemented
behavior that change the `cargoSha256`.
If this PR is accepted, I will push forward the remaining rust packages with a
series of treewide PRs to update the `cargoSha256`s.
No material changes to docs, but trying to sanitize them for consistent
readability prior to looking at #75837.
- Use `*` for lists instead of `-`. I have no opinion one way or the other, but
the latter was only used in 1-2 places.
- Pad the code blocks with whitespace.
- Wrap to 80 characters, except for a few 1-liners that were only slightly over.
When updating the section to python 3, some places still
referred to pythonPackages and were overlooked.
I decided to switch it to be more similar to the first
example binding pythonPackages and clarified comments a
bit based on confusion I observed on IRC.
Related to https://github.com/NixOS/nixpkgs/pull/77569
Updating section about imperative use of ad-hoc virtual-environments for
use of pythons built-in `venv` module via venvShellHook. Also trying to
make it a bit friendlier to beginners by adding a bit more explanation
to the code snippet and some remarks about old-school virtualenv.
Adjusting for venvShellHook and adding manual example
Adding pip install and replacing python2 example with python3
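A rough shell.nix sketch along those lines (the exact hook behaviour is
assumed from the venvShellHook machinery; requirements.txt is a hypothetical
project file):

{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  venvDir = "./.venv";
  buildInputs = with pkgs.python3Packages; [ python venvShellHook ];
  # Assumed to run after the venv is created and activated by the hook.
  postShellHook = ''
    pip install -r requirements.txt
  '';
}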
When modSha256 is null, disable the nix sandbox instead of using a
fixed-output derivation. This requires the nix-daemon to have
`sandbox = relaxed` set in its config to work properly.
Because the output is (hopefully) deterministic based on the inputs,
this should give a reproducible output. This is useful for development
outside of nixpkgs, where re-generating the modSha256 on each mod.sum
change is cumbersome.
Don't use this in nixpkgs! This is why null is not the default value.
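A hedged sketch of what that looks like for a local project (pname and src are
placeholders):

{ pkgs ? import <nixpkgs> {} }:

pkgs.buildGoModule {
  pname = "my-tool";
  version = "dev";
  src = ./.;
  # Disables the fixed-output vendoring; the nix-daemon needs
  # `sandbox = relaxed` for this. Do not use this in nixpkgs.
  modSha256 = null;
}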
We shouldn’t force the user to have a C compiler in scope, just
because the derivation is forced to build locally. That can’t be
counted as “lightweight” anymore.
Co-Authored-By: Silvan Mosberger <contact@infinisil.com>
This has the same motivation as fetchFromGitHub/fetchFromGitLab --
it's cheaper to download a tarball of a single revision than it is to
download a whole history.
I could have gone with domain/group/repo, like fetchFromGitLab, but it
would have made implementation more difficult, and this syntax means
it's a drop-in replacement for fetchgit, so I decided it wasn't worth
it.
The package set is not maintained. It is also not used by most of the
BEAM community. Removing it makes room for a more useful set of tools,
better suited to the BEAM community, in Nixpkgs.
In my opinion, Functions should only contain pure functions. These are
both meant to provide derivations, so I put them under Builders. I don't
know exactly *where* to put them, so "special" it is...
Reorganize the chapters into parts and reduce the TOC depth to make the
TOC useful again. The top-level TOC is very brief, but that is fine
because every part will have its own TOC.
Section titles of languages/frameworks are also simplified to just
the name of the language/framework.
@garbas and @seppeljordan, are these updates correct?
I removed `offlinehacker/pypi2nix` as an unmaintained ancestor of the current repo `nix-community/pypi2nix`. It appears @garbas forked `offlinehacker/pypi2nix` to `garbas/pypi2nix` and then handed off maintainership to @seppeljordan, transferring the repo to `nix-community/pypi2nix`.
One issue with cargoSha256 is that it's hard to detect whether it needs
to be updated. It's possible to upgrade a package, forget to update
cargoSha256, and end up running with old versions of the program or
libraries.
This commit introduces `verifyCargoDeps` which, when enabled, will check
that the Cargo.lock is not out of date in the cargoDeps by comparing it
with the package source.
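A hedged sketch of enabling the check in a Rust package (pname, src, and the
hash are placeholders):

{ rustPlatform, lib }:

rustPlatform.buildRustPackage rec {
  pname = "example";
  version = "1.0.0";
  src = ./.;
  cargoSha256 = lib.fakeSha256;
  # Fail the build if the Cargo.lock in cargoDeps no longer matches the one in src.
  verifyCargoDeps = true;
}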
This commit splits the `buildPythonPackage` into multiple setup hooks.
Generally, Python packages are built from source to wheels using `setuptools`.
The wheels are then installed with `pip`. Tests were often called with
`python setup.py test` but this is less common nowadays. Most projects
now use a different entry point for running tests, typically `pytest`
or `nosetests`.
Since the wheel format was introduced, more tools have been built to generate
wheels, e.g. `flit`. Now that PEP 517, which defines a build-system-independent
format (`pyproject.toml`), has been provisionally accepted, `pip` can use that
format to execute the correct build-system.
In the past I've added support for PEP 517 (`pyproject`) to the Python
builder, resulting in a now rather large builder. Furthermore, it was not possible
to reuse components elsewhere. Therefore, the builder is now split into multiple
setup hooks.
The `setuptoolsCheckHook` is now included by default, but in time it should
be removed from `buildPythonPackage` to make it easier to use another hook
(currently one has to pass in `dontUseSetuptoolsCheck`).
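As a hedged sketch, opting out of the default hook in favour of pytest might
look like this (pname and src are placeholders):

{ buildPythonPackage, pytest }:

buildPythonPackage rec {
  pname = "example";
  version = "1.0";
  src = ./.;
  checkInputs = [ pytest ];
  # Skip the default setuptoolsCheckHook and run pytest directly.
  dontUseSetuptoolsCheck = true;
  checkPhase = ''
    pytest
  '';
}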
This is a new package that provides a shell hook to make it easy to
declare manpages and shell completions in a manner that doesn't require
remembering where to actually install them. Basic usage looks like
{ stdenv, installShellFiles, ... }:

stdenv.mkDerivation {
  # ...
  nativeBuildInputs = [ installShellFiles ];
  postInstall = ''
    installManPage doc/foobar.1
    installShellCompletion --bash share/completions/foobar.bash
    installShellCompletion --fish share/completions/foobar.fish
    installShellCompletion --zsh share/completions/_foobar
  '';
  # ...
}
See source comments for more details on the functions.
This setup hook modifies a Perl script so that any "-I" flags in its shebang
line are rewritten into a "use lib ..." statement on the next line. This gets
around a limitation in Darwin, which will not properly handle a script whose
shebang line exceeds 511 characters.
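For illustration only (the store paths are made up), the rewrite described
above turns a shebang like

#!/nix/store/...-perl-5.30.0/bin/perl -I/nix/store/...-perl-libwww/lib/perl5/site_perl

into

#!/nix/store/...-perl-5.30.0/bin/perl
use lib "/nix/store/...-perl-libwww/lib/perl5/site_perl";

keeping the shebang itself short enough for Darwin.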
This system type was previously broken but is now fixed.
Add it here to showcase the common task of launching a fully-fledged Android
system with an included app store.
New release available:
https://www.citrix.com/downloads/workspace-app/linux/workspace-app-for-linux-latest.html
Apart from the new version the following things changed:
* Updated the docs as all notes about `citrix_receiver` also apply to
`citrix_workspace`. Also added a deprecation warning about the
upcoming removal.
* Removed the `libidn_134` override as neither `citrix_workspace_19_3_0`
nor `citrix_workspace_19_6_0` require this library anymore according
to `readelf -d ./result/opt/citrix-icaclient/wfica` (in contrast to
`citrix_receiver_13_10_0`).
* Added myself as maintainer as well.
Motivation: There is a thriving plugin ecosystem for Kakoune now,
and it is nice to add these in our Nix configurations. This was modeled
on neovim's plugins.
parinfer-rust is usable both standalone and as a Kakoune plugin,
so the plugin file inherits the same definition as pkgs.
I'll make PRs for other plugins if this gets accepted.
[Here](https://github.com/eraserhd/nixpkgs/tree/kak-ansi)'s a tested
branch for the `kak-ansi` plugin.
* manual: rename to users and contributors manual, add some user notes that should be there but don't fit in any chapter
* manual: move the package notes that are completely usage-related to the upper user notes section
* manual: link to package-specific development notes from user notes
With remote builds, the sandbox can't be accessed by `cntr` as it is on
a different machine. I decided to put this into an extra `note` block, as it
admittedly took me too much time to figure this out.
There was a bunch of stuff in the cross section that hadn't had any
attention in a while. I might need to slim it down later, but this is
good for now.
$(shell ...) looks a little sketchy, as if it will be run no matter what.
And there are problems building the manual on darwin so hopefully this
fixes them.
The function buildGoModule builds Go programs managed with Go modules. It builds
a Go module through a two-phase build:
- An intermediate fetcher derivation. This derivation will be used to
fetch all of the dependencies of the Go module.
- A final derivation will use the output of the intermediate derivation
to build the binaries and produce the final output.
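A hedged sketch of a package using it (all names and hashes are placeholders):

{ buildGoModule, fetchFromGitHub, lib }:

buildGoModule rec {
  pname = "example";
  version = "1.0.0";
  src = fetchFromGitHub {
    owner = "example";
    repo = "example";
    rev = "v${version}";
    sha256 = lib.fakeSha256;
  };
  # Hash of the intermediate derivation that fetches all module dependencies.
  modSha256 = lib.fakeSha256;
}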
Older hardware in particular doesn't support AVX instructions. DLib is
still functional there, but significantly slower[1].
By setting `avxInstructions` to false, DLib will be compiled without
this feature.
[1] http://dlib.net/compile.html
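A hedged one-liner for such systems, assuming the flag named above is exposed
as an override argument:

dlib.override { avxInstructions = false; }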
Whenever we create scripts that are installed to $out, we must use runtimeShell
in order to get the shell that can be executed on the machine we create the
package for. This is relevant for cross-compiling. The only use case for
stdenv.shell is scripts that are executed as part of the build system.
Usages in checkPhase are borderline; however, to decrease the likelihood
of people copying the wrong examples, I decided to use runtimeShell as well.
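A minimal sketch of the pattern for scripts that end up in $out (the script
itself is hypothetical):

{ runtimeShell, writeScriptBin }:

# The shebang must point at the shell of the host platform, not the build platform.
writeScriptBin "hello" ''
  #!${runtimeShell}
  echo hello
''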
The appimageTools attrset contains utilities to prevent
the usage of appimage-run to package AppImages, as was done/attempted
in #49370 and #53156.
This has the advantage of allowing for per-package environment changes,
and extracts into the store instead of the user's home directory.
The package list was extracted into appimageTools to prevent
duplication.
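A hedged sketch of packaging an AppImage this way (name, URL, and hash are
placeholders):

{ appimageTools, fetchurl, lib }:

appimageTools.wrapType2 {
  name = "example";
  src = fetchurl {
    url = "https://example.org/Example-x86_64.AppImage";
    sha256 = lib.fakeSha256;
  };
}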
Since #53055 was merged the Makefile for the manual could not be run
correctly as the generated function documentation was included, but
not actually generated.
This adds the necessary generation step by first building the XML file
containing function locations and preserving its store path in a
variable, which is then used both for linking of the locations file
and as a build input for the function docs generator.
This fixes #55014
Comments on conflicts:
- llvm: d6f401e1 vs. 469ecc70 - docs for 6 and 7 say the default is
to build all targets, so we should be fine
- some pypi hashes: they were equivalent, just base16 vs. base32
This is useful when running tools like NixOps or nix-review
on workstations where the upload to the builder is significantly
slower than downloading the source on the builder itself.
We can't run the checkPhase when build != host, so we may as well make
the checkInputs native.
This significantly improves the situation of Python packages when enabling
strictDeps.
Currently the manual scales to the viewport of the browser.
This leads to an unreadable layout and I found myself
reading the xml source instead.
The optimal width would be around 50 characters per line.
Since the manual also contains code listings, I relaxed
this limit a bit, towards 70 characters per line.
Modifies the build process of the manual to invoke nixdoc
automatically to generate XML files with function documentation.
Currently documentation is present for five of the files in `lib/`.
To add another file to the generated docs, both
`doc/functions/library.xml` and `doc/lib-function-docs.nix` must be
updated.
Since Intel's default openmp implementation is available in the same src
tarball, we can just include it in the package. This means that `mkl` now "just
works" without any environment variables, fragile setup-hooks, or forced
propagation.
Since the openmp implementation is only needed at runtime (and for test cases),
users can substitute a different one if they prefer by exporting it with
`LD_PRELOAD`, which is how Intel recommends handling this. If they do not do so,
`libiomp.so` lives next to `libmkl_rt.so` and thus will be in the RPATH as a
sane default.
Since this still comes from the same src tarball, we can ship it without losing
the fixed-output derivation; likewise, since Hydra is not building or caching
these, shipping these proprietary packages costs no bandwidth for the nix
community.