This adds -frandom-seed to each compiler invocation in stdenv. The
objective here is to make the compiler invocations produce the same output
every time they are called (for the same derivation). When the
-frandom-seed option is not set the compiler will use a combination of
random numbers (in GCC's case from /dev/urandom) and the current time to
produce a "random" input per file. This can (among other things) lead to
different ordering of symbols in the produced object files.
For reasons of reproducibility, we prefer having the same derivation
produce the exact same outputs. This is not a silver bullet but one way
to tame the compiler.
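For illustration, a hedged sketch of pinning a stable seed by hand on a single
derivation (the stdenv change does this automatically in the cc wrapper; the
attribute values here are just examples):

    stdenv.mkDerivation {
      pname = "example";
      version = "1.0";
      src = ./.;
      # Any value that is stable across rebuilds of this derivation works as the seed.
      NIX_CFLAGS_COMPILE = "-frandom-seed=example-1.0";
    }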
I made a mistaken merge. Reverting it in c778945806 fixed up the state
on master, but now I realize it crippled the git merge mechanism.
As the merge contained a mix of commits from `master..staging-next`
and other commits from `staging-next..staging`, it got the
`staging-next` branch into a state that was difficult to recover from.
I reconstructed the "desired" state of the staging-next tree by:
- checking out the last commit of the problematic range: 4effe769e2
- `git rebase -i --preserve-merges a8a018ddc0` - dropping the mistaken
merge commit and its revert from that range (while keeping
reapplication from 4effe769e2)
- merging the last unaffected staging-next commit (803ca85c20)
- fortunately no other commits have been pushed to staging-next yet
- applying a diff on staging-next to get it into that state
Teach installShellCompletion how to install completions from a named
pipe. Also add a convenience flag `--cmd NAME` that synthesizes the name
for each completion instead of requiring repeated `--name` flags.
Usage looks something like:
    installShellCompletion --cmd foobar \
      --bash <($out/bin/foobar --bash-completion) \
      --fish <($out/bin/foobar --fish-completion) \
      --zsh <($out/bin/foobar --zsh-completion)
Fixes #83284
This hook moves systemd user service files from `lib/systemd/user` to
`share/systemd/user`. This is to allow systemd to find the user
services when installed into a user profile. The `lib/systemd/user`
path does not work since `lib` is not in `XDG_DATA_DIRS`.
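A hedged usage sketch (the hook attribute name below is an assumption, not
taken from this commit):

    stdenv.mkDerivation {
      pname = "my-user-service";
      version = "1.0";
      src = ./.;
      # Assumed hook name; it relocates lib/systemd/user to share/systemd/user.
      nativeBuildInputs = [ moveSystemdUserUnitsHook ];
    }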
Revert the hack and the original faulty commit.
This reverts commit 48264ee506.
Revert "Purity checking should accept $TMP and not just /tmp"
This reverts commit fb777be7d2.
This adds an option to skip adding --copt and --linkopt to Bazel
flags. In some cases, Bazel doesn’t like these flags, like when a
custom toolchain is being used (as opposed to the builtin one). The
compiler can still get the dependencies since it is invoked through the
gcc wrapper and picks up NIX_CFLAGS_COMPILE / NIX_LDFLAGS.
This enables short argument attrsets similar to fetchPypi:
    src = fetchCrate {
      inherit pname version;
      sha256 = "02h8pikmk19ziqw9jgxxf7kjhnb3792vz9is446p1xfvlh4mzmyx";
    };
Before this commit, the firmware information would be loaded from the
currently running kernel, not from the kernel to be loaded.
This commit ensures the correct kernel version and modules are read.
After a recent upgrade of modinfo, its output is now incorrect for
builtin modules. This commit filters out the output until a fix is made
available upstream.
symlinkJoin can break (silently) when the passed paths contain symlinks
to directories. This should work now.
Down-side: when lib/tmpfiles.d doesn't exist for some passed package,
the error message is a little less explicit, because we never get
to the postBuild phase (and symlinkJoin doesn't provide a better way):
    /nix/store/HASH-NAME/lib/tmpfiles.d: No such file or directory
Also, it seemed pointless to create symlinks for whole package trees
and use only a part of the result (usually a very small part).
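A minimal sketch of the affected pattern, assuming placeholder packages pkgA
and pkgB whose lib/tmpfiles.d may itself be a symlinked directory:

    symlinkJoin {
      name = "tmpfiles-bundle";
      paths = [ pkgA pkgB ];
      # symlinkJoin used to break silently when lib/tmpfiles.d was a symlinked directory.
      postBuild = ''
        ls $out/lib/tmpfiles.d
      '';
    }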
* writers.makeScriptWriter: fix on Darwin/macOS
On Darwin a script cannot be used as an interpreter in a shebang line, which
causes scripts produced with makeScriptWriter (and its derivatives) to fail at
run time if the used interpreter was wrapped with makeWrapper (as in the case
of python3.withPackages).
This commit fixes the problem by detecting if the interpreter is a script
and prepending its shebang to the final interpreter line.
For example, if the used interpreter is:
```
/nix/store/ynwv137n2650qy39swcflxbcygk5jwv1-python3-3.8.3-env/bin/python
```
which is a script with the following shebang:
```
#! /nix/store/knd85yc7iwli8344ghav3zli8d9gril0-bash-4.4-p23/bin/bash -e
```
then the shebang line in the produced script will be
```
#! /nix/store/knd85yc7iwli8344ghav3zli8d9gril0-bash-4.4-p23/bin/bash -e /nix/store/ynwv137n2650qy39swcflxbcygk5jwv1-python3-3.8.3-env/bin/python
```
This works on Darwin since there does not seem to be a limit to the length
of the shebang line, and shebang lines support multiple arguments to
the interpreter (as opposed to Linux, where the kernel imposes a strict limit
on shebang length and everything following the interpreter is passed to it
as a single string).
fixes: #93609
related to: #65351, #11133 (and probably a bunch of others)
NOTE: scripts produced on platforms other than Darwin will remain unmodified
by this PR. However, it might be worth considering extending this fix to BSD systems
in general. I didn't do it since I have no way of testing it on systems other
than macOS and Linux.
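For reference, a hedged sketch of a writer call that hits this code path (the
interpreter produced by python3.withPackages is itself a wrapper script;
package names are illustrative):
```
writers.makeScriptWriter {
  interpreter = "${python3.withPackages (ps: [ ps.requests ])}/bin/python";
} "/bin/hello" ''
  print("hello")
''
```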
* writers.makeScriptWriter: fix typo in comment
* writers.makeScriptWriter: fail build if interpreter of interpreter is a script
"bazel fetch" will, by default, fetch everything that _might_ be used,
including things that will later be discarded due to the way the build
is configured.
Concretely, this means that for some builds of Java packages, this will
avoid failures where the builder tries to retrieve the JDK from /usr/share/java
(or equivalent).
This also means that for most packages we can fetch _fewer_ dependencies,
since the standard tree pruning for artifacts to fetch will take effect.
fetchConfigured is disabled by default since it changes the fetch hashes
of tensorflow/tensorflow2 (since it ends up fetching less).
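A hedged sketch of opting in, assuming buildBazelPackage's usual fetchAttrs
interface and placeholder values:

    buildBazelPackage {
      pname = "example";
      version = "0.1";
      src = ./.;
      bazelTarget = "//:example";
      fetchConfigured = true;
      # Opting in changes what gets fetched, so the fetch hash must be recomputed.
      fetchAttrs.sha256 = lib.fakeSha256;
    }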
The previous code discarded entire dependency trees if the first entry in
the dependency list compiled by `modprobe --show-depends` is a builtin, and
otherwise handled its output in a rather hackish way.
While the artifacts from `buildPhase` should be used for testing as
well, they should not be modified during `checkPhase`.
This can happen if a package is built e.g. with special
`cargoBuildFlags` that don't apply to the `checkPhase`. In that case, a
binary would be installed into `$out` without those flags since
`checkPhase` overwrites the binary in the `target` directory.
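A hedged sketch of the kind of derivation affected (flag values are just
examples):

    rustPlatform.buildRustPackage {
      pname = "example";
      version = "0.1.0";
      src = ./.;
      cargoSha256 = lib.fakeSha256;
      # Build-only flags: checkPhase used to rebuild and overwrite the binary without them.
      cargoBuildFlags = [ "--features" "cli" ];
    }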
This patch copies the state of `target/release` into a temporary
location at the end of the `buildPhase` and installs the results from
that temporary directory into `$out` while `checkPhase` can continue
using the configured build-dir.
cc #91689
Closes #93119
Closes #91191
The image tag can be specified or generated from the output hash.
Previously, a generated tag could be recovered from the evaluated
image with some string operations.
However, with the introduction of streamLayeredImage, it's not
feasible to compute the generated tag yourself.
With this change, the imageTag attribute is set unconditionally
for the buildImage, buildLayeredImage, and streamLayeredImage functions.
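A hedged sketch of reading the tag back, assuming the dockerTools interface
described above (package names are illustrative):

    let
      image = dockerTools.streamLayeredImage {
        name = "hello";
        contents = [ hello ];
      };
    in
      # The generated tag can now be read directly instead of being recomputed.
      image.imageTag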
We need to set FC so that CMake and other tools can find the Fortran
compiler. Also we need to limit the hardening flags, since fortify and
format don’t work with Fortran.
Fixes #88449
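For illustration, the equivalent manual settings on a single derivation (the
gfortran wrapper change makes this unnecessary; values are examples):

    stdenv.mkDerivation {
      pname = "fortran-example";
      version = "1.0";
      src = ./.;
      nativeBuildInputs = [ cmake gfortran ];
      # Point CMake and other tools at the Fortran compiler.
      FC = "gfortran";
      # fortify and format hardening do not work with Fortran.
      hardeningDisable = [ "fortify" "format" ];
    }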
When features were supplied in cargoBuildFlags, the binaries were built
with these features enabled. Unless checking was disabled, `cargo test`
was executed without these build flags, meaning the binaries were
rebuilt and overwritten without the specified features.
Fix this bug by running tests after the installation phase.
- New parameter `extraDesktopEntries` to easily add some less usual entries to the desktop file (usage sketch after this list)
- Rewrite of the core logic. Instead of a key-value-list, use an attribute set with nullable values to make it overridable
- Added some comments
- Some cosmetic/readability code refactors
- I didn't like the doubly nested strings around the `fileValidation`
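A hedged usage sketch of the new argument (entry keys and values are just
examples):

    makeDesktopItem {
      name = "example";
      exec = "example";
      desktopName = "Example";
      extraDesktopEntries = {
        StartupWMClass = "example";
        Keywords = "demo;sample;";
      };
    }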
This is supposed to set shareDocName to a fallback value if it can't be
determined from looking at the configure script. But the conditional
checked whether shareDocName was set, rather than if it wasn't. This
meant that if shareDocName had been detected from a configure script,
it would be immediately overridden by the package name, and if it
couldn't be detected, shareDocName would remain unset.
This resulted in QEMU installing files like $out/share/doc/index.html,
which should of course have been in $out/share/doc/qemu/index.html.
An interesting side effect of this is that, since
9f8751528c when this code was added, the
detected package name has never actually been used for installing
documentation, because it would always be overridden. So this patch
will actually enable that for the first time, four years later.
Fixes: https://github.com/NixOS/nixpkgs/issues/90486
Cargo sets `CARGO_FEATURE_*` for all features when running a build
script:
https://doc.rust-lang.org/cargo/reference/environment-variables.html#environment-variables-cargo-sets-for-build-scripts
Some crates have build scripts (e.g. openblas-src) that rely on the
feature variables being properly set.
Since we now need several representations of features, this change
also updates `createFeatures` to be a list of features, rather than
`rustc` feature arguments. `configureCrate` and `buildCrate` then
build the required representations as-needed.
Fixes #68978
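A hedged sketch of a crate build where this matters (crate and feature names
are illustrative):

    buildRustCrate {
      crateName = "openblas-src";
      version = "0.9.0";
      src = ./.;
      # The build script can now see CARGO_FEATURE_SYSTEM=1.
      features = [ "system" ];
    }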
When building an environment, if two paths conflict but one or both are
symbolic links and they resolve to the same real path, the conflict is
discarded because the contents of both paths are the same. One of them
is chosen, and there is no need to recurse into them in order to build
deeper symbolic links.
We can use cacert to validate that the data is fetched from a server with
valid SSL certificates. Normally, this doesn’t happen because we already have
the hash, but in the hash = "" case we don’t.
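A hedged sketch of the affected case (the URL is a placeholder):

    fetchurl {
      url = "https://example.org/source.tar.gz";
      # With an empty hash, the fetch is not content-checked, so SSL certificate
      # validation via cacert is what protects the download.
      hash = "";
    }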
xchg is advertised as a bidirectional exchange dir, but file content
transfer from host to VM fails due to caching:
If a file is read in the VM and then modified on the host, subsequent
re-reads in the VM can yield old, cached data.
This is caused by the use of 9p's cache=loose mode that is explicitly
meant for read-only mounts.
9p doesn't provide any suitable cache modes, so fix this by disabling
caching.
Also, remove a now unnecessary sync in the test driver.
This adds the `validatePkgConfig` hook, which can be used to validate
pkg-config files in the output(s). Currently, this will just run
`pkg-config --validate` on all `.pc` files, capturing errors such as
the issue that was fixed in #87789.
The hook could be extended in the future with more fine-grained
checks.
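A hedged usage sketch (the hook is added like any other setup hook):

    stdenv.mkDerivation {
      pname = "libexample";
      version = "1.0";
      src = ./.;
      # Runs `pkg-config --validate` on every installed .pc file.
      nativeBuildInputs = [ validatePkgConfig ];
    }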
In https://github.com/NixOS/nixpkgs/pull/58431 the authors ensured that
the resulting layer.tar would always list
    /nix/
    /nix/store/
first, to fully comply with the tar spec. Various refactorings later, it is
only ensured that /nix/ is created, but NOT /nix/store/ anymore. Instead, tar
transformed them into /nix/nix and /nix/nix/store.
This is much better because then we can freely keep the comments up to
date without causing mass rebuilds.
Someday, somebody should make the same change with `cc-wrapper` and
`bintools-wrapper`.
There are several tarballs (such as the `rust-lang/rust`-source) with a
`Cargo.toml` at root and several sub-packages (with their own Cargo.toml)
without using workspaces[1].
In such a case it's needed to move into a subdir to only build the
specified sub-package (e.g. `rustfmt` or `rls`); however, the artifacts
are at `/target` in the root-dir of the build environment. This breaks
the build since `buildRustPackage` searches for executables in `target`
(which is at the build-env's root) at the end of the `buildPhase`.
With the optional `buildAndTestSubdir`-argument, the builder moves into
the specified subdir using `pushd`/`popd` during `buildPhase` and
`checkPhase`.
Also moved the logic to find executables and libs to the end of the `buildPhase`
from a custom `postBuild`-hook to fix packages with custom `build`/`install`-procedures
such as `uutils-coreutils`.
[1] https://doc.rust-lang.org/book/ch14-03-cargo-workspaces.html
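A hedged sketch of the new argument (source and hash are placeholders):

    rustPlatform.buildRustPackage {
      pname = "rustfmt";
      version = "1.4.17";
      src = rustSource;            # placeholder for the rust-lang/rust tarball
      cargoSha256 = lib.fakeSha256;
      # Build and test only this sub-package; artifacts still land in ./target.
      buildAndTestSubdir = "src/tools/rustfmt";
    }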
I hate the thing too, even though I made it, and would rather just get rid of
it. But we can't do that yet. In the meantime, this brings us more
in line with autoconf and will make it slightly easier for me to write a
pkg-config wrapper, which we need.
Some PECLs depend on other PECLs and, like internal PHP extension
dependencies, need to be loaded in the correct order. This makes this
possible by adding the argument "peclDeps" to buildPecl, which adds
the extension to buildInputs and is treated the same way as
internalDeps when the extension config is generated.
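A hedged usage sketch (the extension and its dependency are placeholders for
the PECL-on-PECL case described above):

    buildPecl {
      pname = "example-pecl";
      version = "1.0.0";
      sha256 = lib.fakeSha256;
      # Loaded before this extension, like internalDeps, and added to buildInputs.
      peclDeps = [ php.extensions.igbinary ];
    }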
Flat hashes can be substituted through hashed-mirrors, while recursive
hashes can’t. This is especially important for Bazel since the bazel
fetch dependencies can come from multiple different methods (git,
http, ftp, etc.). To do this, we create tar archives from the
output/external directory, which are then extracted for the build. All of
the Bazel hashes are updated.
If a user provides `nativeBuildInputs = [ llvmPackages.bintools ]` or any other
package containing a `${prefix}/bin/diff`, the builder could use it instead
of the standard unix `diff`, causing a build failure.
This updates the call to specify an abspath to `diff` and avoid reliance on `PATH`.
Resolves #87081
Calculating the tarsum after creating a layer is inefficient, since
we have to read the tarball we've just written from the disk.
This commit simultaneously calculates the tarsum while creating the
tarball.
Appending to an existing tar archive repeatedly seems to be a quadratic
operation, since tar seems to traverse the existing archive even using
the `-r, --append` flag. This commit avoids that by passing the list of
files to a single tar invocation.
When running `cargo test --release`, the artifacts from `buildPhase`
will be reused here. Previously, most of the stuff had to be recompiled
without optimizations.
The only reason to pass build inputs is to extend the unpackPhase with
custom unpack commands. E.g. add "unrar" to unpack rar sources. And those
should really be passed as native build inputs. Why? Because
nativeBuildInputs is for dependencies that are used at build time but
will not propagate as runtime dependencies. And also, cross-compilation.
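A hedged illustration of that reasoning, using the generic builder's unpackCmd
mechanism (the archive and the tool are just examples):

    stdenv.mkDerivation {
      pname = "example";
      version = "1.0";
      src = ./example.rar;                # placeholder archive
      # Needed only at build time, to unpack the source.
      nativeBuildInputs = [ unrar ];
      unpackCmd = "unrar x $curSrc";
      installPhase = "cp -r . $out";
    }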
The build system already sets these properly to the absolute path so no
need to patch the libraries on Darwin.
    $ otool -D result/lib/liblapacke.dylib
    result/lib/liblapacke.dylib:
    /nix/store/k88gy5s765yn3dc5ws3jbykyvklm7z96-openblas-0.3.8/lib/libopenblasp-r0.3.8.dylib
Fixes #85713
Previously, callPackage would try to fill arguments such as `name`
and `src`, which would cause problems if those existed as top-level
attributes. This also makes it clearer what part is the function
signature.
Then document the derivation inline in the code to explain the ellipsis
and various use-cases.
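A hedged sketch of the pattern described above: the outer set is what
callPackage fills in, while the inner set (with the ellipsis) is the
user-facing signature carrying name, src, and friends:

    { stdenv }:
    # callPackage no longer tries to fill name or src; the caller provides them.
    { name, src, ... } @ args:
    stdenv.mkDerivation args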
This reverts commit b32a057425,
which breaks even the most straightforward uses of srcOnly:
nix-repl> srcOnly guile
error: anonymous function at /home/src/nixpkgs/pkgs/build-support/src-only/default.nix:1:1 called with unexpected argument 'drvPath', at /home/src/nixpkgs/lib/customisation.nix:69:16
nix-repl> srcOnly hello
error: anonymous function at /home/src/nixpkgs/pkgs/build-support/src-only/default.nix:1:1 called with unexpected argument 'drvPath', at /home/src/nixpkgs/lib/customisation.nix:69:16
Link: https://github.com/NixOS/nixpkgs/pull/80903#issuecomment-617172927
This is a better name since we have multiple 64-bit things that could
be referred to.
LP64 : integer=32, long=64, pointer=64
ILP64 : integer=64, long=64, pointer=64