When separateDebugInfo = true & !stdenv.hostPlatform.isLinux, we
always gave an error. There is no reason to do this. Instead, just
don’t add separate debug info when we aren’t on Linux.
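A hedged sketch of the new behaviour (not the literal stdenv code; the package name is hypothetical):

```
# Requesting separate debug info no longer fails evaluation off Linux;
# the "debug" output is simply not produced there.
stdenv.mkDerivation {
  name = "example-1.0";        # hypothetical package
  src = ./.;
  separateDebugInfo = true;    # silently ignored on non-Linux hosts now
}
```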
propagatedNativeBuildInputs will end up in the output derivation, so
this case is allowed to end up in the references. Sorry for the
disruption!
Fixes #50865
Completely breaks darwin. Every package in the stdenv that has shebangs
in the output will end up with references to bootstrap-tools.
This reverts commit eb7c50a993.
Completely breaks darwin. Every package in the stdenv that has shebangs
in the output will end up with references to bootstrap-tools.
This reverts commit 2f2e635dd5.
This will make the list much easier to re-use, e.g. for `nixosTests`.
The drawback is that this approach makes the
```
nix-build release.nix -A tests.opensmtpd.x86_64-linux
```
command about twice as slow (3s to 6s): it now has to evaluate `nixpkgs`
once for each architecture, instead of just having the hardcoded list of
tests that made it possible to say “ok, just evaluate for x86_64-linux”.
On the other hand, complete evaluation of `release.nix` should be much
faster because we no longer import `nixpkgs` for each test: testing with
the following command went from 30s to 18s, and that's just for a few
tests.
```
time nix-instantiate --eval --strict nixos/release.nix -A tests.nat
```
I initially wanted to test on the whole `release.nix`, but there are too
many broken tests and it takes too long to evaluate them all, especially
given that the current implementation already breaks some setups.
Given developers can just `nix-build nixos/tests/my-test.nix`, it sounds
like an overall win.
Fixes #49071
On ld.gold, we produce broken executables when linking with the Musl
libc. This appears to be a known bug when using ld.gold and Musl. This
thread describes the workaround as enabling PIE when using ld.gold and
Musl:
https://www.openwall.com/lists/musl/2015/05/01/5
By default we don’t enable PIE, to avoid breaking things. But in the
Musl case we are breaking things by *not* enabling PIE. So this adds a
special case to defaultHardeningFlags that keeps the pie hardening
enabled for everything on Musl. Any package that breaks with PIE can add
the pie flag to its `hardeningDisable` array (a no-op for now on
anything but Musl).
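A minimal sketch of the idea, assuming `commonHardeningFlags` stands in for the usual default set (this is not the exact cc-wrapper code):

```
# Only the Musl special case matters here; everything else is a placeholder.
defaultHardeningFlags = commonHardeningFlags
  ++ lib.optional stdenv.hostPlatform.isMusl "pie";
```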
When strictDeps = true, we don’t want native build inputs to end up in
the output. For instance, gcc is a built-in native build input and
should only show up in an output if it is also listed in buildInputs.
/cc @ericson2314
When strictDeps = false, autoPatchshebangs should use
--build (corresponding to PATH) to look up commands. This restores the
previous behavior of patchShebangs so that we don’t break stuff that
isn’t careful about the buildInputs vs. nativeBuildInputs distinction.
Unfortunately this won’t work under cross compilation.
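For context, a hedged sketch of the distinction in a package expression (package name and inputs are illustrative):

```
# With strictDeps = true, only nativeBuildInputs are on PATH during the
# build; anything the installed scripts need at run time must be listed
# in buildInputs for the host platform.
{ stdenv, makeWrapper, perl }:

stdenv.mkDerivation {
  name = "example-1.0";
  src = ./.;
  strictDeps = true;
  nativeBuildInputs = [ makeWrapper ];  # tools that run on the build machine
  buildInputs = [ perl ];               # dependencies of the host output
}
```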
Rationale
---------
Currently, tests are hard to discover. For instance, someone updating
`dovecot` might not notice that the interaction of `dovecot` with
`opensmtpd` is handled in the `opensmtpd.nix` test.
And even for someone updating `opensmtpd`, it requires manual work to go
check in `nixos/tests` whether there is actually a test, especially
given that not many packages in `nixpkgs` have tests, and this check is
thus most of the time useless.
Finally, for the reviewer, it is much easier to verify that the “Tested
via one or more NixOS test(s)” item has actually been checked if the
modified file already includes the list of relevant tests.
Implementation
--------------
For now, this commit only adds the metadata to the package. Each
element of the `meta.tests` attribute is a derivation that, when it
builds successfully, means the test has passed (i.e. following the same
convention as NixOS tests).
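A hedged sketch of what a package could declare under this convention (assuming `nixosTests.opensmtpd` exposes the corresponding NixOS test derivation):

```
# Each attribute is a derivation whose successful build means the test passed.
meta.tests = {
  basic = nixosTests.opensmtpd;   # assumed to point at nixos/tests/opensmtpd.nix
};
```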
Future Work
-----------
In the future, the tools could be made aware of this `meta.tests`
attribute, and for instance a `--with-tests` could be added to
`nix-build` so that it also builds all the tests. Or a `--without-tests`
to build without all the tests. @Profpatsch described such systems in
his NixCon talk.
Another thing that would help in the future would be the possibility to
reasonably easily have cross-derivation nix tests without the whole
NixOS VM stack. @7c6f434c already proposed such a system.
This RFC currently handles none of these concerns; it only adds
`meta.tests` as metadata to be used by maintainers to remember to run
the relevant tests.
Uses uname data to determine what to set these variables to (a sketch
follows the list):
- CMAKE_SYSTEM_NAME
- CMAKE_SYSTEM_PROCESSOR
- CMAKE_SYSTEM_VERSION
- CMAKE_HOST_SYSTEM_NAME
- CMAKE_HOST_SYSTEM_PROCESSOR
- CMAKE_HOST_SYSTEM_VERSION
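A hedged sketch of feeding those values to CMake from the platform's `uname` attrset (illustrative only; the exact setup-hook wiring may differ, and the version flags are omitted for brevity):

```
# Pass host/build uname data to CMake explicitly so it does not probe at
# configure time.
cmakeFlags = [
  "-DCMAKE_SYSTEM_NAME=${stdenv.hostPlatform.uname.system}"
  "-DCMAKE_SYSTEM_PROCESSOR=${stdenv.hostPlatform.uname.processor}"
  "-DCMAKE_HOST_SYSTEM_NAME=${stdenv.buildPlatform.uname.system}"
  "-DCMAKE_HOST_SYSTEM_PROCESSOR=${stdenv.buildPlatform.uname.processor}"
];
```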
/cc @Ericson2314
PR was https://github.com/NixOS/nixpkgs/pull/46857
This line broke macOS cross compilation. paxctl cannot be built on
macOS. Maybe it can be fixed, but there’s no reason to break things
unnecessarily.
Regardless, you definitely need to be more careful about backporting.
I think it’s fine to move fast and break things on master but
with release-18.09 we should be more careful. Something like more
automated testing for cross compilation would also be
helpful (hopefully even making it block).
(cherry picked from commit f9c4075873)
The isELF function only checks whether “ELF” is contained within the
first 4 bytes of the file, which is a bit fuzzy and will also return
successfully if it's a text file starting with ELF, for example:
ELF headers
-----------
Some text here about ELF headers...
So instead, we're now doing a precise match on \x7fELF.
Signed-off-by: aszlig <aszlig@nix.build>
Acked-by: @Ericson2314
Closes: https://github.com/NixOS/nixpkgs/pull/47244
Since gcc.lib/lib64 is a symlink to 'lib', the use of
"lib*/libgcc_s.so*" triggered a warning (error) with
the latest coreutils. Essentially we were doing:
$ cp a/x b/x y/
and the latest coreutils rejects such invocations.
Just copy from 'lib'; lib64 is a link to it anyway.
* Nothing else in this file bothers looking at lib*
* AFAICT lib* only ever possibly matched lib64 anyway
02c09e0171 (NixOS/nixpkgs#44558) was reverted in
c981787db9 but, as it turns out, it fixed an issue
I didn't know about at the time: the values of `propagateDoc` options were
(and now again are) inconsistent with the underlying things those wrappers wrap
(see NixOS/nixpkgs#46119), which was (and now is) likely to produce more instances
of NixOS/nixpkgs#43547, if not now, then eventually as stdenv changes.
This patch (which is a simplified version of the original reverted patch) is the
simplest solution to this whole thing: it forces wrappers to directly inspect the
outputs of the things they are wrapping instead of making stdenv guess the correct
values.
The `unfree` and `unfreeRedistributable` licenses both have `free = false`,
which will trigger the first portion of logic. This removes dead code to
simplify the logic.
As a follow-up, I plan to add an attribute `redistributable = [true|false]`,
which can be used by Hydra to determine whether a given package with a given
license can be included in the channel.
This accidentally added some unwanted dependencies on the bootstrap
tools, and I don't have time to fix before I go on vacation, so I'm
backing it out until I have time to address it properly.
This reverts commit dc5c68a7bb.
If meta.outputsToInstall is set to include absent outputs, various
tools break including channel updates and nix-env.
```
grahamc@Morbo> nix-env -i -f . -A elf-header-real
installing 'elf-header'
error: this derivation has bad 'meta.outputsToInstall'
```
This patch verifies each value in meta.outputsToInstall is a valid
output. It validates this condition only if checkMeta is true.
```
grahamc@Morbo> nix-build . -A elf-header-real
error: Package ‘elf-header’ in /home/grahamc/projects/nixpkgs/pkgs/development/libraries/elf-header/default.nix:36 has invalid meta.outputsToInstall, refusing to evaluate.
The package elf-header has set meta.outputsToInstall to: bin
however elf-header only has the outputs: out
and is missing the following ouputs:
- bin
(use '--show-trace' to show detailed location information)
```
Note, now the nix-env experience is decidedly worse for users who have
checkMeta set to true:
```
grahamc@Morbo> nix-env -i -f . -A elf-header-real; echo $?
0
```
though since this is already an issue for unfree, broken, unsupported,
and insecure validity problems I'm not sure we should do something
different here.
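A hedged sketch of the validation itself (function and option names here are illustrative, not necessarily the exact check-meta.nix code):

```
# Every output named in meta.outputsToInstall must be a real output of the
# derivation; the check is only enforced when checkMeta is enabled.
checkOutputsToInstall = attrs:
  let
    expected = attrs.meta.outputsToInstall or [ ];
    actual   = attrs.outputs or [ "out" ];
    missing  = lib.subtractLists actual expected;
  in
    !config.checkMeta || missing == [ ];
```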
a4630c65ca was incorrect in assuming $SHELL would be a path to the
bash derivation. In fact $SHELL will be a path to the bash executable.
Unfortunately this did not fix the original issue. So instead, we just
have to reuse initialPath so that it can be added like PATH is.
Sorry for the inconvenience! I hadn’t thought through the effects of
the last commit.
/cc @copumpkin @ericson2314
To avoid breaking things, we need to make sure SHELL goes into
HOST_PATH. This reflects my changes to patch-shebangs to make it cross
compilation ready. When a script is patched from the Nix store it now
looks to HOST_PATH to get the targeted machine’s executables.
Unfortunately, this only works in native builds.
LTO is disabled during bootstrap to keep the bootstrap tools small and
avoid unnecessary LLVM rebuilds, but is enabled in the final stdenv
stage and should be usable by normal packages.
This also updates the bootstrap tool builder to LLVM 5, but not the ones
we actually use for bootstrap. I'll make that change in a subsequent commit
so as to provide traceable provenance of the bootstrap tools.
Intuitively, one cares mainly about the host platform: platforms differ
in meaningful ways but compilation is morally a pure process and
probably doesn't care, or those differences are already abstracted away.
@Dezgeg also empirically confirmed that > 95% of checks are indeed of
the host platform.
Yet these attributes in the old cross infrastructure were defined to be
the build platform, for expediency. And this was never before changed.
(For native builds build and host coincide, so it isn't clear what the
intention was.)
Fixing this doesn't affect native builds, since again they coincide. It
also doesn't affect cross builds of anything in Nixpkgs, as these are no
longer used. It could affect external cross builds, but I deem that
unlikely, as anyone thinking about cross would use more explicit
attributes for clarity, all the more so given how rarely the build
platform is inspected.
Works similarly to `enableParallelBuilding`, but is set by default when
`enableParallelBuilding` is set. In my experience most packages that build
fine in parallel also check fine in parallel.
Derivations were drawing their `system` attribute from `hostPlatform`
instead of `buildPlatform`. Fix that, and add an explanatory comment.
Fixes #45993
This has not been touched in 6 years. Let's remove it to cause fewer
problems when adding new cross-compiling infrastructure.
This also simplifies gcc significantly.
This reverts commit a809fdc8e1 and then
achieves the same result (not rebuilding texinfo three times)
but without dragging bootstrap tools into the closure.
I *want* cross-specific overrides to be verbose, so I'd rather not have
this shorthand. This makes the syntactic overhead more proportional to
the maintenance cost. Hopefully this pushes people towards fewer
conditionals and more abstractions.
On darwin llvmPackages is built using python-boot to avoid dependencies
in the stdenv, but we can't and shouldn't use that when building the
manpages since it depends on python packages.
* substitute(): --subst-var was silently coercing to "" if the variable does not exist.
* libffi: simplify using `checkInputs`
* pythonPackages.hypothesis, pythonPackages.pytest: simplify dependency cycle fix
* utillinux: 2.32 -> 2.32.1
https://lkml.org/lkml/2018/7/16/532
* busybox: 1.29.0 -> 1.29.1
* bind: 9.12.1-P2 -> 9.12.2
https://ftp.isc.org/isc/bind9/9.12.2/RELEASE-NOTES-bind-9.12.2.html
* curl: 7.60.0 -> 7.61.0
* gvfs: make tests run, but disable
* ilmbase: disable tests on i686. Spooky!
* mdds: fix tests
* git: disable checks as tests are run in installcheck
* ruby: disable tests
* libcommuni: disable checks as tests are run in installcheck
* librdf: make tests run, but disable
* neon, neon_0_29: make tests run, but disable
* pciutils: 3.6.0 -> 3.6.1
Semi-automatic update generated by https://github.com/ryantm/nixpkgs-update tools. This update was made based on information from https://repology.org/metapackage/pciutils/versions.
* mesa: more include fixes
mostly from void-linux (thanks!)
* npth: 1.5 -> 1.6
minor bump
* boost167: Add lockfree next_prior patch
* stdenv: cleanup darwin bootstrapping
Also gets rid of the full python and some of its dependencies in the
stdenv build closure.
* Revert "pciutils: use standardized equivalent for canonicalize_file_name"
This reverts commit f8db20fb3a.
Patching should no longer be needed with 3.6.1.
* binutils-wrapper: Try to avoid adding unnecessary -L flags
(cherry picked from commit f3758258b8895508475caf83e92bfb236a27ceb9)
Signed-off-by: Domen Kožar <domen@dev.si>
* libffi: don't check on darwin
libffi usages in stdenv broke darwin. We need to disable doCheck for that case.
* "rm $out/share/icons/hicolor/icon-theme.cache" -> hicolor-icon-theme setup-hook
* python.pkgs.pytest: setupHook to prevent creation of .pytest-cache folder, fixes #40273
When `py.test` was run with a folder as argument, it would not only
search for tests in that folder, but also create a .pytest-cache folder.
Not only is this state that we don't want, but it was also causing
collisions.
* parity-ui: fix after merge
* python.pkgs.pytest-flake8: disable test, fix build
* Revert "meson: 0.46.1 -> 0.47.0"
With meson 0.47.0 (or 0.47.1, or git)
things are very wrong re: rpath handling,
resulting at best in missing libs and
at worst in corrupt binaries :(.
When we run patchelf it masks the problem
by removing obviously busted paths.
Which is probably why this wasn't noticed immediately.
Unfortunately the binary already
has a long series of paths scribbled
in a space intended for a much smaller string;
in my testing the reserved length was
around 67 bytes with 300+ written to it.
I think we've reported the relevant issues upstream,
but unfortunately it appears our patches
are what introduces the overwrite/corruption
(by no longer being correct in what they assume)
This doesn't look so bad to fix but it's
not something I can spend more time on
at the moment.
--
Interestingly the overwritten string data
(because it is scribbled past the bounds)
remains in the binary and is why we're suddenly
seeing unexpected references in various builds
-- notably this is the reason we're
seeing the "extra-utils" breakage
that entirely crippled NixOS on master
(and probably on staging before?).
Fixes #43650.
This reverts commit 305ac4dade.
(cherry picked from commit 273d68eff8)
Signed-off-by: Domen Kožar <domen@dev.si>
`depsHostBuild` is not a thing, would never be a thing per the rules,
and isn't used anywhere. This is just my typo, hitherto unnoticed
because "host -> host" dependencies are by far the most obscure form.
HOST_PATH contains the path of the host package. This will include the
packages listed in buildInputs & depsHostHost. Use this to find
runtime commands that the host needs.
For instance, to find the runtime version of perl:
```
$ PATH="$HOST_PATH" command -v perl
/nix/store/...-perl-5.28.0-aarch64-unknown-linux-android/bin/perl
```
This path should not be executed directly (it will break for cross
compilation). Only use it to find the location of executables that
will be run by your host system. Your build tools will, as always, be
available on the default PATH.
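As an illustrative (not prescriptive) sketch, a fixup phase could resolve the host's perl via HOST_PATH and patch it into an installed script (`my-script` is hypothetical):

```
# Resolve the interpreter the *host* will run, without executing it at build time.
postFixup = ''
  hostPerl="$(PATH="$HOST_PATH" command -v perl)"
  substituteInPlace "$out/bin/my-script" \
    --replace "/usr/bin/perl" "$hostPerl"
'';
```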
The line was essentially checking whether /bin/sh exists and is
executable and if that's the case, the isScript function returns
successfully.
When asking the author of this line on IRC, it seems that even they
can't remember or imagine what this was supposed to be.
In summary: Whenever /bin/sh doesn't exist during a build, *any* file
given to isScript is reported as being a script even if it isn't.
This is kinda counter-intuitive and not something somebody would
expect from a function called "isScript".
Signed-off-by: aszlig <aszlig@nix.build>
Cc: @edolstra
Not only does the suffix unnecessarily reduce sharing, but it also breaks
unpacker setup hooks (e.g. that of `unzip`) which identify interesting tarballs
using the file extension.
This also means we can get rid of the splicing hacks for fetchers.
Previously, it wasn’t exactly clear which NDK you were using. This adds
an attribute to system that specifies which version of the NDK we should
use when building things.
/cc @Ericson2314
We already did them on non-mass-rebuild llvm 6. Also, this allows
simplifying the stdenv booting.
We were missing the libcxxabi dep in compiler-rt in llvm 6, so fixed
that too.
It may seem nice and abstract to just override the default version, but
that breaks the alias relationship where the original llvmPackages_* is
no longer in sync. Put another way, modifying the referent instead of
breaking the reference "copy-on-write"-style is impossible.
We want `buildPackages` to be almost the same as
`buildPackages.buildPackages`, but that is only true if most packages
don't care about the target platform. The commented code however made
them all care about whether the target platform was Darwin.
The hack of using `crossConfig` to enforce stricter handling of
dependencies is replaced with a dedicated `strictDeps` for that purpose.
(Experience has shown that my punning was a terrible idea that made it
more difficult and embarrassing to teach.)
Now that this is clear, a few packages now use `strictDeps`, to fix
various bugs:
- bintools-wrapper and cc-wrapper
Note that a bunch of non-python packages use this attribute already.
Some of those are clearly unaware of the fact that this attribute does
not exist in stdenv, because they define it but don't add it to
their `buildInputs` :)
Also note that I use `buildInputs` here and only handle regular
builds because python and haskell builders do it this way and I'm not
sure how to properly handle the cross-compilation case.
As in:
$ nix eval -f . bash
Also remove the glibc propagation inherit that made these necessary;
stages handle propagating libc themselves (apparently), and
AFAICT no hashes are changed as a result of this.
Following legacy packaging conventions, `isArm` was defined just for
the 32-bit ARM instruction set. This is confusing to non-packagers
though, because Aarch64 is also an ARM instruction set.
The official ARM overview for ARMv8[1] is surprisingly not confusing,
given the overall state of affairs for ARM naming conventions, and
offers us a solution. It divides the nomenclature into three levels:
```
ISA:             ARMv8 {-A, -R, -M}
                  /            \
Mode:         Aarch32        Aarch64
               /    \            |
Encoding:    A32    T32         A64
```
At the top is the overall v8 instruction set architecture. Second are
the two modes, defined by bitwidth but differing in other semantics too,
and at the bottom are the encodings, (hopefully?) isomorphic if they
encode the same mode.
The 32-bit encodings are mostly backwards compatible with previous
non-Thumb and Thumb encodings, and if so we can pun the mode names to
instead mean "sets of compatible or isomorphic encodings", and then
voilà we have nice names for the 32-bit and 64-bit ARM instruction sets
which do not use the word ARM, so as not to confuse either laymen or
experienced ARM packagers.
[1]: https://developer.arm.com/products/architecture/a-profile
(cherry picked from commit ba52ae5048)
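A hedged sketch of how a package expression can branch on the resulting predicates (the configure flags are purely illustrative):

```
# isAarch32 / isAarch64 replace the ambiguous isArm.
configureFlags = [ "--enable-foo" ]
  ++ lib.optional stdenv.hostPlatform.isAarch32 "--with-arm32-workaround"
  ++ lib.optional stdenv.hostPlatform.isAarch64 "--with-arm64-fast-path";
```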
This allows one to force a compiler to use native machine optimizations.
This goes contrary to all the usual guarantees of Nix and so should be
used only by end users, and only in specific cases where they know what
they are doing.
In my case this is needed to get a noticeable FPS boost in RPCS3, which
is a very CPU-hungry PlayStation 3 emulator.
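One way to illustrate the effect is an impure overlay that forces -march=native for a single package; this is a sketch of the idea, not necessarily the mechanism this change introduces:

```
# Purely illustrative and impure: the resulting binary is only guaranteed to
# run on the machine that built it, which is exactly the trade-off above.
self: super: {
  rpcs3 = super.rpcs3.overrideAttrs (old: {
    NIX_CFLAGS_COMPILE = (old.NIX_CFLAGS_COMPILE or "") + " -march=native";
  });
}
```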
- `localSystem` is added; it strictly supersedes `system`.
- `crossSystem`'s description mentions `localSystem` (and vice versa).
- No more weird special casing I don't even understand
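For reference, a hedged sketch of how the two options fit together when importing nixpkgs:

```
# localSystem describes where the build runs; crossSystem (optional)
# describes what it targets.
import <nixpkgs> {
  localSystem = { system = "x86_64-linux"; };
  crossSystem = { config = "aarch64-unknown-linux-gnu"; };
}
```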
Since at least d7bddc27b2, we've had a
situation where one should depend on:
- `stdenv.cc.bintools`: for executables at build time
- `libbfd` or `libiberty`: for those libraries
- `targetPackages.cc.bintools`: for executables at *run* time
- `binutils`: only for specifically GNU Binutils's executables,
regardless of the host platform, at run time.
and that commit cleaned up this usage to reflect that. This PR flips the
switch so that:
- `binutils` is indeed unconditionally GNU Binutils
- `binutils-raw`, which previously served that role, is gone.
so that the correct usage will be enforced going forward and everything
is simple.
N.B. In a few cases `binutils-unwrapped` (which before and now was
unconditionally actual GNU binutils), rather than `binutils` was used to
replace old `binutils-raw` as it is friendly towards some cross
compilation usage by avoiding a reference to the next bootstrapping
stage.
First, we need to check against the host platform, not the build platform.
That's simple enough.
Second, we move away from exhaustive finite case analysis (i.e.
exhaustively listing all platforms the package builds on). That only
works in a closed-world setting, where we know all platforms we might
build on. But with cross compilation, we may be building for arbitrary
platforms, so we need fancier filters. This is the closed-world to
open-world change.
The solution is that instead of having a list of systems (strings in
the form "foo-bar"), we have a list of systems or "patterns", i.e.
attributes that partially match the output of the parsers in
`lib.systems.parse`.
The "check meta" logic treats the systems strings as an exact whitelist
just as before, but treats the patterns as a fuzzy whitelist,
intersecting the actual `hostPlatform` with the pattern and then
checking for equality. (This is done using `matchAttrs`).
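A hedged, simplified sketch of that check (not the literal check-meta.nix code):

```
# A meta.platforms entry is either an exact system string or a pattern
# attrset matched against the host platform with matchAttrs.
metaPlatformOk = hostPlatform: entry:
  if builtins.isString entry
  then entry == hostPlatform.system        # exact whitelist, as before
  else lib.matchAttrs entry hostPlatform;  # fuzzy whitelist
```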
The default convenience lists for `meta.platforms` are now changed to be
lists of patterns (usually a single pattern) in
`lib/systems/for-meta.nix` for maximum flexibility under this new
system.
Fixes #30902
Resolved the following conflicts (by carefully applying patches from
both branches since the fork point):
pkgs/development/libraries/epoxy/default.nix
pkgs/development/libraries/gtk+/3.x.nix
pkgs/development/python-modules/asgiref/default.nix
pkgs/development/python-modules/daphne/default.nix
pkgs/os-specific/linux/systemd/default.nix
We go out of our way (see top of file) to build a single binary
with symlinks for all of the tools, but were losing them
when preparing the bootstrap tools.
For the cc of the intermediate stages, to be precise. Doing the same for
bintools requires lots of refactoring.
This is mainly for future extensibility, as now you can change
documentation generation with impunity without rebuilding the
whole of stdenv.
Existing "mips64el" should be "mipsel".
This is just the barest minimum so that nixpkgs can recognize them as
systems; although required for building individual derivations for
MIPS boards, it is not sufficient if you want to actually build NixOS on
those targets.
Aarch64 tools tested briefly with qemu-aarch64,
but neither have been actually used yet :).
For now only "host" indirectly via binary cache
at cache.allvm.org.
This is a temporary workaround to make `nix-env -qa` and `nix search` ignore
broken packages as they did before this patchset.
This patch should be reverted after `nix` gets a proper fix for this.
See NixOS/nix#1771.
This option makes `meta.evaluate` into a close approximation of the result of
evaluating `.outPath` by checking all the dependencies recursively, at
the cost of a 2x slowdown. Note that actually evaluating `.outPath` costs some
5x-7x more because `.outPath` also computes all the hashes.