Before, we'd always use `cc = null`, and check for that. The problem is
that this breaks cross-compilation to platforms that don't support a C
compiler.
It's a very subtle issue. One might think there is no problem because we
have `stdenvNoCC`, and presumably one would only build derivations that
use that. The problem is that one still wants to use tools at build-time
that are themselves built with a C compiler, and those are gotten via
"splicing". The runtime version of those deps will explode, but the
build time / `buildPackages` versions of those deps will be fine, and
splicing attempts to work around this by using `builtins.tryEval` to
filter out any broken "higher priority" packages (runtime is the
default and highest priority) so that both `foo` and `foo.nativeDrv`
work.
However, `tryEval` only catches certain evaluation failures (e.g.
exceptions), and not arbitrary failures (such as `cc.attr` when `cc` is
null). This means `tryEval` fails to let us use our build time deps, and
everything comes apart.
The right solution is, as usual, to get rid of splicing. Or, barring
that, to make it so `foo` never works and one has to explicitly do
`foo.*`. But that is a much larger change, and certainly one unsuitable
to be backported to stable.
Given that, we instead make an exception-throwing `cc` attribute, and
create a `hasCC` attribute for those derivations which wish to
conditionally use a C compiler: instead of doing `stdenv.cc or null ==
null` or something similar, one does `stdenv.hasCC`. This allows
querying without "tripping" the exception, while also allowing
`tryEval` to work.
No platform without a C compiler is yet wired up by default. That will
be done in a following commit.
Rewrite the `stripHash` helper function with 2 differences:
* Paths starting with `--` will no longer produce an error.
* Use Bash string manipulation instead of shelling out to `grep` and
`cut`. This should be faster.
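The new version is roughly the following (a sketch; the exact pattern
in the final function may differ):

    stripHash() {
        local strippedName
        # `--` keeps basename from choking on names starting with dashes
        strippedName="$(basename -- "$1")"
        if [[ "$strippedName" =~ ^[a-z0-9]{32}- ]]; then
            echo "${strippedName:33}"   # drop the "<hash>-" prefix in-process
        else
            echo "$strippedName"
        fi
    }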
A bunch of stdenv-internal variables were deleted in
1601a7fcce, but these are needed in the
fixup phase, whereas the rest are just needed for the initial work
(findInputs, etc) before the user phases.
CC @matthewbauer
There were two issues:
* builtins.getEnv was called deep in the nixpkgs tree, making it hard
  to discover. This is solved by moving the call into
  pkgs/top-level/impure.nix.
* when the config was explicitly set by the user to false, it would
  still try to load the environment variable. This meant that it was
  not possible to guarantee the same outcome on two different systems;
  see the sketch below.
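Roughly like this (a sketch; the function shape is illustrative, not
the exact impure.nix code):

    # The environment variable supplies only a default; an explicit
    # user setting, including `false`, always wins.
    { config ? {} }:
    {
      allowUnfree =
        if config ? allowUnfree
        then config.allowUnfree                              # explicit value respected
        else builtins.getEnv "NIXPKGS_ALLOW_UNFREE" == "1";  # impure fallback
    }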
Apparently this option trades compression time for size, and explicitly
does so without increasing the resources needed for decompression.
It doesn't make tarball creation unbearable, so add it to the options!
Before, we very carefully unapplied and reapplied `set -u` so the rest
of Nixpkgs could continue to not fail on undefined variables. Let's rip
off the band-aid.
Go beyond the obvious setup hooks now, with a bit of sed. One case was
skipped:
- cc-wrapper's `dontlink`, because it is already handled.
Also, escaping was added manually in Nix files.
A package's meta.license can either be a single license or a list. The
code to check config.whitelistedLicenses and config.blacklistedLicenses
wasn't handling this, nor was the showLicense function.
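The fix boils down to normalizing the field to a list first; a minimal
sketch (the function shape and names are assumed, not the actual
check-meta code):

    { meta, blacklist }:
    let
      # meta.license may be a single license attrset or a list of them
      toList = x: if builtins.isList x then x else [ x ];
    in builtins.any (lic: builtins.elem lic blacklist)
                    (toList (meta.license or [ ]))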
This logic should be in pkgs/top-level/static.nix. We don't want
to pollute Nixpkgs with `if stdenv.static`. Also, "static" is not
descriptive: we have two types of static stdenvs, `makeStaticLibraries`
and `makeStaticBinaries`, so we shouldn't rely on a single "static"
boolean like this.
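For illustration (the overlay shape and attribute names are mine, not
from the commit), the two adapters are independent stdenv
transformations:

    self: super: {
      # compile libraries as static archives (.a)
      stdenvStaticLibs = super.makeStaticLibraries super.stdenv;
      # link executables statically
      stdenvStaticBins = super.makeStaticBinaries super.stdenv;
    }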
These can be used to determine whether a file with an ELF header is an
executable or a shared library.
We can't implement it in pure bash, as bash has problems with null
bytes.
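One way to sketch it (not necessarily the committed implementation):
the 16-bit e_type field at offset 16 of the ELF header is ET_EXEC (2)
for executables and ET_DYN (3) for shared libraries, and od(1) can
carry the NUL-laden bytes that bash variables cannot:

    # Assumes a little-endian ELF file (EI_DATA, byte 5, equal to 1).
    isElfExec() {
        local bytes
        bytes=$(od -An -j16 -N2 -tu1 "$1") || return
        set -- $bytes
        [ "${1:-0}" -eq 2 ]   # low byte of e_type: 2 = ET_EXEC
    }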
That's very much consistent with the spirit of nix-shell --pure
BTW, nix 1.x shells will always be treated as pure;
in that version detection isn't possible.
https://github.com/NixOS/nix/commit/1bffd83e1a9c
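Concretely (a sketch, not the exact hook): that Nix commit makes
nix-shell set $IN_NIX_SHELL to "pure" or "impure", whereas Nix 1.x
sets it to "1", so only an explicit "impure" opts out:

    # Inside a nix-shell: Nix 1.x values like "1" fall through to the
    # pure branch, matching "1.x shells are always treated as pure".
    if [ "$IN_NIX_SHELL" != impure ]; then
        : # pure-shell behavior here
    fi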
Some SSL libs don't react to $SSL_CERT_FILE.
That actually makes sense to me, as we add this behavior
as nixpkgs-specific, so it seems "safer" to use $NIX_*.
We want initialPath to have lowest precedence.
In addition, unset _PATH and _HOST_PATH as they shouldn’t be needed
after final PATH and HOST_PATH are set.
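In shell terms, roughly (a sketch: `initialPathEntries` is an
illustrative stand-in for the expanded initialPath, whose entries
actually get their bin/ subdirectories appended):

    # Dependency-derived entries (_PATH) come first; initialPath is
    # appended last, giving it the lowest lookup precedence.
    PATH="${_PATH:+$_PATH:}$initialPathEntries"
    HOST_PATH="${_HOST_PATH-}"
    unset -v _PATH _HOST_PATH   # not needed after the final values are set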
Adds pkgsCross.wasm32 and pkgsCross.wasm64. Use them to build Nixpkgs
with a WebAssembly toolchain.
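For example (the attribute built here is illustrative):

    nix-build '<nixpkgs>' -A pkgsCross.wasm32.stdenv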
stdenv/cross: use static overlay on isWasm
Wasm doesn’t make sense dynamically linked.
This puts patches in all derivations even if it is unspecified by the
derivation. By default it will be an empty list. This simplifies
overrides, as we can now assume that patches is a valid name, so that
this works:
    self: super: {
      mypkg = super.mypkg.overrideAttrs (o: {
        patches = o.patches ++ [ ./my-very-own.patch ];
      });
    }
That is, you don’t need to provide a default with `or []`;
make-derivation provides one for you.
Unfortunately, this is a mass rebuild.
This adds libc++ to the LLVM cross toolchain, giving us access to the
full Nixpkgs set. This requires 4 stages of wrapped compilers:
- Clang with no libraries
- Clang with just compiler-rt
- Clang with libc and compiler-rt
- Clang with libc++, libc, and compiler-rt
The new Android NDK (18) now uses clang. We were going through the
wrappers that are provided, which led to surprising errors when
building. Ideally we could use the LLVM linker as well, but this leads
to errors, as many packages don’t support the LLVM linker.
This is needed to avoid confusing and repeated boilerplate for
`fooForTarget`. The vast majority of use-cases can still use
`buildPackages` or `targetPackages`, which are now defined in terms of
these.
You can build (partially) with the LLVM toolchain using the useLLVM
flag. This works like so:

    nix-build -A hello --arg crossSystem \
      '{ system = "aarch64-unknown-linux-musl"; useLLVM = true; }'
Also, don’t separate debug info in lldClang. It currently doesn’t work
with that setup hook. Missing build-id?
Comments on conflicts:
- llvm: d6f401e1 vs. 469ecc70 - docs for 6 and 7 say the default is
to build all targets, so we should be fine
- some pypi hashes: they were equivalent, just base16 vs. base32