In a typical build environment the toolchain will use the value of the
MACOSX_DEPLOYMENT_TARGET environment variable to determine the version
of macOS to support. When cross compiling there are two distinct
toolchains, but both will consult this single environment variable. To
avoid contamination, we always set the equivalent command line flag,
which effectively disables the toolchain's internal handling.
Prior to this change, the MACOSX_DEPLOYMENT_TARGET variable was
ignored, and the toolchains always used the Nix platform
definition (`darwinMinVersion`) unless overridden with command line
arguments.
This change restores support for MACOSX_DEPLOYMENT_TARGET, and adds
Nix-specific MACOSX_DEPLOYMENT_TARGET_FOR_BUILD and
MACOSX_DEPLOYMENT_TARGET_FOR_TARGET variables for cross compilation.
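For example, a cross build can now pin the build and host toolchains
independently; the plain variable applies to the compiler producing code
for the host platform (version numbers here are hypothetical):

$ export MACOSX_DEPLOYMENT_TARGET_FOR_BUILD=10.12  # tools that run at build time
$ export MACOSX_DEPLOYMENT_TARGET=10.14            # code built for the host platform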
Instead of always supplying flags, apply the flags as defaults. Use
clang's native flags instead of lifting the linker flags from binutils
with `-Wl,`.
If a project is using clang to drive linking, make clang do the right
thing with MACOSX_DEPLOYMENT_TARGET; this can still be overridden by
command line arguments. Modern clang will then pass
`-platform_version macos 10.12 0.0.0`, since it doesn't know about the
SDK settings. Older versions of clang will pass down
`-macosx_version_min` flags with no SDK version.
At the linker layer, apply a default value for anything left
ambiguous. If nothing is specified, pass a full
`-platform_version`. If only `-macosx_version_min` is specified, then
lock down the SDK version explicitly with `-sdk_version`. If both a min
version and an SDK version are passed, do nothing.
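A sketch of that defaulting logic (names like $deploymentTarget,
$sdkVersion and extraBefore are illustrative, not the literal wrapper
code):

haveMin=0 haveSdk=0 havePlatform=0
for arg in "$@"; do
    case "$arg" in
        -macosx_version_min) haveMin=1 ;;
        -sdk_version)        haveSdk=1 ;;
        -platform_version)   havePlatform=1 ;;
    esac
done
if (( ! havePlatform && ! haveMin && ! haveSdk )); then
    # nothing specified: supply the full default
    extraBefore+=(-platform_version macos "$deploymentTarget" "$sdkVersion")
elif (( ! havePlatform && haveMin && ! haveSdk )); then
    # only a minimum version given: pin the SDK version explicitly
    extraBefore+=(-sdk_version "$sdkVersion")
fi
# both a min version and an SDK version given: nothing to do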
The `platform` field is pointless nesting: it's just stuff that happens
to be defined together, and that should be an implementation detail.
This instead makes `linux-kernel` and `gcc` top-level fields in platform
configs. They join `rustc` there (all are optional), which was put at the
top level rather than in `platform` in anticipation of a change like this.
`linux-kernel.arch` in particular also becomes `linuxArch`, to match the
other `*Arch`es.
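For example (assuming a nixpkgs checkout containing this change on
NIX_PATH):

$ nix-instantiate --eval --expr \
>     'with import <nixpkgs/lib>; (systems.elaborate "powerpc64le-linux").linuxArch'
"powerpc"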
The next step after this is to combine the *specific* machines from
`lib.systems.platforms` with `lib.systems.examples`, keeping just the
"multiplatform" ones for defaulting.
The new skip for the dynamic linker introduced in
ccfd26ef14 ("bintools-wrapper: skip dynamic linker for static
binaries") includes a change in behaviour, previously the $link32
variable was checked using an arithmetic expression via (( )). This
returns zero if the output of the arithmetic expression is nonzero, i.e.
link32=1 would return zero and the if condition would be executed.
The code refactored this to use "$link32" = "0" which is incorrect in
this case, since we want this if conditional to run if $link32 is 1.
Small shell excerpt for clarity:
$ export link32=1
$ export cookie=1
$ if [[ "$cookie" = "1" ]] && (( "$link32" )); then echo "do some stuff"; fi;
do some stuff
$ if [[ "$cookie" = "1" && "$link32" = "0" ]]; then echo "do some stuff"; fi;
$
Change it to check for $link32 = "1", as the previous code did.
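With the excerpt above, the fixed check behaves as intended:

$ if [[ "$cookie" = "1" && "$link32" = "1" ]]; then echo "do some stuff"; fi;
do some stuff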
Currently we pass -dynamic-linker unconditionally. However, this breaks
some static binaries, e.g. Rust binaries linked against musl. There is
no reason to set an ELF interpreter for static binaries, so this is now
skipped if `-static` or `-static-pie` is passed to either our cc or ld
wrapper.
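A sketch of the skip (argument scanning only; variable names are
illustrative, not the actual wrapper code):

isStatic=0
for arg in "$@"; do
    case "$arg" in
        -static|-static-pie) isStatic=1 ;;
    esac
done
if [[ "$isStatic" = 0 ]]; then
    extraBefore+=(-dynamic-linker "$dynamicLinker")
fi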
The dynamic loader on powerpc64 is called ld64.so.2 rather than
ld-linux.so.*, and was not matched by the existing pattern.
We reuse the dynamicLinker name from binutils to match a wider set
of platforms and to avoid specifying this information in two places.
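The mismatch is easy to see (illustrative glob; the wrapper's actual
pattern differs):

$ for l in ld-linux-x86-64.so.2 ld64.so.2; do
>     [[ "$l" == ld-linux* ]] && echo "$l matched" || echo "$l not matched"
> done
ld-linux-x86-64.so.2 matched
ld64.so.2 not matched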
Revert the hack and the original faulty commit.
This reverts commit 48264ee506.
Revert "Purity checking should accept $TMP and not just /tmp"
This reverts commit fb777be7d2.
I hate the thing too, even though I made it, and would rather just get
rid of it. But we can't do that yet. In the meantime, this brings us
more in line with autoconf and will make it slightly easier for me to
write a pkg-config wrapper, which we need.
We can’t set this for cross-compiling since we use the GNU linker.
Instead, set these flags only when targetPlatform is macOS.
Fixes #80754
Fixes #83141
Before, we'd always use `cc = null`, and check for that. The problem is
that this breaks cross compilation to platforms that don't support a C
compiler.
It's a very subtle issue. One might think there is no problem because we
have `stdenvNoCC`, and presumably one would only build derivations that
use that. The problem is that one still wants to use tools at build-time
that are themselves built with a C compiler, and those are gotten via
"splicing". The runtime version of those deps will explode, but the
build time / `buildPackages` versions of those deps will be fine, and
splicing attempts to work around this by using `builtins.tryEval` to
filter out any broken "higher priority" packages (runtime is the default
and highest priority) so that both `foo` and `foo.nativeDrv` work.
However, `tryEval` only catches certain evaluation failures (e.g.
exceptions), and not arbitrary failures (such as `cc.attr` when `cc` is
null). This means `tryEval` fails to let us use our build time deps, and
everything comes apart.
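The distinction is easy to see from the command line (error wording
varies by Nix version; `null.attr` stands in for `cc.attr` when `cc` is
null):

$ nix-instantiate --eval --expr '(builtins.tryEval (throw "no cc")).success'
false
$ nix-instantiate --eval --expr '(builtins.tryEval null.attr).success'
error: value is null while a set was expected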
The right solution is, as usual, to get rid of splicing. Or, barring
that, to make it so `foo` never works and one has to explicitly do
`foo.*`. But that is a much larger change, and certainly one unsuitable
to be backported to stable.
Given that, we instead make an exception-throwing `cc` attribute, and
create a `hasCC` attribute for those derivations which wish to
conditionally use a C compiler: instead of doing `stdenv.cc or null ==
null` or something similar, one does `stdenv.hasCC`. This allows
querying without "tripping" the exception, while also allowing `tryEval`
to work.
No platform without a C compiler is yet wired up by default. That will
be done in a following commit.