This will begin the process of breaking up the `useLLVM` monolith. That
is good in general, and I hope it will be good for NetBSD and Darwin in
particular.
Co-authored-by: sterni <sternenseemann@systemli.org>
The distinction between the inputs doesn't really make sense in the
mkShell context. Technically speaking, we should be using
nativeBuildInputs most of the time.
So in order to make this function more beginner-friendly, add "packages"
as an attribute that maps to nativeBuildInputs.
This commit also updates all the uses in nixpkgs.
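For illustration, a shell.nix using the new attribute might look like
this (the chosen package is arbitrary):

```
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  # "packages" is the beginner-friendly spelling; it maps to nativeBuildInputs.
  packages = [ pkgs.hello ];
}
```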
This PR adds a new aarch64 android toolchain, which leverages the
existing crossSystem infrastructure and LLVM builders to generate a
working toolchain with minimal prebuilt components.
The only thing that is prebuilt is the bionic libc. This is because it
is practically impossible to compile bionic outside of an AOSP tree. I
tried and failed, braver souls may prevail. For now I just grab the
relevant binaries from https://android.googlesource.com/.
I also grab the msm kernel sources from there to generate headers. I've
included a minor patch to the existing kernel-headers derivation in
order to expose an internal function.
Everything else, from binutils up, is using stock code. Many thanks to
@Ericson2314 for his help on this, and for building such a powerful
system in the first place!
One motivation for this is to be able to build a toolchain which will
work on an aarch64 linux machine. To my knowledge, there is no existing
toolchain for an aarch64-linux builder and an aarch64-android target.
Also begin work on cross compilation, though that will have to be
finished later.
The patches are based on the first version of
https://reviews.llvm.org/D99484. It's very annoying to do the
back-porting but the review has uncovered nothing super major so I'm
fine sticking with what I've got.
Beyond making the outputs work, I also strove to re-sync the packages,
as they have been drifting pointlessly apart for some time.
----
Other misc notes, highly incomplete
- llvm-config-native and llvm-config are put in `dev` because they are
  tools just for build time.
- Clang no longer has an lld dep. That was introduced in
db29857eb3, but if clang needs help
finding lld when it is used, we should just pass it flags / put it in
the resource dir. Providing it at build time increases critical path
length for no good reason.
----
A note on `nativeCC`:
`stdenv` takes tools from the previous stage, so:
1. `pkgsBuildBuild`: `(?1, x, x)`
2. `pkgsBuildBuild.stdenv.cc`: `(?0, ?1, x)`
while:
1. `pkgsBuildBuild`: `(?1, x, x)`
2. `pkgsBuildBuild.targetPackages`: `(x, x, ?2)`
3. `pkgsBuildBuild.targetPackages.stdenv.cc`: `(?1, x, x)`
In a typical build environment the toolchain will use the value of the
MACOSX_DEPLOYMENT_TARGET environment variable to determine the version
of macOS to support. When cross compiling there are two distinct
toolchains, but they will look at this single environment variable. To
avoid contamination, we always set the equivalent command line flag
which effectively disables the toolchain's internal handling.
Prior to this change, the MACOSX_DEPLOYMENT_TARGET variable was
ignored, and the toolchains always used the Nix platform
definition (`darwinMinVersion`) unless overridden with command line
arguments.
This change restores support for MACOSX_DEPLOYMENT_TARGET, and adds
nix-specific MACOSX_DEPLOYMENT_TARGET_FOR_BUILD and
MACOSX_DEPLOYMENT_TARGET_FOR_TARGET for cross compilation.
Instead of always supplying flags, apply the flags as defaults. Use
clang's native flags instead of lifting the linker flags from binutils
with `-Wl,`.
If a project is using clang to drive linking, make clang do the right
thing with MACOSX_DEPLOYMENT_TARGET. This can be overridden by command
line arguments. This will cause modern clang to pass
`-platform_version 10.12 0.0.0`, since it doesn't know about the SDK
settings. Older versions of clang will pass down `-macos_version_min`
flags with no sdk version.
At the linker layer, apply a default value for anything left
ambiguous. If nothing is specified, pass a full
`-platform_version`. If only `-macos_version_min` is specified, then
lock down the sdk_version explicitly with `-sdk_version`. If both a min
version and an sdk version are passed, do nothing.
The `docker load` command supports loading tarballs that contain
multiple docker images with their respective image names and tags. This
enables distributing these images as a single file which simplifies the
release of software when an application requires multiple services to
run.
However, pkgs.dockerTools only creates tarballs with a single docker
image, and there is no mechanism in nixpkgs to combine the created
tarballs. This commit implements merging of tarballs in a way that is
compatible with `docker load`.
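A hypothetical usage sketch; the helper name `dockerTools.mergeImages`
and the two example images are illustrative assumptions, not
necessarily the exact interface added here:

```
{ pkgs ? import <nixpkgs> {} }:

# Assumed helper name; produces one tarball that `docker load` accepts.
pkgs.dockerTools.mergeImages [
  (pkgs.dockerTools.buildImage { name = "service-a"; contents = [ pkgs.hello ]; })
  (pkgs.dockerTools.buildImage { name = "service-b"; contents = [ pkgs.coreutils ]; })
]
```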
Since 03eaa48 added perl.withPackages, there is a canonical way to
create a perl interpreter from a list of libraries, for use in script
shebangs or generic build inputs. This method is declarative (what we
are doing is clear), produces short shebangs[1] and does not need to
wrap existing scripts.
Unfortunately there are a few exceptions that I've found:
1. Scripts that call perl with the -T switch. This makes perl
ignore PERL5LIB, which is what perl.withPackages uses to inform
the interpreter of the library paths.
2. Perl packages that depend on libraries in their own path. This
is not possible because perl.withPackages works at build time. The
workaround is to add `-I $out/${perl.libPrefix}` to the shebang.
In all other cases I propose to switch to perl.withPackages.
[1]: https://lwn.net/Articles/779997/
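For example, a wrapped script might be generated like this (the LWP
module and the script body are illustrative):

```
{ pkgs ? import <nixpkgs> {} }:

# The shebang points at an interpreter that already knows its library paths.
pkgs.writeScriptBin "get-url" ''
  #!${pkgs.perl.withPackages (p: [ p.LWP ])}/bin/perl
  use LWP::Simple;
  print get($ARGV[0]);
''
```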
Without this fix, I can no longer build anything with releaseTools.nixBuild {}. A job typically fails with:
$ nix-build release.nix -A build.basic.x86_64-linux --show-trace
error: while evaluating the attribute 'lib' of the derivation 'libnixxml-0.1pre1234' at /home/sander/teststuff/nixpkgs/pkgs/build-support/release/nix-build.nix:89:5:
cannot coerce a set to a string, at /home/sander/teststuff/nixpkgs/pkgs/build-support/release/nix-build.nix:89:5
This is caused by the fact that `lib' is propagated as a parameter, which is a function. Functions cannot be converted to strings.
For images running on Kubernetes, there is no guarantee on how duplicate
environment variables in the image config will be handled. This seems
to be different from Docker, where the last environment variable value
is consistently selected.
The current code for `streamLayeredImage` was exploiting that assumption
to easily propagate environment variables from the base image, leaving
duplicates unchecked. It should rather resolve these duplicates to
ensure consistent behavior on Docker and Kubernetes.
It is now possible to pass a `fromImage` to `buildLayeredImage` and
`streamLayeredImage`, similar to what `buildImage` currently supports.
This will prepend the layers of the given base image to the resulting
image, while ensuring that at most `maxLayers` are used. It will also
ensure that environment variables from the base image are propagated
to the final image.
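A rough sketch of the new parameter; the base image construction,
contents and layer count are illustrative:

```
{ pkgs ? import <nixpkgs> {} }:

let
  base = pkgs.dockerTools.buildImage {
    name = "base";
    contents = [ pkgs.bashInteractive ];
  };
in
pkgs.dockerTools.buildLayeredImage {
  name = "my-app";
  fromImage = base;          # base layers are prepended, its Env is propagated
  contents = [ pkgs.hello ];
  maxLayers = 100;           # total layer count, including the base layers
}
```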
The check for including the C++ standard library headers was nested inside the
check for linking with the C++ standard library. As a result, the `-nostdlib`
flag incorrectly implied `-nostdinc++`, which made it virtually impossible to
partially link C++ objects.
runCommandWith receives an attribute set of options (which previously
were positional arguments of runCommand') and a buildCommand. This
allows the stdenv that is used to be overridden freely (so stuff like
llvmPackages.stdenv can be used). Additionally, the possibility to
change the arguments passed to stdenv.mkDerivation is made more
explicit via the derivationArgs argument.
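A sketch of how the new interface might be used (the stdenv choice and
attribute values are illustrative):

```
{ pkgs ? import <nixpkgs> {} }:

pkgs.runCommandWith {
  name = "example";
  stdenv = pkgs.llvmPackages.stdenv;   # the stdenv can be overridden freely
  derivationArgs = {
    passAsFile = [ "data" ];
    data = "hello world";
  };
} ''
  cat "$dataPath" > "$out"
''
```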
Previously it was awkward to use the runCommand-variants with
passAsFile as a double definition of passAsFile would potentially
break runCommand: passAsFile would overwrite the previous definition,
defeating the purpose of setting it in runCommand in the first place.
This is now fixed by concatenating the [ "buildCommand" ] list with
the one from env, if present.
Adjust buildEnv where passAsFile = null; was passed in some cases,
breaking evaluation since it'd evaluate to [ "buildCommand" ] ++ null.
Commit df4761 added a call to readlink, which fails if it is not in the
user's path when run. Updated the readlink call to pull from the
coreutils store path directly.
When using `buildLayeredImage`, it is not possible to specify an image
name of the form `<registry>/my/image`, although it is a valid name.
This is due to derivations under `buildLayeredImage` using that image
name as their derivation name, but slashes are not permitted in that
context.
A while ago, #13099 fixed that exact same problem in `buildImage` by
using `baseNameOf name` in derivation names instead of `name`. This
change does the same thing for `buildLayeredImage`.
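So, for example, an image name like the following now works (the
registry and path are made up):

```
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildLayeredImage {
  # Slashes stay in the image name but no longer leak into derivation names.
  name = "registry.example.com/my/image";
  contents = [ pkgs.hello ];
}
```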
`stream_layered_image.py` currently assumes that the store root will be
at `/nix/store`, although the user might have configured this
differently. This makes `buildLayeredImage` unusable with stores having
a different root, as they will fail an assertion in the python script.
This change updates that assertion to use `builtins.storeDir` as the
source of truth about where the store lives, instead of assuming
`/nix/store`.
- This is the first package which uses Dune in order to build and install,
  so I had to refactor build-support/coq/default.nix in order to support it.
- I added a new feature: one can now leave release.v.sha256 empty to try to
  download with a fake sha256, so that failures are reported and one can
  copy-paste the sha256 given by the error message.
- I updated the documentation of languages-frameworks/coq.section.md accordingly.
Fixes build failures with clang:
clang-7: error: unknown argument: '-fPIC -target'
clang-7: error: no such file or directory: '@<(printf %qn -O2'
clang-7: error: no such file or directory: 'x86_64-apple-darwin'
Introduced by 60c5cf9cea in #112449
Since #112276, we should always put `makeWrapper` in
`nativeBuildInputs`. But `buildEnv` was saying put it in `buildInputs`.
That's wrong!
Fix the instructions, and make the right thing possible.
The `checkType` argument of buildRustPackage was not used anymore
since the refactoring of `buildRustPackage` into hooks. This was
an oversight that is fixed by this change.
The check type can also be passed directly to cargoCheckHook using the
`cargoCheckType` environment variable.
This change makes the wrapper script avoid displaying echo area messages
during startup. This helps prevent split-second UI glitches early in the
startup process. The messages themselves will still be logged and
therefore will not hamper inspection for debugging purposes.
Preserve top-level symlinks such as /lib -> /usr/lib.
This allows nested containers such as Steam's new runtime to remount
/usr if they need to and then run unmodified binaries that reference
e.g. /lib/ld-linux-x86-64.so.2
Before, we would mount the fully resolved host directory at /lib and
thus the dynamic loader would always be the one from the host filesystem.
The reason for this change is simply to avoid the following messages
that are unnecessary and can be confusing (and these messages will be
repeated for each submodule):
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
With this change the behaviour remains unchanged (apart from the
suppressed "warning" in the console output, of course), so it doesn't
cause any hashes to change. By default nix-prefetch-git uses the
"fetchgit" branch anyway (branchName can be set to override the
default):
Switched to a new branch 'fetchgit'
For that reason the initial branch name doesn't matter anyway, and
since we're not relying on / hardcoding "master" we could simply switch
to "main" (which seems most common nowadays). See [0] for more details
on why this wouldn't break anything.
However, since the initial branch name doesn't matter and to avoid any
additional risks, it was "decided" to keep using "master" (see #113313).
[0]: https://github.com/NixOS/nixpkgs/pull/113313#issuecomment-780589516
API change:
`cargoParallelTestThreads` suggests that this attribute sets the
number of threads used during tests, while it is actually a boolean
option (use 1 thread or NIX_BUILD_CORES threads). In the hook, this
is replaced by a more canonical name `dontUseCargoParallelTests`.
The directory in the tarball of vendored dependencies contains `name`,
which is by default set to `${pname}-${version}`. This adds an
additional attribute to permit setting the name to something of the
user's choosing.
Since `cargoSha256`/`cargoHash` depend on the name of the directory of
vendored dependencies, `cargoDepsName` can be used to e.g. make the
hash invariant to the package version by setting `cargoDepsName =
pname`.
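A minimal sketch of that pattern (pname, version, source and hash are
placeholders):

```
{ lib, rustPlatform }:

rustPlatform.buildRustPackage rec {
  pname = "example";
  version = "0.1.0";
  src = ./.;
  cargoSha256 = lib.fakeSha256;   # replace with the hash reported by the first build
  # Name the vendored-deps directory after pname only, so the hash
  # does not change on every version bump.
  cargoDepsName = pname;
}
```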
The previous commit stopped systemd from looking for system units in
/etc/systemd-mutable/system, which was a Dysnomia-specific path.
While this script doesn't seem to be used anywhere inside nixpkgs (not
even in Dysnomia, which is gone since #110799), its fallback mode (used
when /etc/systemd/system is read-only) did write units to that
Dysnomia-specific path, which systemd now doesn't look at anymore.
It might be up for another debate whether systems with a read-only
/etc/systemd/system should just use /run/systemd/system rather than
some NixOS-specific path, as such conditions can happen on other
distros too. But let's pick the other NixOS-specific path,
/nix/var/nix/profiles/default/lib/systemd/system, for now, which is
probably better than a path that surely is never looked at.
- API change: remove the `target` argument of `buildRustPackage`, the
target should always be in sync with the C/C++ compiler that is used.
- Gathering of binaries has moved from `buildPhase` to `installPhase`,
this simplifies the hook and orders this functionality logically
with the installation logic.
This caused shebangs that were already store paths to be rewritten.
Introduced by ab4c359822 in #94642
Example difference:
$ echo "hello world" | tail -c+3
llo world
$ str="hello world"; echo ${str:3}
lo world
/usr/bin/env seems to no longer be present in the sandbox. This means
that fetchcvs would fail with a “not found” error whenever CVS_RSH was
necessary.
We fix this by simply setting the current $SHELL as shebang.
Alternatively also setting it to /bin/sh statically would be possible.
4a5c49363a added some more commands after
`extraPostFetch` but concatenated them without a separating newline.
Which means that, since that commit,
fetchzip { ..., extraPostFetch = ''rm -f "$out"/some-file''; }
now actually runs the following shell command
rm -f "$out"/some-file"chmod -R a-w "$out"
thus deleting "$out". Which is very unfortunate.
Especially since this actually happens on master for all `fetchFromBitbucket`
derivations. But since the results are fixed-output, users building with the
hydra cache enabled are not hitting this yet for derivations that haven't
been updated recently.
The Everything module is not part of a library and should therefore
not be copied to the nix store.
This is particularly bad if the Everything module is defined in
a directory included by an agda library, e.g. consider an agda-lib with
include: .
and Everything.agda in the project root (.), in which case the
Everything module would become part of the library.
If multiple such projects are in the dependency tree, the Everything
module becomes ambiguous and the build would fail.
The `platform` field is pointless nesting: it's just stuff that happens
to be defined together, and that should be an implementation detail.
This instead makes `linux-kernel` and `gcc` top level fields in platform
configs. They join `rustc` there [all are optional], which was put there
and not in `platform` in anticipation of a change like this.
`linux-kernel.arch` in particular also becomes `linuxArch`, to match the
other `*Arch`es.
The next step after this is to combine the *specific* machines from
`lib.systems.platforms` with `lib.systems.examples`, keeping just the
"multiplatform" ones for defaulting.
continuation of #109595
pkgconfig was aliased in 2018; however, it remained in
all-packages.nix due to its wide usage. This cleans
up the remaining references to pkgs.pkgconfig and
moves the entry to aliases.nix.
python3Packages.pkgconfig remained unchanged because
it's the canonical name of the upstream package
on pypi.
Since fdf32154fc, we no longer allow
missing modules in the initrd. Unfortunately, since before that commit,
the modules-closure script would also fail on missing firmware, which
is a very common case (e.g. xhci-pci.ko.xz lists renesas_usb_fw.mem as
dependent firmware). Fix this by only issuing a warning instead.
This was already fixed on non-Darwin, but the fix missed that it was
also reintroduced for the Darwin code path at the same time.
Fixes: dd5d2482c9 ("emacs: Fix accidental double wrapping")
Warning about future breaking changes is wrong.
- It suggests that the maintainers don't value backwards compatibility.
They do.
- It implies that other parts of Nixpkgs won't ever break. They will.
- It implies that a well-defined "public" interface exists. It doesn't.
- If the reasons above didn't apply, it should have been in the manual
instead.
Breaking changes will come, especially to the interface. That can be the
only way we can make progress without breaking the image _contents_.
I don't think dockerTools is any different from most of Nixpkgs in
these regards.
`buildRustPackage` currently accepts `cargoSha256` as a hash for
vendored dependencies. This change adds `cargoHash` which accepts SRI
hashes, setting `outputHashAlgo` to `null`.
The hash mismatch message still uses `cargoSha256` as an example,
which it probably should until we completely switch to SRI hashes.
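For illustration, the SRI variant might be used like this (all values
are placeholders):

```
{ lib, rustPlatform }:

rustPlatform.buildRustPackage {
  pname = "example";
  version = "0.1.0";
  src = ./.;
  # SRI hash; with cargoHash, outputHashAlgo is left null.
  cargoHash = lib.fakeHash;   # replace with the real "sha256-..." value
}
```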
By default, Perl versions since 5.8.1 use randomization to make hashes
resistant to complexity attacks.
That randomization makes building VM images such as ubuntu1804x86_64
non-deterministic because the (imported) derivations built by
deb/deb-closure.pl are not stable.
This can easily be observed by repeating the following sequence of
commands and noting the path of the image's .drv:
nix-instantiate -E '(import <nixpkgs> {}).vmTools.diskImageFuns.ubuntu1804x86_64 {}'
nix-store --delete /nix/store/*ubuntu-18.04-bionic-amd64.nix
One source of non-determinism is the handling of Provides/Replaces,
which depends on the order of iteration over %packages. Here is a
diff showing the corresponding change in output:
>>> awk
-virtual awk: using original-awk
- original-awk: libc6 (>= 2.14)
+virtual awk: using mawk
+ mawk: libc6 (>= 2.14)
- mawk: libc6 (>= 2.14)
->>> libc6
This patch sorts packages by name for Provides/Replaces processing,
which seems to result in stable output.
(If the above turns out not to be sufficient, one could also set the
PERL_HASH_SEED and PERL_PERTURB_KEYS environment variables, documented
in 'perlrun', to disable Perl's built-in randomization. Complexity
attacks are not an issue as we control and trust all inputs.)
Using the full store hash as the random seed occasionally caused
reference cycles when the invocation was stored in output artifacts.
For example, cross-compiled gcc was failing due to this:
https://hydra.nixos.org/eval/1631713#tabs-now-fail
Simply truncating the hash is sufficient to avoid this.
When invoking a simple Ada program with `gcc` from `gnats10`, the
following warnings are shown:
```
$ gcc -c conftest.adb
gnat1: warning: command-line option ‘-Wformat=1’ is valid for C/C++/ObjC/ObjC++ but not for Ada
gnat1: warning: command-line option ‘-Wformat-security’ is valid for C/C++/ObjC/ObjC++ but not for Ada
gnat1: warning: ‘-Werror=’ argument ‘-Werror=format-security’ is not valid for Ada
$ echo $?
0
```
On its own this is merely spammy when compiling Ada programs inside a
Nix derivation, but certain configure scripts (such as the ./configure
script of the gcc that's built by coreboot's `make crossgcc` command)
fail entirely when they encounter that warning output.
https://nixos.wiki/wiki/Coreboot currently suggests manually running
> NIX_HARDENING_ENABLE="${NIX_HARDENING_ENABLE/ format/}" make crossgcc
… but actually teaching the nixpkgs-provided cc wrapper that `format`
isn't supported as a hardening flag seems to be the more canonical way
to do this in nixpkgs.
After this, Ada programs still compile:
```
$ gcc -c conftest.adb
$ echo $?
0
```
And the compiler output is empty.
As @lopsided98 points out in #105305, since the hashes are now target
sensitive, and until we find a reason to actually care to test what they
are exactly, we are best off just normalizing them away in the tests.
- Generate a link to the initramfs file with an appropriate file
extension, guessed based on the compressor by default
- Use correct metadata in u-boot images if generated, up to now this
was hardcoded to gzip and would silently generate an erroneous image
if another compressor was specified
- Document all the parameters
- Improve cross-building compatibility by allowing the "compressor"
  argument to be either a string, as before, or a function taking a
  package set and returning the path to a compressor.
- Support more compression algorithms
- Place compressor executable function and arguments in passthru, for
reuse when appending initramfses
Co-Authored-By: Dominik Xaver Hörl <hoe.dom@gmx.de>
Originally this was meant to support other Windows versions than just
Windows XP, but before I actually got a chance to implement this I left
the project that I implemented this for.
The code has been broken for years now and I highly doubt anyone is
interested in resurrecting this (including me), so in order to make this
less of a maintenance burden for everybody, let's remove it.
Signed-off-by: aszlig <aszlig@nix.build>
Docker (via containerd) and the OCI Image Configuration imply and
suggest, respectively, that the architecture set in images matches the
GOARCH values in the Go language documentation.
This changeset updates the implementation of getArch in dockerTools to
return GOARCH values, to satisfy Docker.
Fixes: #106695
If I'm running an Emacs executable from emacsWithPackages as my main
programming environment, and I'm hacking on Emacs, or the Emacs
packaging in Nixpkgs, or whatever, I don't want the Emacs packages
from the wrapper to show up in the load path of that child Emacs. It
results in differing behaviour depending on whether the child Emacs is
run from Emacs or from, for example, an external terminal emulator,
which is very surprising.
To avoid this, pass another environment variable containing the
wrapper site-lisp path, and use that value to remove the corresponding
entry in EMACSLOADPATH, so it won't be propagated to child Emacsen.
An empty entry in EMACSLOADPATH gets filled with the default value.
This is presumably why the wrapper inserted a colon after the entry it
added for the dependencies. But this naive approach wasn't always
correct.
For example, if the user ran emacs with EMACSLOADPATH=foo, the wrapper
would insert the default value (by adding the trailing `:') even
though the user was trying to expressly opt out of it.
To do this correctly, here I've replaced makeWrapper with a bespoke
script that will actually parse the EMACSLOADPATH provided in the
environment (if given), and insert the wrapper's load path just before
the default value. If EMACSLOADPATH is given but contains no default
value, we respect that and don't add the wrapped dependencies at all.
If no EMACSLOADPATH is given, we insert the wrapped dependencies
before the default value, just like before. In this way, the wrapped
Emacs should now behave as if the wrapped dependencies were part of
Emacs's default load-path value.
Previously, meta wasn't being passed through at all, because it's
removed from args without being used anywhere. This made it so that
rcirc-menu wasn't being marked as broken even though it was supposed
to be.
This patch copies the meta handling from melpaBuild, including the
default home page (adapted for ELPA).
Use "find -exec" to strip rather than "find … | xargs …". The former
ensures that stripping is attempted for each file, whereas the latter
will stop stripping at the first failure. Unstripped files can fool
runtime dependency detection and bloat closure sizes.
There are a few operations in this library that naively run on every
iteration even though they could be cached.
For a simple test repository with a small number of files and ~1000
gitignore patterns this brings memory usage down from ~233M to ~157M
and wall time from 2.6s down to 0.78s.
This should scale similarly with the number of files in a repository.
For example graphviz has chained symlinked manpages: dot2gxl.1 is
a symlink to gv2gxl.1 which is a symlink to gxl2gv.1
The second loop replaces each non-compressed symlink with a compressed
symlink. The target is determined with 'readlink -f', which follows
links recursively until the first name that is not a link (so either
the 'target name' or the first 'dangling' symlink).
This means that if the loop converted dot2gxl.1 before converting
gv2gxl.1 it would add a symlink `dot2gxl.1.gz->gxl2gv.1.gz`. When
it converted gv2gxl.1 first, it would then add a
`dot2gxl.1.gz->gv2gxl.1.gz` symlink.
Both are 'correct', but it's weird the result depends on the order
in which 'find' returns the files. This PR makes the behaviour
deterministic.
Fixes #104708
This is a workaround for NixOS/nix#4295, which caused single-user Linux
Nix installations using sandboxed builds to start failing to build
fetchzip derivations after 4a5c49363a.
In short: removing write permissions for the entire directory is great,
except we then can't rename(2) it to the final Nix store path out of the
sandbox, because we don't have write permission on the directory and
thus cannot update the ".." directory entry.
Before this change, NUM_JOBS was set to 1. Some crates for building
C/C++ code (e.g. the cc and cmake crates) rely on this variable to
set the number of jobs. As a consequence, we were compiling embedded
libraries serially. Change this to NIX_BUILD_CORES to permit parallel
builds.
Prior discussion:
https://github.com/NixOS/nixpkgs/pull/50452#issuecomment-439407547
This provides an /etc/passwd and /etc/group that contain root and nobody.
Useful when packaging binaries that insist on using nss to look up
usernames/groups (like nginx).
The current nginx example used the `runAsRoot` parameter to set up
/etc/group and /etc/passwd (a parameter that doesn't exist in
buildLayeredImage), so we can now just use fakeNss there and use
buildLayeredImage.
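Roughly, with fakeNss the example can look like this (the exact nginx
invocation is illustrative):

```
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildLayeredImage {
  name = "nginx-container";
  # fakeNss supplies /etc/passwd and /etc/group containing root and nobody.
  contents = [ pkgs.nginx pkgs.dockerTools.fakeNss ];
  config.Cmd = [ "${pkgs.nginx}/bin/nginx" "-g" "daemon off;" ];
}
```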