We were using the 'Combined Image JSON + Filesystem Changeset Format' [1] to
unpack and pack images; this patch switches to the format used by the registry.
We used the 'repository' file, which is not generated by Skopeo when it
pulls an image. Moreover, all the information in this file is also present
in the manifest.json file.
We therefore use the manifest.json file instead of the 'repository' file. Note
also that the manifest.json file is required to push an image with Skopeo.
Fix #29636
[1] 749d90e10f/image/spec/v1.1.md (combined-image-json--filesystem-changeset-format)
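For illustration only (the file name image.tar is a placeholder, not taken from this
patch), the manifest.json inside an image tarball, e.g. one written by Skopeo with the
docker-archive transport, can be printed like this; each entry names the image's Config
JSON, its RepoTags and its Layers, which is why the legacy 'repository' file adds nothing:

    # Print the manifest of a previously fetched docker-archive tarball.
    tar -xOf image.tar manifest.json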
The database dump doesn't contain sha and size. This leads to invalid
paths in the container. We have to fix the database by using
nix-store.
Note that a better way to do this is available in Nix 1.12 (since the
database dump there contains all required information).
We also add the contents' output paths to the gcroots since they can be used
by the container.
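A rough sketch of the fix-up step (the file names store-paths and registration, and the
line format written out, are placeholders of this sketch, not the commit's actual format):
the missing hash and size are queried back from the host store for every content path
before the database is loaded into the container:

    # For each store path copied into the layer, recover the fields the
    # dump is missing so a valid registration can be rebuilt.
    while read -r path; do
      hash=$(nix-store --query --hash "$path")
      size=$(nix-store --query --size "$path")
      printf '%s %s %s\n' "$path" "$hash" "$size" >> registration
    done < store-paths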
Currently, the contents closure is copied to the layer but there is no
nix database initialization. If pkgs.nix is added to the contents,
nix-store doesn't work because there is no nix database.
From the contents of the layer, this commit generates and loads the
database into the nix store of the container. This only works if there
is no parent layer that already has a nix store (to support several
nix layers, we would have to merge the nix databases of the parent layers).
We also add an example to play with the nix store inside the
container. Note that `more` seems to be a missing dependency of the nix
package!
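The general idea can be sketched as follows (the nix-path-registration file name and the
NIX_REMOTE form are assumptions of this sketch, not necessarily what the commit uses):
while the image is being assembled, the registration information generated from the
layer's contents is loaded into the image's own database, after which nix-store works
inside the running container:

    # Register the copied closure in the image's /nix/var/nix/db so that
    # nix-store commands inside the container see a consistent store.
    export NIX_REMOTE=local?root=/
    nix-store --load-db < /nix-path-registration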
1. `crossDrv` is now the default, so we don't need to worry about that in
build != host builds.
2. `shell` is the build-time shell, so `wrapCCCross` doesn't need to
worry, as build == host.
3. `shell.shellPath` will always be appended where useful.
4. Complicated `shell == ""` logic served no purpose.
Fetch into $out and remove all version control files to make the output
deterministic (.repo and all .git subdirectories - e.g. the .git/index
files change every time).
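The cleanup amounts to something like the following sketch, run after the sources have
landed in $out (the exact commands are illustrative, not copied from the patch):

    # Drop the metadata whose contents differ between otherwise identical
    # fetches: the .repo state and every .git directory in the checkout.
    rm -rf "$out/.repo"
    find "$out" -type d -name .git -prune -exec rm -rf {} +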
Additionally I've changed the default of "useArchive" to false because
fetching with "--archive" will fail for some projects (e.g.
"platform/external/iosched" from the AOSP).
Now, this function should hopefully work for every tag of the AOSP.
Before this patch, a VM was used to spawn a Docker daemon that pulled the
image. Now the tool Skopeo does this job well, so we can simplify our
dockerTools since we don't need Docker anymore :)
This also fixes the regression described in
https://github.com/NixOS/nixpkgs/issues/29271 : the cntlm proxy doesn't
work in 17.09 while it worked in 17.03.
Note that Skopeo doesn't produce the same output as docker pull, so we
have to update the sha256 hashes.
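As an illustration only (the image name is a placeholder), this is the kind of operation
Skopeo performs in place of the old VM: it talks to the registry itself, so no Docker
daemon has to be started at build time:

    # Inspect an image's manifest straight from the registry...
    skopeo inspect docker://docker.io/library/hello-world:latest
    # ...or copy it into a local docker-archive tarball.
    skopeo copy docker://docker.io/library/hello-world:latest docker-archive:./hello.tar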
This was a problem when run inside a sandbox, e.g. via
"fetchRepoProject". The error message from repo seems unrelated:
fatal: Cannot get https://gerrit.googlesource.com/git-repo/clone.bundle
fatal: error no host given
But the exception is actually thrown due to missing certificates
(/etc/ssl/certs). It should be possible to provide another location via
environment variables (e.g. SSL_CERT_FILE, REQUESTS_CA_BUNDLE or
CURL_CA_BUNDLE) but apparently that doesn't actually work for some
reason (would have to study our Python packaging).
Now "fetchRepoProject" works without the "--no-clone-bundle" option.
The verification was failing with the following error:
gpg: keyblock resource '/tmp/nix-build-XYZ.drv-0/.repo/repo/./.repoconfig/gnupg/pubring.kbx': No such file or directory
Using an absolute path for $HOME fixes this.
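In other words (the variable names below are illustrative), the fix boils down to making
sure $HOME is an absolute directory before repo runs gnupg:

    # With a relative $HOME, gpg ends up looking for pubring.kbx under the
    # broken path quoted above; an absolute $HOME avoids that.
    export HOME="$PWD"
    repo init --manifest-url="$manifestUrl" --manifest-branch="$rev"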
And since 175ecbab91 the dependencies on
"git" and "gnupg" are no longer required, as "gitRepo" already covers
them.
This reverts commit 0a944b345e, reversing
changes made to 61733ed6cc.
I dislike these massive stdenv changes with unclear motivation,
especially when they involve gratuitous mass renames like NIX_CC ->
NIX_BINUTILS. The previous such rename (NIX_GCC -> NIX_CC) caused
months of pain, so let's not do that again.
cctools' as needs to be told to use GNU as, or else we'd need a
dependency cycle between cctools and clang for this case.
In general, this is not a problem because clang uses its own integrated
assembler where possible, and GNU as otherwise.
To wait for the Docker daemon, curl requests were sent. However, if an
HTTP proxy is set, it responds instead of the Docker daemon.
To avoid this, we run a `docker ps` command instead of a curl request.
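A minimal sketch of such a wait loop (the timeout handling a real test script would add is
omitted): the probe goes through the docker client itself, so a proxy configured in the
environment cannot answer in the daemon's place the way it could for curl:

    # Poll until the daemon answers.
    until docker ps > /dev/null 2>&1; do
      sleep 1
    done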
This becomes necessary if more wrappers besides cc-wrapper start
supporting hardening flags. It is also good to turn the warning into an
error.
Also ensure the interface is used correctly: not as a string, and not just
in bash.