Without this, the generated pack files are non-deterministic.
I didn't notice this issue in my earlier testing, because my test repo
had too few commits for thread-scheduling effects to show up. (Test repo
had about 10 commits.)
Add more files to the delete list:
* .git/FETCH_HEAD
* .git/ORIG_HEAD
* .git/refs/remotes/origin/HEAD
* .git/config
Further, remove all remote branches, remove tags not reachable from the
given 'rev', do a full repack and then garbage collect unreferenced
objects.
According to my testing, the result is fully deterministic. As in "any
change done to the upstream repo, ahead of 'rev', will not affect the
hash of the resulting 'clone'". Even changing the clone URL will not
change the output hash, because .git/config is removed.
A new version of git can of course change the store format, but that's
unavoidable.
For big repositories, the repack operation may be a bit heavy. But as
far as I can see there is no cheaper way to achieve determinism.
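As an illustration, a pinned fetchgit call now only needs 'rev' and the
output hash; the URL, revision and hash below are placeholders:

  fetchgit {
    # changing the URL no longer changes the output hash
    url = "https://example.org/some/repo.git";
    # a full commit hash; upstream may move ahead of it freely
    rev = "0000000000000000000000000000000000000000";
    # placeholder output hash over the determinized clone
    sha256 = "0000000000000000000000000000000000000000000000000000";
  }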
Hydra generates a GHC closure for Darwin that for no apparent reason
contains an ancient, broken Haddock binary -- probably because of an
impurity in the build system. That bug makes those GHC binaries
unusable: <https://github.com/NixOS/nixpkgs/issues/2689>.
This likely exacerbates the non-determinism in ghc package ids, so until
that is fixed let's live with the slow builds.
This reverts commit 817c0e4144.
This should fix the OpenJDK build, which was failing because paxctl is
in sbin and therefore not automatically added to $PATH.
http://hydra.nixos.org/build/15658346
This patch makes two changes.
(1) It memoizes the computation of dependsOnOld.
(2) It replaces rewrittenDerivations with a similar memoized table rewriteMemo.
This prevents the entire tree of run-time dependencies from being traversed and instead only traverses the graph of run-time dependencies.
In the case of deep dependency changes (such as changing one's bash version for an entire NixOS system) this can lead to an exponential speedup in processing time
because shared dependencies are no longer traversed multiple times.
This patch isn't quite derivation-per-derivation equivalent to the original computation.
There are two immaterial differences.
(1) The previous version would always call upon sed to replace oldDependency with newDependency even when the store object being updated doesn't directly depend on
oldDependency.
The new version only replaces oldDependency with newDependency when the store object being updated actually directly depends on oldDependency (which means there is
actually a hash to replace).
(2) The previous version would list the old store object as a source input of the new store object, *except* for the root derivation being updated. Because the
root derivation being updated has its actual derivation available, the previous version would make the updated root derivation depend on the old derivation as a
derivation input instead of a source input.
The new version always lists the old store object as a source input, including the root derivation.
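Roughly, the memoization builds an attribute set keyed on store paths, so
Nix's laziness guarantees each dependency is rewritten at most once. A
minimal sketch of the idea, with hypothetical names ('runtimeDeps',
'rewrite') standing in for the real logic:

  rewriteMemo = lib.listToAttrs (map
    (dep: { name = dep; value = rewrite dep; })
    runtimeDeps);
  # later lookups reuse the memoized result instead of re-traversing
  rewritten = dep: rewriteMemo.${dep};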
We still need this because some clang-based packages depend on
it. (The sysroot filtering was originally done by clang-wrapper's
ld-wrapper, but we merged the ld-wrappers in
a4f9b9c8b5ec9ef106671ffdf93e0059835d0ec1.)
http://hydra.nixos.org/build/13906922
Fixes a regression on OS X introduced by f83af95.
Don't use --tmpdir for mktemp, because that flag doesn't exist on OS X.
However, using -t is deprecated in GNU coreutils, so as suggested by
@ip1981 we're now using parameter expansion on ${TMPDIR:-/tmp}, which
falls back to /tmp if TMPDIR is not set, and passing that to mktemp instead.
Also use this approach for nix-prefetch-cvs now in order to stay
consistent.
Reported-by: Vladimir Kirillov <proger@wilab.org.ua>
Tested-by: Igor Pashev <pashev.igor@gmail.com>
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Instead of relying on $$ to not collide with an existing path.
Quoting the Bash manual about $$:
> Expands to the process ID of the shell. In a () subshell, it expands
> to the process ID of the current shell, not the subshell.
So, this is different from $BASHPID:
> Expands to the process ID of the current bash process. This differs
> from $$ under certain circumstances, such as subshells that do not
> require bash to be re-initialized.
But even $BASHPID is prone to race conditions if the process IDs wrap
around, so to be on the safe side, we're using mktemp here.
Closes #3784.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
For now, we don't change NATIVE_SYSTEM_HEADER_DIR because it breaks the
build. However, it points to Glibc in the Nix store (not /usr/include)
so it's kind of okay.
Now gcc is just another build input, making it possible in the future
to have a stdenv that doesn't depend on a C compiler. This is very
useful on NixOS, since it would allow trivial builders like
writeTextFile to work without pulling in the C compiler.
If $src refers to a directory, then always copy it. Previously, we
checked the extension first, so if the directory had an extension like
.tar, unpackPhase would fail.
There were a few files containing timestamps, so we now remove them.
That shouldn't be a problem for the logs; the index, however, might be.
Anyway, that's better than nothing.
Having a separate clang-wrapper is really unfortunate because it
means that we'll forever forget to apply changes to both (e.g.
commit 289895fe2c). This commit
gets rid of the redundant copies of ld-wrapper.sh and utils.sh.
Somewhere the no-sys-dirs.patch got disabled, so gcc was looking in
/usr/local/include and /usr/lib. Since I can't fix the patch easily,
I've borrowed the --sysroot trick from clang-wrapper. This causes
builtin paths to be prefixed with /var/empty
(e.g. /var/empty/usr/lib), which don't exist.
This updates the new stable kernel to 3.14, and the new testing kernel
to 3.15.
This also removes the vserver kernel, since it's probably not nearly as
widely used.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
If the user explicitly gives a ref such as "refs/heads/master", `git
rev-parse` fails because we only checked out the `fetchgit`
branch. Now, we also try `git rev-parse fetchgit` if the first call
fails, which fixes the issue.
HipChat (or rather its copy of Qt) expects to find keyboard data in
/usr/share/X11/xkb. So use a LD_PRELOAD library to intercept and
rewrite the Glibc calls that access those paths. We've been doing the
same thing with packages like Spotify, but now this functionality has
been abstracted into a reusable library, libredirect.so. It uses an
environment variable $NIX_REDIRECTS containing a colon-separated list
of path prefixes to be rewritten, e.g. "/foo=bar:/xyzzy=/fnord".
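For instance, a package could hook this up roughly as follows (a sketch;
the exact libredirect output path and the xkeyboard_config mapping are
assumptions, and wrapProgram comes from makeWrapper):

  postFixup = ''
    wrapProgram $out/bin/hipchat \
      --set LD_PRELOAD "${libredirect}/lib/libredirect.so" \
      --set NIX_REDIRECTS "/usr/share/X11/xkb=${xkeyboard_config}/share/X11/xkb"
  '';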
This now provides a handful of different grsecurity kernels for slightly
different 'flavors' of packages. This doesn't change the grsecurity
module to use them just yet, however.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
fetchpatch is a variant of fetchurl that determinizes the patch.
Some parts of generated patches change from time to time, e.g. see #1983 and
http://comments.gmane.org/gmane.linux.distributions.nixos/12815
Using fetchpatch should prevent the hash from changing.
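Usage mirrors fetchurl; for instance (placeholder URL and hash):

  patches = [
    (fetchpatch {
      url = "https://example.org/project/fix-build.patch";
      sha256 = "0000000000000000000000000000000000000000000000000000";
    })
  ];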
Conflicts (auto-solved):
pkgs/development/libraries/haskell/gitit/default.nix
1) Packages formerly called haskell-haskell-platform-ghcXYZ-VVVV.X.Y.Z are
now called haskell-platform-VVVV.X.Y.Z. The latest version can be
installed by running "nix-env -i haskell-platform".
2) The attributes haskellPackages_ghcXYZ.haskellPlatform no longer exist.
Instead, we have attributes like haskellPlatformPackages."2012_4_0_0".
(The last numeric bit must be quoted when used in a Nix file, but not on
the command line to nix-env, nix-build, etc.) The latest Platform has a
top-level alias called simply haskellPlatform.
3) The haskellPackages_ghcXYZ package sets offer the latest version of every
library that GHC x.y.z can compile. For example, if 2.7 is the latest
version of QuickCheck and if GHC 7.0.4 can compile that version, then
haskellPackages_ghc704.QuickCheck refers to version 2.7.
4) All intermediate GHC releases were dropped from all-packages.nix to
simplify our configuration. What remains is a haskellPackages_ghcXYZ set
for the latest version of every major release branch, i.e. GHC 6.10.4,
6.12.3, 7.0.4, 7.2.2, 7.4.2, 7.6.3, 7.8.2, and 7.9.x (HEAD snapshot).
5) The ghcXYZPrefs functions in haskell-defaults.nix now inherit overrides
from newer to older compilers, i.e. an override configured for GHC 7.0.4
will automatically apply to GHC 6.12.3 and 6.10.4, too. This change has
reduced the redundancy in those configuration functions. The downside is
that overriding an attribute for only one particular GHC version has become
more difficult. In practice, this case doesn't occur much, though.
6) The 'cabal' builder has a brand-new argument called 'extension'. That
function is "self : super : {}" by default and users can override it to
mess with the attribute set passed to cabal.mkDerivation. An example use
would be the definition of darcs in all-packages.nix:
| darcs = haskellPackages.darcs.override {
| cabal = haskellPackages.cabal.override {
| extension = self : super : {
| isLibrary = false;
| configureFlags = "-f-library " + super.configureFlags or "";
| };
| };
| };
In this case, extension disables building the library part of the package
to give us an executable-only version that has no dependencies on GHC or
any other Haskell packages.
The 'self' argument refers to the final version of the attribute set and
'super' refers to the original attribute set.
Note that ...
- Haskell Platform packages always provide the Haddock binary that came with
the compiler.
- Haskell Platform 2009.2.0.2 is broken because of build failures in cgi and
cabal-install.
- Haskell Platform 2010.1.0.0 is broken because of build failures in cgi.
This function downloads and unpacks a file in one fixed-output
derivation. This is primarily useful for dynamically generated zip
files, such as GitHub's /archive URLs, where the unpacked content of
the zip file doesn't change, but the zip file itself may (e.g. due to
minor changes in the compression algorithm, or changes in timestamps).
Fetchzip is implemented by extending fetchurl with a "postFetch" hook
that is executed after the file has been downloaded. This hook can
thus perform arbitrary checks or transformations on the downloaded
file.
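A sketch of the intended use, with a placeholder repository and hash; the
hash covers the unpacked contents rather than the archive itself:

  src = fetchzip {
    url = "https://github.com/example/project/archive/v1.0.zip";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };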
This allows fonts to be installed from anywhere in an unzipped file
rather than having to cd deep into the directory and come back out in
order for e.g. `forceCopy` to work correctly.
This ensures that the intermediate machine is shut down only after the
migration has finished writing the memory dump to disk, so we don't end
up with empty state files because the VM was shut down before the
migration had actually finished.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This ensures that the builder isn't waiting forever if the Windows VM
drops dead while we're waiting for the controller VM to signal that a
particular command has been executed on the Windows VM. That signal will
never arrive in such a case, so it doesn't make sense to wait for the timeout.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This hook makes it possible to scatter files from $out across multiple outputs.
For the "bin" and "doc" outputs there are predefined default masks, but
they can be overridden by setting files_<outname>, for example:
files_bin = [ "/bin/*" "/lib/libexec/" ];
To take effect, the hook must be specified in buildInputs.
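A rough sketch of what using the hook could look like (the hook's
attribute name and the 'outputs' list are illustrative assumptions):

  stdenv.mkDerivation {
    name = "example-1.0";
    src = ./.;
    outputs = [ "out" "bin" "doc" ];
    buildInputs = [ multipleOutputsHook ];   # hypothetical attribute name
    files_bin = [ "/bin/*" "/lib/libexec/" ];
    files_doc = [ "/share/doc/*" ];
  }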
These two expressions greatly simplify using the clang-analyzer or
Coverity static analyzer on your C/C++ projects. In fact, they are
identical to nixBuild in every way out of the box, and should 'Just
Work' providing your code can be compiled with Clang already.
The trick is that when running 'make', we actually just alias it to the
appropriate scan build tool, and add a post-build hook that will bundle
up the results appropriately and unalias it.
For Clang, we put the results in $out/analysis and add an 'analysis'
report to $out/nix-support/hydra-build-products pointing to the result
HTML - this means that if the analyzer finds any bugs, the HTML results
will automatically show up in Hydra for easy viewing.
For Coverity, it's slightly different. Instead we run the build tool and
after we're done, we tar up the results in a format that Coverity Scan's
service understands. We put the tarball in $out/tarballs under the name
'foo-cov-int.xz' and add an entry for the file to hydra-build-products
as well for easy viewing.
Of course for Coverity you must then upload the build. A Hydra plugin to
do this is on the way, and it will automatically pick up the
cov-int.tar.xz for uploading.
Note that coverityAnalysis requires allowUnfree = true;, as well as the
cov-build tools, which you can download from https://scan.coverity.com -
they're not linked to your account or anything, it's just an annoying
registration wall.
Note this is a first draft. In particular, scan-build fixes the C/C++
compiler to be Clang, and it's perfectly reasonable to want to use Clang
for the analyzer but have scan-build invoke GCC instead.
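Since both behave like nixBuild out of the box, an invocation could look
roughly like this (the 'clangAnalysis' spelling is an assumption;
coverityAnalysis works analogously):

  analysis = clangAnalysis {
    name = "myproject-analysis";
    src = ./.;
  };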
Signed-off-by: Austin Seipp <aseipp@pobox.com>
This reverts commit a2a398fbda. The
issue *does* still exist in GHC 7.8.2. Compiled binaries have no -rpath
into their own install directory ("$out") and thus cannot find their own
shared libraries. To work around this issue, we pass an explicit -rpath
argument at configure time. We do that only on Linux, though, because
-rpath is known to cause trouble on Darwin, which was the reason I
originally reverted that patch.
This includes a lot of fixes for cross-building to Windows and Mac OS X
and could possibly fix things even for non-cross-builds, like for
example OpenSSL on Windows.
The main reason for merging this in 14.04 already is that we already
have runInWindowsVM in master and it doesn't work until we actually
cross-build Cygwin's setup binary as the upstream version is a fast
moving target which gets _overwritten_ on every new release.
Conflicts:
pkgs/top-level/all-packages.nix
See the comments at f67015cae4
for more information.
Please note: this makes the initrd non-reproducible again, but most people will prefer that over an unbootable system.
The gcc-wrapper doesn't wrap 'cpp'. This breaks some software (such as
Buildroot), because the 'cpp' they get comes from the non-wrapped gcc
package, which doesn't know about any standard include paths.
gcc-cross-wrapper is untested.
Both branches have quite a lot in common, so it's time to merge them,
do the cleanups with respect to both implementations, and also generalize
both implementations as much as possible.
This also closes #1876.
Conflicts:
pkgs/development/interpreters/lua-5/5.2.nix
pkgs/development/libraries/SDL/default.nix
pkgs/development/libraries/glew/default.nix
pkgs/top-level/all-packages.nix
This allows passing a new attribute osxMinVersion to crossSystem, which
specifies the minimum Mac OS X version you want to be compatible with.
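For example (a sketch; everything except osxMinVersion is illustrative):

  crossSystem = {
    config = "x86_64-apple-darwin";
    osxMinVersion = "10.6";
  };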
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
So far, we determined this based on stdenv.is64bit, but there are cases
where you want to run/build a 32bit program on a 64 bit Windows.
This is now possible, by passing windowsImage.arch = "i686" | "x86_64"
to runInWindowsVM. Based on what was passed, the corresponding Cygwin
packages and setup.exe are bootstrapped.
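A sketch of what that could look like (the surrounding arguments are
illustrative assumptions; only windowsImage.arch is the new knob):

  runInWindowsVM {
    name = "example-build";
    windowsImage.arch = "i686";   # or "x86_64"
    buildCommand = "touch $out";
  }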
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Another very annoying part. Unfortunately, the only options we might have
here are to include it in nixpkgs or maybe to use a fixed hash for the
result of the closure fetcher.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
As the official Cygwin setup binary download doesn't come in snapshots
or even versioned releases, the fetchurl of setup.exe will frequently fail, which
in turn will annoy us as hell (or at least me).
One warning though: The fetchurl is currently broken and the cross-build
might not work yet for example on mingw32 (mingw-w64 branch on its way),
but the upstream URL has already changed and the new version contains a
bug (not yet tracked down) which breaks our Windows bootstrap process.
So to conclude: If it's already broken, make it at least "less broken".
"Not broken" is coming soon with the merge of the mingw-w64 branch.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Coincidentally, even with this typo, most tests work anyway, so I didn't
notice it in the first place.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is because autoconf is passing -print-prog-name=ld to the
cross-gcc, which in turn assumes a FHS compliant filesystem hierarchy
and searches ../../../../$crossConfig/bin/ld for the correct ld.
Of course, this won't work on Nix, hence we're explicitly passing the
correct LD program name.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Also update 64bit setup.ini and check whether we have a 64 bit stdenv in
order to choose the proper Cygwin version. Apart from that, we now have
the setup.ini for 32 bit available as well.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
So far, the VMs have always been using the native architecture, because
it was reimporting <nixpkgs> several times. Now, we propagate a list of
packages down to all sub-imports, which not only makes it clearer which
dependencies a part actually has, but also will make it easier in case
we want to refactor those parts to use callPackage.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This now isolates the vmTools integration from the bootstrap process and
thus removes our fixed Windows ISO and product key. The latter can now
be provided by an attribute "windowsImage" to runInWindowsVM.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is the last item that was missing to get a fully working
runInWindowsVM function. Apart from checking exit codes, we also now
have preVM/postVM hooks which we can use to write arbitrary constructs
around this architecture, without the need to worry about specific
details.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This function is quite similar to runInLinuxVM, but also ensures that
the builder is run decoupled from the Nix store and using the userland
inside the VM.
We're now picking up the environment variables saved in the previous
commit.
The reason we suppress all errors from the source operation is that it
would emit a ton of errors because we're trying to set read-only
variables.
Also, detecting whether the origBuilder is using the default builder
from the stdenv is currently a bit of a workaround until we have a
specialized pseudo-cross-stdenv someday in the future[TM].
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Later, when we start the actual builder, we're going to restore those
environment variables. We're using "(set; declare -p)", here, because
the former is just printing _all_ environment variables, even those not
exported, and the latter only lists specifically declared variables,
which also includes exports.
The "declare -p" command also emits those variables in a format similar
to the "export" command.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is mainly to make it easier to quickly change mappings, without
making room for errors such as typos.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cygwin initializes mounts on _every_ login via SSH and doesn't keep them
consistently like on Unix systems, that's why we need to also add fstab
entries for the bind mounts to the store and xchg shares.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We now map all guest accounts to the root user, because in the end the
permissions of the current user boil down to the build user of the Nix
builder of the host. That way it's not possible to gain more permissions
at all, and it makes the VM communication a lot easier.
However, setting "writable" to yes instead of "read only" to no doesn't
change anything here, I just found it to be clearer.
Also, we now no longer need to have a "nobody" user.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is done by putting the non-initrd /nix/store into a subdirectory,
which we then chroot to and pass along the SSH command.
Also, we now collect the exit code after the chroot command and power
off the VM thereafter, because the store is no longer shadowed and we
still have access to the busybox inside the initrd.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This should trim down possible dependencies on the base installation and
thereby reduce the need for reinstallation of the damn VM to only changes
that affect the Windows installation and the base Cygwin + OpenSSH
setup.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This now finally introduces our xchg share and also uses it for
exchanging state while suspending a VM. However, accessing the _real_
Nix store still isn't possible because we're shadowing the directory in
the initrd.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Now we're doing this at the point where we're saving the VM state.
Unfortunately it's not quite right, because the controller VM is shut
down _before_ we're saving the state, so the share gets disconnected
despite autodisconnect being deactivated during setup.
We can get around this issue by finally introducing the xchg share,
which is the last item to be implemented before we can merge to master.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Security-wise it's not a big issue because we're still sandboxed, but I
really don't want to write something like \\\\\\\\192.168.0.2\\\\share
in order to set up network shares.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We're going to do this during the suspendedVM phase, so we're able to
more easily change the shares without reinstalling the whole VM.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This could possibly cause flapping whenever qemu is too fast in starting
up. As we are running with the shell's -e flag, the socat check also
ensures that the VDE switch is properly started and causes the whole
build to fail, should it not start up within 20 seconds.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
These stages are in particular:
* Install of the bare Windows VM with Cygwin and shut down.
* Boot up the same VM again without the installation media and dump the
VMs memory to state.gz.
* Resume from state.gz and build whatever we want to build.
Every single stage involves a new "controller", which is more like an
abstraction on the Nix side that constructs the madness described in
276b72fb93.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is kinda stupid to do every time the file is automatically
regenerated upstream. But let's see how often that happens and whether
it will become a major annoyance or not, and if yes, we might be forced
to include it in our source tree.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This SSH key is specifically only for accessing the installed Cygwin
within the Windows VM, so we only need to expose the private key. Yes,
you heard right, the private key. It's not security-relevant because the
machine is completely read-only, is only exposed via the filesystem, and
has no network access.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
At least the largest portion of the installer, because in the end we
don't want the installer to *actually* save the state but only prepare
the base image.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
After quite a lot of fighting against Windows and its various
limitations, this is the new base architecture for installing and
accessing the Windows VM and thus the Cygwin environment inside it:
.------------.
.---> | vde_switch |
| `-[#]----[#]-'
| | |
,' .' `---.___
,' 192.168.0.1 `.
| | 192.168.0.2
,' _____[#]____ |
,' | | ______[#]______
| | Windows VM | | .--' |
| |____________| | | |
| | /|\ | .-| |
| .---------. | | | | | |
.-|-| manager |-' | | | | |
| | `---------' | | | | |
| | | | | | |
| | .-------------. | | Samba |
| | | BOOTSTRAP | | | | | |
| | |-------------| | | | | .------|
| `-| spawn VMs |-+--> | | `---| xchg | <-------.
| |-------------| | | .---^------| |
| | install |---. | `-| nixstore | <----. |
| |-------------| | | `----------| | |
|---| suspend VM | | | | | |
| `------.------' | | Controller VM | | |
| | | |_______________| | |
| .--' | /|\ VirtIO
| | __|__________:____________ | |
| \|/ | | `. | | |
| .------------. | | : | | |
| | REAL BUILD | | | .-------^--------. | | |
| |------------| | `-> | serial console | | | |
`-| revive VM | | `----------------' | | |
|------------| |------------. | | |
| build |-->| /nix/store >>>-----------|-' |
|------------| |------------| | |
| collect |<--| xchg >>>-----------|----'
`-----.------' |------------' |
| | |
\|/ | | | __ ___ | |
| |--| | | (__ -|- |
F I N I S H E D | | | |__| ___) | |
|__________________________|
This might look a bit overwhelming, but let me try to explain:
We're starting at the base derivation ("BOOTSTRAP" above), where we
actually install the Cygwin environment. Over there we basically fire up
a vde_switch process and two virtual machines: One is the Windows
machine, the other is a NixOS machine, which serves as some kind of
proxy between the host and the Windows machine.
The reason we're doing this is that we don't have a lot of options
for sharing files between a stock Windows machine and the host. In
earlier experiments, I've tried to communicate with the Windows guest by
using pipes and OpenSSH, but obviously this wasn't a big speed rush (or
to say it bluntly: It was fucking slow).
Using TCP/IP directly for accessing the guest would have been another
option, but it could lead to possible errors when the port or a range of
ports are already in use on the host system. Also, we would need to punch a hole
into the sandbox of the Nix builder (as it doesn't allow networking),
which in turn will possibly undermine deterministic builds/runs (well,
at least as deterministic as it can be, we're running Windows,
remember?).
So, let's continue: The responsibility of the NixOS (controller) VM is
to just wait until an SSH port becomes available on the Windows VM,
whereas the Windows VM itself is installed using an unattended
installation file provided via a virtual floppy image.
With the installation of the basic Windows OS, we directly install
Cygwin and start up an OpenSSH service.
At this point the bootstrapping is almost finished and as soon as the
port is available, the controller VM sets up Samba shares and makes them
available as drive letters within Windows and as bind mounts (for
example /nix/store) within Cygwin.
Finally we're making a snapshot of the memory of the Windows VM in order
to revive it within a few seconds when we want to build something.
Now, the build process itself is fairly straightforward: Revive VM and
build based on existing store derivations and collect the result _and_
the exit code from the xchg share/directory.
Conclusion: This architecture may sound a bit complicated, but we're
trying to achieve deterministic and reproducible builds and/or test
runs.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
At least for x86_64-w64-mingw32, it doesn't make sense to use the native
strip tool for stripping symbols. On the contrary, it results in
unusable archive files.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Stdenv adapters are kinda weird and un-idiomatic (especially when they
don't actually change stdenv). It's more idiomatic to say
buildInputs = [ makeCoverageAnalysisReport ];
This is useful for non-Autoconf-based packages, since GNU Make's
default for CXX is "g++". (The CC default is "cc" so should work fine
with Clang already.)
Some packages in the llvm suite (e.g. compiler-rt) cannot be built
separately from the build of llvm, and while some others (e.g. clang)
can, the combined build is much better tested (we've had to work around
annoying issues before). So this puts llvm, clang, clang-tools-extra,
compiler-rt, lld, lldb, and polly all into one big build (llvmFull).
This build includes a static llvm, as dynamic is similarly less tested
and has known failures.
This also updates libc++ and dragonegg. libc++ now builds against
libc++abi as a separate package rather than building it during the
libc++ build.
The clang purity patch is gone. Instead, we simply set --sysroot to
/var/empty for pure builds, as all impure paths are either looked up in
the gcc prefix (which we hard-code at compile time) or in the sysroot.
This also means that if NIX_ENFORCE_PURITY is 0 then clang will look in
the normal Linux paths by default, which is the proper behavior IMO.
polly required an updated isl. When stdenv-updates is merged, perhaps we
can update the isl used by gcc and avoid having two versions.
Since llvm on its own is now separate from the llvm used by clang, I've
removed myself as maintainer from llvm and will leave maintenance of
that to those who are interested in llvm separate from clang.
Signed-off-by: Shea Levy <shea@shealevy.com>
Install names need to be absolute paths, otherwise programs that link
against the dylib won't work without setting $DYLD_LIBRARY_PATH. Most
packages do this correctly, but some (like Boost and ICU) do not.
This setup hook absolutizes all install names.
nix-prefetch-git does not convert relative submodule urls into absolute
urls based on the parent's origin. This patch adds support for
repositories which are using the relative url syntax.
All JARs in $pkg/share/java (for each $pkg in the build inputs) are
added to $CLASSPATH. Thus, you can say
buildInputs = [ setJavaClassPath someJavaDependency ];
and the JARs in someJavaDependency will be found automatically by
tools like javac or ant.
Note that the manual used to say that JARs should be installed in
lib/java; this is now share/java, following the Debian policy:
http://www.debian.org/doc/packaging-manuals/java-policy/x110.html
The directory share/java makes more sense because JARs are
architecture-independent. (Also, a quick grep shows that we were not
exactly consistent about this in Nixpkgs.)
disabled by setting 'strictConfigurePhase' to 'false'
This is necessary for some packages, like dns, because cabal warns about
multiple versions of the same dependency being used, but the usage is fine,
actually, so we want the build to succeed. Packages that depend on 'doctest'
also have this issue <https://github.com/sol/doctest-haskell/issues/69>.
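Following the 'extension' override pattern shown earlier in this log, the
check could be disabled for a single package roughly like this (an
untested sketch):

  dns = haskellPackages.dns.override {
    cabal = haskellPackages.cabal.override {
      extension = self : super : { strictConfigurePhase = false; };
    };
  };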
Before this commit, if a haskell library X depends on Y, and X was added to
systemPackages, only X would be available in the user environment. Y
would not be available, which causes X to be broken. This commit solves
the issue by setting propagatedUserEnvPkgs to all packages X depends
on when X is a library.
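Inside the cabal builder this amounts to something like the following
rough sketch, where 'buildDepends' is an assumed name for the list of
Haskell dependencies:

  propagatedUserEnvPkgs =
    stdenv.lib.optionals self.isLibrary self.buildDepends;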
This adds nix-run, which is a thin wrapper around nix-build.
nix-run calls nix-build, and then executes the resulting build.
If no executable artifact is built, nix-run outputs an error
message.
myEnvRun calls myEnvFun and builds a script that directly runs
the load-env-* script.
Together, nix-run and myEnvRun allow you to set up an environment
that can be loaded in this way:
envs.nix:
{
gcc = myEnvRun {
name = "gcc";
buildInputs = [ gcc ];
};
}
$ nix-run -A gcc envs.nix
You end up directly in your environment without having to do
nix-env -i. You will always have a fresh environment and you
don't have to pollute your profile with a lot of env packages.
The nix-prefetch-git script was broken when trying to parse certain
groups of submodules. This patch fixes the url detection for submodule
repositories to use the more reliable `git config` commands.