Also fix the hash in goPackages.inflect, the only user of the fetcher ATM.
Closes #12002 (different `inflect` fix), fixes #12012.
Using fetchzip-derived functions is likely more efficient than fetchhg,
and it's lighter on dependencies (hash is the same as with fetchhg in this case).
fetchbzr always uses the derivation name `bzr-export`. nix-prefetch-bzr
should use the same name for its output. This avoids duplicate downloads
and problems with forbidden characters in bazaar repository names.
Fixes #10819. emacsWithPackages will know its own package set. This
requires it to be in a package set, rather than at the top level, so it
lives in emacsPackagesNg.
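For illustration, a hedged usage sketch, assuming the function-style calling convention where the wrapper hands you its own package set (package choices are arbitrary):

```nix
# Hypothetical usage, e.g. in a user environment:
emacsPackagesNg.emacsWithPackages (epkgs: [
  epkgs.magit
  epkgs.company
])
```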
This seems to be the root cause of the random page allocation failures
and @wizeman did a very good job of not only finding the root problem
but also giving a detailed explanation of it in #10828.
Here is an excerpt:
The problem here is that the kernel is trying to allocate a contiguous
section of 2^7=128 pages, which is 512 KB. This is way too much:
kernel pages tend to get fragmented over time and kernel developers
often go to great lengths to try allocating at most only 1 contiguous
page at a time whenever they can.
From the error message, it looks like the culprit is unionfs, but this
is misleading: unionfs is the name of the userspace process that was
running when the system ran out of memory, but it wasn't unionfs who
was allocating the memory: it was the kernel; specifically it was the
v9fs_dir_readdir_dotl() function, which is the code for handling the
readdir() function in the 9p filesystem (the filesystem that is used
to share a directory structure between a qemu host and its VM).
If you look at the code, here's what it's doing at the moment it tries
to allocate memory:
buflen = fid->clnt->msize - P9_IOHDRSZ;
rdir = v9fs_alloc_rdir_buf(file, buflen);
If you look into v9fs_alloc_rdir_buf(), you will see that it will try
to allocate a contiguous buffer of memory (using kzalloc(), which is a
wrapper around kmalloc()) of size buflen + 8 bytes or so.
So in reality, this code actually allocates a buffer of size
proportional to fid->clnt->msize. What is this msize? If you follow
the definition of the structures, you will see that it's the
negotiated buffer transfer size between 9p client and 9p server. On
the client side, it can be controlled with the msize mount option.
What this all means is that, the reason for running out of memory is
that the code (which we can't easily change) tries to allocate a
contiguous buffer of size more or less equal to "negotiated 9p
protocol buffer size", which seems to be way too big (in our NixOS
tests, at least).
After that initial finding, @lethalman tested the gnome3 gdm test
without setting the msize parameter at all and it seems to have resolved
the problem.
The reason I'm committing this without testing against all of the
NixOS VM tests is basically that I think we can only improve on the
current state, not make it worse.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Previously it was assumed that bash was in the path when calling the
environment setup script. This changes all of the references of bash to
be absolute paths so that the user doesn't have to worry about the
environment they call it with.
Otherwise, if the upstream mirror changes (rather than deletes) a
file, then tarballs.nixos.org won't be used even if it has a copy of
the original file, and so we'll get a hash mismatch.
Emacs packages are commonly distributed as single .el files. This
unpackCmd handles them correctly and sets up sourceRoot. Other sources
are treated in the default manner.
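A minimal sketch of such an unpackCmd, assuming `_defaultUnpack` is the stock stdenv unpack hook (the real builder may differ in details):

```nix
unpackCmd = ''
  case "$curSrc" in
    *.el)
      # Single-file package: put the .el into a fresh directory and
      # point sourceRoot at it.
      mkdir source
      cp -p "$curSrc" source/
      sourceRoot=source
      ;;
    *)
      # Everything else (tarballs etc.) goes through the default unpacker.
      _defaultUnpack "$curSrc"
      ;;
  esac
'';
```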
If "fetcher" is a string, then Nix will execute it with bash already, so
the additional bash argument in that string was redundant and apparently
causes trouble on non-Linux platforms.
Hopefully fixes https://github.com/NixOS/nixpkgs/issues/11496.
The list we had before contained a lot of junk, i.e. sites that were no
longer online or no longer in sync. The new list of sites comes from
https://gnupg.org/download/index.html.
The most complex problems were from dealing with switches reverted in
the meantime (gcc5, gmp6, ncurses6).
It's likely that darwin is (still) broken nontrivially.
It turns out that cargo implicitly depends on rustc at runtime: even
`cargo help` will fail if rustc is not in the PATH.
This means that we need to wrap the cargo binary to add rustc to PATH.
However, I have opted to do something slightly unusual: instead of
tying down a specific cargo to use a specific rustc (i.e., wrapping cargo
so that "${rustc}/bin" is prefixed to PATH), I'm adding the rustc used
to build cargo as a fallback rust compiler (i.e., wrapping cargo so that
"${rustc}/bin" is suffixed to PATH). This means that cargo will prefer a
rust compiler that is already in the default path, and fall back to the
one used to build cargo only if there is no rust compiler in the default
path.
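A hedged sketch of that wrapping (assumes makeWrapper is among the build inputs and `rustc` refers to the compiler cargo was built with; paths are illustrative):

```nix
postInstall = ''
  # Suffix rather than prefix rustc onto PATH: a compiler already on the
  # user's PATH wins, and the rustc used to build cargo is only a fallback.
  wrapProgram "$out/bin/cargo" --suffix PATH : "${rustc}/bin"
'';
```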
The reason I'm doing this is that otherwise it could cause unexpected
effects. For example, if you had a build environment with the
rustcMaster and cargo derivations, you would expect cargo to use
rustcMaster to compile your project (since rustcMaster would be the only
compiler available in $PATH), but this wouldn't happen if we tied down
cargo to use the rustc that was used to compile it (because the default
cargo derivation gets compiled with the stable rust compiler).
That said, I have slightly modified makeRustPlatform so that a rust
platform will always use the rust compiler that was used to build cargo,
because this prevents mistakenly depending on two different versions of
the rust compiler (stable and unstable) in the same rust platform,
something which is usually undesirable.
Fixes #11053
The fetch-cargo-deps script is written in bash syntax, but it
erroneously ran under the /bin/sh interpreter.
This wasn't noticed because /bin/sh is actually bash in NixOS, but on
some other systems this is not true.
... because cc-wrapper is meant to propagate man pages into user envs,
and info pages are rather large.
Also replace the duplicate g++ and gcc man1 pages by a symlink.
Note: the -B argument seems better suited to gcc's main output,
though it's used in a somewhat strange way here.
(Upstream default is /usr/lib/gcc/ which we don't move.)
Now any developer docs are removed by default, unless "docdev"
is in $outputs or $outputDocdev is defined.
Currently docdev consists of just man3 and gtk-doc.
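For example, a package that wants to keep those docs can opt in roughly like this (the package itself is hypothetical):

```nix
stdenv.mkDerivation {
  name = "example-1.0";   # hypothetical package
  src = ./.;
  # Listing "docdev" among the outputs keeps man3/gtk-doc in that output
  # instead of having them removed.
  outputs = [ "out" "docdev" ];
}
```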
While debugging an issue with running NixOps tests, I found out that the
output from debClosureGenerator is not deterministic.
The reason behind this is the way the Provides and Replaces fields are
handled. I haven't yet found out what the exact issue is, but so far
packages from "Provides" are more or less picked at random.
So, running the NixOps Hetzner tests we get either mawk, original-awk or
gawk, alternating on every invocation.
While it isn't poisonous for the test whether we have mawk or gawk,
having original-awk certainly is, because live-build only works with
mawk or gawk.
The best solution would obviously be to make debClosureGenerator
deterministic, but in the case of "Provides: awk", we can safely pick
mawk by default, because mawk has a "Priority: required" in its
package description.
This also has the advantage that we can safely cherry-pick this to
release-15.09 because it's very unlikely that we'll break the
debClosureGenerator by adding a dependency to commonDebPackages.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This should avoid accidental expansion of variables, e.g. in
"export PATH=/some/path:$PATH"
$PATH would have been expanded in the environment builder!
'[[ ! -v "$propagatedOutputs" ]]' is incorrect and always evaluates to
true. The correct form using double brackets would be
'[[ ! -v propagatedOutputs ]]', but I strongly dislike '[[ ]]' due to
the totally different quoting rules compared to everything else in bash.
For practical purposes, here are the changes in behavior:
- When fetching from a subdirectory of a repo, do not rebuild because of
changes elsewhere in the repo
- Fetch (not-ignored) untracked files too
It does this by letting git hash and export the directory in question,
which I believe makes for a cleaner implementation than the ad-hoc copying
and hashing that was there before.
Close #9790.
This fixes checking out a nasty combination:
1. The revision to be checked out corresponds to a tag of the form "<tag>^{}".
2. This revision is not fetched by default.
You can now pass
separateDebugInfo = true;
to mkDerivation. This causes debug info to be separated from ELF
binaries and stored in the "debug" output. The advantage is that it
enables installing lean binaries, while still having the ability to
make sense of core dumps, etc.
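For example (the package itself is hypothetical, the attribute is the one described above):

```nix
stdenv.mkDerivation {
  name = "example-1.0";   # hypothetical package
  src = ./.;
  # DWARF data is stripped from the ELF binaries and stored in the
  # separate "debug" output of this derivation.
  separateDebugInfo = true;
}
```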
Fixes #9044, close #9667. Thanks to @taku0 for suggesting this solution.
Now we have no modes starting with `/` or `+`.
Rewrite the `-perm` parameters of find:
- completely safe: rewrite `/0100` and `+100` to `-0100`,
- slightly semantics-changing: rewrite `+111` to `-0100`.
I cross-verified the `find` manual pages for Linux, Darwin, FreeBSD.
Upstream likes to move "old" releases to an archive mirror as soon as a
new one is released. This is now handled for free by mirrors.nix.
(No idea why cs.utah.edu was used to begin with; it's now added to
mirrors.nix. Note that it doesn't support SSL, but that applies to
several others so I don't see the harm.)
By default `makeWrapper` will not set argv[0] (this is a reversion to
the old default behavior). Based on the breakage we have seen from
changing the default, this is what most people want. The `wrapProgram`
function will send `--argv0 '"$0"'` to `makeWrapper`, i.e. it will
continue to pass-through the argv[0] that the wrapper is called with.
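A sketch of opting back in when calling makeWrapper directly (program names are illustrative):

```nix
postFixup = ''
  # makeWrapper no longer sets argv[0] by default; passing the literal
  # '"$0"' restores the old pass-through behaviour.
  makeWrapper "$out/bin/.foo-wrapped" "$out/bin/foo" --argv0 '"$0"'
'';
```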
Also, in some cases, the result of fetchBower is different depending on the
value of $out. For now, it seems to work best to build into a local
output directory before copying to $out.
(cherry picked from commit aa4c6b027163abe0891f9ad438899f9679298a64)
Comes in handy if we want to make additional modifications to the
output file. While I wasn't sure whether to invoke the passed postFetch
before or after fetchpatch's own processing, I settled on afterwards,
because to my intuition "postFetch of fetchpatch" sounds like something
that runs after whatever "fetchpatch" itself does.
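A hedged example of what this enables (URL, hash, and substitution are placeholders):

```nix
fetchpatch {
  url = "https://example.org/fix-foo.patch";   # placeholder
  sha256 = "...";                              # placeholder
  # Runs after fetchpatch's own processing of the downloaded patch:
  postFetch = ''
    sed -i 's|a/old/path/|a/new/path/|g' "$out"
  '';
}
```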
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
- Update the instructions for re-generating each of the package set files.
- Provide a test-evaluation.nix expression to verify that the package sets evaluate.
- Update list of known broken packages.
- fetchNuGet can fetch binaries from nuget servers
- buildDotnetPackage can build .NET packages using mono/xbuild
- Places nuget & paket as they would clash with nix
- Patch project files because F# targets are expected to be found in
the mono directory (and we know that's not going to happen on nix)
- Find DLLs that were copied from buildInputs and replace them with
symlinks for sharing
- Export produced DLL via the pkg-config mechanism
- Create wrappers for produced EXEs
- Repackaged this new infrastructure: keepass, monodevelop
- Newly packaged: ExtCore, UnionArgParser, FSharp.Data, Paket, and a
bunch more..
This is a combination of 73 commits.
The point of this is to be able to do `meta.homepage = src.meta.homepage;`
instead of the usual copy-paste for the packages that are hosted
on these hosting services.
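A hedged sketch (owner, repo, revision, and hash are placeholders):

```nix
stdenv.mkDerivation rec {
  name = "example-1.0";    # hypothetical package
  src = fetchFromGitHub {
    owner = "someowner";   # placeholders
    repo = "example";
    rev = "v1.0";
    sha256 = "...";
  };
  # No need to copy-paste the URL; the fetcher already carries it:
  meta.homepage = src.meta.homepage;
}
```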
Instead it is provided to the user, who can choose whether or not
to include it in the final derivation. An example of including it
would be:
```nix
callPackage ... (self: { inherit (self.extras) extraThing; })
```
These extras are also available downstream without being built by
default. This is achieved with `passthru`.
- Only they are added to the optional build path (share/agda)
- Only they are passed as an include dir (share/agda)
- Only they are propagatedBuildInputs
Instead, discover it automatically when building the package.
This makes `buildRustPackage` more future-proof with respect to changes
in how `cargo` generates the hash.
Also, it fixes broken builds in i686 because apparently, cargo generates
a different registry index hash in this architecture (compared to
x86-64).
This is unused; future users can just override `buildFlags`
and extend/replace them as needed. `includeDirs` is provided for this
purpose.
We should add `dirOf self.everythingFile` rather than `.`, but
`dirOf` breaks on relative paths so that is not an option.
This reverts d927da8dae. Having a copy
of gcc-wrapper/setup-hook.sh is bad for maintainability - it had
already started to diverge. Also, gccStdInc gave a nix-env conflict
with the standard gcc. And it wasn't actually used in Nixpkgs.
Instead, if you really need to change "-isystem" to "-I", you can now
set ccIncludeFlag to "-I".
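A hedged sketch of using the new knob, assuming it is set as an ordinary derivation attribute (the package is hypothetical):

```nix
stdenv.mkDerivation {
  name = "qt-consumer-1.0";   # hypothetical package sensitive to -I ordering
  src = ./.;
  # Make the wrapper emit plain -I instead of -isystem for include dirs:
  ccIncludeFlag = "-I";
}
```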
This makes buildRustPackage portable to non-Linux platforms.
Additionally, now we also save the `Cargo.lock` file into the fetch output, so
that we don't have to run $cargoUpdateHook again just before building.
... in a more generic way.
With this commit, if you need to patch a registry package to make it
work with Nix, you just need to add a script to patch-registry-deps
in the same style as the `pkg-config` script.
Instead, move that code into buildRustPackage.
The setup hook was only doing part of the work anyway, and having it in
a separate place was obscuring what was really going on.
Emacs will call package-initialize itself, if required, or the user will
call it in their initialization file. There is no reason to call it in
the wrapper and doing so only increases start-up time.
It turns out that `cargo`, with respect to registry dependencies, was
ignoring the package versions locked in `Cargo.lock` because we changed
the registry index URL.
Therefore, every time `rustRegistry` would be updated, we'd always try
to use the latest version available for every dependency and as a result
the deps' SHA256 hashes would almost always have to be changed.
To fix this, now we do a string substitution in `Cargo.lock` of the
`crates.io` registry URL with our URL. This should be safe because our
registry is just a copy of the `crates.io` registry at a certain point
in time.
Since now we don't always use the latest version of every dependency,
the build of `cargo` actually started to fail because two of the
dependencies specified in its `Cargo.lock` file have build failures.
To fix the latter problem, I've added a `cargoUpdateHook` variable that
gets run both when fetching dependencies and just before building the
program. The purpose of `cargoUpdateHook` is to do any ad-hoc updating
of dependencies necessary to get the package to build. The use of the
'--precise' flag is needed so that cargo doesn't try to fetch an even
newer version whenever `rustRegistry` is updated (and therefore have to
change depsSha256 as a consequence).
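A hedged sketch of how a package can use the hook (package, dependency, and version are purely illustrative):

```nix
buildRustPackage {
  name = "example-0.1.0";   # hypothetical package
  src = ./.;
  depsSha256 = "...";       # placeholder
  # Runs while fetching the dependencies and again right before the build:
  cargoUpdateHook = ''
    # Pin a known-good version of a broken locked dependency; --precise
    # keeps cargo from jumping to a newer one when rustRegistry is updated.
    cargo update -p somedep --precise 1.2.3
  '';
}
```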
This is useful when `leaveDotGit = true` and some other derivation
expects some branch name to exist.
Previously, `nix-prefetch-git` always created a branch with a
hard-coded name (`fetchgit`).
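A hedged example, assuming the option is exposed as a `branchName` argument to fetchgit (URL, revision, and hash are placeholders):

```nix
fetchgit {
  url = "https://example.org/some-repo.git";   # placeholder
  rev = "refs/tags/v1.0";                      # placeholder
  sha256 = "...";                              # placeholder
  leaveDotGit = true;
  # The checkout ends up on this branch instead of the hard-coded "fetchgit":
  branchName = "master";
}
```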
Now development stuff is propagated from the first output,
and userEnvPkgs from the one with binaries.
Also don't move *.la files (yet). It causes problems, and they're small.
- there were many easy merge conflicts
- cc-wrapper needed nontrivial changes
Many other problems might've been created by interaction of the branches,
but stdenv and a few other packages build fine now.
This patch resolves https://github.com/NixOS/nixpkgs/issues/6395. Deep
cloning is useful in combination with 'leaveDotGit' for builds that want
to run "git describe" to obtain a proper version string, etc., like the
'haskellngPackages.cabal2nix' package does.
This simplifies the melpa builder by merging my old emacs modes builder into it,
adds better instructions and support for overrides in emacs-packages.nix,
and renames some emacs-related stuff in all-packages.nix for sanity reasons.
I declare this backwards compatible since direct uses of emacsPackages in
configuration.nix are very unlikely.
- Add a conditional flag for the c++ std lib
- Build binaries that get linked by our own dyld (someday)
- Automatically add framework directories in the setup hook
- nix-template-rpm can now split the generated templates into
  - a static part that goes into the nixpkgs tree
  - a dynamic part that can be updated easily to track the rpm spec files
- add lookup mechanism for package names and package paths
- add mechanism to update existing nix-expression with new download files
These packages, and maybe some more, include unix.h for some reason.
Creating that file makes them build, and in the case of xdebug also
appear to work.
We now have an alternate setup hook for gcc-wrapper that uses -I to add
include paths rather than -isystem. The latter flag can change the
search order specified by the build system. For KDE 5 packages, we don't
want that!
The default setup-hook for gcc-wrapper adds include directories with
-isystem, which upsets the order -I flags are processed. This adds an
alternative setup-hook that only uses -I flags. The build system's
ordering of -I flags is then respected. This is important when different
packages provide includes with the same name, such as building packages
that depend on Qt4 and Qt5.
This resolves a regression introduced in fc01353703, where providing a
name without a proper extension breaks existing uses of fetchzip (they
now fail to unpack). Of particular note, that commit broke all uses of
fetchFromGitHub because it uses a name like so: "${repo}-${rev}-src"
Fixes #5954