This should now point to the path for the tkabber plugins package, which will be
used as soon as the tkabber-plugins derivation is available as a symlink in the
user's environment.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The tkabber plugins really do not require a dependency on tkabber itself, so
let's drop it. In addition, this also removes the creation of a $out/bin
directory, a leftover from when the tkabber-plugins derivation was created by
copying and pasting from the main tkabber derivation.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This should make things a lot more DRY, as we can now generalize the library
paths by using the libPrefix attribute of each library. In addition, this also
cuts down the line length in wrapProgram.
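A rough sketch of the idea (the library list and the wrapper call below are
illustrative assumptions, not the literal expression from the derivation):

  # Sketch only: derive each Tcl library path from its libPrefix attribute
  # instead of hard-coding every single path in the wrapper call.
  let
    tclLibraries = [ tcllib tcltls tclgpg ];          # assumed list of deps
    tclLibPaths  = lib.concatMapStringsSep " "
      (p: "${p}/lib/${p.libPrefix}") tclLibraries;
  in ''
    wrapProgram "$out/bin/tkabber" --set TCLLIBPATH "${tclLibPaths}"
  ''
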
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This ensures that Tkabber can now be used with GPG support, though as of GnuPG
version 2, this requires gpg-agent as well. Only if all conditions are met will
an option to actually use GPG show up in Tkabber's settings.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is what I forgot in the packages I added a few months ago, so it's time to
revisit them and improve things, for example by setting the right libPrefix in
order to stay consistent with other Tcl libraries.
In addition, this fixes some whitespace ugliness in the affected packages.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
* Use more system libraries
* Enable KDE4 desktop integration
* Split preparation between postUnpack, patchPhase and preConfigure
Viric, feel free to revert (parts of) this commit.
This enables the legacy seccomp sandbox by default, even on Chromium 22, because
the BPF sandbox is still a work in progress; please see:
http://crbug.com/139872
http://crbug.com/130662
Because the BPF seccomp sandbox is used in case the legacy seccomp mode
initialization fails, we might need to patch this again as soon as the BPF
sandbox is fully implemented, so that BPF is used by default with legacy seccomp
as the fallback.
We now have two patches for "default to seccomp" - one for Chromium 21 and one
for 22 or higher.
The patch doesn't apply in version 22 and newer, because mode 1 sandboxes are
considered "legacy" (well, apart from the fact that I'd personally prefer BPF
anyway), for reasons I haven't been able to find yet. But let's proceed with BPF
integration and thus gain more insight into the exact reasons.
If you look at what changed, you'll surely notice that version 22 is now in
beta, so we have to expect things to break. And one thing that will break for
sure is the seccomp patch, because beginning with 22 the new BPF seccomp sandbox
is going to replace the mode 1 seccomp sandbox.
This commit doesn't add any features and just fixes a small annoyance which
results in messages like this:
Checking if xxx applies...no.
See that there is no whitespace between "..." and "no"? Well, the world cares
about more important things, but for me personally those minor annoyances can
turn into major ones.
chromium: Improve update script and update to latest versions.
Previously, we had a single hash of the whole version response from
omahaproxy.
Unfortunately the dev version is released quite frequently, so the hash is of no
use at all (we might as well fetch directly instead of executing the script,
because it will fetch all channels anyway).
This pull request adds two methods of caching:
* First of all, if a particular version/channel is already in the previous
version of the sources.nix file, don't download it again.
* Second, check whether the current version has already been downloaded and, if
so, read the corresponding sha256 from the lookup table.
So, this should really help to avoid flooding the download servers and
to not stress impatient users too much.
So, now even Firefox can be built with our shiny new, fixed-up NSS derivation,
and as this is desirable (especially if we want to support certificates from the
CA bundle), let's make it the default.
Hurray! This is the first time Chromium is working with NSS _and_ is able to
verify certificates using the root certificates built into NSS.
Ideally it would use the certs from OPENSSL_X509_CERT_FILE, but at least it's
working, so let's add that at some later point.
virtualbox: Fix build for manual kernel.
This should fix building VirtualBox against kernels made using the new
manual kernel configuration system.
This has been tested with the standard nixpkgs kernel as well.
First of all, the modules won't install when there is no "make modules" run
prior to it, so we now do this with a new function called forEachModule, which
lets us avoid duplication as much as possible.
In addition, this sets $sourcedir to the current directory of the
configurePhase, so we're able to find the source tree later on, after several
chdir()s.
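A condensed sketch of the idea (module names, paths and the $MODULES_BUILD_DIR
variable are assumptions for illustration, not the exact code of the
derivation):

  postConfigure = ''
    # Remember the source tree, because later phases change directories.
    sourcedir="$(pwd)"
  '';

  postBuild = ''
    # Run the given make target once for every kernel module directory.
    forEachModule() {
      for module in vboxdrv vboxnetflt vboxnetadp; do
        make -C "$MODULES_BUILD_DIR" M="$sourcedir/src/$module" "$@"
      done
    }
    # "make modules" has to happen before the modules can be installed.
    forEachModule modules
  '';
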
The kernel's scripts/depmod.sh checks whether the path in $DEPMOD is executable
and only runs it if that's the case. So, by setting DEPMOD to
"/do_not_use_depmod", the destination path doesn't exist _and_ thus isn't
executable either.
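In the install step this boils down to something like the following sketch (the
surrounding attribute and the INSTALL_MOD_PATH setting are assumptions):

  installPhase = ''
    # scripts/depmod.sh only runs $DEPMOD when it is executable, so pointing it
    # at a non-existent path effectively skips the depmod step during the build.
    forEachModule modules_install \
      DEPMOD=/do_not_use_depmod INSTALL_MOD_PATH="$out"
  '';
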
The for loop didn't find $curdir, because it was set _after_ the directory had
already been changed. The variable is now called $srcroot and is set before the
installPhase changes directories.
Don't rely on VirtualBox's in-tree build scripts to set the include paths
correctly; instead, use the Linux kernel's official way of building out-of-tree
modules. That way we don't need to create ugly symlinks in the kernel tree or
heavily patch VirtualBox.
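The kernel-documented external-module interface essentially amounts to this
sketch (a kernel argument in scope, the path into its output and the module
directory are placeholders, not the exact expression used here):

  buildPhase = ''
    # Point kbuild at the module sources via M= and at the configured kernel
    # build tree via -C; nothing has to be symlinked into the kernel tree.
    make -C "${kernel}/lib/modules/"*/build M="$(pwd)/src/vboxdrv" modules
  '';
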
Until this commit we had a single hash of the whole version response from
omahaproxy. This worked well for avoiding unnecessary updates, but only until a
single channel had a new version available.
Unfortunately the dev version is released quite frequently, so the hash is of no
use at all (we might as well fetch everything directly every time we execute the
script).
This led to this commit, which adds two methods of caching:
First of all, if a particular version/channel is already in the previous version
of the sources.nix file, don't download it again.
And the second method is to check whether the current version has already been
downloaded and, if so, to read the corresponding sha256 from the lookup table.
So, this should really help to avoid flooding the download servers and to not
stress impatient users too much.
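For context, the previous sources.nix can serve as exactly such a lookup table;
its shape is roughly the following (all versions, URLs and hashes here are
placeholders, not real releases):

  {
    dev = {
      version = "0.0.0.0";                                    # placeholder
      url     = "http://example.invalid/chromium-0.0.0.0.tar.bz2";
      sha256  = "0000000000000000000000000000000000000000000000000000";
    };
    beta   = { /* same attributes as above */ };
    stable = { /* same attributes as above */ };
  }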