
Merge master into haskell-updates

github-actions[bot] 2021-11-06 00:06:48 +00:00 committed by GitHub
commit 1309d8da51
203 changed files with 2869 additions and 1273 deletions

View file

@ -53,7 +53,7 @@ system, [Hydra](https://hydra.nixos.org/).
Artifacts successfully built with Hydra are published to cache at
https://cache.nixos.org/. When successful build and test criteria are
met, the Nixpkgs expressions are distributed via [Nix
channels](https://nixos.org/nix/manual/#sec-channels).
channels](https://nixos.org/manual/nix/stable/package-management/channels.html).
# Contributing

View file

@ -426,7 +426,7 @@ you of the correct hash.
* `maturinBuildHook`: use [Maturin](https://github.com/PyO3/maturin)
to build a Python wheel. Similar to `cargoBuildHook`, the optional
variable `buildAndTestSubdir` can be used to build a crate in a
Cargo workspace. Additional maturin flags can be passed through
Cargo workspace. Additional Maturin flags can be passed through
`maturinBuildFlags`.
* `cargoCheckHook`: run tests using Cargo. The build type for checks
can be set using `cargoCheckType`. Additional flags can be passed to
@ -447,7 +447,7 @@ dependencies. The build itself is then performed by
The following example outlines how the `tokenizers` Python package is
built. Since the Python package is in the `source/bindings/python`
directory of the *tokenizers* project's source archive, we use
directory of the `tokenizers` project's source archive, we use
`sourceRoot` to point the tooling to this directory:
```nix
@ -729,7 +729,7 @@ with import <nixpkgs> {};
Actually, the overrides introduced in the previous section are more
general. A number of other parameters can be overridden:
- The version of rustc used to compile the crate:
- The version of `rustc` used to compile the crate:
```nix
(hello {}).override { rust = pkgs.rust; };
@ -742,7 +742,7 @@ general. A number of other parameters can be overridden:
(hello {}).override { release = false; };
```
- Whether to print the commands sent to rustc when building
- Whether to print the commands sent to `rustc` when building
(equivalent to `--verbose` in cargo):
```nix
@ -883,11 +883,11 @@ detailed usage.
Fenix is an alternative to `rustup` and can also be used as an overlay.
Both Oxalica's overlay and fenix better integrate with nix and cache optimizations.
Both oxalica's overlay and fenix better integrate with nix and cache optimizations.
Because of this and ergonomics, either of those community projects
should be preferred to the Mozilla's Rust overlay (nixpkgs-mozilla).
should be preferred to Mozilla's Rust overlay (`nixpkgs-mozilla`).
### How to select a specific rustc and toolchain version {#how-to-select-a-specific-rustc-and-toolchain-version}
### How to select a specific `rustc` and toolchain version {#how-to-select-a-specific-rustc-and-toolchain-version}
You can consume the oxalica overlay and use it to grab a specific Rust toolchain version.
Here is an example `shell.nix` showing how to grab the current stable toolchain:

View file

@ -112,7 +112,7 @@ self: super:
This overlay uses Intel's MKL library for both BLAS and LAPACK interfaces. Note that the same can be accomplished at runtime using `LD_LIBRARY_PATH` of `libblas.so.3` and `liblapack.so.3`. For instance:
```ShellSession
$ LD_LIBRARY_PATH=$(nix-build -A mkl)/lib:$LD_LIBRARY_PATH nix-shell -p octave --run octave
$ LD_LIBRARY_PATH=$(nix-build -A mkl)/lib${LD_LIBRARY_PATH:+:}$LD_LIBRARY_PATH nix-shell -p octave --run octave
```
Intel MKL requires an `openmp` implementation when running with multiple processors. By default, `mkl` will use Intel's `iomp` implementation if no other is specified, but this is a runtime-only dependency and binary compatible with the LLVM implementation. To use that one instead, Intel recommends users set it with `LD_PRELOAD`. Note that `mkl` is only available on `x86_64-linux` and `x86_64-darwin`. Moreover, Hydra is not building and distributing pre-compiled binaries using it.
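
For orientation, the overlay this section refers to (the `self: super:` expression whose body is not shown in this hunk) switches the default BLAS and LAPACK provider to MKL. A minimal sketch, assuming the `blasProvider`/`lapackProvider` override attributes from the nixpkgs BLAS/LAPACK switching mechanism, could look like this:
```nix
# Sketch only: switches the default BLAS/LAPACK implementation to MKL.
# blasProvider/lapackProvider are assumed from the nixpkgs override mechanism
# that this manual section documents; verify against the full section text.
self: super:
{
  blas = super.blas.override { blasProvider = self.mkl; };
  lapack = super.lapack.override { lapackProvider = self.mkl; };
}
```
With such an overlay in place system-wide, the `LD_LIBRARY_PATH` trick shown above is only needed for ad-hoc runtime substitution.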

View file

@ -4201,6 +4201,12 @@
githubId = 1713676;
name = "Luis G. Torres";
};
GKasparov = {
email = "mizozahr@gmail.com";
github = "GKasparov";
githubId = 60962839;
name = "Mazen Zahr";
};
gleber = {
email = "gleber.p@gmail.com";
github = "gleber";
@ -9384,6 +9390,12 @@
githubId = 52847440;
name = "Ryan Burns";
};
r3dl3g = {
email = "redleg@rothfuss-web.de";
github = "r3dl3g";
githubId = 35229674;
name = "Armin Rothfuss";
};
raboof = {
email = "arnout@bzzt.net";
matrix = "@raboof:matrix.org";

View file

@ -159,6 +159,10 @@ The following methods are available on machine objects:
`execute`
: Execute a shell command, returning a list `(status, stdout)`.
If the command detaches, it must close stdout, as `execute` will wait
for this to consume all output reliably. This can be achieved by
redirecting stdout to stderr `>&2`, to `/dev/console`, `/dev/null` or
a file.
Takes an optional parameter `check_return` that defaults to `True`.
Setting this parameter to `False` will not check for the return code
and return -1 instead. This can be used for commands that shut down
@ -179,6 +183,8 @@ The following methods are available on machine objects:
- Dereferencing unset variables fails the command.
- It will wait for stdout to be closed. See `execute`.
`fail`
: Like `succeed`, but raising an exception if the command returns a zero

View file

@ -266,7 +266,12 @@ start_all()
<listitem>
<para>
Execute a shell command, returning a list
<literal>(status, stdout)</literal>. Takes an optional
<literal>(status, stdout)</literal>. If the command detaches,
it must close stdout, as <literal>execute</literal> will wait
for this to consume all output reliably. This can be achieved
by redirecting stdout to stderr <literal>&gt;&amp;2</literal>,
to <literal>/dev/console</literal>,
<literal>/dev/null</literal> or a file. Takes an optional
parameter <literal>check_return</literal> that defaults to
<literal>True</literal>. Setting this parameter to
<literal>False</literal> will not check for the return code
@ -306,6 +311,12 @@ start_all()
Dereferencing unset variables fails the command.
</para>
</listitem>
<listitem>
<para>
It will wait for stdout to be closed. See
<literal>execute</literal>.
</para>
</listitem>
</itemizedlist>
</listitem>
</varlistentry>
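
To make the stdout requirement concrete, here is a small, hypothetical VM test sketch (the test name, node, and commands are illustrative, not taken from this commit) showing a plain `execute` call next to a detaching command that redirects stdout as described above:
```nix
# Hypothetical sketch of a NixOS VM test using the semantics documented above.
{ pkgs ? import <nixpkgs> { } }:

pkgs.nixosTest {
  name = "execute-example";
  nodes.machine = { ... }: { };
  testScript = ''
    machine.wait_for_unit("multi-user.target")

    # execute returns a (status, stdout) tuple
    status, out = machine.execute("ls /etc")
    assert status == 0

    # a command that detaches must close stdout, e.g. by redirecting it to stderr
    machine.succeed("sleep 60 >&2 &")
  '';
}
```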

View file

@ -50,6 +50,29 @@
guide</link> is available.
</para>
</listitem>
<listitem>
<para>
Improvements have been made to the Hadoop module and package:
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
HDFS and YARN now support production-ready highly
available deployments with automatic failover.
</para>
</listitem>
<listitem>
<para>
Hadoop now defaults to Hadoop 3, updated from 2.
</para>
</listitem>
<listitem>
<para>
JournalNode, ZKFS and HTTPFS services have been added.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Activation scripts can now opt in to be run when running
@ -423,6 +446,23 @@
<section xml:id="sec-release-21.11-incompatibilities">
<title>Backward Incompatibilities</title>
<itemizedlist>
<listitem>
<para>
The NixOS VM test framework,
<literal>pkgs.nixosTest</literal>/<literal>make-test-python.nix</literal>,
now requires non-terminating commands such as
<literal>succeed(&quot;foo &amp;&quot;)</literal> to close
stdout. This can be done with a redirect such as
<literal>succeed(&quot;foo &gt;&amp;2 &amp;&quot;)</literal>.
This breaking change was necessitated by a race condition
causing tests to fail or hang. It applies to all methods that
invoke commands on the nodes, including
<literal>execute</literal>, <literal>succeed</literal>,
<literal>fail</literal>,
<literal>wait_until_succeeds</literal>,
<literal>wait_until_fails</literal>.
</para>
</listitem>
<listitem>
<para>
The <literal>services.wakeonlan</literal> option was removed,
@ -1777,6 +1817,39 @@ Superuser created successfully.
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
The
<link xlink:href="options.html#opt-services.unifi.enable">services.unifi</link>
module has been reworked, solving a number of issues. This
leads to several user facing changes:
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
The <literal>services.unifi.dataDir</literal> option is
removed and the data is now always located under
<literal>/var/lib/unifi/data</literal>. This is done to
make better use of the systemd state directory and thus
make the service restart more reliable.
</para>
</listitem>
<listitem>
<para>
The unifi logs can now be found under:
<literal>/var/log/unifi</literal> instead of
<literal>/var/lib/unifi/logs</literal>.
</para>
</listitem>
<listitem>
<para>
The unifi run directory can now be found under:
<literal>/run/unifi</literal> instead of
<literal>/var/lib/unifi/run</literal>.
</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
</section>
</section>

View file

@ -18,6 +18,11 @@ In addition to numerous new and upgraded packages, this release has the followin
- spark now defaults to spark 3, updated from 2. A [migration guide](https://spark.apache.org/docs/latest/core-migration-guide.html#upgrading-from-core-24-to-30) is available.
- Improvements have been made to the Hadoop module and package:
- HDFS and YARN now support production-ready highly available deployments with automatic failover.
- Hadoop now defaults to Hadoop 3, updated from 2.
- JournalNode, ZKFS and HTTPFS services have been added.
- Activation scripts can now opt in to be run when running `nixos-rebuild dry-activate` and detect the dry activation by reading `$NIXOS_ACTION`.
This allows activation scripts to output what they would change if the activation was really run.
The users/modules activation script supports this and outputs some of its actions.
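
As a sketch of the dry-activation opt-in mentioned in the last bullet (the `supportsDryActivation` attribute name is an assumption here; check the NixOS option documentation for the exact interface):
```nix
{
  system.activationScripts.reportChanges = {
    # Assumed opt-in flag for dry activation; verify against the module options.
    supportsDryActivation = true;
    text = ''
      if [ "$NIXOS_ACTION" = "dry-activate" ]; then
        echo "would refresh the example cache"
      else
        echo "refreshing the example cache"
      fi
    '';
  };
}
```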
@ -128,6 +133,10 @@ In addition to numerous new and upgraded packages, this release has the followin
## Backward Incompatibilities {#sec-release-21.11-incompatibilities}
- The NixOS VM test framework, `pkgs.nixosTest`/`make-test-python.nix`, now requires non-terminating commands such as `succeed("foo &")` to close stdout.
This can be done with a redirect such as `succeed("foo >&2 &")`. This breaking change was necessitated by a race condition causing tests to fail or hang.
It applies to all methods that invoke commands on the nodes, including `execute`, `succeed`, `fail`, `wait_until_succeeds`, `wait_until_fails`.
- The `services.wakeonlan` option was removed, and replaced with `networking.interfaces.<name>.wakeOnLan`.
- The `security.wrappers` option now requires to always specify an owner, group and whether the setuid/setgid bit should be set.
@ -500,3 +509,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- Dokuwiki now supports Caddy! However:
- the nginx option has been removed; in the new configuration, please use `dokuwiki.webserver = "nginx"` instead.
- The "${hostname}" option has been deprecated; please use `dokuwiki.sites = [ "${hostname}" ]` instead.
- The [services.unifi](options.html#opt-services.unifi.enable) module has been reworked, solving a number of issues. This leads to several user facing changes:
- The `services.unifi.dataDir` option is removed and the data is now always located under `/var/lib/unifi/data`. This is done to make better use of the systemd state directory and thus make the service restart more reliable.
- The unifi logs can now be found under: `/var/log/unifi` instead of `/var/lib/unifi/logs`.
- The unifi run directory can now be found under: `/run/unifi` instead of `/var/lib/unifi/run`.

View file

@ -284,6 +284,10 @@ in
source = "${nvidia_x11.bin}/share/nvidia/nvidia-application-profiles-rc";
};
# 'nvidia_x11' installs its files to /run/opengl-driver/...
environment.etc."egl/egl_external_platform.d".source =
"/run/opengl-driver/share/egl/egl_external_platform.d/";
hardware.opengl.package = mkIf (!offloadCfg.enable) nvidia_x11.out;
hardware.opengl.package32 = mkIf (!offloadCfg.enable) nvidia_x11.lib32;
hardware.opengl.extraPackages = optional offloadCfg.enable nvidia_x11.out;

View file

@ -35,6 +35,7 @@ pkgs.runCommand "hadoop-conf" {} ''
cp ${siteXml "hdfs-site.xml" cfg.hdfsSite}/* $out/
cp ${siteXml "mapred-site.xml" cfg.mapredSite}/* $out/
cp ${siteXml "yarn-site.xml" cfg.yarnSite}/* $out/
cp ${siteXml "httpfs-site.xml" cfg.httpfsSite}/* $out/
cp ${cfgFile "container-executor.cfg" cfg.containerExecutorCfg}/* $out/
cp ${pkgs.writeTextDir "hadoop-user-functions.sh" userFunctions}/* $out/
cp ${pkgs.writeTextDir "hadoop-env.sh" hadoopEnv}/* $out/

View file

@ -15,7 +15,10 @@ with lib;
"fs.defaultFS" = "hdfs://localhost";
}
'';
description = "Hadoop core-site.xml definition";
description = ''
Hadoop core-site.xml definition
<link xlink:href="https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml"/>
'';
};
hdfsSite = mkOption {
@ -28,7 +31,10 @@ with lib;
"dfs.nameservices" = "namenode1";
}
'';
description = "Hadoop hdfs-site.xml definition";
description = ''
Hadoop hdfs-site.xml definition
<link xlink:href="https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml"/>
'';
};
mapredSite = mkOption {
@ -44,7 +50,10 @@ with lib;
"mapreduce.map.java.opts" = "-Xmx900m -XX:+UseParallelGC";
}
'';
description = "Hadoop mapred-site.xml definition";
description = ''
Hadoop mapred-site.xml definition
<link xlink:href="https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml"/>
'';
};
yarnSite = mkOption {
@ -67,7 +76,24 @@ with lib;
"yarn.resourcemanager.hostname" = "''${config.networking.hostName}";
}
'';
description = "Hadoop yarn-site.xml definition";
description = ''
Hadoop yarn-site.xml definition
<link xlink:href="https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-common/yarn-default.xml"/>
'';
};
httpfsSite = mkOption {
default = { };
type = types.attrsOf types.anything;
example = literalExpression ''
{
"hadoop.http.max.threads" = 500;
}
'';
description = ''
Hadoop httpfs-site.xml definition
<link xlink:href="https://hadoop.apache.org/docs/current/hadoop-hdfs-httpfs/httpfs-default.html"/>
'';
};
log4jProperties = mkOption {
@ -92,7 +118,10 @@ with lib;
"feature.terminal.enabled" = 0;
}
'';
description = "Yarn container-executor.cfg definition";
description = ''
Yarn container-executor.cfg definition
<link xlink:href="https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/SecureContainer.html"/>
'';
};
extraConfDirs = mkOption {
@ -118,7 +147,8 @@ with lib;
config = mkMerge [
(mkIf (builtins.hasAttr "yarn" config.users.users ||
builtins.hasAttr "hdfs" config.users.users) {
builtins.hasAttr "hdfs" config.users.users ||
builtins.hasAttr "httpfs" config.users.users) {
users.groups.hadoop = {
gid = config.ids.gids.hadoop;
};

View file

@ -17,11 +17,14 @@ in
{
options.services.hadoop.hdfs = {
namenode = {
enabled = mkOption {
enable = mkEnableOption "Whether to run the HDFS NameNode";
formatOnInit = mkOption {
type = types.bool;
default = false;
description = ''
Whether to run the HDFS NameNode
Format HDFS namenode on first start. This is useful for quickly spinning up ephemeral HDFS clusters with a single namenode.
For HA clusters, initialization involves multiple steps across multiple nodes. Follow [this guide](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html)
to initialize an HA cluster manually.
'';
};
inherit restartIfChanged;
@ -34,13 +37,7 @@ in
};
};
datanode = {
enabled = mkOption {
type = types.bool;
default = false;
description = ''
Whether to run the HDFS DataNode
'';
};
enable = mkEnableOption "Whether to run the HDFS DataNode";
inherit restartIfChanged;
openFirewall = mkOption {
type = types.bool;
@ -50,18 +47,51 @@ in
'';
};
};
journalnode = {
enable = mkEnableOption "Whether to run the HDFS JournalNode";
inherit restartIfChanged;
openFirewall = mkOption {
type = types.bool;
default = true;
description = ''
Open firewall ports for journalnode
'';
};
};
zkfc = {
enable = mkEnableOption "Whether to run the HDFS ZooKeeper failover controller";
inherit restartIfChanged;
};
httpfs = {
enable = mkEnableOption "Whether to run the HDFS HTTPfs server";
tempPath = mkOption {
type = types.path;
default = "/tmp/hadoop/httpfs";
description = ''
HTTPFS_TEMP path used by HTTPFS
'';
};
inherit restartIfChanged;
openFirewall = mkOption {
type = types.bool;
default = true;
description = ''
Open firewall ports for HTTPFS
'';
};
};
};
config = mkMerge [
(mkIf cfg.hdfs.namenode.enabled {
(mkIf cfg.hdfs.namenode.enable {
systemd.services.hdfs-namenode = {
description = "Hadoop HDFS NameNode";
wantedBy = [ "multi-user.target" ];
inherit (cfg.hdfs.namenode) restartIfChanged;
preStart = ''
preStart = (mkIf cfg.hdfs.namenode.formatOnInit ''
${cfg.package}/bin/hdfs --config ${hadoopConf} namenode -format -nonInteractive || true
'';
'');
serviceConfig = {
User = "hdfs";
@ -74,9 +104,10 @@ in
networking.firewall.allowedTCPPorts = (mkIf cfg.hdfs.namenode.openFirewall [
9870 # namenode.http-address
8020 # namenode.rpc-address
8022 # namenode.servicerpc-address
]);
})
(mkIf cfg.hdfs.datanode.enabled {
(mkIf cfg.hdfs.datanode.enable {
systemd.services.hdfs-datanode = {
description = "Hadoop HDFS DataNode";
wantedBy = [ "multi-user.target" ];
@ -96,8 +127,64 @@ in
9867 # datanode.ipc.address
]);
})
(mkIf cfg.hdfs.journalnode.enable {
systemd.services.hdfs-journalnode = {
description = "Hadoop HDFS JournalNode";
wantedBy = [ "multi-user.target" ];
inherit (cfg.hdfs.journalnode) restartIfChanged;
serviceConfig = {
User = "hdfs";
SyslogIdentifier = "hdfs-journalnode";
ExecStart = "${cfg.package}/bin/hdfs --config ${hadoopConf} journalnode";
Restart = "always";
};
};
networking.firewall.allowedTCPPorts = (mkIf cfg.hdfs.journalnode.openFirewall [
8480 # dfs.journalnode.http-address
8485 # dfs.journalnode.rpc-address
]);
})
(mkIf cfg.hdfs.zkfc.enable {
systemd.services.hdfs-zkfc = {
description = "Hadoop HDFS ZooKeeper failover controller";
wantedBy = [ "multi-user.target" ];
inherit (cfg.hdfs.zkfc) restartIfChanged;
serviceConfig = {
User = "hdfs";
SyslogIdentifier = "hdfs-zkfc";
ExecStart = "${cfg.package}/bin/hdfs --config ${hadoopConf} zkfc";
Restart = "always";
};
};
})
(mkIf cfg.hdfs.httpfs.enable {
systemd.services.hdfs-httpfs = {
description = "Hadoop httpfs";
wantedBy = [ "multi-user.target" ];
inherit (cfg.hdfs.httpfs) restartIfChanged;
environment.HTTPFS_TEMP = cfg.hdfs.httpfs.tempPath;
preStart = ''
mkdir -p $HTTPFS_TEMP
'';
serviceConfig = {
User = "httpfs";
SyslogIdentifier = "hdfs-httpfs";
ExecStart = "${cfg.package}/bin/hdfs --config ${hadoopConf} httpfs";
Restart = "always";
};
};
networking.firewall.allowedTCPPorts = (mkIf cfg.hdfs.httpfs.openFirewall [
14000 # httpfs.http.port
]);
})
(mkIf (
cfg.hdfs.namenode.enabled || cfg.hdfs.datanode.enabled
cfg.hdfs.namenode.enable || cfg.hdfs.datanode.enable || cfg.hdfs.journalnode.enable || cfg.hdfs.zkfc.enable
) {
users.users.hdfs = {
description = "Hadoop HDFS user";
@ -105,6 +192,12 @@ in
uid = config.ids.uids.hdfs;
};
})
(mkIf cfg.hdfs.httpfs.enable {
users.users.httpfs = {
description = "Hadoop HTTPFS user";
group = "hadoop";
isSystemUser = true;
};
})
];
}
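
Putting the renamed options together, a minimal non-HA HDFS node using this module could be configured roughly as follows (the host name and site settings are illustrative; the option paths come from the diff above and from the tests further down):
```nix
{ pkgs, ... }:
{
  services.hadoop = {
    package = pkgs.hadoop;
    coreSite."fs.defaultFS" = "hdfs://namenode:8020";
    hdfs = {
      namenode = {
        enable = true;
        formatOnInit = true; # convenient for ephemeral, single-namenode clusters
      };
      datanode.enable = true;
      httpfs.enable = true;  # WebHDFS-compatible HTTP gateway on port 14000
    };
  };
}
```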

View file

@ -17,13 +17,7 @@ in
{
options.services.hadoop.yarn = {
resourcemanager = {
enabled = mkOption {
type = types.bool;
default = false;
description = ''
Whether to run the Hadoop YARN ResourceManager
'';
};
enable = mkEnableOption "Whether to run the Hadoop YARN ResourceManager";
inherit restartIfChanged;
openFirewall = mkOption {
type = types.bool;
@ -34,13 +28,7 @@ in
};
};
nodemanager = {
enabled = mkOption {
type = types.bool;
default = false;
description = ''
Whether to run the Hadoop YARN NodeManager
'';
};
enable = mkEnableOption "Whether to run the Hadoop YARN NodeManager";
inherit restartIfChanged;
addBinBash = mkOption {
type = types.bool;
@ -62,7 +50,7 @@ in
config = mkMerge [
(mkIf (
cfg.yarn.resourcemanager.enabled || cfg.yarn.nodemanager.enabled
cfg.yarn.resourcemanager.enable || cfg.yarn.nodemanager.enable
) {
users.users.yarn = {
@ -72,7 +60,7 @@ in
};
})
(mkIf cfg.yarn.resourcemanager.enabled {
(mkIf cfg.yarn.resourcemanager.enable {
systemd.services.yarn-resourcemanager = {
description = "Hadoop YARN ResourceManager";
wantedBy = [ "multi-user.target" ];
@ -91,10 +79,11 @@ in
8030 # resourcemanager.scheduler.address
8031 # resourcemanager.resource-tracker.address
8032 # resourcemanager.address
8033 # resourcemanager.admin.address
]);
})
(mkIf cfg.yarn.nodemanager.enabled {
(mkIf cfg.yarn.nodemanager.enable {
# Needed because yarn hardcodes /bin/bash in container start scripts
# These scripts can't be patched, they are generated at runtime
systemd.tmpfiles.rules = [

View file

@ -9,25 +9,6 @@ let
${optionalString (cfg.maximumJavaHeapSize != null) "-Xmx${(toString cfg.maximumJavaHeapSize)}m"} \
-jar ${stateDir}/lib/ace.jar
'';
mountPoints = [
{
what = "${cfg.unifiPackage}/dl";
where = "${stateDir}/dl";
}
{
what = "${cfg.unifiPackage}/lib";
where = "${stateDir}/lib";
}
{
what = "${cfg.mongodbPackage}/bin";
where = "${stateDir}/bin";
}
{
what = "${cfg.dataDir}";
where = "${stateDir}/data";
}
];
systemdMountPoints = map (m: "${utils.escapeSystemdPath m.where}.mount") mountPoints;
in
{
@ -68,16 +49,6 @@ in
'';
};
services.unifi.dataDir = mkOption {
type = types.str;
default = "${stateDir}/data";
description = ''
Where to store the database and other data.
This directory will be bind-mounted to ${stateDir}/data as part of the service startup.
'';
};
services.unifi.openPorts = mkOption {
type = types.bool;
default = true;
@ -136,32 +107,11 @@ in
];
};
# We must create the binary directories as bind mounts instead of symlinks
# This is because the controller resolves all symlinks to absolute paths
# to be used as the working directory.
systemd.mounts = map ({ what, where }: {
bindsTo = [ "unifi.service" ];
partOf = [ "unifi.service" ];
unitConfig.RequiresMountsFor = stateDir;
options = "bind";
what = what;
where = where;
}) mountPoints;
systemd.tmpfiles.rules = [
"d '${stateDir}' 0700 unifi - - -"
"d '${stateDir}/data' 0700 unifi - - -"
"d '${stateDir}/webapps' 0700 unifi - - -"
"L+ '${stateDir}/webapps/ROOT' - - - - ${cfg.unifiPackage}/webapps/ROOT"
];
systemd.services.unifi = {
description = "UniFi controller daemon";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ] ++ systemdMountPoints;
partOf = systemdMountPoints;
bindsTo = systemdMountPoints;
unitConfig.RequiresMountsFor = stateDir;
after = [ "network.target" ];
# This is a HACK to fix missing dependencies of dynamic libs extracted from jars
environment.LD_LIBRARY_PATH = with pkgs.stdenv; "${cc.cc.lib}/lib";
# Make sure package upgrades trigger a service restart
@ -209,8 +159,27 @@ in
SystemCallErrorNumber = "EPERM";
SystemCallFilter = [ "@system-service" ];
# Required for ProtectSystem=strict
BindPaths = [ stateDir ];
StateDirectory = "unifi";
RuntimeDirectory = "unifi";
LogsDirectory = "unifi";
CacheDirectory = "unifi";
TemporaryFileSystem = [
# required as we want to create bind mounts below
"${stateDir}/webapps:rw"
];
# We must create the binary directories as bind mounts instead of symlinks
# This is because the controller resolves all symlinks to absolute paths
# to be used as the working directory.
BindPaths = [
"/var/log/unifi:${stateDir}/logs"
"/run/unifi:${stateDir}/run"
"${cfg.unifiPackage}/dl:${stateDir}/dl"
"${cfg.unifiPackage}/lib:${stateDir}/lib"
"${cfg.mongodbPackage}/bin:${stateDir}/bin"
"${cfg.unifiPackage}/webapps/ROOT:${stateDir}/webapps/ROOT"
];
# Needs network access
PrivateNetwork = false;
@ -220,6 +189,9 @@ in
};
};
imports = [
(mkRemovedOptionModule [ "services" "unifi" "dataDir" ] "You should move contents of dataDir to /var/lib/unifi/data" )
];
meta.maintainers = with lib.maintainers; [ erictapen pennae ];
}

View file

@ -38,7 +38,7 @@ in
};
# Mount the vmblock for drag-and-drop and copy-and-paste.
systemd.mounts = [
systemd.mounts = mkIf (!cfg.headless) [
{
description = "VMware vmblock fuse mount";
documentation = [ "https://github.com/vmware/open-vm-tools/blob/master/open-vm-tools/vmblock-fuse/design.txt" ];
@ -52,8 +52,8 @@ in
}
];
security.wrappers.vmware-user-suid-wrapper =
{ setuid = true;
security.wrappers.vmware-user-suid-wrapper = mkIf (!cfg.headless) {
setuid = true;
owner = "root";
group = "root";
source = "${open-vm-tools}/bin/vmware-user-suid-wrapper";

View file

@ -119,7 +119,7 @@ import ./make-test-python.nix ({ pkgs, lib, ... }: {
with subtest("Stop a container early"):
machine.succeed(f"nixos-container stop {id1}")
machine.succeed(f"nixos-container start {id1} &")
machine.succeed(f"nixos-container start {id1} >&2 &")
machine.wait_for_console_text("Stage 2")
machine.succeed(f"nixos-container stop {id1}")
machine.wait_for_console_text(f"Container {id1} exited successfully")

View file

@ -38,7 +38,7 @@ in {
sender.execute("echo Hello World > testfile01.txt")
sender.execute("echo Hello Earth > testfile02.txt")
sender.execute(
"croc --pass ${pass} --relay relay send --code topSecret testfile01.txt testfile02.txt &"
"croc --pass ${pass} --relay relay send --code topSecret testfile01.txt testfile02.txt >&2 &"
)
# receive the testfiles and check them

View file

@ -5,7 +5,7 @@ import ./make-test-python.nix ({ pkgs, ...} : {
};
nodes = {
simple2 = {
simple = {
services.deluge = {
enable = true;
package = pkgs.deluge-2_x;
@ -16,7 +16,7 @@ import ./make-test-python.nix ({ pkgs, ...} : {
};
};
declarative2 = {
declarative = {
services.deluge = {
enable = true;
package = pkgs.deluge-2_x;
@ -45,27 +45,16 @@ import ./make-test-python.nix ({ pkgs, ...} : {
testScript = ''
start_all()
simple1.wait_for_unit("deluged")
simple2.wait_for_unit("deluged")
simple1.wait_for_unit("delugeweb")
simple2.wait_for_unit("delugeweb")
simple1.wait_for_open_port("8112")
simple2.wait_for_open_port("8112")
declarative1.wait_for_unit("network.target")
declarative2.wait_for_unit("network.target")
declarative1.wait_until_succeeds("curl --fail http://simple1:8112")
declarative2.wait_until_succeeds("curl --fail http://simple2:8112")
simple.wait_for_unit("deluged")
simple.wait_for_unit("delugeweb")
simple.wait_for_open_port("8112")
declarative.wait_for_unit("network.target")
declarative.wait_until_succeeds("curl --fail http://simple:8112")
declarative1.wait_for_unit("deluged")
declarative2.wait_for_unit("deluged")
declarative1.wait_for_unit("delugeweb")
declarative2.wait_for_unit("delugeweb")
declarative1.wait_until_succeeds("curl --fail http://declarative1:3142")
declarative2.wait_until_succeeds("curl --fail http://declarative2:3142")
declarative1.succeed(
"deluge-console 'connect 127.0.0.1:58846 andrew password; help' | grep -q 'rm.*Remove a torrent'"
)
declarative2.succeed(
declarative.wait_for_unit("deluged")
declarative.wait_for_unit("delugeweb")
declarative.wait_until_succeeds("curl --fail http://declarative:3142")
declarative.succeed(
"deluge-console 'connect 127.0.0.1:58846 andrew password; help' | grep -q 'rm.*Remove a torrent'"
)
'';

View file

@ -33,7 +33,7 @@ import ./make-test-python.nix ({ pkgs, ...} : {
)
# connects to the daemon
machine.succeed("emacsclient --create-frame $EDITOR &")
machine.succeed("emacsclient --create-frame $EDITOR >&2 &")
# checks that Emacs shows the edited filename
machine.wait_for_text("emacseditor")

View file

@ -88,7 +88,7 @@ import ./make-test-python.nix ({ pkgs, ...} :
machine.screenshot("wizard12")
with subtest("Run Terminology"):
machine.succeed("terminology &")
machine.succeed("terminology >&2 &")
machine.sleep(5)
machine.send_chars("ls --color -alF\n")
machine.sleep(2)

View file

@ -13,7 +13,7 @@ import ./make-test-python.nix ({ pkgs, ... }: {
''
machine.wait_for_unit("multi-user.target")
machine.succeed("etesync-dav --version")
machine.execute("etesync-dav &")
machine.execute("etesync-dav >&2 &")
machine.wait_for_open_port(37358)
with subtest("Check that the web interface is accessible"):
assert "Add User" in machine.succeed("curl -s http://localhost:37358/.web/add/")

View file

@ -91,7 +91,7 @@ import ./make-test-python.nix ({ pkgs, firefoxPackage, ... }: {
with subtest("Wait until Firefox has finished loading the Valgrind docs page"):
machine.execute(
"xterm -e 'firefox file://${pkgs.valgrind.doc}/share/doc/valgrind/html/index.html' &"
"xterm -e 'firefox file://${pkgs.valgrind.doc}/share/doc/valgrind/html/index.html' >&2 &"
)
machine.wait_for_window("Valgrind")
machine.sleep(40)
@ -99,7 +99,7 @@ import ./make-test-python.nix ({ pkgs, firefoxPackage, ... }: {
with subtest("Check whether Firefox can play sound"):
with audio_recording(machine):
machine.succeed(
"firefox file://${pkgs.sound-theme-freedesktop}/share/sounds/freedesktop/stereo/phone-incoming-call.oga &"
"firefox file://${pkgs.sound-theme-freedesktop}/share/sounds/freedesktop/stereo/phone-incoming-call.oga >&2 &"
)
wait_for_sound(machine)
machine.copy_from_vm("/tmp/record.wav")

View file

@ -22,7 +22,7 @@ import ./make-test-python.nix ({ pkgs, ... }: {
# Add a dummy sound card, or the program won't start
machine.execute("modprobe snd-dummy")
machine.execute("ft2-clone &")
machine.execute("ft2-clone >&2 &")
machine.wait_for_window(r"Fasttracker")
machine.sleep(5)

View file

@ -1,70 +1,230 @@
# This test is very comprehensive. It tests whether all hadoop services work well with each other.
# Run this when updating the Hadoop package or making significant changes to the hadoop module.
# For a more basic test, see hdfs.nix and yarn.nix
import ../make-test-python.nix ({pkgs, ...}: {
nodes = let
package = pkgs.hadoop;
coreSite = {
"fs.defaultFS" = "hdfs://master";
"fs.defaultFS" = "hdfs://ns1";
};
hdfsSite = {
"dfs.namenode.rpc-bind-host" = "0.0.0.0";
"dfs.namenode.http-bind-host" = "0.0.0.0";
"dfs.namenode.servicerpc-bind-host" = "0.0.0.0";
# HA Quorum Journal Manager configuration
"dfs.nameservices" = "ns1";
"dfs.ha.namenodes.ns1" = "nn1,nn2";
"dfs.namenode.shared.edits.dir.ns1.nn1" = "qjournal://jn1:8485;jn2:8485;jn3:8485/ns1";
"dfs.namenode.shared.edits.dir.ns1.nn2" = "qjournal://jn1:8485;jn2:8485;jn3:8485/ns1";
"dfs.namenode.rpc-address.ns1.nn1" = "nn1:8020";
"dfs.namenode.rpc-address.ns1.nn2" = "nn2:8020";
"dfs.namenode.servicerpc-address.ns1.nn1" = "nn1:8022";
"dfs.namenode.servicerpc-address.ns1.nn2" = "nn2:8022";
"dfs.namenode.http-address.ns1.nn1" = "nn1:9870";
"dfs.namenode.http-address.ns1.nn2" = "nn2:9870";
# Automatic failover configuration
"dfs.client.failover.proxy.provider.ns1" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider";
"dfs.ha.automatic-failover.enabled.ns1" = "true";
"dfs.ha.fencing.methods" = "shell(true)";
"ha.zookeeper.quorum" = "zk1:2181";
};
yarnSiteHA = {
"yarn.resourcemanager.zk-address" = "zk1:2181";
"yarn.resourcemanager.ha.enabled" = "true";
"yarn.resourcemanager.ha.rm-ids" = "rm1,rm2";
"yarn.resourcemanager.hostname.rm1" = "rm1";
"yarn.resourcemanager.hostname.rm2" = "rm2";
"yarn.resourcemanager.ha.automatic-failover.enabled" = "true";
"yarn.resourcemanager.cluster-id" = "cluster1";
# yarn.resourcemanager.webapp.address needs to be defined even though yarn.resourcemanager.hostname is set. This shouldn't be necessary, but there's a bug in
# hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/amfilter/AmFilterInitializer.java:70
# that causes AM containers to fail otherwise.
"yarn.resourcemanager.webapp.address.rm1" = "rm1:8088";
"yarn.resourcemanager.webapp.address.rm2" = "rm2:8088";
};
in {
master = {pkgs, options, ...}: {
services.hadoop = {
inherit package coreSite;
hdfs.namenode.enabled = true;
yarn.resourcemanager.enabled = true;
};
virtualisation.memorySize = 1024;
zk1 = { ... }: {
services.zookeeper.enable = true;
networking.firewall.allowedTCPPorts = [ 2181 ];
};
worker = {pkgs, options, ...}: {
# HDFS cluster
nn1 = {pkgs, options, ...}: {
services.hadoop = {
inherit package coreSite;
hdfs.datanode.enabled = true;
yarn.nodemanager.enabled = true;
yarnSite = options.services.hadoop.yarnSite.default // {
"yarn.resourcemanager.hostname" = "master";
};
inherit package coreSite hdfsSite;
hdfs.namenode.enable = true;
hdfs.zkfc.enable = true;
};
};
nn2 = {pkgs, options, ...}: {
services.hadoop = {
inherit package coreSite hdfsSite;
hdfs.namenode.enable = true;
hdfs.zkfc.enable = true;
};
};
jn1 = {pkgs, options, ...}: {
services.hadoop = {
inherit package coreSite hdfsSite;
hdfs.journalnode.enable = true;
};
};
jn2 = {pkgs, options, ...}: {
services.hadoop = {
inherit package coreSite hdfsSite;
hdfs.journalnode.enable = true;
};
};
jn3 = {pkgs, options, ...}: {
services.hadoop = {
inherit package coreSite hdfsSite;
hdfs.journalnode.enable = true;
};
};
dn1 = {pkgs, options, ...}: {
services.hadoop = {
inherit package coreSite hdfsSite;
hdfs.datanode.enable = true;
};
};
# YARN cluster
rm1 = {pkgs, options, ...}: {
virtualisation.memorySize = 1024;
services.hadoop = {
inherit package coreSite hdfsSite;
yarnSite = options.services.hadoop.yarnSite.default // yarnSiteHA;
yarn.resourcemanager.enable = true;
};
};
rm2 = {pkgs, options, ...}: {
virtualisation.memorySize = 1024;
services.hadoop = {
inherit package coreSite hdfsSite;
yarnSite = options.services.hadoop.yarnSite.default // yarnSiteHA;
yarn.resourcemanager.enable = true;
};
};
nm1 = {pkgs, options, ...}: {
virtualisation.memorySize = 2048;
services.hadoop = {
inherit package coreSite hdfsSite;
yarnSite = options.services.hadoop.yarnSite.default // yarnSiteHA;
yarn.nodemanager.enable = true;
};
};
};
testScript = ''
start_all()
master.wait_for_unit("network.target")
master.wait_for_unit("hdfs-namenode")
#### HDFS tests ####
master.wait_for_open_port(8020)
master.wait_for_open_port(9870)
zk1.wait_for_unit("network.target")
jn1.wait_for_unit("network.target")
jn2.wait_for_unit("network.target")
jn3.wait_for_unit("network.target")
nn1.wait_for_unit("network.target")
nn2.wait_for_unit("network.target")
dn1.wait_for_unit("network.target")
worker.wait_for_unit("network.target")
worker.wait_for_unit("hdfs-datanode")
worker.wait_for_open_port(9864)
worker.wait_for_open_port(9866)
worker.wait_for_open_port(9867)
zk1.wait_for_unit("zookeeper")
jn1.wait_for_unit("hdfs-journalnode")
jn2.wait_for_unit("hdfs-journalnode")
jn3.wait_for_unit("hdfs-journalnode")
master.succeed("curl -f http://worker:9864")
worker.succeed("curl -f http://master:9870")
zk1.wait_for_open_port(2181)
jn1.wait_for_open_port(8480)
jn1.wait_for_open_port(8485)
jn2.wait_for_open_port(8480)
jn2.wait_for_open_port(8485)
worker.succeed("sudo -u hdfs hdfs dfsadmin -safemode wait")
# Namenodes must be stopped before initializing the cluster
nn1.succeed("systemctl stop hdfs-namenode")
nn2.succeed("systemctl stop hdfs-namenode")
nn1.succeed("systemctl stop hdfs-zkfc")
nn2.succeed("systemctl stop hdfs-zkfc")
master.wait_for_unit("yarn-resourcemanager")
# Initialize zookeeper for failover controller
nn1.succeed("sudo -u hdfs hdfs zkfc -formatZK 2>&1 | systemd-cat")
master.wait_for_open_port(8030)
master.wait_for_open_port(8031)
master.wait_for_open_port(8032)
master.wait_for_open_port(8088)
worker.succeed("curl -f http://master:8088")
# Format NN1 and start it
nn1.succeed("sudo -u hdfs hadoop namenode -format 2>&1 | systemd-cat")
nn1.succeed("systemctl start hdfs-namenode")
nn1.wait_for_open_port(9870)
nn1.wait_for_open_port(8022)
nn1.wait_for_open_port(8020)
worker.wait_for_unit("yarn-nodemanager")
worker.wait_for_open_port(8042)
worker.wait_for_open_port(8040)
master.succeed("curl -f http://worker:8042")
# Bootstrap NN2 from NN1 and start it
nn2.succeed("sudo -u hdfs hdfs namenode -bootstrapStandby 2>&1 | systemd-cat")
nn2.succeed("systemctl start hdfs-namenode")
nn2.wait_for_open_port(9870)
nn2.wait_for_open_port(8022)
nn2.wait_for_open_port(8020)
nn1.succeed("netstat -tulpne | systemd-cat")
assert "Total Nodes:1" in worker.succeed("yarn node -list")
# Start failover controllers
nn1.succeed("systemctl start hdfs-zkfc")
nn2.succeed("systemctl start hdfs-zkfc")
assert "Estimated value of Pi is" in worker.succeed("HADOOP_USER_NAME=hdfs yarn jar $(readlink $(which yarn) | sed -r 's~bin/yarn~lib/hadoop-*/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar~g') pi 2 10")
assert "SUCCEEDED" in worker.succeed("yarn application -list -appStates FINISHED")
worker.succeed("sudo -u hdfs hdfs dfs -ls / | systemd-cat")
# DN should have started by now, but confirm anyway
dn1.wait_for_unit("hdfs-datanode")
# Print states of namenodes
dn1.succeed("sudo -u hdfs hdfs haadmin -getAllServiceState | systemd-cat")
# Wait for cluster to exit safemode
dn1.succeed("sudo -u hdfs hdfs dfsadmin -safemode wait")
dn1.succeed("sudo -u hdfs hdfs haadmin -getAllServiceState | systemd-cat")
# test R/W
dn1.succeed("echo testfilecontents | sudo -u hdfs hdfs dfs -put - /testfile")
assert "testfilecontents" in dn1.succeed("sudo -u hdfs hdfs dfs -cat /testfile")
# Test NN failover
nn1.succeed("systemctl stop hdfs-namenode")
assert "active" in dn1.succeed("sudo -u hdfs hdfs haadmin -getAllServiceState")
dn1.succeed("sudo -u hdfs hdfs haadmin -getAllServiceState | systemd-cat")
assert "testfilecontents" in dn1.succeed("sudo -u hdfs hdfs dfs -cat /testfile")
nn1.succeed("systemctl start hdfs-namenode")
nn1.wait_for_open_port(9870)
nn1.wait_for_open_port(8022)
nn1.wait_for_open_port(8020)
assert "standby" in dn1.succeed("sudo -u hdfs hdfs haadmin -getAllServiceState")
dn1.succeed("sudo -u hdfs hdfs haadmin -getAllServiceState | systemd-cat")
#### YARN tests ####
rm1.wait_for_unit("network.target")
rm2.wait_for_unit("network.target")
nm1.wait_for_unit("network.target")
rm1.wait_for_unit("yarn-resourcemanager")
rm1.wait_for_open_port(8088)
rm2.wait_for_unit("yarn-resourcemanager")
rm2.wait_for_open_port(8088)
nm1.wait_for_unit("yarn-nodemanager")
nm1.wait_for_open_port(8042)
nm1.wait_for_open_port(8040)
nm1.wait_until_succeeds("yarn node -list | grep Nodes:1")
nm1.succeed("sudo -u yarn yarn rmadmin -getAllServiceState | systemd-cat")
nm1.succeed("sudo -u yarn yarn node -list | systemd-cat")
# Test RM failover
rm1.succeed("systemctl stop yarn-resourcemanager")
assert "standby" not in nm1.succeed("sudo -u yarn yarn rmadmin -getAllServiceState")
nm1.succeed("sudo -u yarn yarn rmadmin -getAllServiceState | systemd-cat")
rm1.succeed("systemctl start yarn-resourcemanager")
rm1.wait_for_unit("yarn-resourcemanager")
rm1.wait_for_open_port(8088)
assert "standby" in nm1.succeed("sudo -u yarn yarn rmadmin -getAllServiceState")
nm1.succeed("sudo -u yarn yarn rmadmin -getAllServiceState | systemd-cat")
assert "Estimated value of Pi is" in nm1.succeed("HADOOP_USER_NAME=hdfs yarn jar $(readlink $(which yarn) | sed -r 's~bin/yarn~lib/hadoop-*/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar~g') pi 2 10")
assert "SUCCEEDED" in nm1.succeed("yarn application -list -appStates FINISHED")
'';
})
})

View file

@ -1,36 +1,34 @@
# Test a minimal HDFS cluster with no HA
import ../make-test-python.nix ({...}: {
nodes = {
namenode = {pkgs, ...}: {
virtualisation.memorySize = 1024;
services.hadoop = {
package = pkgs.hadoop;
hdfs.namenode.enabled = true;
hdfs = {
namenode = {
enable = true;
formatOnInit = true;
};
httpfs.enable = true;
};
coreSite = {
"fs.defaultFS" = "hdfs://namenode:8020";
};
hdfsSite = {
"dfs.replication" = 1;
"dfs.namenode.rpc-bind-host" = "0.0.0.0";
"dfs.namenode.http-bind-host" = "0.0.0.0";
"hadoop.proxyuser.httpfs.groups" = "*";
"hadoop.proxyuser.httpfs.hosts" = "*";
};
};
networking.firewall.allowedTCPPorts = [
9870 # namenode.http-address
8020 # namenode.rpc-address
];
};
datanode = {pkgs, ...}: {
services.hadoop = {
package = pkgs.hadoop;
hdfs.datanode.enabled = true;
hdfs.datanode.enable = true;
coreSite = {
"fs.defaultFS" = "hdfs://namenode:8020";
"hadoop.proxyuser.httpfs.groups" = "*";
"hadoop.proxyuser.httpfs.hosts" = "*";
};
};
networking.firewall.allowedTCPPorts = [
9864 # datanode.http.address
9866 # datanode.address
9867 # datanode.ipc.address
];
};
};
@ -50,5 +48,13 @@ import ../make-test-python.nix ({...}: {
namenode.succeed("curl -f http://namenode:9870")
datanode.succeed("curl -f http://datanode:9864")
datanode.succeed("sudo -u hdfs hdfs dfsadmin -safemode wait")
datanode.succeed("echo testfilecontents | sudo -u hdfs hdfs dfs -put - /testfile")
assert "testfilecontents" in datanode.succeed("sudo -u hdfs hdfs dfs -cat /testfile")
namenode.wait_for_unit("hdfs-httpfs")
namenode.wait_for_open_port(14000)
assert "testfilecontents" in datanode.succeed("curl -f \"http://namenode:14000/webhdfs/v1/testfile?user.name=hdfs&op=OPEN\" 2>&1")
'';
})

View file

@ -1,28 +1,20 @@
# This only tests if YARN is able to start its services
import ../make-test-python.nix ({...}: {
nodes = {
resourcemanager = {pkgs, ...}: {
services.hadoop.package = pkgs.hadoop;
services.hadoop.yarn.resourcemanager.enabled = true;
services.hadoop.yarn.resourcemanager.enable = true;
services.hadoop.yarnSite = {
"yarn.resourcemanager.scheduler.class" = "org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler";
};
networking.firewall.allowedTCPPorts = [
8088 # resourcemanager.webapp.address
8031 # resourcemanager.resource-tracker.address
];
};
nodemanager = {pkgs, ...}: {
services.hadoop.package = pkgs.hadoop;
services.hadoop.yarn.nodemanager.enabled = true;
services.hadoop.yarn.nodemanager.enable = true;
services.hadoop.yarnSite = {
"yarn.resourcemanager.hostname" = "resourcemanager";
"yarn.nodemanager.log-dirs" = "/tmp/userlogs";
"yarn.nodemanager.address" = "0.0.0.0:8041";
};
networking.firewall.allowedTCPPorts = [
8042 # nodemanager.webapp.address
8041 # nodemanager.address
];
};
};
@ -38,7 +30,6 @@ import ../make-test-python.nix ({...}: {
nodemanager.wait_for_unit("yarn-nodemanager")
nodemanager.wait_for_unit("network.target")
nodemanager.wait_for_open_port(8042)
nodemanager.wait_for_open_port(8041)
resourcemanager.succeed("curl -f http://localhost:8088")
nodemanager.succeed("curl -f http://localhost:8042")

View file

@ -110,7 +110,7 @@ in makeTest {
)
# Hibernate machine
hibernate.execute("systemctl hibernate &", check_return=False)
hibernate.execute("systemctl hibernate >&2 &", check_return=False)
hibernate.wait_for_shutdown()
# Restore machine from hibernation, validate our ramfs file is there.

View file

@ -26,7 +26,7 @@ import ./make-test-python.nix ({ pkgs, ...} :
machine.wait_for_x()
# start KeePassXC window
machine.execute("su - alice -c keepassxc &")
machine.execute("su - alice -c keepassxc >&2 &")
machine.wait_for_text("KeePassXC ${pkgs.keepassxc.version}")
machine.screenshot("KeePassXC")

View file

@ -18,7 +18,7 @@ import ./make-test-python.nix ({ pkgs, lib, ...} : {
testScript =
''
machine.wait_for_unit("multi-user.target")
machine.execute("systemctl kexec &", check_return=False)
machine.execute("systemctl kexec >&2 &", check_return=False)
machine.connected = False
machine.wait_for_unit("multi-user.target")
'';

View file

@ -46,7 +46,7 @@ let
# set up process that expects all the keys to be entered
machine.succeed(
"{} {} {} {} &".format(
"{} {} {} {} >&2 &".format(
cmd,
"${testReader}",
len(inputs),

View file

@ -89,7 +89,7 @@ in
"""
Sends a message as Alice to Bob
"""
bob.execute("nc -lu ::0 1234 >/tmp/msg &")
bob.execute("nc -lu ::0 1234 >/tmp/msg >&2 &")
alice.sleep(1)
alice.succeed(f"echo '{msg}' | nc -uw 0 bob 1234")
bob.succeed(f"grep '{msg}' /tmp/msg")
@ -100,7 +100,7 @@ in
Starts eavesdropping on Alice and Bob
"""
match = "src host alice and dst host bob"
eve.execute(f"tcpdump -i br0 -c 1 -Avv {match} >/tmp/log &")
eve.execute(f"tcpdump -i br0 -c 1 -Avv {match} >/tmp/log >&2 &")
start_all()
@ -120,7 +120,7 @@ in
alice.succeed("ipsec verify 1>&2")
with subtest("Alice and Bob can start the tunnel"):
alice.execute("ipsec auto --start tunnel &")
alice.execute("ipsec auto --start tunnel >&2 &")
bob.succeed("ipsec auto --start tunnel")
# apparently this is needed to "wake" the tunnel
bob.execute("ping -c1 alice")

View file

@ -14,7 +14,7 @@ import ../make-test-python.nix {
)
# Start the daemon and wait until it is ready
machine.execute("lorri daemon > lorri.stdout 2> lorri.stderr &")
machine.execute("lorri daemon > lorri.stdout 2> lorri.stderr >&2 &")
machine.wait_until_succeeds("grep --fixed-strings 'ready' lorri.stdout")
# Ping the daemon

View file

@ -29,7 +29,7 @@ import ./make-test-python.nix ({ pkgs, ... }: {
# Create a secret file and send it to Bob
client_alice.succeed("echo mysecret > secretfile")
client_alice.succeed("wormhole --relay-url=ws://server:4000/v1 send -0 secretfile &")
client_alice.succeed("wormhole --relay-url=ws://server:4000/v1 send -0 secretfile >&2 &")
# Retrieve a secret file from Alice and check its content
client_bob.succeed("wormhole --relay-url=ws://server:4000/v1 receive -0 --accept-file")

View file

@ -25,7 +25,7 @@ import ./make-test-python.nix ({ pkgs, ... }:
"bind_address" = "";
"port" = 8448;
"resources" = [
{ "compress" = true; "names" = [ "client" "webclient" ]; }
{ "compress" = true; "names" = [ "client" ]; }
{ "compress" = false; "names" = [ "federation" ]; }
];
"tls" = false;
@ -85,52 +85,108 @@ import ./make-test-python.nix ({ pkgs, ... }:
client = { pkgs, ... }: {
environment.systemPackages = [
(pkgs.writers.writePython3Bin "do_test"
{ libraries = [ pkgs.python3Packages.matrix-client ]; } ''
import socket
from matrix_client.client import MatrixClient
from time import sleep
{
libraries = [ pkgs.python3Packages.matrix-nio ];
flakeIgnore = [
# We don't live in the dark ages anymore.
# Languages like Python that are whitespace heavy will overrun
# 79 characters..
"E501"
];
} ''
import sys
import socket
import functools
from time import sleep
import asyncio
matrix = MatrixClient("${homeserverUrl}")
matrix.register_with_password(username="alice", password="foobar")
irc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
irc.connect(("ircd", 6667))
irc.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
irc.send(b"USER bob bob bob :bob\n")
irc.send(b"NICK bob\n")
m_room = matrix.join_room("#irc_#test:homeserver")
irc.send(b"JOIN #test\n")
# plenty of time for the joins to happen
sleep(10)
m_room.send_text("hi from matrix")
irc.send(b"PRIVMSG #test :hi from irc \r\n")
print("Waiting for irc message...")
while True:
buf = irc.recv(10000)
if b"hi from matrix" in buf:
break
print("Waiting for matrix message...")
from nio import AsyncClient, RoomMessageText, JoinResponse
def callback(room, e):
if "hi from irc" in e['content']['body']:
exit(0)
async def matrix_room_message_text_callback(matrix: AsyncClient, msg: str, _r, e):
print("Received matrix text message: ", e)
if msg in e.body:
print("Received hi from IRC")
await matrix.close()
exit(0) # Actual exit point
m_room.add_listener(callback, "m.room.message")
matrix.listen_forever()
''
class IRC:
def __init__(self):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("ircd", 6667))
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.send(b"USER bob bob bob :bob\n")
sock.send(b"NICK bob\n")
self.sock = sock
def join(self, room: str):
self.sock.send(f"JOIN {room}\n".encode())
def privmsg(self, room: str, msg: str):
self.sock.send(f"PRIVMSG {room} :{msg}\n".encode())
def expect_msg(self, body: str):
buffer = ""
while True:
buf = self.sock.recv(1024).decode()
buffer += buf
if body in buffer:
return
async def run(homeserver: str):
irc = IRC()
matrix = AsyncClient(homeserver)
response = await matrix.register("alice", "foobar")
print("Matrix register response: ", response)
response = await matrix.join("#irc_#test:homeserver")
print("Matrix join room response:", response)
assert isinstance(response, JoinResponse)
room_id = response.room_id
irc.join("#test")
# FIXME: what are we waiting on here? Matrix? IRC? Both?
# 10s seems bad for busy hydra machines.
sleep(10)
# Exchange messages
print("Sending text message to matrix room")
response = await matrix.room_send(
room_id=room_id,
message_type="m.room.message",
content={"msgtype": "m.text", "body": "hi from matrix"},
)
print("Matrix room send response: ", response)
irc.privmsg("#test", "hi from irc")
print("Waiting for the matrix message to appear on the IRC side...")
irc.expect_msg("hi from matrix")
callback = functools.partial(
matrix_room_message_text_callback, matrix, "hi from irc"
)
matrix.add_event_callback(callback, RoomMessageText)
print("Waiting for matrix message...")
await matrix.sync_forever()
exit(1) # Unreachable
if __name__ == "__main__":
asyncio.run(run(sys.argv[1]))
''
)
];
};
};
testScript = ''
import pathlib
start_all()
ircd.wait_for_unit("ngircd.service")
@ -156,7 +212,6 @@ import ./make-test-python.nix ({ pkgs, ... }:
homeserver.wait_for_open_port(8448)
with subtest("ensure messages can be exchanged"):
client.succeed("do_test")
client.succeed("do_test ${homeserverUrl} >&2")
'';
})

View file

@ -20,7 +20,7 @@ import ./make-test-python.nix ({ pkgs, lib, ... }: {
let user = nodes.client.config.users.users.alice;
in ''
client.wait_for_x()
client.execute("su - alice -c minecraft-launcher &")
client.execute("su - alice -c minecraft-launcher >&2 &")
client.wait_for_text("Create a new Microsoft account")
client.sleep(10)
client.screenshot("launcher")

View file

@ -21,7 +21,7 @@ in
};
testScript = ''
machine.execute("set -m; mpv --script-opts=webui-port=${port} --idle=yes &")
machine.execute("set -m; mpv --script-opts=webui-port=${port} --idle=yes >&2 &")
machine.wait_for_open_port(${port})
assert "<title>simple-mpv-webui" in machine.succeed("curl -s localhost:${port}")
'';

View file

@ -38,8 +38,8 @@ in
client1.wait_for_x()
client2.wait_for_x()
client1.execute("mumble mumble://client1:testpassword\@server/test &")
client2.execute("mumble mumble://client2:testpassword\@server/test &")
client1.execute("mumble mumble://client1:testpassword\@server/test >&2 &")
client2.execute("mumble mumble://client2:testpassword\@server/test >&2 &")
# cancel client audio configuration
client1.wait_for_window(r"Audio Tuning Wizard")

View file

@ -44,7 +44,7 @@ in
)
# Start MuseScore window
machine.execute("DISPLAY=:0.0 mscore &")
machine.execute("DISPLAY=:0.0 mscore >&2 &")
# Wait until MuseScore has launched
machine.wait_for_window("MuseScore")

View file

@ -66,7 +66,7 @@ in
client2.succeed("time flock -n -s /data/lock true")
with subtest("client 2 fails to acquire lock held by client 1"):
client1.succeed("flock -x /data/lock -c 'touch locked; sleep 100000' &")
client1.succeed("flock -x /data/lock -c 'touch locked; sleep 100000' >&2 &")
client1.wait_for_file("locked")
client2.fail("flock -n -s /data/lock true")

View file

@ -76,7 +76,7 @@ import ./make-test-python.nix {
server.wait_for_unit("nginx.service")
client.wait_for_unit("multi-user.target")
client.execute("test-runner &")
client.execute("test-runner >&2 &")
client.wait_for_file("/tmp/passed_stage1")
server.succeed(

View file

@ -78,7 +78,7 @@ let
# Put newlines on console, to flush the console reader's line buffer
# in case nixops' last output did not end in a newline, as is the case
# with a status line (if implemented?)
deployer.succeed("while sleep 60s; do echo [60s passed] >/dev/console; done &")
deployer.succeed("while sleep 60s; do echo [60s passed]; done >&2 &")
deployer_do("cd ~/unicorn; ssh -oStrictHostKeyChecking=accept-new root@server echo hi")

View file

@ -38,8 +38,8 @@ in {
client1.wait_for_x()
client2.wait_for_x()
client1.execute("openarena +set r_fullscreen 0 +set name Foo +connect server &")
client2.execute("openarena +set r_fullscreen 0 +set name Bar +connect server &")
client1.execute("openarena +set r_fullscreen 0 +set name Foo +connect server >&2 &")
client2.execute("openarena +set r_fullscreen 0 +set name Bar +connect server >&2 &")
server.wait_until_succeeds(
"journalctl -u openarena -e | grep -q 'Foo.*entered the game'"

View file

@ -1,21 +1,42 @@
{ system ? builtins.currentSystem, config ? { }
, pkgs ? import ../.. { inherit system config; } }:
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
makeTest {
import ./make-test-python.nix ({ pkgs, ... }: {
name = "owncast";
meta = with pkgs.stdenv.lib.maintainers; { maintainers = [ MayNiklas ]; };
meta = with pkgs.lib.maintainers; { maintainers = [ MayNiklas ]; };
nodes = {
client = { ... }: {
environment.systemPackages = [ curl ];
services.owncast = { enable = true; };
client = { pkgs, ... }: with pkgs.lib; {
networking = {
dhcpcd.enable = false;
interfaces.eth1.ipv6.addresses = mkOverride 0 [ { address = "fd00::2"; prefixLength = 64; } ];
interfaces.eth1.ipv4.addresses = mkOverride 0 [ { address = "192.168.1.2"; prefixLength = 24; } ];
};
};
server = { pkgs, ... }: with pkgs.lib; {
networking = {
dhcpcd.enable = false;
useNetworkd = true;
useDHCP = false;
interfaces.eth1.ipv6.addresses = mkOverride 0 [ { address = "fd00::1"; prefixLength = 64; } ];
interfaces.eth1.ipv4.addresses = mkOverride 0 [ { address = "192.168.1.1"; prefixLength = 24; } ];
firewall.allowedTCPPorts = [ 8080 ];
};
services.owncast = {
enable = true;
listen = "0.0.0.0";
};
};
};
testScript = ''
start_all()
client.wait_for_unit("owncast.service")
client.succeed("curl localhost:8080/api/status")
client.wait_for_unit("network-online.target")
server.wait_for_unit("network-online.target")
server.wait_for_unit("owncast.service")
server.wait_until_succeeds("ss -ntl | grep -q 8080")
client.succeed("curl http://192.168.1.1:8080/api/status")
client.succeed("curl http://[fd00::1]:8080/api/status")
'';
}
})

View file

@ -14,7 +14,7 @@ import ./make-test-python.nix ({ pkgs, ... }: {
testScript = ''
machine.wait_for_x()
machine.succeed("gnome-calculator &")
machine.succeed("gnome-calculator >&2 &")
machine.wait_for_window("gnome-calculator")
machine.succeed(
"xdotool search --sync --onlyvisible --class gnome-calculator "

View file

@ -22,7 +22,7 @@ import ./make-test-python.nix ({ pkgs, ... }: {
# Add a dummy sound card, or the program won't start
machine.execute("modprobe snd-dummy")
machine.execute("pt2-clone &")
machine.execute("pt2-clone >&2 &")
machine.wait_for_window(r"ProTracker")
machine.sleep(5)

View file

@ -19,7 +19,7 @@ import ./make-test-python.nix ({ pkgs, ... }: {
testScript =
''
machine.wait_for_x()
machine.execute("shattered-pixel-dungeon &")
machine.execute("shattered-pixel-dungeon >&2 &")
machine.wait_for_window(r"Shattered Pixel Dungeon")
machine.sleep(5)
if "Enter" not in machine.get_screen_text():

View file

@ -41,7 +41,7 @@ in {
machine.wait_for_x()
# start signal desktop
machine.execute("su - alice -c signal-desktop &")
machine.execute("su - alice -c signal-desktop >&2 &")
# Wait for the Signal window to appear. Since usually the tests
# are run sandboxed and therefore with no internet, we can not wait

View file

@ -16,7 +16,7 @@ import ./make-test-python.nix ({ pkgs, ... }: {
testScript = ''
machine.wait_for_x()
machine.succeed("soapui &")
machine.succeed("soapui >&2 &")
machine.wait_for_window(r"SoapUI \d+\.\d+\.\d+")
machine.sleep(1)
machine.screenshot("soapui")

View file

@ -35,13 +35,13 @@ makeTest {
for host in [server, client]:
host.succeed("echo foobar | vncpasswd -f > vncpasswd")
server.succeed("Xvnc -geometry 720x576 :1 -PasswordFile vncpasswd &")
server.succeed("Xvnc -geometry 720x576 :1 -PasswordFile vncpasswd >&2 &")
server.wait_until_succeeds("nc -z localhost 5901", timeout=10)
server.succeed("DISPLAY=:1 xwininfo -root | grep 720x576")
server.execute("DISPLAY=:1 display -size 360x200 -font sans -gravity south label:'HELLO VNC WORLD' &")
server.execute("DISPLAY=:1 display -size 360x200 -font sans -gravity south label:'HELLO VNC WORLD' >&2 &")
client.wait_for_x()
client.execute("vncviewer server:1 -PasswordFile vncpasswd &")
client.execute("vncviewer server:1 -PasswordFile vncpasswd >&2 &")
client.wait_for_window(r"VNC")
client.screenshot("screenshot")
text = client.get_screen_text()

View file

@ -97,7 +97,7 @@ import ./make-test-python.nix ({ pkgs, lib, ... }: {
)
machine.execute(
# Note trailing & for backgrounding.
f"({xvnc_command} | tee /tmp/Xvnc.stdout) 3>&1 1>&2 2>&3 | tee /tmp/Xvnc.stderr &",
f"({xvnc_command} | tee /tmp/Xvnc.stdout) 3>&1 1>&2 2>&3 | tee /tmp/Xvnc.stderr >&2 &",
)
@ -119,7 +119,7 @@ import ./make-test-python.nix ({ pkgs, lib, ... }: {
def test_glxgears_failing_with_bad_driver_path():
machine.execute(
# Note trailing & for backgrounding.
"(env DISPLAY=:0 LIBGL_DRIVERS_PATH=/nonexistent glxgears -info | tee /tmp/glxgears-should-fail.stdout) 3>&1 1>&2 2>&3 | tee /tmp/glxgears-should-fail.stderr &"
"(env DISPLAY=:0 LIBGL_DRIVERS_PATH=/nonexistent glxgears -info | tee /tmp/glxgears-should-fail.stdout) 3>&1 1>&2 2>&3 | tee /tmp/glxgears-should-fail.stderr >&2 &"
)
machine.wait_until_succeeds("test -f /tmp/glxgears-should-fail.stderr")
wait_until_terminated_or_succeeds(
@ -136,7 +136,7 @@ import ./make-test-python.nix ({ pkgs, lib, ... }: {
def test_glxgears_prints_renderer():
machine.execute(
# Note trailing & for backgrounding.
"(env DISPLAY=:0 glxgears -info | tee /tmp/glxgears.stdout) 3>&1 1>&2 2>&3 | tee /tmp/glxgears.stderr &"
"(env DISPLAY=:0 glxgears -info | tee /tmp/glxgears.stdout) 3>&1 1>&2 2>&3 | tee /tmp/glxgears.stderr >&2 &"
)
machine.wait_until_succeeds("test -f /tmp/glxgears.stderr")
wait_until_terminated_or_succeeds(

View file

@@ -16,7 +16,7 @@ import ./make-test-python.nix ({ pkgs, ... }: {
testScript = ''
machine.wait_for_x()
machine.succeed("tuxguitar &")
machine.succeed("tuxguitar >&2 &")
machine.wait_for_window("TuxGuitar - Untitled.tg")
machine.sleep(1)
machine.screenshot("tuxguitar")

View file

@@ -430,7 +430,7 @@ in mapAttrs (mkVBoxTest false vboxVMs) {
create_vm_simple()
machine.succeed(ru("VirtualBox &"))
machine.succeed(ru("VirtualBox >&2 &"))
machine.wait_until_succeeds(ru("xprop -name 'Oracle VM VirtualBox Manager'"))
machine.sleep(5)
machine.screenshot("gui_manager_started")

View file

@@ -31,7 +31,7 @@ import ./make-test-python.nix ({ pkgs, ...} :
# Start VSCodium with a file that doesn't exist yet
machine.fail("ls /home/alice/foo.txt")
machine.succeed("su - alice -c 'codium foo.txt' &")
machine.succeed("su - alice -c 'codium foo.txt' >&2 &")
# Wait for the window to appear
machine.wait_for_text("VSCodium")

View file

@@ -32,13 +32,13 @@ import ./make-test-python.nix ({ pkgs, ...} : {
client.sleep(5)
client.execute("xterm &")
client.execute("xterm >&2 &")
client.sleep(1)
client.send_chars("xfreerdp /cert-tofu /w:640 /h:480 /v:127.0.0.1 /u:${user.name} /p:${user.password}\n")
client.sleep(5)
client.screenshot("localrdp")
client.execute("xterm &")
client.execute("xterm >&2 &")
client.sleep(1)
client.send_chars("xfreerdp /cert-tofu /w:640 /h:480 /v:server /u:${user.name} /p:${user.password}\n")
client.sleep(5)

View file

@@ -13,7 +13,7 @@ import ./make-test-python.nix ({ pkgs, ...} : {
testScript =
''
machine.wait_for_x()
machine.succeed("DISPLAY=:0 xterm -title testterm -class testterm -fullscreen &")
machine.succeed("DISPLAY=:0 xterm -title testterm -class testterm -fullscreen >&2 &")
machine.sleep(2)
machine.send_chars("echo $XTERM_VERSION >> /tmp/xterm_version\n")
machine.wait_for_file("/tmp/xterm_version")
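All of the test-script hunks above make the same change: a GUI program started in the background now has its stdout redirected to stderr. This keeps the program's output visible in the test log and, presumably, stops the backgrounded process from holding open the stdout stream that `succeed()`/`execute()` capture. A minimal sketch of the pattern, using a hypothetical program and window name:

```nix
testScript = ''
  machine.wait_for_x()
  # A bare "&" would leave the program's stdout attached to the captured
  # command output; ">&2 &" sends it to the test log instead, so the call
  # returns right away while the program keeps running.
  machine.succeed("some-gui-program >&2 &")  # hypothetical program name
  machine.wait_for_window("Some GUI Program")
'';
```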

View file

@@ -6,11 +6,11 @@
stdenv.mkDerivation rec {
pname = "bitwig-studio";
version = "4.0.1";
version = "4.0.7";
src = fetchurl {
url = "https://downloads.bitwig.com/stable/${version}/${pname}-${version}.deb";
sha256 = "sha256-yhCAKlbLjyBywkSYY1aqbUGFlAHBLR8g8xPDIqoUIZk=";
sha256 = "sha256-NAiwHLYhTAQH6xZw5u8bM7MOILcMclQMKtJc7MGJb+Q=";
};
nativeBuildInputs = [ dpkg makeWrapper wrapGAppsHook ];

View file

@@ -56,6 +56,6 @@ stdenv.mkDerivation rec {
homepage = "https://tonelib.net/";
license = licenses.unfree;
maintainers = with maintainers; [ dan4ik605743 ];
platforms = platforms.linux;
platforms = [ "x86_64-linux" ];
};
}

View file

@@ -1,12 +1,18 @@
{ stdenv
, dpkg
, lib
, autoPatchelfHook
{ lib
, stdenv
, fetchurl
, webkitgtk
, libjack2
, autoPatchelfHook
, dpkg
, alsa-lib
, freetype
, libglvnd
, curl
, libXcursor
, libXinerama
, libXrandr
, libXrender
, libjack2
, webkitgtk
}:
stdenv.mkDerivation rec {
@@ -18,36 +24,40 @@ stdenv.mkDerivation rec {
sha256 = "sha256-4q2vM0/q7o/FracnO2xxnr27opqfVQoN7fsqTD9Tr/c=";
};
buildInputs = [
dpkg
webkitgtk
libjack2
alsa-lib
];
nativeBuildInputs = [
autoPatchelfHook
dpkg
];
unpackPhase = ''
mkdir -p $TMP/ $out/
dpkg -x $src $TMP
'';
buildInputs = [
stdenv.cc.cc.lib
alsa-lib
freetype
libglvnd
webkitgtk
] ++ runtimeDependencies;
runtimeDependencies = map lib.getLib [
curl
libXcursor
libXinerama
libXrandr
libXrender
libjack2
];
unpackCmd = "dpkg -x $curSrc source";
installPhase = ''
cp -R $TMP/usr/* $out/
mv $out/bin/ToneLib-Zoom $out/bin/tonelib-zoom
mv usr $out
substituteInPlace $out/share/applications/ToneLib-Zoom.desktop --replace /usr/ $out/
'';
runtimeDependencies = [
(lib.getLib curl)
];
meta = with lib; {
description = "Change and save all the settings in your Zoom(r) guitar pedal";
homepage = "https://tonelib.net/";
license = licenses.unfree;
maintainers = with maintainers; [ dan4ik605743 ];
platforms = platforms.linux;
platforms = [ "x86_64-linux" ];
};
}

View file

@@ -3,13 +3,13 @@
buildDotnetModule rec {
pname = "btcpayserver";
version = "1.3.2";
version = "1.3.3";
src = fetchFromGitHub {
owner = pname;
repo = pname;
rev = "v${version}";
sha256 = "sha256-TAngdQz3FupoqPrqskjSQ9xSDbZV4/6+j7C4NjBFcFw=";
sha256 = "sha256-IBdQlVZx7Bt4y7B7FvHJihHUWO15a89hs+SGwcobDqY=";
};
projectFile = "BTCPayServer/BTCPayServer.csproj";

View file

@@ -639,4 +639,99 @@ rec {
};
};
ivyde = buildEclipsePlugin rec {
name = "ivyde-${version}";
version = "2.2.0.final-201311091524-RELEASE";
srcFeature = fetchurl {
url = "https://downloads.apache.org/ant/ivyde/updatesite/ivyde-${version}/features/org.apache.ivyde.feature_${version}.jar";
sha1 = "c8fb6c4aab32db13db0bd81c1a148032667fff31";
};
srcPlugin = fetchurl {
url = "https://downloads.apache.org/ant/ivyde/updatesite/ivyde-${version}/plugins/org.apache.ivyde.eclipse_${version}.jar";
sha1 = "0c80c2e228a07f18efab1c56ea026448eda70c06";
};
meta = with lib; {
homepage = "https://ant.apache.org/ivy/ivyde/index.html";
description = "A plugin which integrates Apache Ivy's dependency management";
license = licenses.asl20;
platforms = platforms.all;
maintainers = [ maintainers.r3dl3g ];
};
};
ivyderv = buildEclipsePlugin rec {
name = "ivyderv-${version}";
version = "2.2.0.final-201311091524-RELEASE";
srcFeature = fetchurl {
url = "https://downloads.apache.org/ant/ivyde/updatesite/ivyde-${version}/features/org.apache.ivyde.eclipse.resolvevisualizer.feature_${version}.jar";
sha1 = "fb1941eaa2c0de54259de01b0da6d5a6b4a2cab1";
};
srcPlugin = fetchurl {
url = "https://downloads.apache.org/ant/ivyde/updatesite/ivyde-${version}/plugins/org.apache.ivyde.eclipse.resolvevisualizer_${version}.jar";
sha1 = "225e0c8ccb010d622c159560638578c2fc51a67e";
};
meta = with lib; {
homepage = "https://ant.apache.org/ivy/ivyde/index.html";
description = "A graph viewer of the resolved dependencies";
longDescription = ''
Apache IvyDE Resolve Visualizer is an optional dependency of Apache IvyDE since
it requires additional plugins to be installed (Zest).
'';
license = licenses.asl20;
platforms = platforms.all;
maintainers = [ maintainers.r3dl3g ];
};
};
ivy = buildEclipsePlugin rec {
name = "ivy-${version}";
version = "2.5.0.final_20191020104435";
srcFeature = fetchurl {
url = "https://downloads.apache.org/ant/ivyde/updatesite/ivy-${version}/features/org.apache.ivy.eclipse.ant.feature_${version}.jar";
sha256 = "de6134171a0edf569bb9b4c3a91639d469f196e86804d218adfdd60a5d7fa133";
};
srcPlugin = fetchurl {
url = "https://downloads.apache.org/ant/ivyde/updatesite/ivy-${version}/plugins/org.apache.ivy.eclipse.ant_${version}.jar";
sha256 = "9e8ea20480cf73d0f0f3fb032d263c7536b24fd2eef71beb7d62af4e065f9ab5";
};
meta = with lib; {
homepage = "https://ant.apache.org/ivy/index.html";
description = "A popular dependency manager focusing on flexibility and simplicity";
license = licenses.asl20;
platforms = platforms.all;
maintainers = [ maintainers.r3dl3g ];
};
};
ivyant = buildEclipsePlugin rec {
name = "ivyant-${version}";
version = "2.5.0.final_20191020104435";
srcFeature = fetchurl {
url = "https://downloads.apache.org/ant/ivyde/updatesite/ivy-${version}/features/org.apache.ivy.eclipse.ant.feature_${version}.jar";
sha256 = "de6134171a0edf569bb9b4c3a91639d469f196e86804d218adfdd60a5d7fa133";
};
srcPlugin = fetchurl {
url = "https://downloads.apache.org/ant/ivyde/updatesite/ivy-${version}/plugins/org.apache.ivy.eclipse.ant_${version}.jar";
sha256 = "9e8ea20480cf73d0f0f3fb032d263c7536b24fd2eef71beb7d62af4e065f9ab5";
};
meta = with lib; {
homepage = "https://ant.apache.org/ivy/ivyde/index.html";
description = "Ant Tasks integrated into Eclipse's Ant runtime";
license = licenses.asl20;
platforms = platforms.all;
maintainers = [ maintainers.r3dl3g ];
};
};
}

View file

@@ -1,4 +1,4 @@
{ lib, fetchFromGitHub, python3 }:
{ lib, fetchFromGitHub, python3, makeDesktopItem, copyDesktopItems }:
with python3.pkgs;
@@ -13,6 +13,17 @@ buildPythonApplication rec {
sha256 = "13l8blq7y6p7a235x2lfiqml1bd4ba2brm3vfvs8wasjh3fvm9g5";
};
nativeBuildInputs = [ copyDesktopItems ];
desktopItems = [ (makeDesktopItem {
name = "Thonny";
exec = "thonny";
icon = "thonny";
desktopName = "Thonny";
comment = "Python IDE for beginners";
categories = "Development;IDE";
}) ];
propagatedBuildInputs = with python3.pkgs; [
jedi
pyserial
@@ -34,6 +45,10 @@ buildPythonApplication rec {
--prefix PYTHONPATH : $PYTHONPATH:$(toPythonPath ${python3.pkgs.jedi})
'';
postInstall = ''
install -Dm644 ./packaging/icons/thonny-48x48.png $out/share/icons/hicolor/48x48/apps/thonny.png
'';
# Tests need a DISPLAY
doCheck = false;
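The desktop-entry half of this change is the usual `copyDesktopItems` pattern: the hook goes into `nativeBuildInputs`, and everything listed in `desktopItems` is copied into `$out/share/applications` during `postInstall`. A minimal stand-alone sketch with a hypothetical package name:

```nix
{ stdenvNoCC, copyDesktopItems, makeDesktopItem }:

stdenvNoCC.mkDerivation {
  pname = "desktop-item-example";  # hypothetical package
  version = "1.0";
  dontUnpack = true;

  nativeBuildInputs = [ copyDesktopItems ];

  desktopItems = [
    (makeDesktopItem {
      name = "example";
      exec = "example";
      desktopName = "Example";
    })
  ];

  installPhase = ''
    runHook preInstall
    mkdir -p $out/bin
    # copyDesktopItems hooks into postInstall and copies each desktop
    # item into $out/share/applications.
    runHook postInstall
  '';
}
```

Note that the hook only fires when a custom `installPhase` still calls `runHook postInstall`, which is why the derivations further down keep those `runHook` lines.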

View file

@@ -1,9 +1,12 @@
{ stdenv
, lib
, fetchurl
, copyDesktopItems
, makeDesktopItem
, makeWrapper
, libuuid
, libunwind
, libxkbcommon
, icu
, openssl
, zlib
@@ -13,23 +16,69 @@
, gnutar
, atomEnv
, libkrb5
, libdrm
, mesa
, xorg
}:
# from justinwoo/azuredatastudio-nix
# https://github.com/justinwoo/azuredatastudio-nix/blob/537c48aa3981cd1a82d5d6e508ab7e7393b3d7c8/default.nix
let
desktopItem = makeDesktopItem {
name = "azuredatastudio";
desktopName = "Azure Data Studio";
comment = "Data Management Tool that enables you to work with SQL Server, Azure SQL DB and SQL DW from Windows, macOS and Linux.";
genericName = "Text Editor";
exec = "azuredatastudio --no-sandbox --unity-launch %F";
icon = "azuredatastudio";
startupNotify = "true";
categories = "Utility;TextEditor;Development;IDE;";
mimeType = "text/plain;inode/directory;application/x-azuredatastudio-workspace;";
extraEntries = ''
StartupWMClass=azuredatastudio
Actions=new-empty-window;
Keywords=azuredatastudio;
[Desktop Action new-empty-window]
Name=New Empty Window
Exec=azuredatastudio --no-sandbox --new-window %F
Icon=azuredatastudio
'';
};
urlHandlerDesktopItem = makeDesktopItem {
name = "azuredatastudio-url-handler";
desktopName = "Azure Data Studio - URL Handler";
comment = "Azure Data Studio";
genericName = "Text Editor";
exec = "azuredatastudio --no-sandbox --open-url %U";
icon = "azuredatastudio";
startupNotify = "true";
categories = "Utility;TextEditor;Development;IDE;";
mimeType = "x-scheme-handler/azuredatastudio;";
extraEntries = ''
NoDisplay=true
Keywords=azuredatastudio;
'';
};
in
stdenv.mkDerivation rec {
pname = "azuredatastudio";
version = "1.17.1";
version = "1.33.0";
desktopItems = [ desktopItem urlHandlerDesktopItem ];
src = fetchurl {
url = "https://azuredatastudiobuilds.blob.core.windows.net/releases/${version}/azuredatastudio-linux-${version}.tar.gz";
sha256 = "0px9n9vyjvyddca4x7d0zindd0dim7350vkjg5dd0506fm8dc38k";
name = "${pname}-${version}.tar.gz";
url = "https://azuredatastudio-update.azurewebsites.net/${version}/linux-x64/stable";
sha256 = "0593xs44ryfyxy0hc31hdbj706q16h58jb0qyfyncn7ngybm3423";
};
nativeBuildInputs = [
makeWrapper
copyDesktopItems
];
buildInputs = [
@@ -38,7 +87,14 @@ stdenv.mkDerivation rec {
at-spi2-atk
];
dontInstall = true;
installPhase = ''
runHook preInstall
mkdir -p $out/share/pixmaps
cp ${targetPath}/resources/app/resources/linux/code.png $out/share/pixmaps/azuredatastudio.png
runHook postInstall
'';
# change this to azuredatastudio-insiders for insiders releases
edition = "azuredatastudio";
@@ -60,7 +116,7 @@ stdenv.mkDerivation rec {
];
# this will most likely need to be updated when azuredatastudio's version changes
sqltoolsservicePath = "${targetPath}/resources/app/extensions/mssql/sqltoolsservice/Linux/2.0.0-release.56";
sqltoolsservicePath = "${targetPath}/resources/app/extensions/mssql/sqltoolsservice/Linux/3.0.0-release.139";
rpath = lib.concatStringsSep ":" [
atomEnv.libPath
@@ -71,6 +127,10 @@ stdenv.mkDerivation rec {
at-spi2-atk
stdenv.cc.cc.lib
libkrb5
libdrm
libxkbcommon
mesa
xorg.libxshmfence
]
)
targetPath
@@ -111,5 +171,6 @@ stdenv.mkDerivation rec {
description = "A data management tool that enables working with SQL Server, Azure SQL DB and SQL DW";
homepage = "https://docs.microsoft.com/en-us/sql/azure-data-studio/download-azure-data-studio";
license = lib.licenses.unfreeRedistributable;
platforms = [ "x86_64-linux" ];
};
}

View file

@@ -15,6 +15,7 @@ let
sha256 = "09h1153wgr5x2ny7ds0w2m81n3bb9j8hjb8sjfnrg506r01clkyx";
};
});
click = self.callPackage ../../../development/python-modules/click/7.nix { };
};
};
in

View file

@@ -0,0 +1,88 @@
{ lib, stdenv, autoPatchelfHook, makeDesktopItem, copyDesktopItems, wrapGAppsHook, fetchurl
, alsa-lib, at-spi2-atk, at-spi2-core, atk, cairo, cups
, gtk3, nss, glib, dbus, nspr, gdk-pixbuf
, libX11, libXScrnSaver, libXcomposite, libXcursor, libXdamage, libXext
, libXfixes, libXi, libXrandr, libXrender, libXtst, libxcb, pango
, gcc-unwrapped, udev
}:
stdenv.mkDerivation rec {
pname = "snapmaker-luban";
version = "4.0.3";
src = fetchurl {
url = "https://github.com/Snapmaker/Luban/releases/download/v${version}/snapmaker-luban-${version}-linux-x64.tar.gz";
sha256 = "13qk7ssfawjaa5p4mnml4ndzzsqs26qpi76hc9qaipi74ss3jih4";
};
nativeBuildInputs = [
autoPatchelfHook
wrapGAppsHook
copyDesktopItems
];
buildInputs = [
alsa-lib
at-spi2-atk
at-spi2-core
cairo
cups
gcc-unwrapped
gtk3
libXdamage
libX11
libXScrnSaver
libXtst
libxcb
nspr
nss
];
libPath = lib.makeLibraryPath [
stdenv.cc.cc alsa-lib atk at-spi2-atk at-spi2-core cairo cups
gdk-pixbuf glib gtk3 libX11 libXcomposite
libXcursor libXdamage libXext libXfixes libXi libXrandr libXrender
libXtst nspr nss libxcb pango libXScrnSaver udev
];
dontWrapGApps = true;
installPhase = ''
runHook preInstall
mkdir -p $out/{bin,opt,share/pixmaps}/
mv * $out/opt/
patchelf --set-interpreter ${stdenv.cc.bintools.dynamicLinker} \
$out/opt/snapmaker-luban
wrapProgram $out/opt/snapmaker-luban \
"''${gappsWrapperArgs[@]}" \
--prefix XDG_DATA_DIRS : "${gtk3}/share/gsettings-schemas/${gtk3.name}/" \
--prefix LD_LIBRARY_PATH : ${libPath}:$out/snapmaker-luban
ln -s $out/opt/snapmaker-luban $out/bin/snapmaker-luban
ln -s $out/opt/resources/app/app/resources/images/snap-luban-logo-64x64.png $out/share/pixmaps/snapmaker-luban.png
runHook postInstall
'';
desktopItems = [
(makeDesktopItem {
name = pname;
exec = "snapmaker-luban";
icon = "snapmaker-luban";
desktopName = "Snapmaker Luban";
genericName = meta.description;
categories = "Office;Printing;";
})
];
meta = with lib; {
description = "Easy-to-use 3-in-1 software tailor-made for Snapmaker machines";
homepage = "https://github.com/Snapmaker/Luban";
license = licenses.gpl3;
maintainers = [ maintainers.simonkampe ];
platforms = [ "x86_64-linux" ];
};
}

View file

@@ -0,0 +1,90 @@
{ stdenv
, lib
, buildFHSUserEnvBubblewrap
, callPackage
, copyDesktopItems
, dpkg
, lndir
, makeDesktopItem
, makeWrapper
, requireFile
}:
let
version = "7.3.1";
ptFiles = stdenv.mkDerivation {
name = "PacketTracer7drv";
inherit version;
dontUnpack = true;
src = requireFile {
name = "PacketTracer_${builtins.replaceStrings ["."] [""] version}_amd64.deb";
sha256 = "c39802d15dd61d00ba27fb8c116da45fd8562ab4b49996555ad66b88deace27f";
url = "https://www.netacad.com";
};
nativeBuildInputs = [ dpkg makeWrapper ];
installPhase = ''
dpkg-deb -x $src $out
makeWrapper "$out/opt/pt/bin/PacketTracer7" "$out/bin/packettracer7" \
--prefix LD_LIBRARY_PATH : "$out/opt/pt/bin"
'';
};
desktopItem = makeDesktopItem {
name = "cisco-pt7.desktop";
desktopName = "Cisco Packet Tracer 7";
icon = "${ptFiles}/opt/pt/art/app.png";
exec = "packettracer7 %f";
mimeType = "application/x-pkt;application/x-pka;application/x-pkz;";
};
fhs = buildFHSUserEnvBubblewrap {
name = "packettracer7";
runScript = "${ptFiles}/bin/packettracer7";
targetPkgs = pkgs: with pkgs; [
alsa-lib
dbus
expat
fontconfig
glib
libglvnd
libpulseaudio
libudev0-shim
libxkbcommon
libxml2
libxslt
nspr
nss
xorg.libICE
xorg.libSM
xorg.libX11
xorg.libXScrnSaver
];
};
in stdenv.mkDerivation {
pname = "ciscoPacketTracer7";
inherit version;
dontUnpack = true;
installPhase = ''
mkdir $out
${lndir}/bin/lndir -silent ${fhs} $out
'';
desktopItems = [ desktopItem ];
nativeBuildInputs = [ copyDesktopItems ];
meta = with lib; {
description = "Network simulation tool from Cisco";
homepage = "https://www.netacad.com/courses/packet-tracer";
license = licenses.unfree;
maintainers = with maintainers; [ lucasew ];
platforms = [ "x86_64-linux" ];
};
}

View file

@@ -0,0 +1,131 @@
{ stdenv
, lib
, alsa-lib
, autoPatchelfHook
, buildFHSUserEnvBubblewrap
, callPackage
, copyDesktopItems
, dbus
, dpkg
, expat
, fontconfig
, glib
, libdrm
, libglvnd
, libpulseaudio
, libudev0-shim
, libxkbcommon
, libxml2
, libxslt
, lndir
, makeDesktopItem
, makeWrapper
, nspr
, nss
, requireFile
, xorg
}:
let
version = "8.0.1";
ptFiles = stdenv.mkDerivation {
name = "PacketTracer8Drv";
inherit version;
dontUnpack = true;
src = requireFile {
name = "CiscoPacketTracer_${builtins.replaceStrings ["."] [""] version}_Ubuntu_64bit.deb";
sha256 = "77a25351b016faed7c78959819c16c7013caa89c6b1872cb888cd96edd259140";
url = "https://www.netacad.com";
};
nativeBuildInputs = [
alsa-lib
autoPatchelfHook
dbus
dpkg
expat
fontconfig
glib
libdrm
libglvnd
libpulseaudio
libudev0-shim
libxkbcommon
libxml2
libxslt
makeWrapper
nspr
nss
] ++ (with xorg; [
libICE
libSM
libX11
libxcb
libXcomposite
libXcursor
libXdamage
libXext
libXfixes
libXi
libXrandr
libXrender
libXScrnSaver
xcbutilimage
xcbutilkeysyms
xcbutilrenderutil
xcbutilwm
]);
installPhase = ''
dpkg-deb -x $src $out
chmod 755 "$out"
makeWrapper "$out/opt/pt/bin/PacketTracer" "$out/bin/packettracer" \
--prefix LD_LIBRARY_PATH : "$out/opt/pt/bin"
# Keep source archive cached, to avoid re-downloading
ln -s $src $out/usr/share/
'';
};
desktopItem = makeDesktopItem {
name = "cisco-pt8.desktop";
desktopName = "Cisco Packet Tracer 8";
icon = "${ptFiles}/opt/pt/art/app.png";
exec = "packettracer8 %f";
mimeType = "application/x-pkt;application/x-pka;application/x-pkz;";
};
fhs = buildFHSUserEnvBubblewrap {
name = "packettracer8";
runScript = "${ptFiles}/bin/packettracer";
targetPkgs = pkgs: [ libudev0-shim ];
extraInstallCommands = ''
mkdir -p "$out/share/applications"
cp "${desktopItem}"/share/applications/* "$out/share/applications/"
'';
};
in stdenv.mkDerivation {
pname = "ciscoPacketTracer8";
inherit version;
dontUnpack = true;
installPhase = ''
mkdir $out
${lndir}/bin/lndir -silent ${fhs} $out
'';
desktopItems = [ desktopItem ];
nativeBuildInputs = [ copyDesktopItems ];
meta = with lib; {
description = "Network simulation tool from Cisco";
homepage = "https://www.netacad.com/courses/packet-tracer";
license = licenses.unfree;
maintainers = with maintainers; [ lucasew ];
platforms = [ "x86_64-linux" ];
};
}
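Both Packet Tracer expressions rely on `requireFile`, so nothing is downloaded: the build fails with instructions until the named installer is already present in the Nix store. A rough sketch of the pattern, with hypothetical names:

```nix
src = requireFile {
  name = "Vendor_Installer_amd64.deb";  # hypothetical file name
  sha256 = lib.fakeSha256;              # placeholder; use the real hash
  # Only shown to the user in the failure message; requireFile never fetches it.
  url = "https://vendor.example.com/downloads";
};
```

Until the file has been added by hand (for example with `nix-store --add-fixed sha256 Vendor_Installer_amd64.deb`), building the derivation aborts with a message pointing at the `url` above.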

View file

@@ -39,6 +39,10 @@ stdenv.mkDerivation {
dontWrapGApps = true;
preFixup = ''
qtWrapperArgs+=("''${gappsWrapperArgs[@]}")
# Users that set CLUTTER_BACKEND=wayland in their default environment will
# encounter a segfault due to:
# https://git.jami.net/savoirfairelinux/jami-client-gnome/-/issues/1100 .
qtWrapperArgs+=("--unset" "CLUTTER_BACKEND")
'';
buildInputs = [

View file

@@ -49,6 +49,10 @@ let
++ lib.optionals stdenv.isLinux (readLinesToList ./config/ffmpeg_args_linux)
++ lib.optionals (stdenv.isx86_32 || stdenv.isx86_64) (readLinesToList ./config/ffmpeg_args_x86);
outputs = [ "out" "doc" ];
meta = old.meta // {
# undefined reference to `ff_nlmeans_init_aarch64'
broken = stdenv.isAarch64;
};
});
pjsip-jami = pjsip.overrideAttrs (old:

View file

@@ -40,7 +40,7 @@ python3Packages.buildPythonApplication rec {
# relax version constraints of some dependencies
substituteInPlace setup.cfg \
--replace "clize==4.1.1" "clize" \
--replace "bleach==3.1.5" "bleach>=3.1.5,<4" \
--replace "bleach==3.1.5" "bleach>=3.1.5,<5" \
--replace "bottle==0.12.18" "bottle>=0.12.18,<1" \
--replace "Paste==3.4.3" "Paste>=3.4.3,<4"
'';

View file

@@ -52,7 +52,7 @@ stdenv.mkDerivation rec {
makeWrapper $out/lib/runtime/bin/java $out/bin/jabref \
--add-flags '-Djava.library.path=${systemLibPaths}' --add-flags "-p $out/lib/app -m org.jabref/org.jabref.JabRefLauncher" \
--run 'export LD_LIBRARY_PATH=${systemLibPaths}:$LD_LIBRARY_PATH'
--prefix LD_LIBRARY_PATH : '${systemLibPaths}'
cp -r ${desktopItem}/share/applications $out/share/

View file

@@ -10,11 +10,11 @@
stdenv.mkDerivation rec {
pname = "xmedcon";
version = "0.21.0";
version = "0.21.2";
src = fetchurl {
url = "https://prdownloads.sourceforge.net/${pname}/${pname}-${version}.tar.bz2";
sha256 = "0yfnbrcil5i76z1wbg308pb1mnjbcxy6nih46qpqs038v1lhh4q8";
sha256 = "0svff8rc3j2p47snaq1hx9mv4ydmxawpb0hf3d165g1ccjwvmm6m";
};
buildInputs = [
@@ -31,6 +31,6 @@ stdenv.mkDerivation rec {
homepage = "https://xmedcon.sourceforge.io/Main/HomePage";
license = licenses.lgpl2Plus;
maintainers = with maintainers; [ arianvp flokli ];
platforms = with platforms; [ darwin linux ];
platforms = platforms.darwin ++ platforms.linux;
};
}

View file

@@ -30,8 +30,12 @@ edk2.mkDerivation projectDscPath {
hardeningDisable = [ "format" "stackprotector" "pic" "fortify" ];
# Fails on i686 with:
# 'cc1: error: LTO support has not been enabled in this configuration'
NIX_CFLAGS_COMPILE = lib.optionals stdenv.isi686 [ "-fno-lto" ];
buildFlags =
lib.optional secureBoot "-D SECURE_BOOT_ENABLE=TRUE"
lib.optionals secureBoot [ "-D SECURE_BOOT_ENABLE=TRUE" ]
++ lib.optionals csmSupport [ "-D CSM_ENABLE" "-D FD_SIZE_2MB" ]
++ lib.optionals httpSupport [ "-D NETWORK_HTTP_ENABLE=TRUE" "-D NETWORK_HTTP_BOOT_ENABLE=TRUE" ]
++ lib.optionals tpmSupport [ "-D TPM_ENABLE" "-D TPM2_ENABLE" "-D TPM2_CONFIG_ENABLE"];
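The `buildFlags` change swaps `lib.optional` for `lib.optionals` so that every feature can contribute a list of flags rather than a single string. Roughly, the two helpers differ like this (an illustration you can evaluate with `nix-instantiate --eval --strict`):

```nix
with (import <nixpkgs> { }).lib;

{
  single = optional true "-D SECURE_BOOT_ENABLE=TRUE";           # => [ "-D SECURE_BOOT_ENABLE=TRUE" ]
  many   = optionals true [ "-D TPM_ENABLE" "-D TPM2_ENABLE" ];  # => [ "-D TPM_ENABLE" "-D TPM2_ENABLE" ]
  none   = optionals false [ "-D CSM_ENABLE" ];                  # => [ ]
}
```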

View file

@@ -37,13 +37,13 @@ let
in
stdenv.mkDerivation rec {
pname = "crun";
version = "1.2";
version = "1.3";
src = fetchFromGitHub {
owner = "containers";
repo = pname;
rev = version;
sha256 = "sha256-7YDU7H4dVT6qI+Gt3bkm7vqHlU0Fr7ZhF4SWcA+RhYw=";
sha256 = "sha256-c0jXhqYdEpt4De1Z6VNwyrv0KJcf039Wp3ye0oTW0Qc=";
fetchSubmodules = true;
};

View file

@@ -97,6 +97,19 @@ stdenv.mkDerivation rec {
url = "https://gitlab.com/qemu-project/qemu/-/commit/13b250b12ad3c59114a6a17d59caf073ce45b33a.patch";
sha256 = "0lkzfc7gdlvj4rz9wk07fskidaqysmx8911g914ds1jnczgk71mf";
})
# Fixes a crash that frequently happens on some systems in setups that share /nix/store
# over 9p, such as the NixOS tests. Remove with next release.
(fetchpatch {
name = "fix-crash-in-v9fs_walk.patch";
url = "https://gitlab.com/qemu-project/qemu/-/commit/f83df00900816476cca41bb536e4d532b297d76e.patch";
sha256 = "sha256-LYGbBLS5YVgq8Bf7NVk7HBFxXq34NmZRPCEG79JPwk8=";
})
# Fixes an I/O error on discard/unmap operations for the aio/file backend. Remove with next release.
(fetchpatch {
name = "fix-aio-discard-return-value.patch";
url = "https://gitlab.com/qemu-project/qemu/-/commit/13a028336f2c05e7ff47dfdaf30dfac7f4883e80.patch";
sha256 = "sha256-23xVixVl+JDBNdhe5j5WY8CB4MsnUo+sjrkAkG+JS6M=";
})
] ++ lib.optional nixosTestRunner ./force-uid0-on-9p.patch
++ lib.optionals stdenv.hostPlatform.isMusl [
(fetchpatch {

View file

@@ -5,6 +5,7 @@
, gtk3
, withWayland ? false
, gtk-layer-shell
, stdenv
}:
rustPlatform.buildRustPackage rec {
@@ -39,5 +40,6 @@ rustPlatform.buildRustPackage rec {
homepage = "https://github.com/elkowar/eww";
license = licenses.mit;
maintainers = with maintainers; [ figsoda legendofmiracles ];
broken = stdenv.isDarwin;
};
}

View file

@@ -0,0 +1,13 @@
{ picom, lib, fetchFromGitHub }:
picom.overrideAttrs (oldAttrs: rec {
pname = "picom-next";
version = "unstable-2021-10-31";
src = fetchFromGitHub {
owner = "yshui";
repo = "picom";
rev = "fade045eadf171d2c732820d6ebde7d1943a1397";
sha256 = "fPiLZ63+Bw5VCxVNqj9i5had2YLa+jFMMf85MYdqvHU=";
};
meta.maintainers = with lib.maintainers; oldAttrs.meta.maintainers ++ [ GKasparov ];
})

View file

@@ -2,13 +2,13 @@
stdenv.mkDerivation rec {
pname = "kora-icon-theme";
version = "1.4.5";
version = "1.4.7";
src = fetchFromGitHub {
owner = "bikass";
repo = "kora";
rev = "v${version}";
sha256 = "sha256-5tXXAfGY5JQ5RiKayUuQJDgX6sPHRi8Hy2ht/Hl0hdo=";
sha256 = "sha256-Ol4DrQJmQT/LIU5qWJJEm6od7e29h7g913YTFQjudBQ=";
};
nativeBuildInputs = [

View file

@@ -0,0 +1,35 @@
{ stdenv, buildGoModule, fetchFromGitHub, lib }:
let
generator = buildGoModule rec {
pname = "v2ray-domain-list-community";
version = "20211103073737";
src = fetchFromGitHub {
owner = "v2fly";
repo = "domain-list-community";
rev = version;
sha256 = "sha256-NYgEXbow16w+XMRjbQG1cIn/BjPbbcj+uzb4kcVR6eI=";
};
vendorSha256 = "sha256-JuLU9v1ukVfAEtz07tGk66st1+sO4SBz83BlK3IPQwU=";
meta = with lib; {
description = "Community-managed domain list";
homepage = "https://github.com/v2fly/domain-list-community";
license = licenses.mit;
maintainers = with maintainers; [ nickcao ];
};
};
in
stdenv.mkDerivation {
inherit (generator) pname version src meta;
buildPhase = ''
runHook preBuild
${generator}/bin/domain-list-community -datapath $src/data --exportlists=category-ads-all,tld-cn,cn,tld-\!cn,geolocation-\!cn,apple,icloud
runHook postBuild
'';
installPhase = ''
runHook preInstall
install -Dm644 dlc.dat $out/share/v2ray/geosite.dat
runHook postInstall
'';
passthru.generator = generator;
}

View file

@@ -16,13 +16,13 @@
mkDerivation rec {
pname = "libfm-qt";
version = "0.17.1";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = "libfm-qt";
rev = version;
sha256 = "0jdsqvwp81y4ylabrqdc673x80fp41rpp5w7c1v9zmk9k8z4s5ll";
sha256 = "1kk2cv9cp2gdj2pzdgm72c009iyl3mhrvsiz05kdxd4v1kn38ci1";
};
nativeBuildInputs = [

View file

@@ -15,13 +15,13 @@
mkDerivation rec {
pname = "liblxqt";
version = "0.17.0";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "0n0pjz5wihchfcji8qal0lw8kzvv3im50v1lbwww4ymrgacz9h4l";
sha256 = "08cqvq99pvz8lz13273hlpv8160r6zyz4f7h4kl1g8xdga7m45gr";
};
nativeBuildInputs = [

View file

@@ -10,13 +10,13 @@
mkDerivation rec {
pname = "libqtxdg";
version = "3.7.1";
version = "3.8.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "1x806hdics3d49ys0a2vkln9znidj82qscjnpcqxclxn26xqzd91";
sha256 = "14jrzwdmhgn6bcggmhxx5rdapjzm93cfkjjls3nii1glnkwzncxz";
};
nativeBuildInputs = [

View file

@@ -9,13 +9,13 @@
mkDerivation rec {
pname = "libsysstat";
version = "0.4.5";
version = "0.4.6";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "14q55iayygmjh63zgsb9qa4af766gj9b0jsrmfn85fdiqb8p8yfz";
sha256 = "0z2r8041vqssm59lkb3ka7qis9br4wvavxzd45m3pnqlp7wwhkbn";
};
nativeBuildInputs = [

View file

@@ -16,13 +16,13 @@
mkDerivation rec {
pname = "lximage-qt";
version = "0.17.0";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "1xajsblk2954crvligvrgwp7q1pj7124xdfnlq9k9q0ya2xc36lx";
sha256 = "1bf0smkawyibrabw7zcynwr2afpsv7pnnyxn4nqgh6mxnp7al157";
};
nativeBuildInputs = [

View file

@@ -14,13 +14,13 @@
mkDerivation rec {
pname = "lxqt-about";
version = "0.17.0";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "011jcab47iif741azfgvf52my118nwkny5m0pa7nsqyv8ad1fsiw";
sha256 = "1fr2mx19ks4crh7cjc080vkrzldzgmghxvrzjqq7lspkzd5a0pjb";
};
nativeBuildInputs = [

View file

@@ -15,13 +15,13 @@
mkDerivation rec {
pname = "lxqt-admin";
version = "0.17.0";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "1xi169gz1sarv7584kg33ymckqlx9ddci7r9m0dlm4a7mw7fm0lf";
sha256 = "06l7vs8aqx37bhrxf9xa16g7rdmia8j73q78qfj6syw57f3ssjr9";
};
nativeBuildInputs = [

View file

@@ -14,13 +14,13 @@
mkDerivation rec {
pname = "lxqt-archiver";
version = "0.4.0";
version = "0.5.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = "lxqt-archiver";
rev = version;
sha256 = "0wpayzcyqcnvzk95bqql7p07l8p7mwdgdj7zlbcsdn0wis4yhjm6";
sha256 = "033lq7n34a5qk2zv8kr1633p5x2cjimv4w4n86w33xmcwya4yiji";
};
nativeBuildInputs = [

View file

@@ -13,13 +13,13 @@
mkDerivation rec {
pname = "lxqt-build-tools";
version = "0.9.0";
version = "0.10.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "0zhcv6cbdn9fr5lpglz26gzssbxkpi824sgc0g7w3hh1z6nqqf8l";
sha256 = "1hb04zgpalxv6da3myf1dxsbjix15dczzfq8a24g5dg2zfhwpx21";
};
# Nix clang on darwin identifies as 'Clang', not 'AppleClang'

View file

@@ -19,13 +19,13 @@
mkDerivation rec {
pname = "lxqt-config";
version = "0.17.1";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "0b9jihmsqgdfdsisz15j3p53fgf1w30s8irj9zjh52fsj58p924p";
sha256 = "0yllqjmj4xbqi5681ffjxmlwlf9k9bpy3hgs7li6lnn90yy46qmr";
};
nativeBuildInputs = [

View file

@@ -15,13 +15,13 @@
mkDerivation rec {
pname = "lxqt-globalkeys";
version = "0.17.0";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "135292l8w9sngg437n1zigkap15apifyqd9847ln84bxsmcj8lay";
sha256 = "015nrlzlcams4k8svrq7692xbjlai1dmwvjdldncsbrgrmfa702m";
};
nativeBuildInputs = [

View file

@@ -15,13 +15,13 @@
mkDerivation rec {
pname = "lxqt-notificationd";
version = "0.17.0";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "1r2cmxcjkm9lvb2ilq2winyqndnamsd9x2ynmfiqidby2pcr9i3a";
sha256 = "06gb8k1p24gm5axy42npq7n4lmsxb03a9kvzqby44qmgwh8pn069";
};
nativeBuildInputs = [

View file

@@ -15,13 +15,13 @@
mkDerivation rec {
pname = "lxqt-openssh-askpass";
version = "0.17.0";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "18pn7kw9aw7859jnwvjnjcvr50pqsi8gqcxsbx9rvsjrybw2qcgc";
sha256 = "0fp5jq3j34p81y200jbyp7wcz04r7jk07bfwrigjwcyj2xknkrgw";
};
nativeBuildInputs = [

View file

@@ -30,13 +30,13 @@
mkDerivation rec {
pname = "lxqt-panel";
version = "0.17.1";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "1wmm4sml7par5z9xcs5qx2y2pdbnnh66zs37jhx9f9ihcmh1sqlw";
sha256 = "0i63jyjg31336davjdak7z3as34gazx1lri65fk2f07kka9dx1jl";
};
nativeBuildInputs = [

View file

@@ -19,13 +19,13 @@
mkDerivation rec {
pname = "lxqt-policykit";
version = "0.17.0";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "15f0hnif8zs38qgckif63dds9zgpp3dmg9pg3ppgh664lkbxx7n7";
sha256 = "0hmxzkkggnpci305xax9663cbjqdh6n0j0dawwcpwj4ks8mp7xh7";
};
nativeBuildInputs = [

View file

@@ -18,13 +18,13 @@
mkDerivation rec {
pname = "lxqt-powermanagement";
version = "0.17.1";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "04prx15l05kw97mwajc8yi2s7p3n6amzs5jnnmh9payxzp6glzmk";
sha256 = "0dwz8z3463dz49d5k5bh7splb1zdi617xc4xzlqxxrxbf3n8x4ix";
};
nativeBuildInputs = [

View file

@@ -15,13 +15,13 @@
mkDerivation rec {
pname = "lxqt-qtplugin";
version = "0.17.0";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "168ii015j57hkccdh27h2fdh8yzs8nzy8nw20wnx6fbcg5401666";
sha256 = "1vr2hlv1q9xwkh9bapy29g9fi90d33xw7pr9zc1bfma6j152qs36";
};
nativeBuildInputs = [

View file

@@ -20,13 +20,13 @@
mkDerivation rec {
pname = "lxqt-runner";
version = "0.17.0";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "167gzn6aqk7akzbmrnm7nmcpkl0nphr8axbfgwnw552dnk6v8gn0";
sha256 = "06b7l2jkh0h4ikddh82nxkz7qhg5ap7l016klg3jl2x659z59hpj";
};
nativeBuildInputs = [

View file

@@ -19,13 +19,13 @@
mkDerivation rec {
pname = "lxqt-session";
version = "0.17.1";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "1nhw3y3dm4crawc1905l6drn0i79fs1dzs8iak0vmmplbiv3fvgg";
sha256 = "0g355dmlyz8iljw953gp5jqlz02abd1ksssah826hxcy4j89mk7s";
};
nativeBuildInputs = [

View file

@@ -16,13 +16,13 @@
mkDerivation rec {
pname = "lxqt-sudo";
version = "0.17.0";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "10s8k83mkqiakh18mh1l7idjp95cy49rg8dh14cy159dk8mchcd0";
sha256 = "1y2vq3n5sv6cxqpnz79kl3dybfbw65z93cahdz8m6gplzpp24gn4";
};
nativeBuildInputs = [

View file

@@ -8,13 +8,13 @@
mkDerivation rec {
pname = "lxqt-themes";
version = "0.17.0";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "13zh5yrq0f96cn5m6i7zdvgb9iw656fad5ps0s2zx6x8mj2mv64f";
sha256 = "1viaqmcq4axwsq5vrr08j95swapbqnwmv064kaijm1jj9csadsvv";
};
nativeBuildInputs = [

View file

@@ -13,13 +13,13 @@
mkDerivation rec {
pname = "pavucontrol-qt";
version = "0.17.0";
version = "1.0.0";
src = fetchFromGitHub {
owner = "lxqt";
repo = pname;
rev = version;
sha256 = "0syc4bc2k7961la2c77787akhcljspq3s2nyqvb7mq7ddq1xn0wx";
sha256 = "1n8h8flcm0na7n295lkjv49brj6razwml21wwrinwllw7s948qp0";
};
nativeBuildInputs = [

Some files were not shown because too many files have changed in this diff.