# nixpkgs/lib/test-driver/test-driver.pl
# Stuff for automatic and manual testing of NixOS VMs.
#
# lib/build-vms.nix contains a function `buildVirtualNetwork' that
# takes a specification of a network of machines (as an attribute set
# of NixOS machine configurations) and builds a script that starts
# each configuration in a separate QEMU/KVM VM and connects them
# together in a virtual network.  This script can be run manually to
# test the VMs interactively.  There is also a function `runTests'
# that starts and runs the virtual network in a derivation, and then
# executes a test specification that tells the VMs to do certain
# things (e.g., letting one VM send an HTTP request to a webserver on
# another VM).  The tests are written in Perl (for now).
#
# tests/subversion.nix shows a simple example: a network of two
# machines, a webserver that runs the Subversion subservice and a
# client.  Apache, Subversion and a few other packages are built with
# coverage analysis instrumentation.  For instance,
#
#   $ nix-build tests/subversion.nix -A vms
#   $ ./result/bin/run-vms
#
# starts two QEMU/KVM instances.  When they have finished booting, the
# webserver can be accessed from the host through
# http://localhost:8081/.  It also has a small test suite:
#
#   $ nix-build tests/subversion.nix -A report
#
# This runs the VMs in a derivation, runs the tests, and then produces
# a distributed code coverage analysis report (i.e., it shows the
# combined coverage on both machines).
#
# The Perl test driver program is in lib/test-driver.  It executes
# commands on the guest machines by connecting to a root shell running
# on port 514 (provided by modules/testing/test-instrumentation.nix).
# The VMs are connected together in a virtual network using QEMU's
# multicast feature.  This isn't very secure: at the very least, other
# processes on the same machine can listen to or send packets on the
# virtual network.  On the plus side, we don't need to be root to set
# up a multicast virtual network, so we can do it from a derivation.
# Maybe we can use VDE instead.
#
# (Moved from the vario repository.)
#
# svn path=/nixos/trunk/; revision=16899
# 2009-08-31 15:25:12 +01:00
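# A (hypothetical) example of a test specification, as it would appear
# in the `tests' environment variable read below.  Each machine name in
# the network becomes a Perl variable in the specification's scope
# ($webserver and $client are illustrative names, not part of this
# file):
#
#   startAll;
#   $webserver->mustSucceed("test -f /var/run/httpd.pid");
#   $client->mustSucceed("echo hello");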
use strict;
use warnings;

use Machine;
$SIG{PIPE} = 'IGNORE'; # because Unix domain sockets may die unexpectedly
my %vms;
my $context = "";

# Each command-line argument is a script that starts one VM.  Create a
# Machine object for each, and build up a preamble that makes every
# machine available to the test specification as a Perl variable named
# after the machine.
foreach my $vmScript (@ARGV) {
    my $vm = Machine->new($vmScript);
    $vms{$vm->name} = $vm;
    $context .= "my \$" . $vm->name . " = \$vms{'" . $vm->name . "'}; ";
}
# Start all VMs.  Test specifications call this before issuing
# commands to the machines.
sub startAll {
    $_->start foreach values %vms;
}
# Evaluate the test specification (passed in the `tests' environment
# variable) with the machine variables in scope.
sub runTests {
    eval "$context $ENV{tests}";
    die $@ if $@;

    # Copy the kernel coverage data for each machine, if the kernel
    # has been compiled with coverage instrumentation.
    foreach my $vm (values %vms) {
        my ($status, $out) = $vm->execute("test -e /proc/gcov");
        next if $status != 0;

        # Figure out where to put the *.gcda files so that the report
        # generator can find the corresponding kernel sources.
        my $kernelDir = $vm->mustSucceed("echo \$(dirname \$(readlink -f /var/run/current-system/kernel))/.build/linux-*");
        chomp $kernelDir;
        my $coverageDir = "/hostfs" . $vm->stateDir() . "/coverage-data/$kernelDir";

        # Copy all the *.gcda files.  The ones under
        # /proc/gcov/module/nix/store are the kernel modules in the
        # initrd to which we have applied nuke-refs in
        # makeModuleClosure.  This confuses the gcov module a bit.
        # (Note: the explicit `-print' limits the output to the *.gcda
        # files; without it, find also prints the pruned `module'
        # directories.)
        $vm->execute("for i in \$(cd /proc/gcov && find -name module -prune -o -name '*.gcda' -print); do echo \$i; mkdir -p $coverageDir/\$(dirname \$i); cp -v /proc/gcov/\$i $coverageDir/\$i; done");
        $vm->execute("for i in \$(cd /proc/gcov/module/nix/store/*/.build/* && find -name module -prune -o -name '*.gcda' -print); do mkdir -p $coverageDir/\$(dirname \$i); cp /proc/gcov/module/nix/store/*/.build/*/\$i $coverageDir/\$i; done");
    }
}
# Make sure the VMs are killed when the driver exits, even on failure.
END {
    foreach my $vm (values %vms) {
        if ($vm->{pid}) {
            print STDERR "killing ", $vm->{name}, " (pid ", $vm->{pid}, ")\n";
            kill 9, $vm->{pid};
        }
    }
}
runTests;
print STDERR "DONE\n";
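# Example invocation (a sketch with hypothetical script names; the VM
# start scripts are normally generated by lib/build-vms.nix, and the
# test specification is passed in the `tests' environment variable):
#
#   $ tests='startAll; $webserver->mustSucceed("true");' \
#       perl test-driver.pl ./run-webserver-vm ./run-client-vm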