<chapter xmlns="http://docbook.org/ns/docbook"
         xmlns:xlink="http://www.w3.org/1999/xlink"
         xmlns:xi="http://www.w3.org/2001/XInclude"
         version="5.0"
         xml:id="module-services-foundationdb">
<title>FoundationDB</title>
 <para>
  <emphasis>Source:</emphasis> <filename>modules/services/databases/foundationdb.nix</filename>
 </para>
 <para>
  <emphasis>Upstream documentation:</emphasis> <link xlink:href="https://apple.github.io/foundationdb/"/>
 </para>
 <para>
  <emphasis>Maintainer:</emphasis> Austin Seipp
 </para>
 <para>
  <emphasis>Available version(s):</emphasis> 5.1.x, 5.2.x, 6.0.x
 </para>
 <para>
  FoundationDB (or "FDB") is an open source, distributed, transactional key-value store.
 </para>
 <section xml:id="module-services-foundationdb-configuring">
  <title>Configuring and basic setup</title>

  <para>
   To enable FoundationDB, add the following to your <filename>configuration.nix</filename>:
<programlisting>
services.foundationdb.enable = true;
services.foundationdb.package = pkgs.foundationdb52; # FoundationDB 5.2.x
</programlisting>
  </para>
  <para>
   The <option>services.foundationdb.package</option> option is required and must always be specified. Because FoundationDB's network protocols and on-disk storage formats may change between major versions, and upgrades must be handled explicitly by the user, you must specify the package yourself so that the NixOS module uses the proper version. Note that minor, bugfix releases are always compatible.
  </para>
  <para>
   After running <command>nixos-rebuild</command>, you can verify whether FoundationDB is running by executing <command>fdbcli</command> (which is added to <option>environment.systemPackages</option>):
<screen>
<prompt>$ </prompt>sudo -u foundationdb fdbcli
Using cluster file `/etc/foundationdb/fdb.cluster'.

The database is available.

Welcome to the fdbcli. For help, type `help'.
<prompt>fdb> </prompt>status

Using cluster file `/etc/foundationdb/fdb.cluster'.

Configuration:
  Redundancy mode        - single
  Storage engine         - memory
  Coordinators           - 1

Cluster:
  FoundationDB processes - 1
  Machines               - 1
  Memory availability    - 5.4 GB per process on machine with least available
  Fault Tolerance        - 0 machines
  Server time            - 04/20/18 15:21:14

...

<prompt>fdb></prompt>
</screen>
  </para>
  <para>
   You can also write programs using the available client libraries. For example, the following Python program can be run to grab the cluster status, as a quick example. (This example uses <command>nix-shell</command> shebang support to automatically supply the necessary Python modules.)
<screen>
<prompt>a@link> </prompt>cat fdb-status.py
#! /usr/bin/env nix-shell
#! nix-shell -i python -p python pythonPackages.foundationdb52

import fdb
import json

def main():
    fdb.api_version(520)
    db = fdb.open()

    @fdb.transactional
    def get_status(tr):
        return str(tr['\xff\xff/status/json'])

    obj = json.loads(get_status(db))
    print('FoundationDB available: %s' % obj['client']['database_status']['available'])

if __name__ == "__main__":
    main()
<prompt>a@link> </prompt>chmod +x fdb-status.py
<prompt>a@link> </prompt>./fdb-status.py
FoundationDB available: True
<prompt>a@link></prompt>
</screen>
  </para>
  <para>
   FoundationDB is run under the <command>foundationdb</command> user and group by default, but this may be changed in the NixOS configuration. The systemd unit <command>foundationdb.service</command> controls the <command>fdbmonitor</command> process.
  </para>

  <para>
   By default, the NixOS module for FoundationDB creates a single SSD-based database for development and basic usage. This storage engine is designed for SSDs and will perform poorly on HDDs; however, it can handle far more data than the alternative "memory" engine and is a better default choice for most deployments. (Note that you can change the storage backend on the fly for a given FoundationDB cluster using <command>fdbcli</command>.)
  </para>
  <para>
   Furthermore, only 1 server process and 1 backup agent are started in the default configuration. See below for more on scaling to increase this.
  </para>
  <para>
   FoundationDB stores all data for all server processes under <filename>/var/lib/foundationdb</filename>. You can override this using <option>services.foundationdb.dataDir</option>, e.g.
<programlisting>
services.foundationdb.dataDir = "/data/fdb";
</programlisting>
  </para>
  <para>
   Similarly, logs are stored under <filename>/var/log/foundationdb</filename> by default, and there is a corresponding <option>services.foundationdb.logDir</option> as well.
  </para>
 </section>
 <section xml:id="module-services-foundationdb-scaling">
  <title>Scaling processes and backup agents</title>

  <para>
   Scaling the number of server processes is quite easy; simply specify <option>services.foundationdb.serverProcesses</option> to be the number of FoundationDB worker processes that should be started on the machine.
  </para>
  <para>
   FoundationDB worker processes typically require 4GB of RAM per process at minimum for good performance, so this option is set to 1 by default, since the amount of available RAM cannot be known in advance. You're advised to abide by this restriction: pick a number of processes such that each one has 4GB or more available.
  </para>
  <para>
   A similar option exists in order to scale backup agent processes, <option>services.foundationdb.backupProcesses</option>. Backup agents are not as performance/RAM sensitive, so feel free to experiment with the number of available backup processes.
  </para>
 </section>
 <section xml:id="module-services-foundationdb-clustering">
  <title>Clustering</title>

  <para>
   FoundationDB on NixOS works similarly to other Linux systems, so this section will be brief. Please refer to the full FoundationDB documentation for more on clustering.
  </para>
  <para>
   FoundationDB organizes clusters using a set of <emphasis>coordinators</emphasis>, which are just specially-designated worker processes. By default, every installation of FoundationDB on NixOS will start as its own individual cluster, with a single coordinator: the first worker process on <command>localhost</command>.
  </para>
  <para>
   Coordinators are specified globally using the <command>/etc/foundationdb/fdb.cluster</command> file, which all servers and client applications will use to find and join coordinators. Note that this file <emphasis>cannot</emphasis> easily be managed by NixOS: FoundationDB is designed to rewrite the file at runtime for all clients and nodes when cluster coordinators change, and clients handle this transparently, without intervention. It is fundamentally a mutable file, and you should not try to manage it in any way in NixOS.
  </para>
  <para>
   When dealing with a cluster, there are two main things you want to do:
  </para>
  <itemizedlist>
   <listitem>
    <para>
     Add a node to the cluster for storage/compute.
    </para>
   </listitem>
   <listitem>
    <para>
     Promote an ordinary worker to a coordinator.
    </para>
   </listitem>
  </itemizedlist>
  <para>
   A node must already be a member of the cluster in order to properly be promoted to a coordinator, so you must always add it first if you wish to promote it.
  </para>

  <para>
   To add a machine to a FoundationDB cluster:
  </para>
  <itemizedlist>
   <listitem>
    <para>
     Choose one of the servers to start as the initial coordinator.
    </para>
   </listitem>
   <listitem>
    <para>
     Copy the <command>/etc/foundationdb/fdb.cluster</command> file from this server to all the other servers. Restart FoundationDB on all of these other servers, so they join the cluster.
    </para>
   </listitem>
   <listitem>
    <para>
     All of these servers are now connected and working together in the cluster, under the chosen coordinator.
    </para>
   </listitem>
  </itemizedlist>
  <para>
   At this point, you can add as many nodes as you want by just repeating the above steps. By default there will still be a single coordinator: you can use <command>fdbcli</command> to change this and add new coordinators.
  </para>
  <para>
   As a convenience, FoundationDB can automatically assign coordinators based on the redundancy mode you wish to achieve for the cluster. Once all the nodes have been joined, simply set the replication policy, and then issue the <command>coordinators auto</command> command.
  </para>
  <para>
   For example, assuming we have 3 nodes available, we can enable double redundancy mode, then auto-select coordinators. For double redundancy, 3 coordinators is ideal: therefore FoundationDB will make <emphasis>every</emphasis> node a coordinator automatically:
  </para>
<screen>
<prompt>fdbcli> </prompt>configure double ssd
<prompt>fdbcli> </prompt>coordinators auto
</screen>
  <para>
   This will transparently update all the servers within seconds, and appropriately rewrite the <command>fdb.cluster</command> file, as well as informing all client processes to do the same.
  </para>
 </section>
 <section xml:id="module-services-foundationdb-connectivity">
  <title>Client connectivity</title>

  <para>
   By default, all clients must use the current <command>fdb.cluster</command> file to access a given FoundationDB cluster. This file is located by default in <command>/etc/foundationdb/fdb.cluster</command> on all machines with the FoundationDB service enabled, so you may copy the active one from your cluster to a new node in order to connect, if it is not part of the cluster.
  </para>
 </section>
 <section xml:id="module-services-foundationdb-authorization">
  <title>Client authorization and TLS</title>

  <para>
   By default, any user who can connect to a FoundationDB process with the correct cluster configuration can access anything. FoundationDB uses a pluggable design for transport security, and out of the box it supports a LibreSSL-based plugin for TLS support. This plugin not only does in-flight encryption, but also performs client authorization based on the given endpoint's certificate chain. For example, a FoundationDB server may be configured to only accept client connections over TLS, where the client TLS certificate is from organization <emphasis>Acme Co</emphasis> in the <emphasis>Research and Development</emphasis> unit.
  </para>
  <para>
   Configuring TLS with FoundationDB is done using the <option>services.foundationdb.tls</option> options in order to control the peer verification string, as well as the certificate and its private key.
  </para>

  <para>
   Note that the certificate and its private key must be accessible to the FoundationDB user account that the server runs under. These files are also NOT managed by NixOS, as putting them into the store may reveal private information.
  </para>
  <para>
   After you have a key and certificate file in place, it is not enough to simply set the NixOS module options -- you must also configure the <command>fdb.cluster</command> file to specify that a given set of coordinators use TLS. This is as simple as adding the suffix <command>:tls</command> to your cluster coordinator configuration, after the port number. For example, assuming you have a coordinator on localhost with the default configuration, simply specifying:
  </para>
<programlisting>
XXXXXX:XXXXXX@127.0.0.1:4500:tls
</programlisting>
  <para>
   will configure all clients and server processes to use TLS from now on.
  </para>
 </section>
 <section xml:id="module-services-foundationdb-disaster-recovery">
  <title>Backups and Disaster Recovery</title>

  <para>
   The usual rules for doing FoundationDB backups apply on NixOS as written in the FoundationDB manual. However, one important difference is the security profile for NixOS: by default, the <command>foundationdb</command> systemd unit uses <emphasis>Linux namespaces</emphasis> to restrict write access to the system, except for the log directory, data directory, and the <command>/etc/foundationdb/</command> directory. This is enforced by default and cannot be disabled.
  </para>
  <para>
   However, a side effect of this is that the <command>fdbbackup</command> command doesn't work properly for local filesystem backups: FoundationDB uses a server process alongside the database processes to perform backups and copy the backups to the filesystem. As a result, this process is put under the restricted namespaces above: the backup process can only write to a limited number of paths.
  </para>

  <para>
   In order to allow flexible backup locations on local disks, the FoundationDB NixOS module supports a <option>services.foundationdb.extraReadWritePaths</option> option. This option takes a list of paths, and adds them to the systemd unit, allowing the processes inside the service to write (and read) the specified directories.
  </para>
  <para>
   For example, to create backups in <command>/opt/fdb-backups</command>, first set up the paths in the module options:
  </para>
<programlisting>
services.foundationdb.extraReadWritePaths = [ "/opt/fdb-backups" ];
</programlisting>
  <para>
   Restart the FoundationDB service, and it will now be able to write to this directory. Note: this path <emphasis>must</emphasis> exist before restarting the unit. Otherwise, systemd will not include it in the private FoundationDB namespace (and it will not add it dynamically at runtime).
  </para>
  <para>
   You can now perform a backup:
  </para>
<screen>
<prompt>$ </prompt>sudo -u foundationdb fdbbackup start -t default -d file:///opt/fdb-backups
<prompt>$ </prompt>sudo -u foundationdb fdbbackup status -t default
</screen>
 </section>
 <section xml:id="module-services-foundationdb-limitations">
  <title>Known limitations</title>

  <para>
   The FoundationDB setup for NixOS should currently be considered beta. FoundationDB is not new software, but the NixOS compilation and integration has only undergone fairly basic testing of all the available functionality.
  </para>
  <itemizedlist>
   <listitem>
    <para>
     There is no way to specify individual parameters for individual <command>fdbserver</command> processes. Currently, all server processes inherit all the global <command>fdbmonitor</command> settings.
    </para>
   </listitem>
   <listitem>
    <para>
     Ruby bindings are not currently installed.
    </para>
   </listitem>
   <listitem>
    <para>
     Go bindings are not currently installed.
    </para>
   </listitem>
  </itemizedlist>
 </section>
 <section xml:id="module-services-foundationdb-options">
  <title>Options</title>

  <para>
   NixOS's FoundationDB module allows you to configure all of the most relevant configuration options for <command>fdbmonitor</command>, matching it quite closely. A complete list of options for the FoundationDB module may be found <link linkend="opt-services.foundationdb.enable">here</link>. You should also read the FoundationDB documentation itself.
  </para>
 </section>
 <section xml:id="module-services-foundationdb-full-docs">
  <title>Full documentation</title>

  <para>
   FoundationDB is a complex piece of software, and requires careful administration to use properly. Full documentation for administration can be found here: <link xlink:href="https://apple.github.io/foundationdb/"/>.
  </para>
 </section>
</chapter>