
nixos: use only URI fragment in manual options links

Bobby Rong 2021-07-04 08:24:44 +08:00
parent f8bdee0054
commit ad393d5f63
18 changed files with 81 additions and 99 deletions
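In short, the commit replaces explicit `options.html#…` URLs with bare fragment references, so the manual links no longer hard-code the output file name; in the generated DocBook this corresponds to `<xref/>` elements, which derive their link text from the target. Schematically (both forms are taken from the hunks below):

```diff
-[`hardware.opengl.extraPackages`](options.html#opt-hardware.opengl.extraPackages)
+[](#opt-hardware.opengl.extraPackages)

-<link xlink:href="options.html#opt-hardware.opengl.extraPackages"><literal>hardware.opengl.extraPackages</literal></link>
+<xref linkend="opt-hardware.opengl.extraPackages" />
```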

@@ -30,7 +30,7 @@ $ export \
```
The second mechanism is to add the OpenCL driver package to
[`hardware.opengl.extraPackages`](options.html#opt-hardware.opengl.extraPackages).
[](#opt-hardware.opengl.extraPackages).
This links the ICD file under `/run/opengl-driver`, where it will be visible
to the ICD loader.
@@ -51,7 +51,7 @@ Platform Vendor Advanced Micro Devices, Inc.
Modern AMD [Graphics Core
Next](https://en.wikipedia.org/wiki/Graphics_Core_Next) (GCN) GPUs are
supported through the rocm-opencl-icd package. Adding this package to
[`hardware.opengl.extraPackages`](options.html#opt-hardware.opengl.extraPackages)
[](#opt-hardware.opengl.extraPackages)
enables OpenCL support:
```nix
@@ -71,7 +71,7 @@ proprietary Intel OpenCL runtime, in the intel-ocl package, is an
alternative for Gen7 GPUs.
The intel-compute-runtime, beignet, or intel-ocl package can be added to
[`hardware.opengl.extraPackages`](options.html#opt-hardware.opengl.extraPackages)
[](#opt-hardware.opengl.extraPackages)
to enable OpenCL support. For example, for Gen8 and later GPUs, the following
configuration can be used:
@@ -88,7 +88,7 @@ compute API for GPUs. It is used directly by games or indirectly though
compatibility layers like
[DXVK](https://github.com/doitsujin/dxvk/wiki).
By default, if [`hardware.opengl.driSupport`](options.html#opt-hardware.opengl.driSupport)
By default, if [](#opt-hardware.opengl.driSupport)
is enabled, mesa is installed and provides Vulkan for supported hardware.
Similar to OpenCL, Vulkan drivers are loaded through the *Installable
@@ -108,7 +108,7 @@ $ export \
```
The second mechanism is to add the Vulkan driver package to
[`hardware.opengl.extraPackages`](options.html#opt-hardware.opengl.extraPackages).
[](#opt-hardware.opengl.extraPackages).
This links the ICD file under `/run/opengl-driver`, where it will be
visible to the ICD loader.
@@ -138,7 +138,7 @@ Modern AMD [Graphics Core
Next](https://en.wikipedia.org/wiki/Graphics_Core_Next) (GCN) GPUs are
supported through either radv, which is part of mesa, or the amdvlk
package. Adding the amdvlk package to
[`hardware.opengl.extraPackages`](options.html#opt-hardware.opengl.extraPackages)
[](#opt-hardware.opengl.extraPackages)
makes amdvlk the default driver and hides radv and lavapipe from the device list.
A specific driver can be forced as follows:

@@ -39,8 +39,8 @@ services.kubernetes.roles = [ "master" "node" ];
```
Note: Assigning either role will also default both
[`services.kubernetes.flannel.enable`](options.html#opt-services.kubernetes.flannel.enable)
and [`services.kubernetes.easyCerts`](options.html#opt-services.kubernetes.easyCerts)
[](#opt-services.kubernetes.flannel.enable)
and [](#opt-services.kubernetes.easyCerts)
to true. This sets up flannel as CNI and activates automatic PKI bootstrapping.
As of kubernetes 1.10.X it has been deprecated to open non-tls-enabled
@@ -48,12 +48,12 @@ ports on kubernetes components. Thus, from NixOS 19.03 all plain HTTP
ports have been disabled by default. While opening insecure ports is
still possible, it is recommended not to bind these to other interfaces
than loopback. To re-enable the insecure port on the apiserver, see options:
[`services.kubernetes.apiserver.insecurePort`](options.html#opt-services.kubernetes.apiserver.insecurePort) and
[`services.kubernetes.apiserver.insecureBindAddress`](options.html#opt-services.kubernetes.apiserver.insecureBindAddress)
[](#opt-services.kubernetes.apiserver.insecurePort) and
[](#opt-services.kubernetes.apiserver.insecureBindAddress)
::: {.note}
As of NixOS 19.03, it is mandatory to configure:
[`services.kubernetes.masterAddress`](options.html#opt-services.kubernetes.masterAddress).
[](#opt-services.kubernetes.masterAddress).
The masterAddress must be resolveable and routeable by all cluster nodes.
In single node clusters, this can be set to `localhost`.
:::
@@ -69,19 +69,19 @@ Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/).
The NixOS kubernetes module provides an option for automatic certificate
bootstrapping and configuration,
[`services.kubernetes.easyCerts`](options.html#opt-services.kubernetes.easyCerts).
[](#opt-services.kubernetes.easyCerts).
The PKI bootstrapping process involves setting up a certificate authority (CA)
daemon (cfssl) on the kubernetes master node. cfssl generates a CA-cert
for the cluster, and uses the CA-cert for signing subordinate certs issued
to each of the cluster components. Subsequently, the certmgr daemon monitors
active certificates and renews them when needed. For single node Kubernetes
clusters, setting [`services.kubernetes.easyCerts`](options.html#opt-services.kubernetes.easyCerts)
clusters, setting [](#opt-services.kubernetes.easyCerts)
= true is sufficient and no further action is required. For joining extra node
machines to an existing cluster on the other hand, establishing initial
trust is mandatory.
To add new nodes to the cluster: On any (non-master) cluster node where
[`services.kubernetes.easyCerts`](options.html#opt-services.kubernetes.easyCerts)
[](#opt-services.kubernetes.easyCerts)
is enabled, the helper script `nixos-kubernetes-node-join` is available on PATH.
Given a token on stdin, it will copy the token to the kubernetes secrets directory
and restart the certmgr service. As requested certificates are issued, the
@@ -96,7 +96,7 @@ In order to interact with an RBAC-enabled cluster as an administrator,
one needs to have cluster-admin privileges. By default, when easyCerts
is enabled, a cluster-admin kubeconfig file is generated and linked into
`/etc/kubernetes/cluster-admin.kubeconfig` as determined by
[`services.kubernetes.pki.etcClusterAdminKubeconfig`](options.html#opt-services.kubernetes.pki.etcClusterAdminKubeconfig).
[](#opt-services.kubernetes.pki.etcClusterAdminKubeconfig).
`export KUBECONFIG=/etc/kubernetes/cluster-admin.kubeconfig` will make
kubectl use this kubeconfig to access and authenticate the cluster. The
cluster-admin kubeconfig references an auto-generated keypair owned by

@@ -42,14 +42,14 @@ something as a kernel module).
Kernel modules for hardware devices are generally loaded automatically
by `udev`. You can force a module to be loaded via
[`boot.kernelModules`](options.html#opt-boot.kernelModules), e.g.
[](#opt-boot.kernelModules), e.g.
```nix
boot.kernelModules = [ "fuse" "kvm-intel" "coretemp" ];
```
If the module is required early during the boot (e.g. to mount the root
file system), you can use [`boot.initrd.kernelModules`](options.html#opt-boot.initrd.kernelModules):
file system), you can use [](#opt-boot.initrd.kernelModules):
```nix
boot.initrd.kernelModules = [ "cifs" ];
@@ -59,7 +59,7 @@ This causes the specified modules and their dependencies to be added to
the initial ramdisk.
Kernel runtime parameters can be set through
[`boot.kernel.sysctl`](options.html#opt-boot.kernel.sysctl), e.g.
[](#opt-boot.kernel.sysctl), e.g.
```nix
boot.kernel.sysctl."net.ipv4.tcp_keepalive_time" = 120;

@@ -34,7 +34,7 @@ SHA256:yjxl3UbTn31fLWeyLYTAKYJPRmzknjQZoyG8gSNEoIE my-user@workstation
To keep the key safe, change the ownership to `root:root` and make sure the permissions are `600`:
OpenSSH normally refuses to use the key if it's not well-protected.
The file system can be configured in NixOS via the usual [fileSystems](options.html#opt-fileSystems) option.
The file system can be configured in NixOS via the usual [fileSystems](#opt-fileSystems) option.
Here's a typical setup:
```nix
{

@@ -17,7 +17,7 @@ appropriate section of the Subversion
book](http://svnbook.red-bean.com/en/1.7/svn-book.html#svn.serverconfig.httpd).
To configure, include in `/etc/nixos/configuration.nix` code to activate
Apache HTTP, setting [`services.httpd.adminAddr`](options.html#opt-services.httpd.adminAddr)
Apache HTTP, setting [](#opt-services.httpd.adminAddr)
appropriately:
```nix

@@ -24,10 +24,10 @@ log in via mechanisms that require a password. However, you can use the
`passwd` program to set a password, which is retained across invocations
of `nixos-rebuild`.
If you set [`users.mutableUsers`](options.html#opt-users.mutableUsers) to
If you set [](#opt-users.mutableUsers) to
false, then the contents of `/etc/passwd` and `/etc/group` will be congruent
to your NixOS configuration. For instance, if you remove a user from
[`users.users`](options.html#opt-users.users) and run nixos-rebuild, the user
[](#opt-users.users) and run nixos-rebuild, the user
account will cease to exist. Also, imperative commands for managing users and
groups, such as useradd, are no longer available. Passwords may still be
assigned by setting the user\'s

@@ -23,5 +23,5 @@ xdg.portal.wlr.enable = true;
```
and configure Pipewire using
[`services.pipewire.enable`](options.html#opt-services.pipewire.enable)
[](#opt-services.pipewire.enable)
and related options.

@@ -115,7 +115,7 @@ officially updated since 2015.
The results vary depending on the hardware, so you may have to try both
drivers. Use the option
[`services.xserver.videoDrivers`](options.html#opt-services.xserver.videoDrivers)
[](#opt-services.xserver.videoDrivers)
to set one. The recommended configuration for modern systems is:
```nix
@@ -183,7 +183,7 @@ Latitude series) can be enabled as follows:
services.xserver.libinput.enable = true;
```
The driver has many options (see [Appendix A, Configuration Options](options.html)).
The driver has many options (see [](#ch-options)).
For instance, the following disables tap-to-click behavior:
```nix

@@ -22,13 +22,13 @@ services.picom = {
Some Xfce programs are not installed automatically. To install them
manually (system wide), put them into your
[`environment.systemPackages`](options.html#opt-environment.systemPackages) from `pkgs.xfce`.
[](#opt-environment.systemPackages) from `pkgs.xfce`.
## Thunar Plugins {#sec-xfce-thunar-plugins .unnumbered}
If you\'d like to add extra plugins to Thunar, add them to
[`services.xserver.desktopManager.xfce.thunarPlugins`](options.html#opt-services.xserver.desktopManager.xfce.thunarPlugins).
You shouldn\'t just add them to [`environment.systemPackages`](options.html#opt-environment.systemPackages).
[](#opt-services.xserver.desktopManager.xfce.thunarPlugins).
You shouldn\'t just add them to [](#opt-environment.systemPackages).
## Troubleshooting {#sec-xfce-troubleshooting .unnumbered}

@@ -36,10 +36,9 @@ $ export \
</programlisting>
<para>
The second mechanism is to add the OpenCL driver package to
<link xlink:href="options.html#opt-hardware.opengl.extraPackages"><literal>hardware.opengl.extraPackages</literal></link>.
This links the ICD file under
<literal>/run/opengl-driver</literal>, where it will be visible to
the ICD loader.
<xref linkend="opt-hardware.opengl.extraPackages" />. This links
the ICD file under <literal>/run/opengl-driver</literal>, where it
will be visible to the ICD loader.
</para>
<para>
The proper installation of OpenCL drivers can be verified through
@@ -60,8 +59,8 @@ Platform Vendor Advanced Micro Devices, Inc.
<link xlink:href="https://en.wikipedia.org/wiki/Graphics_Core_Next">Graphics
Core Next</link> (GCN) GPUs are supported through the
rocm-opencl-icd package. Adding this package to
<link xlink:href="options.html#opt-hardware.opengl.extraPackages"><literal>hardware.opengl.extraPackages</literal></link>
enables OpenCL support:
<xref linkend="opt-hardware.opengl.extraPackages" /> enables
OpenCL support:
</para>
<programlisting language="bash">
hardware.opengl.extraPackages = [
@@ -82,10 +81,9 @@ hardware.opengl.extraPackages = [
</para>
<para>
The intel-compute-runtime, beignet, or intel-ocl package can be
added to
<link xlink:href="options.html#opt-hardware.opengl.extraPackages"><literal>hardware.opengl.extraPackages</literal></link>
to enable OpenCL support. For example, for Gen8 and later GPUs,
the following configuration can be used:
added to <xref linkend="opt-hardware.opengl.extraPackages" /> to
enable OpenCL support. For example, for Gen8 and later GPUs, the
following configuration can be used:
</para>
<programlisting language="bash">
hardware.opengl.extraPackages = [
@@ -103,8 +101,7 @@ hardware.opengl.extraPackages = [
<link xlink:href="https://github.com/doitsujin/dxvk/wiki">DXVK</link>.
</para>
<para>
By default, if
<link xlink:href="options.html#opt-hardware.opengl.driSupport"><literal>hardware.opengl.driSupport</literal></link>
By default, if <xref linkend="opt-hardware.opengl.driSupport" />
is enabled, mesa is installed and provides Vulkan for supported
hardware.
</para>
@@ -129,10 +126,9 @@ $ export \
</programlisting>
<para>
The second mechanism is to add the Vulkan driver package to
<link xlink:href="options.html#opt-hardware.opengl.extraPackages"><literal>hardware.opengl.extraPackages</literal></link>.
This links the ICD file under
<literal>/run/opengl-driver</literal>, where it will be visible to
the ICD loader.
<xref linkend="opt-hardware.opengl.extraPackages" />. This links
the ICD file under <literal>/run/opengl-driver</literal>, where it
will be visible to the ICD loader.
</para>
<para>
The proper installation of Vulkan drivers can be verified through
@@ -162,8 +158,7 @@ GPU1:
<link xlink:href="https://en.wikipedia.org/wiki/Graphics_Core_Next">Graphics
Core Next</link> (GCN) GPUs are supported through either radv,
which is part of mesa, or the amdvlk package. Adding the amdvlk
package to
<link xlink:href="options.html#opt-hardware.opengl.extraPackages"><literal>hardware.opengl.extraPackages</literal></link>
package to <xref linkend="opt-hardware.opengl.extraPackages" />
makes amdvlk the default driver and hides radv and lavapipe from
the device list. A specific driver can be forced as follows:
</para>

@@ -43,11 +43,9 @@ services.kubernetes.roles = [ &quot;master&quot; &quot;node&quot; ];
</programlisting>
<para>
Note: Assigning either role will also default both
<link xlink:href="options.html#opt-services.kubernetes.flannel.enable"><literal>services.kubernetes.flannel.enable</literal></link>
and
<link xlink:href="options.html#opt-services.kubernetes.easyCerts"><literal>services.kubernetes.easyCerts</literal></link>
to true. This sets up flannel as CNI and activates automatic PKI
bootstrapping.
<xref linkend="opt-services.kubernetes.flannel.enable" /> and
<xref linkend="opt-services.kubernetes.easyCerts" /> to true. This
sets up flannel as CNI and activates automatic PKI bootstrapping.
</para>
<para>
As of kubernetes 1.10.X it has been deprecated to open
@@ -56,15 +54,15 @@ services.kubernetes.roles = [ &quot;master&quot; &quot;node&quot; ];
opening insecure ports is still possible, it is recommended not to
bind these to other interfaces than loopback. To re-enable the
insecure port on the apiserver, see options:
<link xlink:href="options.html#opt-services.kubernetes.apiserver.insecurePort"><literal>services.kubernetes.apiserver.insecurePort</literal></link>
<xref linkend="opt-services.kubernetes.apiserver.insecurePort" />
and
<link xlink:href="options.html#opt-services.kubernetes.apiserver.insecureBindAddress"><literal>services.kubernetes.apiserver.insecureBindAddress</literal></link>
<xref linkend="opt-services.kubernetes.apiserver.insecureBindAddress" />
</para>
<note>
<para>
As of NixOS 19.03, it is mandatory to configure:
<link xlink:href="options.html#opt-services.kubernetes.masterAddress"><literal>services.kubernetes.masterAddress</literal></link>.
The masterAddress must be resolveable and routeable by all cluster
<xref linkend="opt-services.kubernetes.masterAddress" />. The
masterAddress must be resolveable and routeable by all cluster
nodes. In single node clusters, this can be set to
<literal>localhost</literal>.
</para>
@@ -83,24 +81,22 @@ services.kubernetes.roles = [ &quot;master&quot; &quot;node&quot; ];
<para>
The NixOS kubernetes module provides an option for automatic
certificate bootstrapping and configuration,
<link xlink:href="options.html#opt-services.kubernetes.easyCerts"><literal>services.kubernetes.easyCerts</literal></link>.
The PKI bootstrapping process involves setting up a certificate
authority (CA) daemon (cfssl) on the kubernetes master node. cfssl
generates a CA-cert for the cluster, and uses the CA-cert for
signing subordinate certs issued to each of the cluster components.
<xref linkend="opt-services.kubernetes.easyCerts" />. The PKI
bootstrapping process involves setting up a certificate authority
(CA) daemon (cfssl) on the kubernetes master node. cfssl generates a
CA-cert for the cluster, and uses the CA-cert for signing
subordinate certs issued to each of the cluster components.
Subsequently, the certmgr daemon monitors active certificates and
renews them when needed. For single node Kubernetes clusters,
setting
<link xlink:href="options.html#opt-services.kubernetes.easyCerts"><literal>services.kubernetes.easyCerts</literal></link>
= true is sufficient and no further action is required. For joining
extra node machines to an existing cluster on the other hand,
establishing initial trust is mandatory.
setting <xref linkend="opt-services.kubernetes.easyCerts" /> = true
is sufficient and no further action is required. For joining extra
node machines to an existing cluster on the other hand, establishing
initial trust is mandatory.
</para>
<para>
To add new nodes to the cluster: On any (non-master) cluster node
where
<link xlink:href="options.html#opt-services.kubernetes.easyCerts"><literal>services.kubernetes.easyCerts</literal></link>
is enabled, the helper script
where <xref linkend="opt-services.kubernetes.easyCerts" /> is
enabled, the helper script
<literal>nixos-kubernetes-node-join</literal> is available on PATH.
Given a token on stdin, it will copy the token to the kubernetes
secrets directory and restart the certmgr service. As requested
@@ -120,7 +116,7 @@ services.kubernetes.roles = [ &quot;master&quot; &quot;node&quot; ];
is generated and linked into
<literal>/etc/kubernetes/cluster-admin.kubeconfig</literal> as
determined by
<link xlink:href="options.html#opt-services.kubernetes.pki.etcClusterAdminKubeconfig"><literal>services.kubernetes.pki.etcClusterAdminKubeconfig</literal></link>.
<xref linkend="opt-services.kubernetes.pki.etcClusterAdminKubeconfig" />.
<literal>export KUBECONFIG=/etc/kubernetes/cluster-admin.kubeconfig</literal>
will make kubectl use this kubeconfig to access and authenticate the
cluster. The cluster-admin kubeconfig references an auto-generated

@@ -48,9 +48,7 @@ nixpkgs.config.packageOverrides = pkgs:
<para>
Kernel modules for hardware devices are generally loaded
automatically by <literal>udev</literal>. You can force a module to
be loaded via
<link xlink:href="options.html#opt-boot.kernelModules"><literal>boot.kernelModules</literal></link>,
e.g.
be loaded via <xref linkend="opt-boot.kernelModules" />, e.g.
</para>
<programlisting language="bash">
boot.kernelModules = [ &quot;fuse&quot; &quot;kvm-intel&quot; &quot;coretemp&quot; ];
@@ -58,7 +56,7 @@ boot.kernelModules = [ &quot;fuse&quot; &quot;kvm-intel&quot; &quot;coretemp&quo
<para>
If the module is required early during the boot (e.g. to mount the
root file system), you can use
<link xlink:href="options.html#opt-boot.initrd.kernelModules"><literal>boot.initrd.kernelModules</literal></link>:
<xref linkend="opt-boot.initrd.kernelModules" />:
</para>
<programlisting language="bash">
boot.initrd.kernelModules = [ &quot;cifs&quot; ];
@@ -69,8 +67,7 @@ boot.initrd.kernelModules = [ &quot;cifs&quot; ];
</para>
<para>
Kernel runtime parameters can be set through
<link xlink:href="options.html#opt-boot.kernel.sysctl"><literal>boot.kernel.sysctl</literal></link>,
e.g.
<xref linkend="opt-boot.kernel.sysctl" />, e.g.
</para>
<programlisting language="bash">
boot.kernel.sysctl.&quot;net.ipv4.tcp_keepalive_time&quot; = 120;

@@ -51,8 +51,8 @@ SHA256:yjxl3UbTn31fLWeyLYTAKYJPRmzknjQZoyG8gSNEoIE my-user@workstation
</para>
<para>
The file system can be configured in NixOS via the usual
<link xlink:href="options.html#opt-fileSystems">fileSystems</link>
option. Here’s a typical setup:
<link linkend="opt-fileSystems">fileSystems</link> option. Here’s
a typical setup:
</para>
<programlisting language="bash">
{

@@ -23,8 +23,7 @@
To configure, include in
<literal>/etc/nixos/configuration.nix</literal> code to activate
Apache HTTP, setting
<link xlink:href="options.html#opt-services.httpd.adminAddr"><literal>services.httpd.adminAddr</literal></link>
appropriately:
<xref linkend="opt-services.httpd.adminAddr" /> appropriately:
</para>
<programlisting language="bash">
services.httpd.enable = true;

@@ -29,16 +29,14 @@ users.users.alice = {
retained across invocations of <literal>nixos-rebuild</literal>.
</para>
<para>
If you set
<link xlink:href="options.html#opt-users.mutableUsers"><literal>users.mutableUsers</literal></link>
to false, then the contents of <literal>/etc/passwd</literal> and
If you set <xref linkend="opt-users.mutableUsers" /> to false, then
the contents of <literal>/etc/passwd</literal> and
<literal>/etc/group</literal> will be congruent to your NixOS
configuration. For instance, if you remove a user from
<link xlink:href="options.html#opt-users.users"><literal>users.users</literal></link>
and run nixos-rebuild, the user account will cease to exist. Also,
imperative commands for managing users and groups, such as useradd,
are no longer available. Passwords may still be assigned by setting
the user's
<xref linkend="opt-users.users" /> and run nixos-rebuild, the user
account will cease to exist. Also, imperative commands for managing
users and groups, such as useradd, are no longer available.
Passwords may still be assigned by setting the user's
<link linkend="opt-users.users._name_.hashedPassword">hashedPassword</link>
option. A hashed password can be generated using
<literal>mkpasswd -m sha-512</literal>.

@@ -26,7 +26,6 @@ xdg.portal.wlr.enable = true;
</programlisting>
<para>
and configure Pipewire using
<link xlink:href="options.html#opt-services.pipewire.enable"><literal>services.pipewire.enable</literal></link>
and related options.
<xref linkend="opt-services.pipewire.enable" /> and related options.
</para>
</chapter>

@@ -128,8 +128,8 @@ services.xserver.displayManager.autoLogin.user = &quot;alice&quot;;
<para>
The results vary depending on the hardware, so you may have to try
both drivers. Use the option
<link xlink:href="options.html#opt-services.xserver.videoDrivers"><literal>services.xserver.videoDrivers</literal></link>
to set one. The recommended configuration for modern systems is:
<xref linkend="opt-services.xserver.videoDrivers" /> to set one.
The recommended configuration for modern systems is:
</para>
<programlisting language="bash">
services.xserver.videoDrivers = [ &quot;modesetting&quot; ];
@@ -204,10 +204,8 @@ services.xserver.videoDrivers = [ &quot;amdgpu-pro&quot; ];
services.xserver.libinput.enable = true;
</programlisting>
<para>
The driver has many options (see
<link xlink:href="options.html">Appendix A, Configuration
Options</link>). For instance, the following disables tap-to-click
behavior:
The driver has many options (see <xref linkend="ch-options" />).
For instance, the following disables tap-to-click behavior:
</para>
<programlisting language="bash">
services.xserver.libinput.touchpad.tapping = false;

@@ -23,16 +23,16 @@ services.picom = {
<para>
Some Xfce programs are not installed automatically. To install them
manually (system wide), put them into your
<link xlink:href="options.html#opt-environment.systemPackages"><literal>environment.systemPackages</literal></link>
from <literal>pkgs.xfce</literal>.
<xref linkend="opt-environment.systemPackages" /> from
<literal>pkgs.xfce</literal>.
</para>
<section xml:id="sec-xfce-thunar-plugins">
<title>Thunar Plugins</title>
<para>
If you'd like to add extra plugins to Thunar, add them to
<link xlink:href="options.html#opt-services.xserver.desktopManager.xfce.thunarPlugins"><literal>services.xserver.desktopManager.xfce.thunarPlugins</literal></link>.
<xref linkend="opt-services.xserver.desktopManager.xfce.thunarPlugins" />.
You shouldn't just add them to
<link xlink:href="options.html#opt-environment.systemPackages"><literal>environment.systemPackages</literal></link>.
<xref linkend="opt-environment.systemPackages" />.
</para>
</section>
<section xml:id="sec-xfce-troubleshooting">