[https://openzfs.org/wiki/Main_Page ZFS] or OpenZFS is an open-source storage platform. It combines the functionality of a traditional file system with that of a volume manager.


This page has instructions for creating and automounting an encrypted ZFS drive or partition on an existing encrypted Alpine Linux system using ZFS's own encryption capabilities. For a fresh install of Alpine Linux with the root partition on ZFS, see [[Root on ZFS with native encryption]].


== Installation ==


Install the necessary packages and utilities by using the command:{{Cmd|# apk add zfs}}
Ensure that the kernel modules are loaded and verify that the device nodes are present:{{Cmd|<nowiki># modprobe zfs       
# mdev -s </nowiki> }}
If your use of ZFS depends on volumes appearing in {{Path|/dev/zvol/$ZPOOL/$ZVOL}}, you will also need to [[Eudev|set up eudev]] and install:{{Cmd|# apk add zfs-udev}}
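As a quick sanity check, a small sketch along these lines tests for the module's sysfs directory, which is the same check the maintenance scripts further down use. The optional argument for an alternate sysfs root is purely illustrative, to make the function testable without the module:

```shell
# Check whether the zfs kernel module is loaded by looking for its sysfs entry.
zfs_module_loaded() {
    # Optional $1 points at an alternate sysfs root (for testing only).
    [ -d "${1:-/sys}/module/zfs" ]
}

if zfs_module_loaded; then
    echo "zfs module loaded"
else
    echo "zfs module not loaded - try: modprobe zfs"
fi
```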


== Configuration ==
The system will be encrypted when powered off but will not require you to type an extra password at boot, since it uses a key stored on the encrypted root partition. Alternative options are also given, such as prompting for a password at boot rather than storing the key on the root drive. The example in this guide is modeled around creating a ZFS filesystem to be used as a user's home directory, but it can be trivially modified to create a filesystem for other purposes.
 
=== Create an encryption key ===
 
This section can be skipped if you intend to unlock the drive by typing a password rather than unlocking automatically. You should use a password instead if your root partition is not encrypted. The key file does not have to be stored at {{Path|/etc/home.key}}; any location will do. {{Cmd|<nowiki># dd if=/dev/random of=/etc/home.key bs=32 count=1
# chmod 600 /etc/home.key</nowiki>}}
 
IMPORTANT: Make sure you don't lose this key by overwriting your root filesystem or similar. You might want to store a copy of it on an encrypted USB drive, for instance.
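For peace of mind, the generated key can be checked for the expected size and permissions. A minimal sketch, using {{path|/tmp/home.key}} as a stand-in for {{path|/etc/home.key}} so it can be run unprivileged:

```shell
# Generate a 32-byte raw key and verify its size and permissions.
# /tmp/home.key stands in for /etc/home.key for illustration only.
KEY=/tmp/home.key
dd if=/dev/random of="$KEY" bs=32 count=1 2>/dev/null
chmod 600 "$KEY"

# A raw keyformat key for aes-256-gcm must be exactly 32 bytes.
[ "$(wc -c < "$KEY")" -eq 32 ] && echo "key is 32 bytes"
case "$(ls -l "$KEY")" in -rw-------*) echo "permissions are 0600" ;; esac
```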
 
=== Create the zpool ===
 
Replace <code>/dev/sd...</code> with the name of the disk or partition where you would like to create the ZFS filesystem, such as <code>/dev/nvme0n1</code> or <code>/dev/sda1</code>. If you would like to be prompted for a password at boot rather than using the key generated above, replace <code>-O keylocation=file:///etc/home.key -O keyformat=raw</code> with <code>-O keylocation=prompt -O keyformat=passphrase</code>. The name <var>"homepool"</var> can be anything.{{Cmd|<nowiki># zpool create -o ashift=12 -O acltype=posixacl -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O encryption=aes-256-gcm -O keylocation=file:///etc/home.key -O keyformat=raw \
-O mountpoint=none homepool /dev/sd...</nowiki>}}
 
After completing this, verify that the pool has been created; the output should look similar to the following: {{Cmd|<nowiki># zpool status
      pool: homepool
    state: ONLINE
    config:
            NAME        STATE    READ WRITE CKSUM
            homepool    ONLINE      0    0    0
              sd...    ONLINE      0    0    0
   
    errors: No known data errors</nowiki>}}
 
To create and mount the filesystem, issue the commands: {{Cmd|<nowiki># zfs create -o mountpoint=/home/username homepool/username
# chown username:username /home/username </nowiki>}}
 
The last command, ''chown'', is unnecessary if you are not creating a home directory.
 
=== Service configuration ===
 
Set up the following services to automount the new filesystem using [[OpenRC]].
To import existing zpools:{{Cmd|# rc-update add zfs-import}}   
To load the encryption keys:{{Cmd|# rc-update add zfs-load-key}}
Finally, to mount the filesystems:{{Cmd|# rc-update add zfs-mount}}   
 
Reboot the system so that the encrypted ZFS drive is automounted and becomes available for use.
 
== Maintenance ==
 
Unlike some other Linux distributions, Alpine Linux does not ship a cron job or script to scrub (and, where applicable, trim) your pool(s) on a regular basis.
 
=== Scrub ===


Scrubbing (and, where applicable, trimming) your pool(s) on a regular basis is essential, as discussed in [https://blogs.oracle.com/oracle-systems/post/disk-scrub-why-and-when an official Oracle blog post] on disk scrubbing. The scrub examines all data in the specified pools to verify that it checksums correctly. For replicated (mirror, raidz, or draid) devices, ZFS automatically repairs any damage discovered during the scrub.


When scrubbing a pool with encrypted filesystems, the keys do not need to be loaded. However, if the keys are not loaded and an unrepairable checksum error is detected, the file name cannot be included in the {{ic|zpool status -v}} verbose error report.


A scrub is split into two parts: metadata scanning and block scrubbing. The metadata scanning sorts blocks into large sequential ranges which can then be read much more efficiently from disk when issuing the scrub I/O.
This script is taken from the Debian ZFS scripts. It lists the pools, checks that they are online, and skips any pool where a scrub is already in progress. The contents of the script placed at {{path|/usr/libexec/zfs/scrub}} are as follows:{{cat|/usr/libexec/zfs/scrub|<nowiki>#!/bin/sh -eu

# directly exit successfully when zfs module is not loaded
if ! [ -d /sys/module/zfs ]; then
    exit 0
fi

# [auto] / enable / disable
PROPERTY_NAME="org.alpine:periodic-scrub"

get_property () {
    # Detect the ${PROPERTY_NAME} property on a given pool.
    # We are abusing user-defined properties on the root dataset,
    # since they're not available on pools https://github.com/openzfs/zfs/pull/11680
    # TODO: use zpool user-defined property when such feature is available.
    pool="$1"
    zfs get -H -o value "${PROPERTY_NAME}" "${pool}" 2>/dev/null || return 1
}

scrub_if_not_scrub_in_progress () {
    pool="$1"
    if ! zpool status "${pool}" | grep -q "scrub in progress"; then
        # Ignore errors and continue with scrubbing other pools.
        zpool scrub "${pool}" || true
    fi
}

# Scrub all healthy pools that are not already scrubbing as per their configs.
zpool list -H -o health,name 2>&1 | \
    awk -F'\t' '$1 == "ONLINE" {print $2}' | \
    while read pool
do
    # read user-defined config
    ret=$(get_property "${pool}")
    if [ $? -ne 0 ] || [ "disable" = "${ret}" ]; then
        :
    elif [ "-" = "${ret}" ] || [ "auto" = "${ret}" ] || [ "enable" = "${ret}" ]; then
        scrub_if_not_scrub_in_progress "${pool}"
    else
        cat > /dev/stderr <<EOF
$0: [WARNING] illegal value "${ret}" for property "${PROPERTY_NAME}" of ZFS dataset "${pool}".
$0: Acceptable choices for this property are: auto, enable, disable. The default is auto.
EOF
    fi
done</nowiki>}}
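Scrubbing can be tuned per pool through the {{ic|org.alpine:periodic-scrub}} property that the script reads, e.g. {{cmd|# zfs set org.alpine:periodic-scrub&#61;disable tank}} (pool name hypothetical). The dispatch logic boils down to the following stand-alone sketch, with the property lookup stubbed out since no pool is available here:

```shell
# Stand-alone sketch of the scrub script's property dispatch.
# On a real system the value would come from:
#   zfs get -H -o value org.alpine:periodic-scrub <pool>
decide() {
    ret="$1"   # "-" means the property is unset
    case "${ret}" in
        disable)       echo "skip"  ;;  # per-pool opt-out
        -|auto|enable) echo "scrub" ;;  # default behaviour
        *)             echo "warn"  ;;  # illegal value -> warning branch
    esac
}

decide "-"       # unset pools default to scrubbing
decide disable   # explicitly opted-out pool
decide bogus     # anything else triggers the warning
```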
 


Make the script executable: {{cmd|# chmod +x /usr/libexec/zfs/scrub}}


It is recommended to scrub regularly to ensure that your pool(s) and data are in good shape. In this example, the scrub script is launched once a month, on the 2nd Sunday, using [[cron]].


Edit your crontabs using the command: {{cmd|# crontab -e}}


and add these 2 lines:
<pre>
# ZFS scrub the 2nd Sunday of every month
24      0      8-14    *      *      if [ $(date +\%w) -eq 0 ] && [ -x /usr/libexec/zfs/scrub ]; then /usr/libexec/zfs/scrub; fi
</pre>
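Classic cron cannot express "the 2nd Sunday" directly, so the entry fires on days 8-14 and the {{ic|date +\%w}} guard keeps only Sunday (weekday 0); days 8-14 always contain exactly one Sunday, the second one. The effect can be reproduced for a sample month (GNU date assumed for the {{ic|-d}} flag; the month is chosen arbitrarily):

```shell
# Emulate the cron guard for September 2024: of days 8-14,
# only the single Sunday (weekday 0) passes the test.
for day in 08 09 10 11 12 13 14; do
    if [ "$(date -d "2024-09-${day}" +%w)" -eq 0 ]; then
        echo "scrub would run on 2024-09-${day}"
    fi
done
```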


=== Trim ===


The command {{ic|zpool trim}} initiates an immediate on-demand TRIM operation for all of the free space in a pool. This operation informs the underlying storage devices of all blocks in the pool which are no longer allocated and allows thinly provisioned devices to reclaim the space.


A manual on-demand TRIM operation can be initiated irrespective of the [https://openzfs.github.io/openzfs-docs/man/v2.2/7/zpoolprops.7.html#autotrim autotrim] pool property setting. The linked page lists the types of vdev devices that can be trimmed.


This script is taken from the Debian ZFS scripts. It lists the pools, checks that they are online and built only from NVMe drives, and skips any pool where a trim is already in progress. The contents of the script placed at {{path|/usr/libexec/zfs/trim}} are as follows:{{cat|/usr/libexec/zfs/trim|<nowiki>#!/bin/sh -eu

# directly exit successfully when zfs module is not loaded
if ! [ -d /sys/module/zfs ]; then
    exit 0
fi

# [auto] / enable / disable
PROPERTY_NAME="org.alpine:periodic-trim"

get_property () {
    # Detect the ${PROPERTY_NAME} property on a given pool.
    # We are abusing user-defined properties on the root dataset,
    # since they're not available on pools https://github.com/openzfs/zfs/pull/11680
    # TODO: use zpool user-defined property when such feature is available.
    pool="$1"
    zfs get -H -o value "${PROPERTY_NAME}" "${pool}" 2>/dev/null || return 1
}

trim_if_not_already_trimming () {
    pool="$1"
    if ! zpool status "${pool}" | grep -q "trimming"; then
        # Ignore errors (i.e. HDD pools),
        # and continue with trimming other pools.
        zpool trim "${pool}" || true
    fi
}

zpool_is_nvme_only () {
    zpool=$1
    # get a list of devices attached to the specified zpool
    zpool list -vHPL "${zpool}" | \
        awk -F'\t' '$2 ~ /^\/dev\// { if($2 !~ /^\/dev\/nvme/) exit 1 }'
}

# TRIM all healthy pools that are not already trimming as per their configs.
zpool list -H -o health,name 2>&1 | \
    awk -F'\t' '$1 == "ONLINE" {print $2}' | \
    while read pool
do
    # read user-defined config
    ret=$(get_property "${pool}")
    if [ $? -ne 0 ] || [ "disable" = "${ret}" ]; then
        :
    elif [ "enable" = "${ret}" ]; then
        trim_if_not_already_trimming "${pool}"
    elif [ "-" = "${ret}" ] || [ "auto" = "${ret}" ]; then
        if zpool_is_nvme_only "${pool}"; then
            trim_if_not_already_trimming "${pool}"
        fi
    else
        cat > /dev/stderr <<EOF
$0: [WARNING] illegal value "${ret}" for property "${PROPERTY_NAME}" of ZFS dataset "${pool}".
$0: Acceptable choices for this property are: auto, enable, disable. The default is auto.
EOF
    fi
done</nowiki>}}
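The {{ic|zpool_is_nvme_only}} helper relies on {{ic|zpool list -vHPL}} printing device paths in the second tab-separated column. Its awk filter can be exercised on canned output (pool and device names below are made up for illustration):

```shell
# Exercise the awk filter from zpool_is_nvme_only on fake `zpool list -vHPL` output.
# Column 2 holds the device path; any non-NVMe /dev entry makes awk exit 1.
nvme_only() {
    awk -F'\t' '$2 ~ /^\/dev\// { if($2 !~ /^\/dev\/nvme/) exit 1 }'
}

printf 'fastpool\t-\n-\t/dev/nvme0n1p2\t-\n' | nvme_only && echo "fastpool: NVMe only"
printf 'mixpool\t-\n-\t/dev/sda1\t-\n' | nvme_only || echo "mixpool: has non-NVMe vdevs"
```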


Make the script executable: {{cmd|# chmod +x /usr/libexec/zfs/trim}}
 


It is recommended to trim regularly to ensure that your pool(s) and data are in good shape. In this example, the trim script is launched once a month, on the 1st Sunday, using [[cron]].


Edit your crontabs using the command: {{cmd|# crontab -e}}
and add these 2 lines:
 
<pre>
# ZFS trim the 1st Sunday of every month
24      0      1-7    *      *      if [ $(date +\%w) -eq 0 ] && [ -x /usr/libexec/zfs/trim ]; then /usr/libexec/zfs/trim; fi
</pre>


== See also ==


* [https://openzfs.org/wiki/System_Administration ZFS System Administration]
* [https://openzfs.github.io/openzfs-docs/Getting%20Started/Alpine%20Linux/Root%20on%20ZFS.html OpenZFS Guide for Alpine Linux]
* [[Root on ZFS with native encryption]]
* [[Setting up ZFS on LUKS]]
[[Category:Filesystems]]
[[Category:Storage]]
[[Category:Installation]]
[[Category:Security]]
