ZFS

From Alpine Linux
= Introduction =
[https://openzfs.org/wiki/Main_Page ZFS], or OpenZFS, is an open-source storage platform that combines the functionality of a traditional file system with that of a volume manager.


== Configuration ==
On Alpine Linux, there is no cron job or script provided to scrub (and optionally trim) your pool(s) on a regular basis, unlike in some other Linux distributions. Setting this up is easy and can be done in a few minutes.




=== Scrub ===
 
Scrubbing (and optionally trimming) your pool(s) on a [https://blogs.oracle.com/oracle-systems/post/disk-scrub-why-and-when regular basis] is essential. A scrub examines all data in the specified pools to verify that it checksums correctly. For replicated (mirror, raidz, or draid) devices, ZFS automatically repairs any damage discovered during the scrub.
 
When scrubbing a pool with encrypted filesystems, the keys do not need to be loaded. However, if the keys are not loaded and an unrepairable checksum error is detected, the file name cannot be included in the {{ic|zpool status -v}} verbose error report.


A scrub is split into two parts: metadata scanning and block scrubbing. The metadata scanning sorts blocks into large sequential ranges which can then be read much more efficiently from disk when issuing the scrub I/O.


==== Creating the script ====


This script lists the pools, makes sure they are online, and checks that no scrub is already running.<br>
We will write it to {{ic|/usr/libexec/zfs/scrub}}.<br>
It is taken from the Debian ZFS scripts.

Contents of {{ic|/usr/libexec/zfs/scrub}}:

<pre>
#!/bin/sh -eu

# directly exit successfully when zfs module is not loaded
if ! [ -d /sys/module/zfs ]; then
    exit 0
fi

# [auto] / enable / disable
PROPERTY_NAME="org.alpine:periodic-scrub"

get_property () {
    # Detect the ${PROPERTY_NAME} property on a given pool.
    # We are abusing user-defined properties on the root dataset,
    # since they're not available on pools https://github.com/openzfs/zfs/pull/11680
    # TODO: use zpool user-defined property when such feature is available.
    pool="$1"
    zfs get -H -o value "${PROPERTY_NAME}" "${pool}" 2>/dev/null || return 1
}

scrub_if_not_scrub_in_progress () {
    pool="$1"
    if ! zpool status "${pool}" | grep -q "scrub in progress"; then
        # Ignore errors and continue with scrubbing other pools.
        zpool scrub "${pool}" || true
    fi
}

# Scrub all healthy pools that are not already scrubbing as per their configs.
zpool list -H -o health,name 2>&1 | \
    awk -F'\t' '$1 == "ONLINE" {print $2}' | \
    while read pool
do
    # read user-defined config
    ret=$(get_property "${pool}")
    if [ $? -ne 0 ] || [ "disable" = "${ret}" ]; then
        :
    elif [ "-" = "${ret}" ] || [ "auto" = "${ret}" ] || [ "enable" = "${ret}" ]; then
        scrub_if_not_scrub_in_progress "${pool}"
    else
        cat > /dev/stderr <<EOF
$0: [WARNING] illegal value "${ret}" for property "${PROPERTY_NAME}" of ZFS dataset "${pool}".
$0: Acceptable choices for this property are: auto, enable, disable. The default is auto.
EOF
    fi
done
</pre>

Then make the script executable:

{{cmd|# chmod +x /usr/libexec/zfs/scrub}}
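The core decision logic of the script can be sketched in isolation: the {{ic|awk}} filter keeps only ONLINE pools from the tab-separated {{ic|zpool list -H -o health,name}} output, and the user property ({{ic|-}} when unset, otherwise {{ic|auto}}, {{ic|enable}}, or {{ic|disable}}) selects the action. A minimal simulation with canned input (the pool names and health states below are made up; no real pool is touched):

```shell
# Simulated `zpool list -H -o health,name` output, tab-separated.
printf 'ONLINE\ttank\nDEGRADED\tbackup\nONLINE\tfast\n' | \
    awk -F'\t' '$1 == "ONLINE" {print $2}' | \
    while read -r pool; do
        # Simulated property lookup: `zfs get -H -o value` prints "-"
        # when the property is unset; the script treats that like "auto".
        ret="-"
        case "$ret" in
            disable)       echo "$pool: scrub skipped" ;;
            -|auto|enable) echo "$pool: scrub would start" ;;
            *)             echo "$pool: illegal property value" ;;
        esac
    done
```

The DEGRADED pool never reaches the property check, which is why the script leaves unhealthy pools alone.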


==== Launching the scrub script with cron ====


It is recommended to launch a scrub regularly to ensure your pool(s) and data are in good shape.<br>
Here, the scrub will be launched once a month, on the second Sunday of the month.

As root, edit your crontab:

{{cmd|# crontab -e}}

and add these two lines:

<pre>
# zfs scrub the second sunday of every month
24      0       8-14    *       *       if [ $(date +\%w) -eq 0 ] && [ -x /usr/libexec/zfs/scrub ]; then /usr/libexec/zfs/scrub; fi
</pre>

Finally, make sure cron is launched:

{{cmd|# rc-update}}

There should be a line saying

<pre>
crond |      default
</pre>

If not, add it to the boot sequence:

{{cmd|# rc-update add crond}}

then start the crond daemon:

{{cmd|# rc-service crond start}}
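The crontab entry relies on a common cron idiom: the day-of-month range {{ic|8-14}} always contains exactly one Sunday, and the {{ic|date +%w}} guard lets the job proceed only on that day, i.e. the second Sunday of the month. A quick sanity check of the guard (GNU date syntax shown; Alpine's BusyBox {{ic|date}} may parse {{ic|-d}} arguments differently; 2025-04-13 was the second Sunday of April 2025):

```shell
# Day 13 falls inside the 8-14 window, and %w prints 0 for Sunday,
# so on 2025-04-13 the guard would let the scrub script run.
dow=$(date -d 2025-04-13 +%w)
if [ "$dow" -eq 0 ]; then
    echo "second Sunday: scrub would run"
fi
```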


=== Trim ===


The command {{ic|zpool trim}} initiates an immediate on-demand TRIM operation for all of the free space in a pool. This operation informs the underlying storage devices of all blocks in the pool which are no longer allocated and allows thinly provisioned devices to reclaim the space.


A manual on-demand TRIM operation can be initiated irrespective of the [https://openzfs.github.io/openzfs-docs/man/v2.2/7/zpoolprops.7.html#autotrim autotrim] pool property setting. See the linked documentation for the {{ic|autotrim}} property for the types of vdev devices which can be trimmed.


==== Creating the script ====


This script lists the pools, makes sure they are online, checks that they are built only from NVMe SSDs, and that no trim is already running.<br>
We will write it to {{ic|/usr/libexec/zfs/trim}}.<br>
It is taken from the Debian ZFS scripts.

Contents of {{ic|/usr/libexec/zfs/trim}}:

<pre>
#!/bin/sh -eu

# directly exit successfully when zfs module is not loaded
if ! [ -d /sys/module/zfs ]; then
    exit 0
fi

# [auto] / enable / disable
PROPERTY_NAME="org.alpine:periodic-trim"

get_property () {
    # Detect the ${PROPERTY_NAME} property on a given pool.
    # We are abusing user-defined properties on the root dataset,
    # since they're not available on pools https://github.com/openzfs/zfs/pull/11680
    # TODO: use zpool user-defined property when such feature is available.
    pool="$1"
    zfs get -H -o value "${PROPERTY_NAME}" "${pool}" 2>/dev/null || return 1
}

trim_if_not_already_trimming () {
    pool="$1"
    if ! zpool status "${pool}" | grep -q "trimming"; then
        # Ignore errors (i.e. HDD pools),
        # and continue with trimming other pools.
        zpool trim "${pool}" || true
    fi
}

zpool_is_nvme_only () {
    zpool=$1
    # get a list of devices attached to the specified zpool
    zpool list -vHPL "${zpool}" | \
        awk -F'\t' '$2 ~ /^\/dev\// { if($2 !~ /^\/dev\/nvme/) exit 1 }'
}

# TRIM all healthy pools that are not already trimming as per their configs.
zpool list -H -o health,name 2>&1 | \
    awk -F'\t' '$1 == "ONLINE" {print $2}' | \
    while read pool
do
    # read user-defined config
    ret=$(get_property "${pool}")
    if [ $? -ne 0 ] || [ "disable" = "${ret}" ]; then
        :
    elif [ "enable" = "${ret}" ]; then
        trim_if_not_already_trimming "${pool}"
    elif [ "-" = "${ret}" ] || [ "auto" = "${ret}" ]; then
        if zpool_is_nvme_only "${pool}"; then
            trim_if_not_already_trimming "${pool}"
        fi
    else
        cat > /dev/stderr <<EOF
$0: [WARNING] illegal value "${ret}" for property "${PROPERTY_NAME}" of ZFS dataset "${pool}".
$0: Acceptable choices for this property are: auto, enable, disable. The default is auto.
EOF
    fi
done
</pre>

Then make the script executable:

{{cmd|# chmod +x /usr/libexec/zfs/trim}}
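Two guards in the trim script can be exercised without a real pool. The first mirrors {{ic|zpool_is_nvme_only}}: {{ic|awk}} exits non-zero as soon as a {{ic|/dev/}} path that is not {{ic|/dev/nvme*}} appears in the device list. The second mirrors the already-trimming check, a plain {{ic|grep}} on status text. Both inputs below are canned stand-ins (the device paths and status line are made up, not real {{ic|zpool}} output):

```shell
# Guard 1: NVMe-only detection. The second tab-separated field is the
# device path; /dev/sda1 makes the awk program exit with status 1.
printf '%s\t%s\n' tank - '' /dev/nvme0n1p1 '' /dev/sda1 | \
    awk -F'\t' '$2 ~ /^\/dev\// { if ($2 !~ /^\/dev\/nvme/) exit 1 }' \
    && echo "NVMe only: auto-trim allowed" \
    || echo "non-NVMe vdev found: auto-trim skipped"

# Guard 2: skip a pool whose status output mentions "trimming".
status="nvme0n1p1  (5% trimmed, trimming)"   # hypothetical status line
if printf '%s\n' "$status" | grep -q "trimming"; then
    echo "trim already running: skip"
fi
```

This is why, in {{ic|auto}} mode, a mixed HDD/NVMe pool is silently left untrimmed, while setting the property to {{ic|enable}} forces a trim attempt regardless of device types.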


==== Launching the trim script with cron ====


Here, the trim will be launched once a month, on the first Sunday of the month.

As root, edit your crontab:

{{cmd|# crontab -e}}

and add these two lines:

<pre>
# zfs trim the first sunday of every month
24      0       1-7     *       *       if [ $(date +\%w) -eq 0 ] && [ -x /usr/libexec/zfs/trim ]; then /usr/libexec/zfs/trim; fi
</pre>

Finally, make sure cron is launched:

{{cmd|# rc-update}}

There should be a line saying

<pre>
crond |      default
</pre>

If not, add it to the boot sequence:

{{cmd|# rc-update add crond}}

then start the crond daemon:

{{cmd|# rc-service crond start}}
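Same cron idiom as for the scrub, shifted one week earlier: the {{ic|1-7}} day range contains exactly one Sunday, the first of the month, and the weekday guard selects it. Checking with GNU date syntax and a sample date (2025-06-01 was the first Sunday of June 2025):

```shell
# Day 1 is inside the 1-7 window and %w prints 0 for Sunday,
# so on 2025-06-01 the guard would let the trim script run.
dow=$(date -d 2025-06-01 +%w)
if [ "$dow" -eq 0 ]; then
    echo "first Sunday: trim would run"
fi
```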


== See also ==
* [https://openzfs.org/wiki/System_Administration ZFS System Administration]
* [[Root on ZFS with native encryption]]
* [[Setting up ZFS on LUKS]]
[[Category:Filesystems]]

Revision as of 07:26, 2 April 2025




