ZFS scrub and trim
= Introduction =
On Alpine Linux, there is no cron job or script provided to scrub (and optionally trim) your pool(s) on a regular basis, as there is in other Linux distributions.
Setting it up is easy and can be done in a few minutes.
= Scrub =
== Definition ==
The scrub examines all data in the specified pools to verify that it checksums correctly. For replicated (mirror, raidz, or draid) devices, ZFS automatically repairs any damage discovered during the scrub.
When scrubbing a pool with encrypted filesystems the keys do not need to be loaded. However, if the keys are not loaded and an unrepairable checksum error is detected the file name cannot be included in the zpool status -v verbose error report.
A scrub is split into two parts: metadata scanning and block scrubbing. The metadata scanning sorts blocks into large sequential ranges which can then be read much more efficiently from disk when issuing the scrub I/O.
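A scrub can also be started by hand at any time. For example, assuming a pool named ''tank'' (replace with your own pool name), something like the following should work:
<pre>
# start a scrub of the pool "tank"
zpool scrub tank

# check scrub progress and any errors found so far
zpool status -v tank
</pre>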
== Creating the script ==
This script lists the pools, makes sure they are online, and checks that no scrub is already in progress.
We will write it in '''/usr/libexec/zfs/scrub'''.
It is adapted from the Debian ZFS scripts.
<pre>
#!/bin/sh -eu

# directly exit successfully when zfs module is not loaded
if ! [ -d /sys/module/zfs ]; then
    exit 0
fi

# [auto] / enable / disable
PROPERTY_NAME="org.alpine:periodic-scrub"

get_property () {
    # Detect the ${PROPERTY_NAME} property on a given pool.
    # We are abusing user-defined properties on the root dataset,
    # since they're not available on pools https://github.com/openzfs/zfs/pull/11680
    # TODO: use zpool user-defined property when such feature is available.
    pool="$1"
    zfs get -H -o value "${PROPERTY_NAME}" "${pool}" 2>/dev/null || return 1
}

scrub_if_not_scrub_in_progress () {
    pool="$1"
    if ! zpool status "${pool}" | grep -q "scrub in progress"; then
        # Ignore errors and continue with scrubbing other pools.
        zpool scrub "${pool}" || true
    fi
}

# Scrub all healthy pools that are not already scrubbing as per their configs.
zpool list -H -o health,name 2>&1 | \
    awk -F'\t' '$1 == "ONLINE" {print $2}' | \
while read pool
do
    # read user-defined config
    ret=$(get_property "${pool}")
    if [ $? -ne 0 ] || [ "disable" = "${ret}" ]; then
        :
    elif [ "-" = "${ret}" ] || [ "auto" = "${ret}" ] || [ "enable" = "${ret}" ]; then
        scrub_if_not_scrub_in_progress "${pool}"
    else
        cat > /dev/stderr <<EOF
$0: [WARNING] illegal value "${ret}" for property "${PROPERTY_NAME}" of ZFS dataset "${pool}".
$0: Acceptable choices for this property are: auto, enable, disable. The default is auto.
EOF
    fi
done
</pre>
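The script reads the ''org.alpine:periodic-scrub'' user property from each pool's root dataset. If you want to opt a pool out of (or explicitly into) the periodic scrub, you can set that property with the standard zfs commands; the pool name ''tank'' below is only an example:
<pre>
# opt the pool "tank" out of the periodic scrub
zfs set org.alpine:periodic-scrub=disable tank

# check the current value ("-" means unset, which the script treats as auto)
zfs get -H -o value org.alpine:periodic-scrub tank
</pre>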
Then make the script executable
# chmod +x /usr/libexec/zfs/scrub
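You can optionally run the script once by hand to check that it behaves as expected before scheduling it:
<pre>
# run the scrub script manually; it should start a scrub on every ONLINE pool
/usr/libexec/zfs/scrub

# a scrub should now be reported as in progress
zpool status
</pre>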
== Launching the scrub script with cron ==
It is recommended to run a scrub regularly to ensure your pool(s) and data are in good shape.
Here, the scrub will be launched once a month, on the second Sunday of the month. Since cron cannot express "the second Sunday" directly, the entry below matches days 8-14 of the month and uses a date +%w test (0 means Sunday) to run only on the Sunday in that range.
As root, edit your crontab:
# crontab -e
and add these two lines:
<pre>
# zfs scrub the second sunday of every month
24 0 8-14 * * if [ $(date +\%w) -eq 0 ] && [ -x /usr/libexec/zfs/scrub ]; then /usr/libexec/zfs/scrub; fi
</pre>
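You can then list your crontab entries to check that the line was saved:
<pre>
# list the current user's crontab entries
crontab -l
</pre>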
Finally, make sure cron is launched:
# rc-update
There should be a line saying
<pre>crond | default</pre>
If not, add it to the boot sequence
# rc-update add crond
then start the crond daemon
# rc-service crond start
= Trim =
== Definition ==
A trim initiates an immediate on-demand TRIM operation for all of the free space in a pool. This operation informs the underlying storage devices of all blocks in the pool which are no longer allocated and allows thinly provisioned devices to reclaim the space.
A manual on-demand TRIM operation can be initiated irrespective of the autotrim pool property setting. See the documentation for the autotrim pool property for the types of vdev devices which can be trimmed.
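As with scrubbing, a trim can be started by hand. Assuming a pool named ''tank'' built on SSDs (replace with your own pool name) and a reasonably recent OpenZFS, the following commands can be used:
<pre>
# start a manual trim of the pool "tank"
zpool trim tank

# show per-vdev trim status
zpool status -t tank

# the autotrim pool property controls continuous, automatic trimming
zpool get autotrim tank
</pre>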
== Creating the script ==
This script lists the pools, makes sure they are online and built only from NVMe SSD drive(s), and checks that no trim is already in progress.
We will write it in '''/usr/libexec/zfs/trim'''.
It is adapted from the Debian ZFS scripts.
<pre>
#!/bin/sh -eu

# directly exit successfully when zfs module is not loaded
if ! [ -d /sys/module/zfs ]; then
    exit 0
fi

# [auto] / enable / disable
PROPERTY_NAME="org.alpine:periodic-trim"

get_property () {
    # Detect the ${PROPERTY_NAME} property on a given pool.
    # We are abusing user-defined properties on the root dataset,
    # since they're not available on pools https://github.com/openzfs/zfs/pull/11680
    # TODO: use zpool user-defined property when such feature is available.
    pool="$1"
    zfs get -H -o value "${PROPERTY_NAME}" "${pool}" 2>/dev/null || return 1
}

trim_if_not_already_trimming () {
    pool="$1"
    if ! zpool status "${pool}" | grep -q "trimming"; then
        # Ignore errors (i.e. HDD pools),
        # and continue with trimming other pools.
        zpool trim "${pool}" || true
    fi
}

zpool_is_nvme_only () {
    zpool=$1
    # get a list of devices attached to the specified zpool
    zpool list -vHPL "${zpool}" |
        awk -F'\t' '$2 ~ /^\/dev\// {
            if($2 !~ /^\/dev\/nvme/)
                exit 1
        }'
}

# TRIM all healthy pools that are not already trimming as per their configs.
zpool list -H -o health,name 2>&1 | \
    awk -F'\t' '$1 == "ONLINE" {print $2}' | \
while read pool
do
    # read user-defined config
    ret=$(get_property "${pool}")
    if [ $? -ne 0 ] || [ "disable" = "${ret}" ]; then
        :
    elif [ "enable" = "${ret}" ]; then
        trim_if_not_already_trimming "${pool}"
    elif [ "-" = "${ret}" ] || [ "auto" = "${ret}" ]; then
        if zpool_is_nvme_only "${pool}"; then
            trim_if_not_already_trimming "${pool}"
        fi
    else
        cat > /dev/stderr <<EOF
$0: [WARNING] illegal value "${ret}" for property "${PROPERTY_NAME}" of ZFS dataset "${pool}".
$0: Acceptable choices for this property are: auto, enable, disable. The default is auto.
EOF
    fi
done
</pre>
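As with the scrub script, the behaviour can be overridden per pool through the ''org.alpine:periodic-trim'' user property on the root dataset. For example, to force trimming of a pool built from SATA SSDs, which the NVMe-only autodetection would otherwise skip (the pool name ''tank'' is only an example):
<pre>
# force the periodic trim on a non-NVMe SSD pool
zfs set org.alpine:periodic-trim=enable tank

# or explicitly disable periodic trimming, e.g. for a pool on spinning disks
zfs set org.alpine:periodic-trim=disable tank
</pre>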
Then make the script executable
# chmod +x /usr/libexec/zfs/trim
== Launching the trim script with cron ==
Here, the trim will be launched once a month, on the first Sunday of the month (days 1-7 of the month, with the same date +%w Sunday test as above).
As root, edit your crontab:
# crontab -e
and add these two lines:
<pre>
# zfs trim the first sunday of every month
24 0 1-7 * * if [ $(date +\%w) -eq 0 ] && [ -x /usr/libexec/zfs/trim ]; then /usr/libexec/zfs/trim; fi
</pre>
Finally, make sure cron is launched:
# rc-update
There should be a line saying
<pre>crond | default</pre>
If not, add it to the boot sequence
# rc-update add crond
then start the crond daemon
# rc-service crond start