<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.alpinelinux.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Cyclisme24</id>
	<title>Alpine Linux - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.alpinelinux.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Cyclisme24"/>
	<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/wiki/Special:Contributions/Cyclisme24"/>
	<updated>2026-05-01T05:54:42Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.40.0</generator>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Suspend_on_LID_close&amp;diff=26860</id>
		<title>Suspend on LID close</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Suspend_on_LID_close&amp;diff=26860"/>
		<updated>2024-06-26T16:44:15Z</updated>

		<summary type="html">&lt;p&gt;Cyclisme24: /* Busybox acpid */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article explains how to make your laptop suspend when the lid is closed.&lt;br /&gt;
&lt;br /&gt;
== acpid2 ==&lt;br /&gt;
&lt;br /&gt;
[https://sourceforge.net/projects/acpid2/ acpid2] (provided by package {{pkg|acpid}}) is a flexible and extensible daemon with Netlink support for delivering ACPI events.&lt;br /&gt;
The [https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/community/acpid/handler.sh default handler script] ({{path|/etc/acpi/handler.sh}}) installed with the package provides support for suspend on LID close out of the box.&lt;br /&gt;
We recommend installing {{pkg|zzz}} along with {{pkg|acpid}} to get support for pre/post suspend hooks.&lt;br /&gt;
&lt;br /&gt;
# Install {{pkg|acpid}} and {{pkg|zzz}}: {{cmd|# apk add acpid zzz}}&lt;br /&gt;
# Enable and start the acpid daemon: {{cmd|# rc-update add acpid &amp;amp;&amp;amp; rc-service acpid start}}&lt;br /&gt;
&lt;br /&gt;
== Busybox acpid ==&lt;br /&gt;
&lt;br /&gt;
This can be done via [[Busybox acpid]] with a hook in {{path|/etc/acpi/LID/00000080}}:&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;with zzz:&#039;&#039;&#039;{{cmd|# apk add zzz}} {{cat|/etc/acpi/LID/00000080|#!/bin/sh&lt;br /&gt;
exec zzz&lt;br /&gt;
}}&lt;br /&gt;
# &#039;&#039;&#039;or with pm-utils:&#039;&#039;&#039; {{cmd|# apk add pm-utils}} {{cat|/etc/acpi/LID/00000080|#!/bin/sh&lt;br /&gt;
exec pm-suspend&lt;br /&gt;
}}&lt;br /&gt;
# &#039;&#039;&#039;or with raw variant&#039;&#039;&#039;: {{cat|/etc/acpi/LID/00000080|#!/bin/sh&lt;br /&gt;
echo mem &amp;gt; /sys/power/state&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Make the hook executable:&lt;br /&gt;
{{cmd|# chmod +x /etc/acpi/LID/00000080}}&lt;br /&gt;
&lt;br /&gt;
That should be it. To make sure that the acpid daemon is running, execute:&lt;br /&gt;
{{cmd|# rc-service acpid start}}&lt;br /&gt;
&lt;br /&gt;
== elogind ==&lt;br /&gt;
&lt;br /&gt;
Use elogind to handle lid events and trigger suspend, and use doas so that a normal user can also trigger suspend without root.&lt;br /&gt;
&lt;br /&gt;
Install elogind:&lt;br /&gt;
&lt;br /&gt;
 apk add elogind&lt;br /&gt;
 rc-update add elogind&lt;br /&gt;
 rc-service elogind start&lt;br /&gt;
&lt;br /&gt;
Now suspend on lid close should be working as expected.&lt;br /&gt;
&lt;br /&gt;
To let a normal user trigger suspend manually, install doas:&lt;br /&gt;
&lt;br /&gt;
 apk add doas&lt;br /&gt;
&lt;br /&gt;
Configure doas in &amp;lt;code&amp;gt;/etc/doas.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 permit nopass $USER as root cmd /bin/loginctl&lt;br /&gt;
&lt;br /&gt;
You should now be able to suspend the computer as a normal user by invoking loginctl with its full path, e.g. {{cmd|$ doas /bin/loginctl suspend}}.&lt;br /&gt;
&lt;br /&gt;
= See Also =&lt;br /&gt;
* [https://github.com/jirutka/zzz zzz]&lt;br /&gt;
* [https://unix.stackexchange.com/questions/484550/pm-suspend-vs-systemctl-suspend pm-suspend vs systemd...]&lt;br /&gt;
* [https://wiki.archlinux.org/index.php?title=Pm-utils&amp;amp;oldid=498864 Archwiki pm-utils (archived page)]&lt;br /&gt;
&lt;br /&gt;
[[Category:Power Management]]&lt;br /&gt;
[[Category:Desktop]]&lt;/div&gt;</summary>
		<author><name>Cyclisme24</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=ZFS&amp;diff=25382</id>
		<title>ZFS</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=ZFS&amp;diff=25382"/>
		<updated>2023-10-24T16:06:18Z</updated>

		<summary type="html">&lt;p&gt;Cyclisme24: /* Creating the script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
On Alpine Linux, unlike some other Linux distributions, no cron job or script is provided to scrub (and optionally trim) your pool(s) on a regular basis.&amp;lt;br&amp;gt;&lt;br /&gt;
Setting it up is easy and can be done in a few minutes.&lt;br /&gt;
&lt;br /&gt;
= Scrub =&lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
&lt;br /&gt;
The scrub examines all data in the specified pools to verify that it checksums correctly. For replicated (mirror, raidz, or draid) devices, ZFS automatically repairs any damage discovered during the scrub.&amp;lt;br&amp;gt;&lt;br /&gt;
When scrubbing a pool with encrypted filesystems the keys do not need to be loaded. However, if the keys are not loaded and an unrepairable checksum error is detected the file name cannot be included in the zpool status -v verbose error report.&amp;lt;br&amp;gt;&lt;br /&gt;
A scrub is split into two parts: metadata scanning and block scrubbing. The metadata scanning sorts blocks into large sequential ranges which can then be read much more efficiently from disk when issuing the scrub I/O.&lt;br /&gt;
&lt;br /&gt;
Also see [https://blogs.oracle.com/oracle-systems/post/disk-scrub-why-and-when Oracle - Disk Scrub - Why and When?]&lt;br /&gt;
&lt;br /&gt;
== Creating the script ==&lt;br /&gt;
&lt;br /&gt;
This script lists the pools, checks that they are online, and starts a scrub on each one unless a scrub is already in progress.&amp;lt;br&amp;gt;&lt;br /&gt;
We will write it in {{path|/usr/libexec/zfs/scrub}}.&amp;lt;br&amp;gt;&lt;br /&gt;
It is adapted from the Debian ZFS scripts.&lt;br /&gt;
&lt;br /&gt;
{{cat|/usr/libexec/zfs/scrub|&amp;lt;nowiki&amp;gt;#!/bin/sh -eu&lt;br /&gt;
&lt;br /&gt;
# directly exit successfully when zfs module is not loaded&lt;br /&gt;
if ! [ -d /sys/module/zfs ]; then&lt;br /&gt;
        exit 0&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# [auto] / enable / disable&lt;br /&gt;
PROPERTY_NAME=&amp;quot;org.alpine:periodic-scrub&amp;quot;&lt;br /&gt;
&lt;br /&gt;
get_property () {&lt;br /&gt;
        # Detect the ${PROPERTY_NAME} property on a given pool.&lt;br /&gt;
        # We are abusing user-defined properties on the root dataset,&lt;br /&gt;
        # since they&#039;re not available on pools https://github.com/openzfs/zfs/pull/11680&lt;br /&gt;
        # TODO: use zpool user-defined property when such feature is available.&lt;br /&gt;
        pool=&amp;quot;$1&amp;quot;&lt;br /&gt;
        zfs get -H -o value &amp;quot;${PROPERTY_NAME}&amp;quot; &amp;quot;${pool}&amp;quot; 2&amp;gt;/dev/null || return 1&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
scrub_if_not_scrub_in_progress () {&lt;br /&gt;
        pool=&amp;quot;$1&amp;quot;&lt;br /&gt;
        if ! zpool status &amp;quot;${pool}&amp;quot; | grep -q &amp;quot;scrub in progress&amp;quot;; then&lt;br /&gt;
                # Ignore errors and continue with scrubbing other pools.&lt;br /&gt;
                zpool scrub &amp;quot;${pool}&amp;quot; || true&lt;br /&gt;
        fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Scrub all healthy pools that are not already scrubbing as per their configs.&lt;br /&gt;
zpool list -H -o health,name 2&amp;gt;&amp;amp;1 | \&lt;br /&gt;
        awk -F&#039;\t&#039; &#039;$1 == &amp;quot;ONLINE&amp;quot; {print $2}&#039; | \&lt;br /&gt;
while read pool&lt;br /&gt;
do&lt;br /&gt;
        # read user-defined config&lt;br /&gt;
        ret=$(get_property &amp;quot;${pool}&amp;quot;)&lt;br /&gt;
        if [ $? -ne 0 ] || [ &amp;quot;disable&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                :&lt;br /&gt;
        elif [ &amp;quot;-&amp;quot; = &amp;quot;${ret}&amp;quot; ] || [ &amp;quot;auto&amp;quot; = &amp;quot;${ret}&amp;quot; ] || [ &amp;quot;enable&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                scrub_if_not_scrub_in_progress &amp;quot;${pool}&amp;quot;&lt;br /&gt;
        else&lt;br /&gt;
                cat &amp;gt; /dev/stderr &amp;lt;&amp;lt;EOF&lt;br /&gt;
$0: [WARNING] illegal value &amp;quot;${ret}&amp;quot; for property &amp;quot;${PROPERTY_NAME}&amp;quot; of ZFS dataset &amp;quot;${pool}&amp;quot;.&lt;br /&gt;
$0: Acceptable choices for this property are: auto, enable, disable. The default is auto.&lt;br /&gt;
EOF&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
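The pool-selection pipeline at the bottom of the script can be tried on its own with fabricated input; the pool names below are examples, not real pools:

```shell
# Sample of what "zpool list -H -o health,name" prints (TAB-separated),
# fed through the same awk filter the script uses to keep only ONLINE pools.
online_pools() {
    awk -F'\t' '$1 == "ONLINE" {print $2}'
}
printf 'ONLINE\ttank\nDEGRADED\tbackup\nONLINE\tscratch\n' | online_pools
```

Only tank and scratch survive the filter; backup is skipped because its health is not ONLINE, so it will never be scrubbed by the loop.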
&lt;br /&gt;
Then make the script executable&lt;br /&gt;
{{cmd|# chmod +x /usr/libexec/zfs/scrub}}&lt;br /&gt;
&lt;br /&gt;
== Launching the scrub script with cron ==&lt;br /&gt;
&lt;br /&gt;
It is recommended to run a scrub regularly to ensure your pool(s) and data are in good shape.&amp;lt;br&amp;gt;&lt;br /&gt;
Here, the scrub will run once a month, on the second Sunday of the month.&lt;br /&gt;
&lt;br /&gt;
As root, edit your crontab:&lt;br /&gt;
{{cmd|# crontab -e}}&lt;br /&gt;
&lt;br /&gt;
and add these 2 lines&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# zfs scrub the second sunday of every month&lt;br /&gt;
24      0       8-14    *       *       if [ $(date +\%w) -eq 0 ] &amp;amp;&amp;amp; [ -x /usr/libexec/zfs/scrub ]; then /usr/libexec/zfs/scrub; fi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
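The crontab entry works by combining a day-of-month range with a weekday test: cron fires every day from the 8th to the 14th, and the shell test only lets the script run when date +%w reports Sunday (0). Exactly one day in the 8-14 window is a Sunday, and it is always the second one of the month. The same logic can be checked by hand (GNU date with the -d option is assumed here):

```shell
# Is the given date (YYYY-MM-DD) the 2nd Sunday of its month?
# Mirrors the crontab logic: day-of-month in 8..14 and weekday 0 (Sunday).
is_second_sunday() {
    dow=$(date -u -d "$1" +%w)   # 0 = Sunday
    dom=$(date -u -d "$1" +%d)   # day of month, zero-padded
    [ "$dow" -eq 0 ] || return 1
    [ "${dom#0}" -ge 8 ] || return 1
    [ "${dom#0}" -le 14 ]
}
is_second_sunday 2024-06-09   # 2nd Sunday of June 2024: succeeds
```

The first-Sunday variant used for trim below is identical except that the range is 1-7.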
&lt;br /&gt;
Finally, make sure cron is launched:&lt;br /&gt;
{{cmd|# rc-update}}&lt;br /&gt;
&lt;br /&gt;
There should be a line saying&lt;br /&gt;
&amp;lt;pre&amp;gt;crond |      default&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If not, add it to the boot sequence&lt;br /&gt;
{{cmd|# rc-update add crond}}&lt;br /&gt;
&lt;br /&gt;
then start the crond daemon&lt;br /&gt;
{{cmd|# rc-service crond start}}&lt;br /&gt;
&lt;br /&gt;
= Trim =&lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
&lt;br /&gt;
Trimming initiates an immediate on-demand TRIM operation for all of the free space in a pool. This informs the underlying storage devices of all blocks in the pool which are no longer allocated, allowing thinly provisioned devices to reclaim the space.&amp;lt;br&amp;gt;&lt;br /&gt;
A manual on-demand TRIM operation can be initiated irrespective of the autotrim pool property setting. See the zpoolprops documentation for the autotrim property for the types of vdev devices which can be trimmed.&lt;br /&gt;
&lt;br /&gt;
== Creating the script ==&lt;br /&gt;
&lt;br /&gt;
This script lists the pools, checks that they are online and built only from NVMe SSD drives, and starts a trim on each one unless a trim is already in progress.&amp;lt;br&amp;gt;&lt;br /&gt;
We will write it in {{path|/usr/libexec/zfs/trim}}.&amp;lt;br&amp;gt;&lt;br /&gt;
It is adapted from the Debian ZFS scripts.&lt;br /&gt;
&lt;br /&gt;
{{cat|/usr/libexec/zfs/trim|&amp;lt;nowiki&amp;gt;#!/bin/sh -eu&lt;br /&gt;
&lt;br /&gt;
# directly exit successfully when zfs module is not loaded&lt;br /&gt;
if ! [ -d /sys/module/zfs ]; then&lt;br /&gt;
        exit 0&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# [auto] / enable / disable&lt;br /&gt;
PROPERTY_NAME=&amp;quot;org.alpine:periodic-trim&amp;quot;&lt;br /&gt;
&lt;br /&gt;
get_property () {&lt;br /&gt;
        # Detect the ${PROPERTY_NAME} property on a given pool.&lt;br /&gt;
        # We are abusing user-defined properties on the root dataset,&lt;br /&gt;
        # since they&#039;re not available on pools https://github.com/openzfs/zfs/pull/11680&lt;br /&gt;
        # TODO: use zpool user-defined property when such feature is available.&lt;br /&gt;
        pool=&amp;quot;$1&amp;quot;&lt;br /&gt;
        zfs get -H -o value &amp;quot;${PROPERTY_NAME}&amp;quot; &amp;quot;${pool}&amp;quot; 2&amp;gt;/dev/null || return 1&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
trim_if_not_already_trimming () {&lt;br /&gt;
        pool=&amp;quot;$1&amp;quot;&lt;br /&gt;
        if ! zpool status &amp;quot;${pool}&amp;quot; | grep -q &amp;quot;trimming&amp;quot;; then&lt;br /&gt;
                # Ignore errors (i.e. HDD pools),&lt;br /&gt;
                # and continue with trimming other pools.&lt;br /&gt;
                zpool trim &amp;quot;${pool}&amp;quot; || true&lt;br /&gt;
        fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
zpool_is_nvme_only () {&lt;br /&gt;
        zpool=$1&lt;br /&gt;
        # get a list of devices attached to the specified zpool&lt;br /&gt;
        zpool list -vHPL &amp;quot;${zpool}&amp;quot; |&lt;br /&gt;
                awk -F&#039;\t&#039; &#039;$2 ~ /^\/dev\// {&lt;br /&gt;
                        if($2 !~ /^\/dev\/nvme/)&lt;br /&gt;
                                exit 1&lt;br /&gt;
                }&#039;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# TRIM all healthy pools that are not already trimming as per their configs.&lt;br /&gt;
zpool list -H -o health,name 2&amp;gt;&amp;amp;1 | \&lt;br /&gt;
        awk -F&#039;\t&#039; &#039;$1 == &amp;quot;ONLINE&amp;quot; {print $2}&#039; | \&lt;br /&gt;
while read pool&lt;br /&gt;
do&lt;br /&gt;
        # read user-defined config&lt;br /&gt;
        ret=$(get_property &amp;quot;${pool}&amp;quot;)&lt;br /&gt;
        if [ $? -ne 0 ] || [ &amp;quot;disable&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                :&lt;br /&gt;
        elif [ &amp;quot;enable&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                trim_if_not_already_trimming &amp;quot;${pool}&amp;quot;&lt;br /&gt;
        elif [ &amp;quot;-&amp;quot; = &amp;quot;${ret}&amp;quot; ] || [ &amp;quot;auto&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                if zpool_is_nvme_only &amp;quot;${pool}&amp;quot;; then&lt;br /&gt;
                        trim_if_not_already_trimming &amp;quot;${pool}&amp;quot;&lt;br /&gt;
                fi&lt;br /&gt;
        else&lt;br /&gt;
                cat &amp;gt; /dev/stderr &amp;lt;&amp;lt;EOF&lt;br /&gt;
$0: [WARNING] illegal value &amp;quot;${ret}&amp;quot; for property &amp;quot;${PROPERTY_NAME}&amp;quot; of ZFS dataset &amp;quot;${pool}&amp;quot;.&lt;br /&gt;
$0: Acceptable choices for this property are: auto, enable, disable. The default is auto.&lt;br /&gt;
EOF&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
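The zpool_is_nvme_only check is just an awk filter over the device column of zpool list -vHPL output; it can be exercised with fabricated lines (the device paths below are examples, not real pools):

```shell
# Same awk test as zpool_is_nvme_only: exit 0 only if every /dev/ path
# in column 2 is an NVMe device; any other device makes it exit 1.
nvme_only() {
    awk -F'\t' '$2 ~ /^\/dev\// { if ($2 !~ /^\/dev\/nvme/) exit 1 }'
}
printf 'tank\t-\n\t/dev/nvme0n1p3\t-\n' | nvme_only || echo "not NVMe-only"
printf 'tank\t-\n\t/dev/sda1\t-\n' | nvme_only || echo "not NVMe-only"   # prints "not NVMe-only"
```

Rows whose second column is not a /dev/ path (pool and vdev header lines) are ignored, so only leaf devices decide the result.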
&lt;br /&gt;
Then make the script executable&lt;br /&gt;
{{cmd|# chmod +x /usr/libexec/zfs/trim}}&lt;br /&gt;
&lt;br /&gt;
== Launching the trim script with cron ==&lt;br /&gt;
&lt;br /&gt;
Here, the trim will run once a month, on the first Sunday of the month.&lt;br /&gt;
&lt;br /&gt;
As root, edit your crontab:&lt;br /&gt;
{{cmd|# crontab -e}}&lt;br /&gt;
&lt;br /&gt;
and add these 2 lines&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# zfs trim the first sunday of every month&lt;br /&gt;
24      0       1-7    *       *       if [ $(date +\%w) -eq 0 ] &amp;amp;&amp;amp; [ -x /usr/libexec/zfs/trim ]; then /usr/libexec/zfs/trim; fi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, make sure cron is launched:&lt;br /&gt;
{{cmd|# rc-update}}&lt;br /&gt;
&lt;br /&gt;
There should be a line saying&lt;br /&gt;
&amp;lt;pre&amp;gt;crond |      default&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If not, add it to the boot sequence&lt;br /&gt;
{{cmd|# rc-update add crond}}&lt;br /&gt;
&lt;br /&gt;
then start the crond daemon&lt;br /&gt;
{{cmd|# rc-service crond start}}&lt;br /&gt;
&lt;br /&gt;
[[Category:File systems]]&lt;/div&gt;</summary>
		<author><name>Cyclisme24</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=ZFS&amp;diff=25381</id>
		<title>ZFS</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=ZFS&amp;diff=25381"/>
		<updated>2023-10-24T15:48:30Z</updated>

		<summary type="html">&lt;p&gt;Cyclisme24: /* Creating the script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
On Alpine Linux, unlike some other Linux distributions, no cron job or script is provided to scrub (and optionally trim) your pool(s) on a regular basis.&amp;lt;br&amp;gt;&lt;br /&gt;
Setting it up is easy and can be done in a few minutes.&lt;br /&gt;
&lt;br /&gt;
= Scrub =&lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
&lt;br /&gt;
The scrub examines all data in the specified pools to verify that it checksums correctly. For replicated (mirror, raidz, or draid) devices, ZFS automatically repairs any damage discovered during the scrub.&amp;lt;br&amp;gt;&lt;br /&gt;
When scrubbing a pool with encrypted filesystems the keys do not need to be loaded. However, if the keys are not loaded and an unrepairable checksum error is detected the file name cannot be included in the zpool status -v verbose error report.&amp;lt;br&amp;gt;&lt;br /&gt;
A scrub is split into two parts: metadata scanning and block scrubbing. The metadata scanning sorts blocks into large sequential ranges which can then be read much more efficiently from disk when issuing the scrub I/O.&lt;br /&gt;
&lt;br /&gt;
Also see [https://blogs.oracle.com/oracle-systems/post/disk-scrub-why-and-when Oracle - Disk Scrub - Why and When?]&lt;br /&gt;
&lt;br /&gt;
== Creating the script ==&lt;br /&gt;
&lt;br /&gt;
This script lists the pools, checks that they are online, and starts a scrub on each one unless a scrub is already in progress.&amp;lt;br&amp;gt;&lt;br /&gt;
We will write it in {{path|/usr/libexec/zfs/scrub}}.&amp;lt;br&amp;gt;&lt;br /&gt;
It is adapted from the Debian ZFS scripts.&lt;br /&gt;
&lt;br /&gt;
{{cat|/usr/libexec/zfs/scrub|&amp;lt;nowiki&amp;gt;#!/bin/sh -eu&lt;br /&gt;
&lt;br /&gt;
# directly exit successfully when zfs module is not loaded&lt;br /&gt;
if ! [ -d /sys/module/zfs ]; then&lt;br /&gt;
        exit 0&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# [auto] / enable / disable&lt;br /&gt;
PROPERTY_NAME=&amp;quot;org.alpine:periodic-scrub&amp;quot;&lt;br /&gt;
&lt;br /&gt;
get_property () {&lt;br /&gt;
        # Detect the ${PROPERTY_NAME} property on a given pool.&lt;br /&gt;
        # We are abusing user-defined properties on the root dataset,&lt;br /&gt;
        # since they&#039;re not available on pools https://github.com/openzfs/zfs/pull/11680&lt;br /&gt;
        # TODO: use zpool user-defined property when such feature is available.&lt;br /&gt;
        pool=&amp;quot;$1&amp;quot;&lt;br /&gt;
        zfs get -H -o value &amp;quot;${PROPERTY_NAME}&amp;quot; &amp;quot;${pool}&amp;quot; 2&amp;gt;/dev/null || return 1&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
scrub_if_not_scrub_in_progress () {&lt;br /&gt;
        pool=&amp;quot;$1&amp;quot;&lt;br /&gt;
        if ! zpool status &amp;quot;${pool}&amp;quot; | grep -q &amp;quot;scrub in progress&amp;quot;; then&lt;br /&gt;
                # Ignore errors and continue with scrubbing other pools.&lt;br /&gt;
                zpool scrub &amp;quot;${pool}&amp;quot; || true&lt;br /&gt;
        fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Scrub all healthy pools that are not already scrubbing as per their configs.&lt;br /&gt;
zpool list -H -o health,name 2&amp;gt;&amp;amp;1 | \&lt;br /&gt;
        awk -F&#039;\t&#039; &#039;$1 == &amp;quot;ONLINE&amp;quot; {print $2}&#039; | \&lt;br /&gt;
while read pool&lt;br /&gt;
do&lt;br /&gt;
        # read user-defined config&lt;br /&gt;
        ret=$(get_property &amp;quot;${pool}&amp;quot;)&lt;br /&gt;
        if [ $? -ne 0 ] || [ &amp;quot;disable&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                :&lt;br /&gt;
        elif [ &amp;quot;-&amp;quot; = &amp;quot;${ret}&amp;quot; ] || [ &amp;quot;auto&amp;quot; = &amp;quot;${ret}&amp;quot; ] || [ &amp;quot;enable&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                scrub_if_not_scrub_in_progress &amp;quot;${pool}&amp;quot;&lt;br /&gt;
        else&lt;br /&gt;
                cat &amp;gt; /dev/stderr &amp;lt;&amp;lt;EOF&lt;br /&gt;
$0: [WARNING] illegal value &amp;quot;${ret}&amp;quot; for property &amp;quot;${PROPERTY_NAME}&amp;quot; of ZFS dataset &amp;quot;${pool}&amp;quot;.&lt;br /&gt;
$0: Acceptable choices for this property are: auto, enable, disable. The default is auto.&lt;br /&gt;
EOF&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Then make the script executable&lt;br /&gt;
{{cmd|# chmod +x /usr/libexec/zfs/scrub}}&lt;br /&gt;
&lt;br /&gt;
== Launching the scrub script with cron ==&lt;br /&gt;
&lt;br /&gt;
It is recommended to run a scrub regularly to ensure your pool(s) and data are in good shape.&amp;lt;br&amp;gt;&lt;br /&gt;
Here, the scrub will run once a month, on the second Sunday of the month.&lt;br /&gt;
&lt;br /&gt;
As root, edit your crontab:&lt;br /&gt;
{{cmd|# crontab -e}}&lt;br /&gt;
&lt;br /&gt;
and add these 2 lines&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# zfs scrub the second sunday of every month&lt;br /&gt;
24      0       8-14    *       *       if [ $(date +\%w) -eq 0 ] &amp;amp;&amp;amp; [ -x /usr/libexec/zfs/scrub ]; then /usr/libexec/zfs/scrub; fi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, make sure cron is launched:&lt;br /&gt;
{{cmd|# rc-update}}&lt;br /&gt;
&lt;br /&gt;
There should be a line saying&lt;br /&gt;
&amp;lt;pre&amp;gt;crond |      default&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If not, add it to the boot sequence&lt;br /&gt;
{{cmd|# rc-update add crond}}&lt;br /&gt;
&lt;br /&gt;
then start the crond daemon&lt;br /&gt;
{{cmd|# rc-service crond start}}&lt;br /&gt;
&lt;br /&gt;
= Trim =&lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
&lt;br /&gt;
Trimming initiates an immediate on-demand TRIM operation for all of the free space in a pool. This informs the underlying storage devices of all blocks in the pool which are no longer allocated, allowing thinly provisioned devices to reclaim the space.&amp;lt;br&amp;gt;&lt;br /&gt;
A manual on-demand TRIM operation can be initiated irrespective of the autotrim pool property setting. See the zpoolprops documentation for the autotrim property for the types of vdev devices which can be trimmed.&lt;br /&gt;
&lt;br /&gt;
== Creating the script ==&lt;br /&gt;
&lt;br /&gt;
This script lists the pools, checks that they are online and built only from NVMe SSD drives, and starts a trim on each one unless a trim is already in progress.&amp;lt;br&amp;gt;&lt;br /&gt;
We will write it in {{path|/usr/libexec/zfs/trim}}.&amp;lt;br&amp;gt;&lt;br /&gt;
It is adapted from the Debian ZFS scripts.&lt;br /&gt;
&lt;br /&gt;
{{cat|/usr/libexec/zfs/trim|&amp;lt;nowiki&amp;gt;#!/bin/sh -eu&lt;br /&gt;
&lt;br /&gt;
# directly exit successfully when zfs module is not loaded&lt;br /&gt;
if ! [ -d /sys/module/zfs ]; then&lt;br /&gt;
        exit 0&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# [auto] / enable / disable&lt;br /&gt;
PROPERTY_NAME=&amp;quot;org.alpine:periodic-trim&amp;quot;&lt;br /&gt;
&lt;br /&gt;
get_property () {&lt;br /&gt;
        # Detect the ${PROPERTY_NAME} property on a given pool.&lt;br /&gt;
        # We are abusing user-defined properties on the root dataset,&lt;br /&gt;
        # since they&#039;re not available on pools https://github.com/openzfs/zfs/pull/11680&lt;br /&gt;
        # TODO: use zpool user-defined property when such feature is available.&lt;br /&gt;
        pool=&amp;quot;$1&amp;quot;&lt;br /&gt;
        zfs get -H -o value &amp;quot;${PROPERTY_NAME}&amp;quot; &amp;quot;${pool}&amp;quot; 2&amp;gt;/dev/null || return 1&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
trim_if_not_already_trimming () {&lt;br /&gt;
        pool=&amp;quot;$1&amp;quot;&lt;br /&gt;
        if ! zpool status &amp;quot;${pool}&amp;quot; | grep -q &amp;quot;trimming&amp;quot;; then&lt;br /&gt;
                # Ignore errors (i.e. HDD pools),&lt;br /&gt;
                # and continue with trimming other pools.&lt;br /&gt;
                zpool trim &amp;quot;${pool}&amp;quot; || true&lt;br /&gt;
        fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
zpool_is_nvme_only () {&lt;br /&gt;
        zpool=$1&lt;br /&gt;
        # get a list of devices attached to the specified zpool&lt;br /&gt;
        zpool list -vHPL &amp;quot;${zpool}&amp;quot; |&lt;br /&gt;
                awk -F&#039;\t&#039; &#039;$2 ~ /^\/dev\// {&lt;br /&gt;
                        if($2 !~ /^\/dev\/nvme/)&lt;br /&gt;
                                exit 1&lt;br /&gt;
                }&#039;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# TRIM all healthy pools that are not already trimming as per their configs.&lt;br /&gt;
zpool list -H -o health,name 2&amp;gt;&amp;amp;1 | \&lt;br /&gt;
        awk -F&#039;\t&#039; &#039;$1 == &amp;quot;ONLINE&amp;quot; {print $2}&#039; | \&lt;br /&gt;
while read pool&lt;br /&gt;
do&lt;br /&gt;
        # read user-defined config&lt;br /&gt;
        ret=$(get_property &amp;quot;${pool}&amp;quot;)&lt;br /&gt;
        if [ $? -ne 0 ] || [ &amp;quot;disable&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                :&lt;br /&gt;
        elif [ &amp;quot;enable&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                trim_if_not_already_trimming &amp;quot;${pool}&amp;quot;&lt;br /&gt;
        elif [ &amp;quot;-&amp;quot; = &amp;quot;${ret}&amp;quot; ] || [ &amp;quot;auto&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                if zpool_is_nvme_only &amp;quot;${pool}&amp;quot;; then&lt;br /&gt;
                        trim_if_not_already_trimming &amp;quot;${pool}&amp;quot;&lt;br /&gt;
                fi&lt;br /&gt;
        else&lt;br /&gt;
                cat &amp;gt; /dev/stderr &amp;lt;&amp;lt;EOF&lt;br /&gt;
$0: [WARNING] illegal value &amp;quot;${ret}&amp;quot; for property &amp;quot;${PROPERTY_NAME}&amp;quot; of ZFS dataset &amp;quot;${pool}&amp;quot;.&lt;br /&gt;
$0: Acceptable choices for this property are: auto, enable, disable. The default is auto.&lt;br /&gt;
EOF&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Then make the script executable&lt;br /&gt;
{{cmd|# chmod +x /usr/libexec/zfs/trim}}&lt;br /&gt;
&lt;br /&gt;
== Launching the trim script with cron ==&lt;br /&gt;
&lt;br /&gt;
Here, the trim will run once a month, on the first Sunday of the month.&lt;br /&gt;
&lt;br /&gt;
As root, edit your crontab:&lt;br /&gt;
{{cmd|# crontab -e}}&lt;br /&gt;
&lt;br /&gt;
and add these 2 lines&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# zfs trim the first sunday of every month&lt;br /&gt;
24      0       1-7    *       *       if [ $(date +\%w) -eq 0 ] &amp;amp;&amp;amp; [ -x /usr/libexec/zfs/trim ]; then /usr/libexec/zfs/trim; fi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, make sure cron is launched:&lt;br /&gt;
{{cmd|# rc-update}}&lt;br /&gt;
&lt;br /&gt;
There should be a line saying&lt;br /&gt;
&amp;lt;pre&amp;gt;crond |      default&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If not, add it to the boot sequence&lt;br /&gt;
{{cmd|# rc-update add crond}}&lt;br /&gt;
&lt;br /&gt;
then start the crond daemon&lt;br /&gt;
{{cmd|# rc-service crond start}}&lt;br /&gt;
&lt;br /&gt;
[[Category:File systems]]&lt;/div&gt;</summary>
		<author><name>Cyclisme24</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=ZFS&amp;diff=25380</id>
		<title>ZFS</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=ZFS&amp;diff=25380"/>
		<updated>2023-10-24T15:47:21Z</updated>

		<summary type="html">&lt;p&gt;Cyclisme24: /* Creating the script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
On Alpine Linux, unlike some other Linux distributions, no cron job or script is provided to scrub (and optionally trim) your pool(s) on a regular basis.&amp;lt;br&amp;gt;&lt;br /&gt;
Setting it up is easy and can be done in a few minutes.&lt;br /&gt;
&lt;br /&gt;
= Scrub =&lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
&lt;br /&gt;
The scrub examines all data in the specified pools to verify that it checksums correctly. For replicated (mirror, raidz, or draid) devices, ZFS automatically repairs any damage discovered during the scrub.&amp;lt;br&amp;gt;&lt;br /&gt;
When scrubbing a pool with encrypted filesystems the keys do not need to be loaded. However, if the keys are not loaded and an unrepairable checksum error is detected the file name cannot be included in the zpool status -v verbose error report.&amp;lt;br&amp;gt;&lt;br /&gt;
A scrub is split into two parts: metadata scanning and block scrubbing. The metadata scanning sorts blocks into large sequential ranges which can then be read much more efficiently from disk when issuing the scrub I/O.&lt;br /&gt;
&lt;br /&gt;
Also see [https://blogs.oracle.com/oracle-systems/post/disk-scrub-why-and-when Oracle - Disk Scrub - Why and When?]&lt;br /&gt;
&lt;br /&gt;
== Creating the script ==&lt;br /&gt;
&lt;br /&gt;
This script lists the pools, checks that they are online, and starts a scrub on each one unless a scrub is already in progress.&amp;lt;br&amp;gt;&lt;br /&gt;
We will write it in {{path|/usr/libexec/zfs/scrub}}.&amp;lt;br&amp;gt;&lt;br /&gt;
It is adapted from the Debian ZFS scripts.&lt;br /&gt;
&lt;br /&gt;
{{cat|/usr/libexec/zfs/scrub|&amp;lt;nowiki&amp;gt;#!/bin/sh -eu&lt;br /&gt;
&lt;br /&gt;
# directly exit successfully when zfs module is not loaded&lt;br /&gt;
if ! [ -d /sys/module/zfs ]; then&lt;br /&gt;
        exit 0&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# [auto] / enable / disable&lt;br /&gt;
PROPERTY_NAME=&amp;quot;org.alpine:periodic-scrub&amp;quot;&lt;br /&gt;
&lt;br /&gt;
get_property () {&lt;br /&gt;
        # Detect the ${PROPERTY_NAME} property on a given pool.&lt;br /&gt;
        # We are abusing user-defined properties on the root dataset,&lt;br /&gt;
        # since they&#039;re not available on pools https://github.com/openzfs/zfs/pull/11680&lt;br /&gt;
        # TODO: use zpool user-defined property when such feature is available.&lt;br /&gt;
        pool=&amp;quot;$1&amp;quot;&lt;br /&gt;
        zfs get -H -o value &amp;quot;${PROPERTY_NAME}&amp;quot; &amp;quot;${pool}&amp;quot; 2&amp;gt;/dev/null || return 1&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
scrub_if_not_scrub_in_progress () {&lt;br /&gt;
        pool=&amp;quot;$1&amp;quot;&lt;br /&gt;
        if ! zpool status &amp;quot;${pool}&amp;quot; | grep -q &amp;quot;scrub in progress&amp;quot;; then&lt;br /&gt;
                # Ignore errors and continue with scrubbing other pools.&lt;br /&gt;
                zpool scrub &amp;quot;${pool}&amp;quot; || true&lt;br /&gt;
        fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Scrub all healthy pools that are not already scrubbing as per their configs.&lt;br /&gt;
zpool list -H -o health,name 2&amp;gt;&amp;amp;1 | \&lt;br /&gt;
        awk -F&#039;\t&#039; &#039;$1 == &amp;quot;ONLINE&amp;quot; {print $2}&#039; | \&lt;br /&gt;
while read pool&lt;br /&gt;
do&lt;br /&gt;
        # read user-defined config&lt;br /&gt;
        ret=$(get_property &amp;quot;${pool}&amp;quot;)&lt;br /&gt;
        if [ $? -ne 0 ] || [ &amp;quot;disable&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                :&lt;br /&gt;
        elif [ &amp;quot;-&amp;quot; = &amp;quot;${ret}&amp;quot; ] || [ &amp;quot;auto&amp;quot; = &amp;quot;${ret}&amp;quot; ] || [ &amp;quot;enable&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                scrub_if_not_scrub_in_progress &amp;quot;${pool}&amp;quot;&lt;br /&gt;
        else&lt;br /&gt;
                cat &amp;gt; /dev/stderr &amp;lt;&amp;lt;EOF&lt;br /&gt;
$0: [WARNING] illegal value &amp;quot;${ret}&amp;quot; for property &amp;quot;${PROPERTY_NAME}&amp;quot; of ZFS dataset &amp;quot;${pool}&amp;quot;.&lt;br /&gt;
$0: Acceptable choices for this property are: auto, enable, disable. The default is auto.&lt;br /&gt;
EOF&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Then make the script executable&lt;br /&gt;
{{cmd|# chmod +x /usr/libexec/zfs/scrub}}&lt;br /&gt;
&lt;br /&gt;
== Launching the scrub script with cron ==&lt;br /&gt;
&lt;br /&gt;
It is recommended to run a scrub regularly to ensure that your pool(s) and data stay in good shape.&amp;lt;br&amp;gt;&lt;br /&gt;
Here, the scrub will run once a month, on the second Sunday of the month.&lt;br /&gt;
&lt;br /&gt;
As root, edit your crontab&lt;br /&gt;
{{cmd|# crontab -e}}&lt;br /&gt;
&lt;br /&gt;
and add these two lines&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# zfs scrub the second sunday of every month&lt;br /&gt;
24      0       8-14    *       *       if [ $(date +\%w) -eq 0 ] &amp;amp;&amp;amp; [ -x /usr/libexec/zfs/scrub ]; then /usr/libexec/zfs/scrub; fi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
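Why this selects the second Sunday: cron has no &amp;quot;n-th weekday of the month&amp;quot; syntax, so the entry fires on every day from 8 to 14 and the &amp;quot;date +\%w&amp;quot; test lets only the Sunday through. A minimal sketch of the arithmetic behind this trick (plain POSIX shell, independent of cron):&lt;br /&gt;

```shell
#!/bin/sh
# The first Sunday of any month falls on day 1-7 of that month.
# The second Sunday is therefore first + 7, which always lands in 8-14,
# so the 8-14 day-of-month window contains exactly one Sunday: the second.
ok=1
for first in 1 2 3 4 5 6 7; do
        second=$((first + 7))
        if [ "$second" -lt 8 ] || [ "$second" -gt 14 ]; then
                ok=0
        fi
done
[ "$ok" -eq 1 ] && echo "second Sunday always falls on day 8-14"
```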
&lt;br /&gt;
Finally, check that crond is enabled:&lt;br /&gt;
{{cmd|# rc-update}}&lt;br /&gt;
&lt;br /&gt;
There should be a line saying&lt;br /&gt;
&amp;lt;pre&amp;gt;crond |      default&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If not, add it to the default runlevel&lt;br /&gt;
{{cmd|# rc-update add crond}}&lt;br /&gt;
&lt;br /&gt;
Then start the crond daemon&lt;br /&gt;
{{cmd|# rc-service crond start}}&lt;br /&gt;
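&lt;br /&gt;
You can check the progress of a running scrub, or the result of the last one, at any time (&amp;quot;tank&amp;quot; below is a placeholder, replace it with your pool name):&lt;br /&gt;
{{cmd|# zpool status tank}}&lt;br /&gt;
The &amp;quot;scan:&amp;quot; line of the output reports the scrub state.&lt;br /&gt;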
&lt;br /&gt;
= Trim =&lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
&lt;br /&gt;
A trim initiates an immediate on-demand TRIM operation for all of the free space in a pool. This operation informs the underlying storage devices of all blocks in the pool which are no longer allocated and allows thinly provisioned devices to reclaim the space.&amp;lt;br&amp;gt;&lt;br /&gt;
A manual on-demand TRIM operation can be initiated irrespective of the autotrim pool property setting. See the documentation for the autotrim pool property (in the zpoolprops man page) for the types of vdev devices which can be trimmed.&lt;br /&gt;
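&lt;br /&gt;
A manual trim can be started, and its progress checked, with the following commands (&amp;quot;tank&amp;quot; is a placeholder pool name):&lt;br /&gt;
{{cmd|# zpool trim tank}}&lt;br /&gt;
{{cmd|# zpool status -t tank}}&lt;br /&gt;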
&lt;br /&gt;
== Creating the script ==&lt;br /&gt;
&lt;br /&gt;
This script lists the pools and, for each one, checks that it is online, that it is built only from NVMe SSD drives, and that no trim is already in progress.&amp;lt;br&amp;gt;&lt;br /&gt;
We will write it to {{path|/usr/libexec/zfs/trim}}.&amp;lt;br&amp;gt;&lt;br /&gt;
It is adapted from the Debian ZFS scripts.&lt;br /&gt;
&lt;br /&gt;
{{cat|/usr/libexec/zfs/trim|#!/bin/sh -eu&lt;br /&gt;
&lt;br /&gt;
# directly exit successfully when zfs module is not loaded&lt;br /&gt;
if ! [ -d /sys/module/zfs ]; then&lt;br /&gt;
        exit 0&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# [auto] / enable / disable&lt;br /&gt;
PROPERTY_NAME=&amp;quot;org.alpine:periodic-trim&amp;quot;&lt;br /&gt;
&lt;br /&gt;
get_property () {&lt;br /&gt;
        # Detect the ${PROPERTY_NAME} property on a given pool.&lt;br /&gt;
        # We are abusing user-defined properties on the root dataset,&lt;br /&gt;
        # since they&#039;re not available on pools https://github.com/openzfs/zfs/pull/11680&lt;br /&gt;
        # TODO: use zpool user-defined property when such feature is available.&lt;br /&gt;
        pool=&amp;quot;$1&amp;quot;&lt;br /&gt;
        zfs get -H -o value &amp;quot;${PROPERTY_NAME}&amp;quot; &amp;quot;${pool}&amp;quot; 2&amp;gt;/dev/null || return 1&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
trim_if_not_already_trimming () {&lt;br /&gt;
        pool=&amp;quot;$1&amp;quot;&lt;br /&gt;
        if ! zpool status &amp;quot;${pool}&amp;quot; | grep -q &amp;quot;trimming&amp;quot;; then&lt;br /&gt;
                # Ignore errors (i.e. HDD pools),&lt;br /&gt;
                # and continue with trimming other pools.&lt;br /&gt;
                zpool trim &amp;quot;${pool}&amp;quot; || true&lt;br /&gt;
        fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
zpool_is_nvme_only () {&lt;br /&gt;
        zpool=$1&lt;br /&gt;
        # get a list of devices attached to the specified zpool&lt;br /&gt;
        zpool list -vHPL &amp;quot;${zpool}&amp;quot; |&lt;br /&gt;
                awk -F&#039;\t&#039; &#039;$2 ~ /^\/dev\// {&lt;br /&gt;
                        if($2 !~ /^\/dev\/nvme/)&lt;br /&gt;
                                exit 1&lt;br /&gt;
                }&#039;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# TRIM all healthy pools that are not already trimming as per their configs.&lt;br /&gt;
zpool list -H -o health,name 2&amp;gt;&amp;amp;1 | \&lt;br /&gt;
        awk -F&#039;\t&#039; &#039;$1 == &amp;quot;ONLINE&amp;quot; {print $2}&#039; | \&lt;br /&gt;
while read pool&lt;br /&gt;
do&lt;br /&gt;
        # read user-defined config&lt;br /&gt;
        ret=$(get_property &amp;quot;${pool}&amp;quot;)&lt;br /&gt;
        if [ $? -ne 0 ] || [ &amp;quot;disable&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                :&lt;br /&gt;
        elif [ &amp;quot;enable&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                trim_if_not_already_trimming &amp;quot;${pool}&amp;quot;&lt;br /&gt;
        elif [ &amp;quot;-&amp;quot; = &amp;quot;${ret}&amp;quot; ] || [ &amp;quot;auto&amp;quot; = &amp;quot;${ret}&amp;quot; ]; then&lt;br /&gt;
                if zpool_is_nvme_only &amp;quot;${pool}&amp;quot;; then&lt;br /&gt;
                        trim_if_not_already_trimming &amp;quot;${pool}&amp;quot;&lt;br /&gt;
                fi&lt;br /&gt;
        else&lt;br /&gt;
                cat &amp;gt; /dev/stderr &amp;lt;&amp;lt;EOF&lt;br /&gt;
$0: [WARNING] illegal value &amp;quot;${ret}&amp;quot; for property &amp;quot;${PROPERTY_NAME}&amp;quot; of ZFS dataset &amp;quot;${pool}&amp;quot;.&lt;br /&gt;
$0: Acceptable choices for this property are: auto, enable, disable. The default is auto.&lt;br /&gt;
EOF&lt;br /&gt;
        fi&lt;br /&gt;
done&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Then make the script executable&lt;br /&gt;
{{cmd|# chmod +x /usr/libexec/zfs/trim}}&lt;br /&gt;
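&lt;br /&gt;
The script reads its per-pool configuration from the {{path|org.alpine:periodic-trim}} user property on the pool&#039;s root dataset. For example, to opt a pool out of periodic trimming, or to check the current setting (&amp;quot;tank&amp;quot; is a placeholder pool name):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# zfs set org.alpine:periodic-trim=disable tank&lt;br /&gt;
# zfs get org.alpine:periodic-trim tank&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;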
&lt;br /&gt;
== Launching the trim script with cron ==&lt;br /&gt;
&lt;br /&gt;
Here, the trim will run once a month, on the first Sunday of the month.&lt;br /&gt;
&lt;br /&gt;
As root, edit your crontab&lt;br /&gt;
{{cmd|# crontab -e}}&lt;br /&gt;
&lt;br /&gt;
and add these two lines&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# zfs trim the first sunday of every month&lt;br /&gt;
24      0       1-7    *       *       if [ $(date +\%w) -eq 0 ] &amp;amp;&amp;amp; [ -x /usr/libexec/zfs/trim ]; then /usr/libexec/zfs/trim; fi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, check that crond is enabled:&lt;br /&gt;
{{cmd|# rc-update}}&lt;br /&gt;
&lt;br /&gt;
There should be a line saying&lt;br /&gt;
&amp;lt;pre&amp;gt;crond |      default&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If not, add it to the default runlevel&lt;br /&gt;
{{cmd|# rc-update add crond}}&lt;br /&gt;
&lt;br /&gt;
Then start the crond daemon&lt;br /&gt;
{{cmd|# rc-service crond start}}&lt;br /&gt;
&lt;br /&gt;
[[Category:File systems]]&lt;/div&gt;</summary>
		<author><name>Cyclisme24</name></author>
	</entry>
</feed>