<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.alpinelinux.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=R3</id>
	<title>Alpine Linux - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.alpinelinux.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=R3"/>
	<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/wiki/Special:Contributions/R3"/>
	<updated>2026-04-30T04:19:23Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.40.0</generator>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18521</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18521"/>
		<updated>2021-01-07T14:19:52Z</updated>

		<summary type="html">&lt;p&gt;R3: del&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Setting up Alpine Linux using ZFS with a pool that uses ZFS&#039; native encryption capabilities =&lt;br /&gt;
&lt;br /&gt;
== Download ==&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it is the only image that ships the ZFS kernel modules at the time of writing (2020-07-10).&lt;br /&gt;
&lt;br /&gt;
Write it to a USB drive and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Initial setup ==&lt;br /&gt;
&lt;br /&gt;
Run the following command:&lt;br /&gt;
&lt;br /&gt;
    setup-alpine&lt;br /&gt;
&lt;br /&gt;
Answer all the questions, and hit Ctrl-C when prompted for which disk you&#039;d like to use.&lt;br /&gt;
&lt;br /&gt;
== OPTIONAL ==&lt;br /&gt;
&lt;br /&gt;
This section is optional; it assumes internet connectivity. You may enable sshd so you can ssh into the box and copy and paste the rest of the commands from these instructions into your terminal window.&lt;br /&gt;
&lt;br /&gt;
Edit `/etc/ssh/sshd_config`, search for `Permit`, and change the value after `PermitRootLogin` to read `yes`.&lt;br /&gt;
&lt;br /&gt;
Save, exit to the shell, and run `service sshd restart`.&lt;br /&gt;
&lt;br /&gt;
Now you can ssh in as root. Do not forget to comment this line back out when you&#039;re done, since it will remain enabled on the resulting machine. You will be reminded again at the end of this doc.&lt;br /&gt;
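&lt;br /&gt;
If you prefer, the same edit can be done non-interactively (a sketch, assuming the stock config where the line is commented out):&lt;br /&gt;
&lt;br /&gt;
    sed -i &#039;s/^#*PermitRootLogin.*/PermitRootLogin yes/&#039; /etc/ssh/sshd_config&lt;br /&gt;
    service sshd restart&lt;br /&gt;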
&lt;br /&gt;
== Add needed packages ==&lt;br /&gt;
&lt;br /&gt;
    apk add zfs sfdisk e2fsprogs syslinux&lt;br /&gt;
&lt;br /&gt;
== Create our partitions ==&lt;br /&gt;
&lt;br /&gt;
We&#039;re assuming `/dev/sda` here and in the rest of the document, but use whatever device you need. To see a list of disks, type: `sfdisk -l`&lt;br /&gt;
&lt;br /&gt;
    echo -e &amp;quot;/dev/sda1: start=1M,size=100M,bootable\n/dev/sda2: start=101M&amp;quot; | sfdisk --quiet --label dos /dev/sda&lt;br /&gt;
&lt;br /&gt;
== Create device nodes ==&lt;br /&gt;
&lt;br /&gt;
    mdev -s&lt;br /&gt;
&lt;br /&gt;
== Create the /boot filesystem ==&lt;br /&gt;
&lt;br /&gt;
    mkfs.ext4 /dev/sda1&lt;br /&gt;
&lt;br /&gt;
== Create the root filesystem using zfs ==&lt;br /&gt;
&lt;br /&gt;
    modprobe zfs&lt;br /&gt;
    zpool create -f -o ashift=12 \&lt;br /&gt;
        -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
        -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
        -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
        -O mountpoint=/ -R /mnt \&lt;br /&gt;
        rpool /dev/sda2&lt;br /&gt;
&lt;br /&gt;
You will have to enter your passphrase at this point. Choose wisely, as your passphrase is most likely [https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions#5-security-aspects the weakest link in this setup].&lt;br /&gt;
&lt;br /&gt;
A few notes on the options supplied to zpool:&lt;br /&gt;
&lt;br /&gt;
- `ashift=12` is recommended here because many drives today have 4KiB (or larger) physical sectors, even though they present 512B logical sectors&lt;br /&gt;
&lt;br /&gt;
- `acltype=posixacl` enables POSIX ACLs globally&lt;br /&gt;
&lt;br /&gt;
- `normalization=formD` eliminates some corner cases relating to UTF-8 filename normalization. It also enables `utf8only=on`, meaning that only files with valid UTF-8 filenames will be accepted.&lt;br /&gt;
&lt;br /&gt;
- `xattr=sa` vastly improves the performance of extended attributes, but is Linux-only. If you care about using this pool on other OpenZFS implementations, don&#039;t specify this option.&lt;br /&gt;
&lt;br /&gt;
After completing this, confirm that the pool has been created:&lt;br /&gt;
&lt;br /&gt;
    # zpool status&lt;br /&gt;
&lt;br /&gt;
This should return something like:&lt;br /&gt;
&lt;br /&gt;
      pool: rpool&lt;br /&gt;
     state: ONLINE&lt;br /&gt;
      scan: none requested&lt;br /&gt;
    config:&lt;br /&gt;
&lt;br /&gt;
        NAME        STATE     READ WRITE CKSUM&lt;br /&gt;
        rpool       ONLINE       0     0     0&lt;br /&gt;
          sda2      ONLINE       0     0     0&lt;br /&gt;
&lt;br /&gt;
    errors: No known data errors&lt;br /&gt;
&lt;br /&gt;
== Create the required datasets and mount root ==&lt;br /&gt;
&lt;br /&gt;
    zfs create -o mountpoint=none -o canmount=off rpool/ROOT&lt;br /&gt;
    zfs create -o mountpoint=legacy rpool/ROOT/alpine&lt;br /&gt;
    mount -t zfs rpool/ROOT/alpine /mnt/&lt;br /&gt;
&lt;br /&gt;
== Mount the `/boot` filesystem ==&lt;br /&gt;
&lt;br /&gt;
    mkdir /mnt/boot/&lt;br /&gt;
    mount -t ext4 /dev/sda1 /mnt/boot/&lt;br /&gt;
&lt;br /&gt;
== Enable ZFS&#039; services ==&lt;br /&gt;
&lt;br /&gt;
    rc-update add zfs-import sysinit&lt;br /&gt;
    rc-update add zfs-mount sysinit&lt;br /&gt;
&lt;br /&gt;
== Install Alpine Linux ==&lt;br /&gt;
&lt;br /&gt;
    setup-disk /mnt&lt;br /&gt;
    dd if=/usr/share/syslinux/mbr.bin of=/dev/sda # write mbr so we can boot&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Reboot and enjoy! ==&lt;br /&gt;
&lt;br /&gt;
😉&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE:&#039;&#039;&#039;&lt;br /&gt;
If you went with the optional step, be sure to disable root login after you reboot.&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18520</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18520"/>
		<updated>2021-01-07T14:13:52Z</updated>

		<summary type="html">&lt;p&gt;R3: rm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt;, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit &amp;lt;code&amp;gt;-O keylocation -O keyformat&amp;lt;/code&amp;gt; when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Useful links =&lt;br /&gt;
&lt;br /&gt;
*[https://openzfs.github.io/openzfs-docs/Getting%20Started/ OpenZFS Getting Started]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
&lt;br /&gt;
UEFI is required. Supports single disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preparation =&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as only this version ships with the ZFS kernel module; the live environment cannot otherwise load it.&lt;br /&gt;
&lt;br /&gt;
Run the following command to set up the live environment; choose the default &amp;lt;code&amp;gt;none&amp;lt;/code&amp;gt; option when asked about disks.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;setup-alpine&amp;lt;/pre&amp;gt;&lt;br /&gt;
Settings given here will be copied to the target system later by &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
&lt;br /&gt;
Install and set up &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; (a port of systemd&#039;s &amp;lt;code&amp;gt;udev&amp;lt;/code&amp;gt; by Gentoo) to get persistent block device names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk update&lt;br /&gt;
apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
modprobe zfs&lt;br /&gt;
setup-udev&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Variables =&lt;br /&gt;
&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;DISK=/dev/disk/by-id/ata-HXY_120G_YS&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the unique disk path instead of &amp;lt;code&amp;gt;/dev/sda&amp;lt;/code&amp;gt; to ensure that ZFS can find the correct partition.&lt;br /&gt;
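&lt;br /&gt;
The symlinks under &amp;lt;code&amp;gt;/dev/disk/by-id/&amp;lt;/code&amp;gt; point at the kernel device names, so you can check which stable path belongs to which disk (this listing command is an addition, not part of the original guide):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ls -l /dev/disk/by-id/&amp;lt;/pre&amp;gt;&lt;br /&gt;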
&lt;br /&gt;
Other variables&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
TARGET_USERPWD=&#039;user account password&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Partitioning =&lt;br /&gt;
&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions: an EFI system partition, a boot pool partition and a root pool partition. Since GRUB only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk --zap-all $DISK&lt;br /&gt;
sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
sgdisk -n3:0:0 $DISK          # root pool&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands as above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
&lt;br /&gt;
Swap support on ZFS is problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded on power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
sgdisk -n4:0:0 $DISK          # swap partition&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Create boot and root pool =&lt;br /&gt;
&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no &amp;lt;code&amp;gt;feature@&amp;lt;/code&amp;gt; option is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB supports.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  -o ashift=12 -d \&lt;br /&gt;
  -o feature@async_destroy=enabled \&lt;br /&gt;
  -o feature@bookmarks=enabled \&lt;br /&gt;
  -o feature@embedded_data=enabled \&lt;br /&gt;
  -o feature@empty_bpobj=enabled \&lt;br /&gt;
  -o feature@enabled_txg=enabled \&lt;br /&gt;
  -o feature@extensible_dataset=enabled \&lt;br /&gt;
  -o feature@filesystem_limits=enabled \&lt;br /&gt;
  -o feature@hole_birth=enabled \&lt;br /&gt;
  -o feature@large_blocks=enabled \&lt;br /&gt;
  -o feature@lz4_compress=enabled \&lt;br /&gt;
  -o feature@spacemap_histogram=enabled \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
  -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
  bpool_$poolUUID $DISK-part2&amp;lt;/pre&amp;gt;&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence &amp;lt;code&amp;gt;canmount=off&amp;lt;/code&amp;gt;. The respective &amp;lt;code&amp;gt;mountpoint&amp;lt;/code&amp;gt; properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
  -o ashift=12 \&lt;br /&gt;
  -O encryption=aes-256-gcm \&lt;br /&gt;
  -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
  -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
  rpool_$poolUUID $DISK-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
== For multi-disk ==&lt;br /&gt;
&lt;br /&gt;
For mirror:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  bpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  rpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
For RAID-Z, replace &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;raidz&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;raidz2&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;raidz3&amp;lt;/code&amp;gt;.&lt;br /&gt;
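&lt;br /&gt;
For example, a three-disk raidz root pool might look like this (hypothetical disk names, other options elided as in the mirror example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  rpool_$poolUUID raidz \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk3-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;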
&lt;br /&gt;
= Create system datasets =&lt;br /&gt;
&lt;br /&gt;
This layout is intended to separate the root file system from persistent files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=legacy -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
mkdir $MOUNTPOINT/boot&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/default $MOUNTPOINT/boot&lt;br /&gt;
# ash, default with busybox, does not support array&lt;br /&gt;
# this is word splitting&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&amp;lt;/pre&amp;gt;&lt;br /&gt;
Depending on your application, separate datasets need to be created for folders inside &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; (not &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;lxc&amp;lt;/code&amp;gt; is for Linux containers, &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
&lt;br /&gt;
Here we use &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt; as the mountpoint, which is the default for GRUB.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi # need to specify file system&amp;lt;/pre&amp;gt;&lt;br /&gt;
= System installation =&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
GRUB will not find the correct path of the root device without &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH=1&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;export ZPOOL_VDEV_NAME_PATH=1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&amp;lt;/pre&amp;gt;&lt;br /&gt;
== setup-disk ==&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; to install system to target disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that grub-probe will still fail despite the &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH&amp;lt;/code&amp;gt; variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
== Chroot ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;m=&#039;dev proc sys&#039;&lt;br /&gt;
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Finish GRUB installation ===&lt;br /&gt;
&lt;br /&gt;
As the GRUB installation failed half-way in [[#setup-disk|setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply fix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
Reload the profile:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;source /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== GRUB fails to detect the ZFS filesystem of /boot with BusyBox stat ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add coreutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Missing root pool ====&lt;br /&gt;
&lt;br /&gt;
GRUB will fail to detect rpool if rpool has unsupported features; use the following workaround:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &amp;quot;s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E &#039;[[:blank:]]name&#039; \| cut -d\\\&#039; -f 2\`|&amp;quot;  /etc/grub.d/10_linux&amp;lt;/pre&amp;gt;&lt;br /&gt;
This replaces GRUB&#039;s rpool name detection.&lt;br /&gt;
&lt;br /&gt;
==== Generate grub.cfg ====&lt;br /&gt;
&lt;br /&gt;
After applying the fixes, finally run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;grub-mkconfig -o /boot/grub/grub.cfg&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Importing pools on boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;zpool.cache&amp;lt;/code&amp;gt; will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
&lt;br /&gt;
System will fail to boot without this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Initramfs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; included in stable Alpine Linux has bugs, see [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 1] and [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76 2].&lt;br /&gt;
&lt;br /&gt;
==== Add eudev hook and rebuild ====&lt;br /&gt;
&lt;br /&gt;
Add &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;features=&amp;quot;ata base eudev ide scsi usb virtio nvme zfs&amp;quot;&#039; &amp;gt; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
# order of features is important! this order is tested&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkinitfs $(ls -1 /lib/modules/)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mount datasets at boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;rc-update add zfs-mount sysinit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Add user ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;adduser -s /bin/sh -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&amp;lt;/pre&amp;gt;&lt;br /&gt;
The root account is accessed via the &amp;lt;code&amp;gt;su&amp;lt;/code&amp;gt; command with the root password.&lt;br /&gt;
&lt;br /&gt;
=== Boot environment manager ===&lt;br /&gt;
&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. It should be available in edge/testing soon.&lt;br /&gt;
&lt;br /&gt;
=== Optional: Enable encrypted swap partition ===&lt;br /&gt;
&lt;br /&gt;
Install &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add cryptsetup&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and insert the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature in front of &amp;lt;code&amp;gt;zfs&amp;lt;/code&amp;gt;. Add the relevant lines to &amp;lt;code&amp;gt;fstab&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;crypttab&amp;lt;/code&amp;gt;, replacing &amp;lt;code&amp;gt;$DISK&amp;lt;/code&amp;gt; with the actual disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo swap   $DISK-part4 /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
echo /dev/mapper/swap   none     swap    defaults    0   0 &amp;gt;&amp;gt; /etc/fstab&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt;.&lt;br /&gt;
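&lt;br /&gt;
The same command as before works here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkinitfs $(ls -1 /lib/modules/)&amp;lt;/pre&amp;gt;&lt;br /&gt;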
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;exit&lt;br /&gt;
zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
zfs snapshot -r bpool_$poolUUID/BOOT/default@install&amp;lt;/pre&amp;gt;&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
 xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;reboot&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
&lt;br /&gt;
Boot the live environment (extended release) and repeat [[#Preparation|Preparation]].&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&lt;br /&gt;
ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import rpool without mounting datasets: &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; for not mounting all datasets; &amp;lt;code&amp;gt;-R&amp;lt;/code&amp;gt; for alternate root.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=abc123&lt;br /&gt;
zpool import -N -R $MOUNTPOINT rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Load encryption key&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zfs load-key -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
As &amp;lt;code&amp;gt;canmount=noauto&amp;lt;/code&amp;gt; is set for &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list rpool_$poolUUID/ROOT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount rpool_$poolUUID/ROOT/$dataset&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import bpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import -N -R $MOUNTPOINT bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find and mount the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset, same as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint&amp;lt;/pre&amp;gt;&lt;br /&gt;
Chroot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
chroot $MOUNTPOINT /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
After chroot, mount &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount and export the pools:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18519</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18519"/>
		<updated>2021-01-07T14:10:52Z</updated>

		<summary type="html">&lt;p&gt;R3: rm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt;, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit &amp;lt;code&amp;gt;-O keylocation -O keyformat&amp;lt;/code&amp;gt; when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Useful links =&lt;br /&gt;
&lt;br /&gt;
*[https://openzfs.github.io/openzfs-docs/Getting%20Started/ OpenZFS Getting Started]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
&lt;br /&gt;
UEFI is required. Supports single disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preparation =&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as only this version ships with the ZFS kernel module; the live environment cannot otherwise load it.&lt;br /&gt;
&lt;br /&gt;
Run the following command to set up the live environment; choose the default &amp;lt;code&amp;gt;none&amp;lt;/code&amp;gt; option when asked about disks.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;setup-alpine&amp;lt;/pre&amp;gt;&lt;br /&gt;
Settings given here will be copied to the target system later by &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
&lt;br /&gt;
Install and set up &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; (a port of systemd&#039;s &amp;lt;code&amp;gt;udev&amp;lt;/code&amp;gt; by Gentoo) to get persistent block device names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk update&lt;br /&gt;
apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
modprobe zfs&lt;br /&gt;
setup-udev&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Variables =&lt;br /&gt;
&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;DISK=/dev/disk/by-id/ata-HXY_120G_YS&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the unique disk path instead of &amp;lt;code&amp;gt;/dev/sda&amp;lt;/code&amp;gt; to ensure that ZFS can find the correct partition.&lt;br /&gt;
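&lt;br /&gt;
The symlinks under &amp;lt;code&amp;gt;/dev/disk/by-id/&amp;lt;/code&amp;gt; point at the kernel device names, so you can check which stable path belongs to which disk (this listing command is an addition, not part of the original guide):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ls -l /dev/disk/by-id/&amp;lt;/pre&amp;gt;&lt;br /&gt;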
&lt;br /&gt;
Other variables&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
TARGET_USERPWD=&#039;user account password&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Partitioning =&lt;br /&gt;
&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions: an EFI system partition, a boot pool partition and a root pool partition. Since GRUB only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk --zap-all $DISK&lt;br /&gt;
sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
sgdisk -n3:0:0 $DISK          # root pool&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands as above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
&lt;br /&gt;
Swap support on ZFS is problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded on power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
sgdisk -n4:0:0 $DISK          # swap partition&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Create boot and root pool =&lt;br /&gt;
&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no &amp;lt;code&amp;gt;feature@&amp;lt;/code&amp;gt; option is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB supports.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  -o ashift=12 -d \&lt;br /&gt;
  -o feature@async_destroy=enabled \&lt;br /&gt;
  -o feature@bookmarks=enabled \&lt;br /&gt;
  -o feature@embedded_data=enabled \&lt;br /&gt;
  -o feature@empty_bpobj=enabled \&lt;br /&gt;
  -o feature@enabled_txg=enabled \&lt;br /&gt;
  -o feature@extensible_dataset=enabled \&lt;br /&gt;
  -o feature@filesystem_limits=enabled \&lt;br /&gt;
  -o feature@hole_birth=enabled \&lt;br /&gt;
  -o feature@large_blocks=enabled \&lt;br /&gt;
  -o feature@lz4_compress=enabled \&lt;br /&gt;
  -o feature@spacemap_histogram=enabled \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
  -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
  bpool_$poolUUID $DISK-part2&amp;lt;/pre&amp;gt;&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence &amp;lt;code&amp;gt;canmount=off&amp;lt;/code&amp;gt;. The respective &amp;lt;code&amp;gt;mountpoint&amp;lt;/code&amp;gt; properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
  -o ashift=12 \&lt;br /&gt;
  -O encryption=aes-256-gcm \&lt;br /&gt;
  -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
  -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
  rpool_$poolUUID $DISK-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
== For multi-disk ==&lt;br /&gt;
&lt;br /&gt;
For mirror:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  bpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  rpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
For RAID-Z, replace &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;raidz&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;raidz2&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;raidz3&amp;lt;/code&amp;gt;.&lt;br /&gt;
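For example, a three-disk RAID-Z root pool follows the same pattern as the mirror commands above, where "..." stands for the same feature and property options shown earlier and the disk paths are placeholders:

```shell
zpool create \
  ... \
  rpool_$poolUUID raidz \
  /dev/disk/by-id/target_disk1-part3 \
  /dev/disk/by-id/target_disk2-part3 \
  /dev/disk/by-id/target_disk3-part3
```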
&lt;br /&gt;
= Create system datasets =&lt;br /&gt;
&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout for a description.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=legacy -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
mkdir $MOUNTPOINT/boot&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/default $MOUNTPOINT/boot&lt;br /&gt;
# ash, default with busybox, does not support array&lt;br /&gt;
# this is word splitting&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&amp;lt;/pre&amp;gt;&lt;br /&gt;
Depending on your application, separate datasets need to be created for folders inside &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; (but not for &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;lxc&amp;lt;/code&amp;gt; is for Linux containers, &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
&lt;br /&gt;
Here we use &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt; as the mountpoint, which is the default for GRUB.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi # need to specify file system&amp;lt;/pre&amp;gt;&lt;br /&gt;
= System installation =&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
GRUB will not find the correct path of the root device unless &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH=1&amp;lt;/code&amp;gt; is set.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;export ZPOOL_VDEV_NAME_PATH=1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; refuses to run on ZFS by default, so we need to add ZFS to its array of supported filesystems.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&amp;lt;/pre&amp;gt;&lt;br /&gt;
== setup-disk ==&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; to install the system to the target disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that grub-probe will still fail despite the &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH=YES&amp;lt;/code&amp;gt; variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
== Chroot ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;m=&#039;dev proc sys&#039;&lt;br /&gt;
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Finish GRUB installation ===&lt;br /&gt;
&lt;br /&gt;
As the GRUB installation failed halfway through [[#setup-disk|setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply fix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
Reload&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;source /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== GRUB fails to detect the ZFS filesystem of /boot with BusyBox stat ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add coreutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Missing root pool ====&lt;br /&gt;
&lt;br /&gt;
GRUB will fail to detect rpool if rpool has unsupported features; use the following workaround:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &amp;quot;s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E &#039;[[:blank:]]name&#039; \| cut -d\\\&#039; -f 2\`|&amp;quot;  /etc/grub.d/10_linux&amp;lt;/pre&amp;gt;&lt;br /&gt;
This replaces GRUB&#039;s rpool name detection.&lt;br /&gt;
&lt;br /&gt;
==== Generate grub.cfg ====&lt;br /&gt;
&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;grub-mkconfig -o /boot/grub/grub.cfg&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Importing pools on boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;zpool.cache&amp;lt;/code&amp;gt; will be added to the initramfs, and the &amp;lt;code&amp;gt;zpool&amp;lt;/code&amp;gt; command will import the pools contained in this cache.&lt;br /&gt;
&lt;br /&gt;
The system will fail to boot without this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Initramfs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; included in stable Alpine Linux has bugs, see [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 1] and [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76 2].&lt;br /&gt;
&lt;br /&gt;
==== Add eudev hook and rebuild ====&lt;br /&gt;
&lt;br /&gt;
Add &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;features=&amp;quot;ata base eudev ide scsi usb virtio nvme zfs&amp;quot;&#039; &amp;gt; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
# order of features is important! this order is tested&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkinitfs $(ls -1 /lib/modules/)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mount datasets at boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;rc-update add zfs-mount sysinit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Add user ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;adduser -s /bin/sh -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&amp;lt;/pre&amp;gt;&lt;br /&gt;
The root account is accessed via the &amp;lt;code&amp;gt;su&amp;lt;/code&amp;gt; command with the root password.&lt;br /&gt;
&lt;br /&gt;
=== Boot environment manager ===&lt;br /&gt;
&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports; see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. It should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
=== Optional: Enable encrypted swap partition ===&lt;br /&gt;
&lt;br /&gt;
Install &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add cryptsetup&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and insert the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature in front of zfs. Add the relevant lines to &amp;lt;code&amp;gt;fstab&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;crypttab&amp;lt;/code&amp;gt;, replacing &amp;lt;code&amp;gt;$DISK&amp;lt;/code&amp;gt; with the actual disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo swap   $DISK-part4 /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
echo /dev/mapper/swap   none     swap    defaults    0   0 &amp;gt;&amp;gt; /etc/fstab&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt;.&lt;br /&gt;
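As in the Initramfs step above, the rebuild command can be run again for the installed kernel:

```shell
# Rebuild the initramfs for the installed kernel
# (same invocation as used in the Initramfs section).
mkinitfs $(ls -1 /lib/modules/)
```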
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;exit&lt;br /&gt;
zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
zfs snapshot -r bpool_$poolUUID/BOOT/default@install&amp;lt;/pre&amp;gt;&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
 xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;reboot&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
&lt;br /&gt;
Boot the Live environment (extended release) and repeat [[#preparation|Preparation]].&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&lt;br /&gt;
ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import rpool without mounting datasets: &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; for not mounting all datasets; &amp;lt;code&amp;gt;-R&amp;lt;/code&amp;gt; for alternate root.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=abc123&lt;br /&gt;
zpool import -N -R $MOUNTPOINT rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Load encryption key&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zfs load-key -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
As &amp;lt;code&amp;gt;canmount=noauto&amp;lt;/code&amp;gt; is set for the &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list rpool_$poolUUID/ROOT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount rpool_$poolUUID/ROOT/$dataset&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import bpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import -N -R $MOUNTPOINT bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find and mount the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset, same as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint&amp;lt;/pre&amp;gt;&lt;br /&gt;
Chroot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
chroot $MOUNTPOINT /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
After chroot, mount &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18517</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18517"/>
		<updated>2021-01-07T08:40:57Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Initramfs */ fix link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt;, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit &amp;lt;code&amp;gt;-O encryption -O keylocation -O keyformat&amp;lt;/code&amp;gt; when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Useful links =&lt;br /&gt;
&lt;br /&gt;
*[https://openzfs.github.io/openzfs-docs/Getting%20Started/ OpenZFS Getting Started]&lt;br /&gt;
*[https://g.nu8.org/posts/bieaz/setup/alpine/guide/ Encrypted ZFS with boot environment support]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
&lt;br /&gt;
UEFI is required. Single-disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preparation =&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;&#039;&#039;extended&#039;&#039;&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as only this release ships with the ZFS kernel module; the Alpine Linux live environment cannot load a kernel module it does not include.&lt;br /&gt;
&lt;br /&gt;
Run the following command to set up the live environment; use the default &amp;lt;code&amp;gt;none&amp;lt;/code&amp;gt; option when asked about disks.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;setup-alpine&amp;lt;/pre&amp;gt;&lt;br /&gt;
Settings given here will be copied to the target system later by &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
&lt;br /&gt;
Install and set up &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; (a port of systemd&#039;s &amp;lt;code&amp;gt;udev&amp;lt;/code&amp;gt; by Gentoo) to get persistent block device names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk update&lt;br /&gt;
apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
modprobe zfs&lt;br /&gt;
setup-udev&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Variables =&lt;br /&gt;
&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;DISK=/dev/disk/by-id/ata-HXY_120G_YS&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the unique disk path instead of &amp;lt;code&amp;gt;/dev/sda&amp;lt;/code&amp;gt; to ensure that ZFS can find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
TARGET_USERPWD=&#039;user account password&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Partitioning =&lt;br /&gt;
&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions: an EFI system partition, a boot pool partition and a root pool partition. Since GRUB only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk --zap-all $DISK&lt;br /&gt;
sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
sgdisk -n3:0:0 $DISK          # root pool&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
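For a two-disk setup, the partitioning above can be expressed as a loop (a sketch only; the disk paths are placeholders for your actual /dev/disk/by-id entries):

```shell
# Sketch: partition every member disk identically.
# target_disk1/target_disk2 are hypothetical names; substitute your own.
for DISK in /dev/disk/by-id/target_disk1 /dev/disk/by-id/target_disk2; do
  sgdisk --zap-all $DISK
  sgdisk -n1:0:+512M -t1:EF00 $DISK   # EFI system partition
  sgdisk -n2:0:+2G $DISK              # boot pool
  sgdisk -n3:0:0 $DISK                # root pool
done
```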
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Swap&amp;lt;/code&amp;gt; support on ZFS is problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded on power off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
sgdisk -n4:0:0 $DISK          # swap partition&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Create boot and root pool =&lt;br /&gt;
&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no &amp;lt;code&amp;gt;feature@&amp;lt;/code&amp;gt; is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB can support.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  -o ashift=12 -d \&lt;br /&gt;
  -o feature@async_destroy=enabled \&lt;br /&gt;
  -o feature@bookmarks=enabled \&lt;br /&gt;
  -o feature@embedded_data=enabled \&lt;br /&gt;
  -o feature@empty_bpobj=enabled \&lt;br /&gt;
  -o feature@enabled_txg=enabled \&lt;br /&gt;
  -o feature@extensible_dataset=enabled \&lt;br /&gt;
  -o feature@filesystem_limits=enabled \&lt;br /&gt;
  -o feature@hole_birth=enabled \&lt;br /&gt;
  -o feature@large_blocks=enabled \&lt;br /&gt;
  -o feature@lz4_compress=enabled \&lt;br /&gt;
  -o feature@spacemap_histogram=enabled \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
  -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
  bpool_$poolUUID $DISK-part2&amp;lt;/pre&amp;gt;&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence &amp;lt;code&amp;gt;canmount=off&amp;lt;/code&amp;gt;. The respective &amp;lt;code&amp;gt;mountpoint&amp;lt;/code&amp;gt; properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
  -o ashift=12 \&lt;br /&gt;
  -O encryption=aes-256-gcm \&lt;br /&gt;
  -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
  -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
  rpool_$poolUUID $DISK-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
== For multi-disk ==&lt;br /&gt;
&lt;br /&gt;
For mirror:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  bpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  rpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
For RAID-Z, replace &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;raidz&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;raidz2&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;raidz3&amp;lt;/code&amp;gt;.&lt;br /&gt;
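For example, a three-disk RAID-Z root pool follows the same pattern as the mirror commands above, where "..." stands for the same feature and property options shown earlier and the disk paths are placeholders:

```shell
zpool create \
  ... \
  rpool_$poolUUID raidz \
  /dev/disk/by-id/target_disk1-part3 \
  /dev/disk/by-id/target_disk2-part3 \
  /dev/disk/by-id/target_disk3-part3
```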
&lt;br /&gt;
= Create system datasets =&lt;br /&gt;
&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout for a description.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=legacy -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
mkdir $MOUNTPOINT/boot&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/default $MOUNTPOINT/boot&lt;br /&gt;
# ash, default with busybox, does not support array&lt;br /&gt;
# this is word splitting&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&amp;lt;/pre&amp;gt;&lt;br /&gt;
Depending on your application, separate datasets need to be created for folders inside &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; (but not for &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;lxc&amp;lt;/code&amp;gt; is for Linux containers, &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
&lt;br /&gt;
Here we use &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt; as the mountpoint, which is the default for GRUB.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi # need to specify file system&amp;lt;/pre&amp;gt;&lt;br /&gt;
= System installation =&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
GRUB will not find the correct path of the root device unless &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH=1&amp;lt;/code&amp;gt; is set.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;export ZPOOL_VDEV_NAME_PATH=1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; refuses to run on ZFS by default, so we need to add ZFS to its array of supported filesystems.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&amp;lt;/pre&amp;gt;&lt;br /&gt;
== setup-disk ==&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; to install the system to the target disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that grub-probe will still fail despite the &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH=YES&amp;lt;/code&amp;gt; variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
== Chroot ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;m=&#039;dev proc sys&#039;&lt;br /&gt;
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Finish GRUB installation ===&lt;br /&gt;
&lt;br /&gt;
As the GRUB installation failed halfway through [[#setup-disk|setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply fix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
Reload&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;source /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== GRUB fails to detect the ZFS filesystem of /boot with BusyBox stat ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add coreutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Missing root pool ====&lt;br /&gt;
&lt;br /&gt;
GRUB will fail to detect rpool if rpool has unsupported features; use the following workaround:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &amp;quot;s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E &#039;[[:blank:]]name&#039; \| cut -d\\\&#039; -f 2\`|&amp;quot;  /etc/grub.d/10_linux&amp;lt;/pre&amp;gt;&lt;br /&gt;
This replaces GRUB&#039;s rpool name detection.&lt;br /&gt;
&lt;br /&gt;
==== Generate grub.cfg ====&lt;br /&gt;
&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;grub-mkconfig -o /boot/grub/grub.cfg&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Importing pools on boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;zpool.cache&amp;lt;/code&amp;gt; will be added to the initramfs, and the &amp;lt;code&amp;gt;zpool&amp;lt;/code&amp;gt; command will import the pools contained in this cache.&lt;br /&gt;
&lt;br /&gt;
The system will fail to boot without this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Initramfs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; included in stable Alpine Linux has bugs; until [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 1] and [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76 2] are merged, we need to patch it manually.&lt;br /&gt;
&lt;br /&gt;
==== Patch ====&lt;br /&gt;
&lt;br /&gt;
Ensure the &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; version is the following&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:/# apk info mkinitfs&lt;br /&gt;
mkinitfs-3.4.5-r3 description:&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then download [https://g.nu8.org/posts/bieaz/setup/alpine/guide/patch/eudev-zfs-mkinitfs-3.4.5.patch eudev-zfs-mkinitfs-3.4.5.patch], install &amp;lt;code&amp;gt;patch&amp;lt;/code&amp;gt; and patch it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:~# wget https://g.nu8.org/path-to-patch&lt;br /&gt;
foolive:~# apk add patch&lt;br /&gt;
foolive:~# cd / # must apply patch at root&lt;br /&gt;
foolive:/# patch -Np1 -i /root/eudev-zfs-mkinitfs-3.4.5.patch &lt;br /&gt;
patching file etc/mkinitfs/features.d/eudev.files&lt;br /&gt;
patching file etc/mkinitfs/features.d/zfs.files&lt;br /&gt;
patching file usr/share/mkinitfs/initramfs-init&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Add eudev hook and rebuild ====&lt;br /&gt;
&lt;br /&gt;
Add &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;features=&amp;quot;ata base eudev ide scsi usb virtio nvme zfs&amp;quot;&#039; &amp;gt; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
# order of features is important! this order is tested&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkinitfs $(ls -1 /lib/modules/)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mount datasets at boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;rc-update add zfs-mount sysinit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Add user ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;adduser -s /bin/sh -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&amp;lt;/pre&amp;gt;&lt;br /&gt;
The root account is accessed via the &amp;lt;code&amp;gt;su&amp;lt;/code&amp;gt; command with the root password.&lt;br /&gt;
&lt;br /&gt;
=== Boot environment manager ===&lt;br /&gt;
&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports; see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. It should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
=== Optional: Enable encrypted swap partition ===&lt;br /&gt;
&lt;br /&gt;
Install &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add cryptsetup&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and insert the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature in front of zfs. Add the relevant lines to &amp;lt;code&amp;gt;fstab&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;crypttab&amp;lt;/code&amp;gt;, replacing &amp;lt;code&amp;gt;$DISK&amp;lt;/code&amp;gt; with the actual disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo swap   $DISK-part4 /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
echo /dev/mapper/swap   none     swap    defaults    0   0 &amp;gt;&amp;gt; /etc/fstab&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt;.&lt;br /&gt;
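As in the Initramfs step above, the rebuild command can be run again for the installed kernel:

```shell
# Rebuild the initramfs for the installed kernel
# (same invocation as used in the Initramfs section).
mkinitfs $(ls -1 /lib/modules/)
```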
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;exit&lt;br /&gt;
zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
zfs snapshot -r bpool_$poolUUID/BOOT/default@install&amp;lt;/pre&amp;gt;&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
 xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;reboot&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
&lt;br /&gt;
Boot the Live environment (extended release) and repeat [[#Preparation|Preparation]].&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&lt;br /&gt;
ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import rpool without mounting datasets: &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; skips mounting the datasets; &amp;lt;code&amp;gt;-R&amp;lt;/code&amp;gt; sets an alternate root.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=abc123&lt;br /&gt;
zpool import -N -R $MOUNTPOINT rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Load encryption key&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zfs load-key -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
As &amp;lt;code&amp;gt;canmount=noauto&amp;lt;/code&amp;gt; is set for the &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list rpool_$poolUUID/ROOT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount rpool_$poolUUID/ROOT/$dataset&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import bpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import -N -R $MOUNTPOINT bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find and mount the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset, same as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint&amp;lt;/pre&amp;gt;&lt;br /&gt;
Chroot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
chroot $MOUNTPOINT /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
After chroot, mount &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Talk:Alpine_Linux_with_root_on_ZFS_with_native_encryption&amp;diff=18516</id>
		<title>Talk:Alpine Linux with root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Talk:Alpine_Linux_with_root_on_ZFS_with_native_encryption&amp;diff=18516"/>
		<updated>2021-01-07T00:25:29Z</updated>

		<summary type="html">&lt;p&gt;R3: R3 moved page Talk:Alpine Linux with root on ZFS with native encryption to Talk:Root on ZFS with native encryption: redundant alpine linux in title&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Talk:Root on ZFS with native encryption]]&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Talk:Root_on_ZFS_with_native_encryption&amp;diff=18515</id>
		<title>Talk:Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Talk:Root_on_ZFS_with_native_encryption&amp;diff=18515"/>
		<updated>2021-01-07T00:25:29Z</updated>

		<summary type="html">&lt;p&gt;R3: R3 moved page Talk:Alpine Linux with root on ZFS with native encryption to Talk:Root on ZFS with native encryption: redundant alpine linux in title&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Alpine on ZFS root issues in wiki procedure (v1) =&lt;br /&gt;
&lt;br /&gt;
I made some notes on issues I have encountered while following this guide. I will check these more and see if I can update the wiki with the notes.&lt;br /&gt;
&lt;br /&gt;
You can find the notes here: [https://pastebin.com/7jXtG6pT Notes on pastebin]&lt;br /&gt;
&lt;br /&gt;
~~&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Alpine_Linux_with_root_on_ZFS_with_native_encryption&amp;diff=18514</id>
		<title>Alpine Linux with root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Alpine_Linux_with_root_on_ZFS_with_native_encryption&amp;diff=18514"/>
		<updated>2021-01-07T00:25:29Z</updated>

		<summary type="html">&lt;p&gt;R3: R3 moved page Alpine Linux with root on ZFS with native encryption to Root on ZFS with native encryption: redundant alpine linux in title&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Root on ZFS with native encryption]]&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18513</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18513"/>
		<updated>2021-01-07T00:25:29Z</updated>

		<summary type="html">&lt;p&gt;R3: R3 moved page Alpine Linux with root on ZFS with native encryption to Root on ZFS with native encryption: redundant alpine linux in title&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool (&amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt;), everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit the &amp;lt;code&amp;gt;-O encryption -O keylocation -O keyformat&amp;lt;/code&amp;gt; options when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Useful links =&lt;br /&gt;
&lt;br /&gt;
*[https://openzfs.github.io/openzfs-docs/Getting%20Started/ OpenZFS Getting Started]&lt;br /&gt;
*[https://g.nu8.org/posts/bieaz/setup/alpine/guide/ Encrypted ZFS with boot environment support]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
&lt;br /&gt;
UEFI is required. Supports single disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preparation =&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as only this version ships with the ZFS kernel module; the live environment can not load a kernel module it does not ship with.&lt;br /&gt;
&lt;br /&gt;
Run the following command to set up the live environment; choose the default &amp;lt;code&amp;gt;none&amp;lt;/code&amp;gt; option when asked about disks.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;setup-alpine&amp;lt;/pre&amp;gt;&lt;br /&gt;
Settings given here will be copied to the target system later by &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
&lt;br /&gt;
Install and set up &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; (a standalone port of systemd&#039;s &amp;lt;code&amp;gt;udev&amp;lt;/code&amp;gt; by Gentoo) to get persistent block device names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk update&lt;br /&gt;
apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
modprobe zfs&lt;br /&gt;
setup-udev&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Variables =&lt;br /&gt;
&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;DISK=/dev/disk/by-id/ata-HXY_120G_YS&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the unique disk path instead of &amp;lt;code&amp;gt;/dev/sda&amp;lt;/code&amp;gt; so that ZFS can always find the correct partition.&lt;br /&gt;
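These by-id paths are udev-managed symlinks to the kernel device nodes. A minimal sketch with a temporary symlink standing in for the real udev link (the path and target below are made up for the demo):&lt;br /&gt;

```shell
# Simulate a /dev/disk/by-id entry with a temporary symlink; on a real
# system udev creates these links. /dev/sda here is only an example target.
mkdir -p /tmp/by-id-demo
ln -sf /dev/sda /tmp/by-id-demo/ata-HXY_120G_YS
resolved=$(readlink -f /tmp/by-id-demo/ata-HXY_120G_YS)
echo "$resolved"
```

On the live system, &amp;lt;code&amp;gt;ls -l /dev/disk/by-id/&amp;lt;/code&amp;gt; shows which link points at which disk.&lt;br /&gt;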
&lt;br /&gt;
Other variables&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
TARGET_USERPWD=&#039;user account password&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&amp;lt;/pre&amp;gt;&lt;br /&gt;
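The generator keeps the first six characters that survive the a-z0-9 filter. A quick sanity check of the mechanism (the read is bumped to 1000 bytes in this sketch, an assumption made only so enough characters survive the filter):&lt;br /&gt;

```shell
# Same idea as the generator above, with a larger read so the filtered
# stream is comfortably longer than six characters.
poolUUID=$(dd if=/dev/urandom bs=1 count=1000 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
ok=$(echo "$poolUUID" | grep -cE '^[a-z0-9]{6}$')
echo "$ok"   # 1 when the suffix is six lowercase alphanumerics
```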
= Partitioning =&lt;br /&gt;
&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
&lt;br /&gt;
*EFI system partition&lt;br /&gt;
*Boot pool partition&lt;br /&gt;
*Root pool partition&lt;br /&gt;
&lt;br /&gt;
Since GRUB only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk --zap-all $DISK&lt;br /&gt;
sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
sgdisk -n3:0:0 $DISK          # root pool&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
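A loop keeps the per-disk commands in one place. This sketch only prints the commands instead of running them (the disk paths are placeholders, not real devices):&lt;br /&gt;

```shell
# Print the partitioning commands for each target disk; pipe the output
# to sh to actually execute them once the paths are verified.
out=$(for d in /dev/disk/by-id/target_disk1 /dev/disk/by-id/target_disk2; do
  echo "sgdisk --zap-all $d"
  echo "sgdisk -n1:0:+512M -t1:EF00 $d"
  echo "sgdisk -n2:0:+2G $d"
  echo "sgdisk -n3:0:0 $d"
done)
echo "$out"
```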
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
&lt;br /&gt;
Swap support on ZFS is problematic, so it is recommended to create a separate swap partition if you need one. This guide covers the creation of a separate swap partition. (It can not be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
sgdisk -n4:0:0 $DISK          # swap partition&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Create boot and root pool =&lt;br /&gt;
&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. When no &amp;lt;code&amp;gt;feature@&amp;lt;/code&amp;gt; options are supplied, all available features are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  -o ashift=12 -d \&lt;br /&gt;
  -o feature@async_destroy=enabled \&lt;br /&gt;
  -o feature@bookmarks=enabled \&lt;br /&gt;
  -o feature@embedded_data=enabled \&lt;br /&gt;
  -o feature@empty_bpobj=enabled \&lt;br /&gt;
  -o feature@enabled_txg=enabled \&lt;br /&gt;
  -o feature@extensible_dataset=enabled \&lt;br /&gt;
  -o feature@filesystem_limits=enabled \&lt;br /&gt;
  -o feature@hole_birth=enabled \&lt;br /&gt;
  -o feature@large_blocks=enabled \&lt;br /&gt;
  -o feature@lz4_compress=enabled \&lt;br /&gt;
  -o feature@spacemap_histogram=enabled \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
  -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
  bpool_$poolUUID $DISK-part2&amp;lt;/pre&amp;gt;&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence &amp;lt;code&amp;gt;canmount=off&amp;lt;/code&amp;gt;. The respective &amp;lt;code&amp;gt;mountpoint&amp;lt;/code&amp;gt; properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
  -o ashift=12 \&lt;br /&gt;
  -O encryption=aes-256-gcm \&lt;br /&gt;
  -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
  -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
  rpool_$poolUUID $DISK-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
== For multi-disk ==&lt;br /&gt;
&lt;br /&gt;
For mirror:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  bpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  rpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
For RAID-Z, replace &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;raidz&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;raidz2&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;raidz3&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Create system datasets =&lt;br /&gt;
&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout for a description.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=legacy -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
mkdir $MOUNTPOINT/boot&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/default $MOUNTPOINT/boot&lt;br /&gt;
# ash, default with busybox, does not support array&lt;br /&gt;
# this is word splitting&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&amp;lt;/pre&amp;gt;&lt;br /&gt;
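The word-splitting idiom used in the dataset loops above (a space-separated string expanded unquoted, since BusyBox ash has no arrays) can be checked on its own:&lt;br /&gt;

```shell
# Unquoted $d is split on whitespace, yielding one loop iteration per word.
d='usr var var/lib'
out=$(for i in $d; do echo "dataset: $i"; done)
echo "$out"
```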
Depending on your application, separate datasets may need to be created for folders inside &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; (not for &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;lxc&amp;lt;/code&amp;gt; is for Linux containers, &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
&lt;br /&gt;
Here we use &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt; as the mountpoint, which is default for GRUB.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi # need to specify file system&amp;lt;/pre&amp;gt;&lt;br /&gt;
= System installation =&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
GRUB will not find the correct path of the root device without &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH=1&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;export ZPOOL_VDEV_NAME_PATH=1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&amp;lt;/pre&amp;gt;&lt;br /&gt;
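The sed command rewrites the supported-filesystems line inside the setup-disk script. The same substitution applied to a throwaway copy of such a line (the sample contents below are illustrative, not the real script):&lt;br /&gt;

```shell
# Apply the substitution to a demo file rather than /sbin/setup-disk;
# the quoted list is a stand-in for the script's real contents.
echo 'supported="ext2 ext3 ext4 btrfs xfs vfat"' > /tmp/setup-disk-demo
sed -i 's|supported="ext|supported="zfs ext|g' /tmp/setup-disk-demo
cat /tmp/setup-disk-demo
```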
== setup-disk ==&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; to install system to target disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that grub-probe will still fail despite the &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH&amp;lt;/code&amp;gt; variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
== Chroot ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;m=&#039;dev proc sys&#039;&lt;br /&gt;
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Finish GRUB installation ===&lt;br /&gt;
&lt;br /&gt;
As the GRUB installation failed half-way in [[#setup-disk|setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply fix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
Reload&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;source /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== GRUB fails to detect the ZFS filesystem of /boot with BusyBox stat ====&lt;br /&gt;
&lt;br /&gt;
Install GNU &amp;lt;code&amp;gt;coreutils&amp;lt;/code&amp;gt; to get a full &amp;lt;code&amp;gt;stat&amp;lt;/code&amp;gt; implementation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add coreutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Missing root pool ====&lt;br /&gt;
&lt;br /&gt;
GRUB will fail to detect rpool if rpool has unsupported features, use the following workaround:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &amp;quot;s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E &#039;[[:blank:]]name&#039; \| cut -d\\\&#039; -f 2\`|&amp;quot;  /etc/grub.d/10_linux&amp;lt;/pre&amp;gt;&lt;br /&gt;
This replaces GRUB rpool name detection.&lt;br /&gt;
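The substituted command extracts the pool name from the &amp;lt;code&amp;gt;zdb -l&amp;lt;/code&amp;gt; label text. With a fabricated excerpt (real labels contain many more fields), the grep/cut pair behaves like:&lt;br /&gt;

```shell
# Fabricated fragment of zdb -l output; only the name line matters here.
label="    version: 5
    name: 'rpool_ab12cd'
    state: 0"
# grep picks the indented name line, cut takes the text between the quotes
name=$(echo "$label" | grep -E '[[:blank:]]name' | cut -d\' -f 2)
echo "$name"
```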
&lt;br /&gt;
==== Generate grub.cfg ====&lt;br /&gt;
&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;grub-mkconfig -o /boot/grub/grub.cfg&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Importing pools on boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;zpool.cache&amp;lt;/code&amp;gt; will be added to initramfs and zpool command will import pools contained in this cache.&lt;br /&gt;
&lt;br /&gt;
System will fail to boot without this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Initramfs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; included in stable Alpine Linux has bugs; until merge requests [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 1] and [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76 2] are merged, we need to patch it manually.&lt;br /&gt;
&lt;br /&gt;
==== Patch ====&lt;br /&gt;
&lt;br /&gt;
Ensure the &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; version is the following&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:/# apk info mkinitfs&lt;br /&gt;
mkinitfs-3.4.5-r3 description:&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then download [https://g.nu8.org/posts/bieaz/setup/alpine/guide/patch/eudev-zfs-mkinitfs-3.4.5.patch eudev-zfs-mkinitfs-3.4.5.patch], install &amp;lt;code&amp;gt;patch&amp;lt;/code&amp;gt; and apply it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:~# wget https://g.nu8.org/path-to-patch&lt;br /&gt;
foolive:~# apk add patch&lt;br /&gt;
foolive:~# cd / # must apply patch at root&lt;br /&gt;
foolive:/# patch -Np1 -i /root/eudev-zfs-mkinitfs-3.4.5.patch &lt;br /&gt;
patching file etc/mkinitfs/features.d/eudev.files&lt;br /&gt;
patching file etc/mkinitfs/features.d/zfs.files&lt;br /&gt;
patching file usr/share/mkinitfs/initramfs-init&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Add eudev hook and rebuild ====&lt;br /&gt;
&lt;br /&gt;
Add &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;features=&amp;quot;ata base eudev ide scsi usb virtio nvme zfs&amp;quot;&#039; &amp;gt; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
# order of features is important! this order is tested&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkinitfs $(ls -1 /lib/modules/)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mount datasets at boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;rc-update add zfs-mount sysinit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Add user ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;adduser -s /bin/sh -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&amp;lt;/pre&amp;gt;&lt;br /&gt;
The root account is accessed via the &amp;lt;code&amp;gt;su&amp;lt;/code&amp;gt; command with the root password.&lt;br /&gt;
&lt;br /&gt;
=== Boot environment manager ===&lt;br /&gt;
&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports; see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. It should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
=== Optional: Enable encrypted swap partition ===&lt;br /&gt;
&lt;br /&gt;
Install &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add cryptsetup&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and add the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature immediately before &amp;lt;code&amp;gt;zfs&amp;lt;/code&amp;gt;. Add the relevant lines to &amp;lt;code&amp;gt;fstab&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;crypttab&amp;lt;/code&amp;gt;, replacing &amp;lt;code&amp;gt;$DISK&amp;lt;/code&amp;gt; with the actual disk path.&lt;br /&gt;
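One way to make that edit non-interactively is a sed substitution; here it is applied to a throwaway copy of the features line (the demo file path is an assumption for the sketch):&lt;br /&gt;

```shell
# Insert cryptsetup immediately before zfs in a copy of the features line;
# on the real system the target file is /etc/mkinitfs/mkinitfs.conf.
echo 'features="ata base eudev ide scsi usb virtio nvme zfs"' > /tmp/mkinitfs-demo.conf
sed -i 's/zfs"/cryptsetup zfs"/' /tmp/mkinitfs-demo.conf
cat /tmp/mkinitfs-demo.conf
```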
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo swap   $DISK-part4 /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
echo /dev/mapper/swap   none     swap    defaults    0   0 &amp;gt;&amp;gt; /etc/fstab&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
&lt;br /&gt;
Take snapshots of the clean installation for future use and export all pools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;exit&lt;br /&gt;
zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
zfs snapshot -r bpool_$poolUUID/BOOT/default@install&amp;lt;/pre&amp;gt;&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
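The unmount pipeline below works because &amp;lt;code&amp;gt;mount&amp;lt;/code&amp;gt; prints parent mounts before the mounts nested inside them, so reversing the list with &amp;lt;code&amp;gt;tac&amp;lt;/code&amp;gt; unmounts the deepest mount points first. A simplified illustration with fabricated mount output:&lt;br /&gt;

```shell
# Fabricated mount(8) lines; $3 is the mount point column. Reversing the
# order puts child mounts (e.g. /dev/shm) ahead of their parents.
MOUNTPOINT=/tmp/target
out=$(printf '%s\n' \
  "proc on $MOUNTPOINT/proc type proc (rw)" \
  "none on $MOUNTPOINT/dev type devtmpfs (rw)" \
  "shm on $MOUNTPOINT/dev/shm type tmpfs (rw)" \
  | tac | awk '{print $3}')
echo "$out"
```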
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
 xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;reboot&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
&lt;br /&gt;
Boot the Live environment (extended release) and repeat [[#Preparation|Preparation]].&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&lt;br /&gt;
ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import rpool without mounting datasets: &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; skips mounting the datasets; &amp;lt;code&amp;gt;-R&amp;lt;/code&amp;gt; sets an alternate root.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=abc123&lt;br /&gt;
zpool import -N -R $MOUNTPOINT rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Load encryption key&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zfs load-key -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
As &amp;lt;code&amp;gt;canmount=noauto&amp;lt;/code&amp;gt; is set for the &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list rpool_$poolUUID/ROOT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount rpool_$poolUUID/ROOT/$dataset&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import bpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import -N -R $MOUNTPOINT bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find and mount the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset, same as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint&amp;lt;/pre&amp;gt;&lt;br /&gt;
Chroot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
chroot $MOUNTPOINT /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
After chroot, mount &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18512</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18512"/>
		<updated>2021-01-07T00:24:25Z</updated>

		<summary type="html">&lt;p&gt;R3: links&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool (&amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt;), everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit the &amp;lt;code&amp;gt;-O encryption -O keylocation -O keyformat&amp;lt;/code&amp;gt; options when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Useful links =&lt;br /&gt;
&lt;br /&gt;
*[https://openzfs.github.io/openzfs-docs/Getting%20Started/ OpenZFS Getting Started]&lt;br /&gt;
*[https://g.nu8.org/posts/bieaz/setup/alpine/guide/ Encrypted ZFS with boot environment support]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
&lt;br /&gt;
UEFI is required. Supports single disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preparation =&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as only this version ships with the ZFS kernel module; the live environment can not load a kernel module it does not ship with.&lt;br /&gt;
&lt;br /&gt;
Run the following command to set up the live environment; choose the default &amp;lt;code&amp;gt;none&amp;lt;/code&amp;gt; option when asked about disks.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;setup-alpine&amp;lt;/pre&amp;gt;&lt;br /&gt;
Settings given here will be copied to the target system later by &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
&lt;br /&gt;
Install and set up &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; (a standalone port of systemd&#039;s &amp;lt;code&amp;gt;udev&amp;lt;/code&amp;gt; by Gentoo) to get persistent block device names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk update&lt;br /&gt;
apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
modprobe zfs&lt;br /&gt;
setup-udev&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Variables =&lt;br /&gt;
&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;DISK=/dev/disk/by-id/ata-HXY_120G_YS&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the unique disk path instead of &amp;lt;code&amp;gt;/dev/sda&amp;lt;/code&amp;gt; so that ZFS can always find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
TARGET_USERPWD=&#039;user account password&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Partitioning =&lt;br /&gt;
&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
&lt;br /&gt;
*EFI system partition&lt;br /&gt;
*Boot pool partition&lt;br /&gt;
*Root pool partition&lt;br /&gt;
&lt;br /&gt;
Since GRUB only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk --zap-all $DISK&lt;br /&gt;
sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
sgdisk -n3:0:0 $DISK          # root pool&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
&lt;br /&gt;
Swap support on ZFS is problematic, so it is recommended to create a separate swap partition if you need one. This guide covers the creation of a separate swap partition. (It can not be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
sgdisk -n4:0:0 $DISK          # swap partition&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Create boot and root pool =&lt;br /&gt;
&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no &amp;lt;code&amp;gt;feature@&amp;lt;/code&amp;gt; options are supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  -o ashift=12 -d \&lt;br /&gt;
  -o feature@async_destroy=enabled \&lt;br /&gt;
  -o feature@bookmarks=enabled \&lt;br /&gt;
  -o feature@embedded_data=enabled \&lt;br /&gt;
  -o feature@empty_bpobj=enabled \&lt;br /&gt;
  -o feature@enabled_txg=enabled \&lt;br /&gt;
  -o feature@extensible_dataset=enabled \&lt;br /&gt;
  -o feature@filesystem_limits=enabled \&lt;br /&gt;
  -o feature@hole_birth=enabled \&lt;br /&gt;
  -o feature@large_blocks=enabled \&lt;br /&gt;
  -o feature@lz4_compress=enabled \&lt;br /&gt;
  -o feature@spacemap_histogram=enabled \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
  -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
  bpool_$poolUUID $DISK-part2&amp;lt;/pre&amp;gt;&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence &amp;lt;code&amp;gt;canmount=off&amp;lt;/code&amp;gt;. The respective &amp;lt;code&amp;gt;mountpoint&amp;lt;/code&amp;gt; properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
  -o ashift=12 \&lt;br /&gt;
  -O encryption=aes-256-gcm \&lt;br /&gt;
  -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
  -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
  rpool_$poolUUID $DISK-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
== For multi-disk ==&lt;br /&gt;
&lt;br /&gt;
For mirror:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  bpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  rpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
For RAID-Z, replace &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;raidz&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;raidz2&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;raidz3&amp;lt;/code&amp;gt;.&lt;br /&gt;
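For instance, a three-disk RAID-Z root pool (a sketch; the elided options are the same as in the single-disk command above, and the disk paths are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  rpool_$poolUUID raidz \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk3-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;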
&lt;br /&gt;
= Create system datasets =&lt;br /&gt;
&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout for a description.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=legacy -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
mkdir $MOUNTPOINT/boot&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/default $MOUNTPOINT/boot&lt;br /&gt;
# ash, default with busybox, does not support array&lt;br /&gt;
# this is word splitting&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&amp;lt;/pre&amp;gt;&lt;br /&gt;
Depending on your application, separate datasets may need to be created for folders inside &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; (but not for &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; itself!).&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;lxc&amp;lt;/code&amp;gt; is for Linux containers, &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; is for storing virtual machine images, and so on.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
&lt;br /&gt;
Here we use &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt; as the mountpoint, which is the default for GRUB.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi # need to specify file system&amp;lt;/pre&amp;gt;&lt;br /&gt;
= System installation =&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
GRUB will not find the correct path of the root device unless &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH=1&amp;lt;/code&amp;gt; is set.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;export ZPOOL_VDEV_NAME_PATH=1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&amp;lt;/pre&amp;gt;&lt;br /&gt;
== setup-disk ==&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; to install system to target disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that grub-probe will still fail despite the &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH&amp;lt;/code&amp;gt; variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
== Chroot ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;m=&#039;dev proc sys&#039;&lt;br /&gt;
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Finish GRUB installation ===&lt;br /&gt;
&lt;br /&gt;
As the GRUB installation failed halfway in [[#setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply fix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
Reload&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;source /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== GRUB fails to detect the ZFS filesystem of /boot with BusyBox stat ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add coreutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Missing root pool ====&lt;br /&gt;
&lt;br /&gt;
GRUB will fail to detect the rpool if it has unsupported features; use the following workaround:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &amp;quot;s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E &#039;[[:blank:]]name&#039; \| cut -d\\\&#039; -f 2\`|&amp;quot;  /etc/grub.d/10_linux&amp;lt;/pre&amp;gt;&lt;br /&gt;
This replaces GRUB rpool name detection.&lt;br /&gt;
&lt;br /&gt;
==== Generate grub.cfg ====&lt;br /&gt;
&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;grub-mkconfig -o /boot/grub/grub.cfg&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Importing pools on boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;zpool.cache&amp;lt;/code&amp;gt; will be added to the initramfs, and the &amp;lt;code&amp;gt;zpool&amp;lt;/code&amp;gt; command will import the pools contained in this cache.&lt;br /&gt;
&lt;br /&gt;
System will fail to boot without this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Initramfs ===&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; included in stable Alpine Linux has bugs; until [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 1] and [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76 2] are merged, we need to patch it manually.&lt;br /&gt;
&lt;br /&gt;
==== Patch ====&lt;br /&gt;
&lt;br /&gt;
Ensure the &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; version is the following&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:/# apk info mkinitfs&lt;br /&gt;
mkinitfs-3.4.5-r3 description:&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then download [https://g.nu8.org/posts/bieaz/setup/alpine/guide/patch/eudev-zfs-mkinitfs-3.4.5.patch eudev-zfs-mkinitfs-3.4.5.patch], install &amp;lt;code&amp;gt;patch&amp;lt;/code&amp;gt; and apply it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:~# wget https://g.nu8.org/path-to-patch&lt;br /&gt;
foolive:~# apk add patch&lt;br /&gt;
foolive:~# cd / # must apply patch at root&lt;br /&gt;
foolive:/# patch -Np1 -i /root/eudev-zfs-mkinitfs-3.4.5.patch &lt;br /&gt;
patching file etc/mkinitfs/features.d/eudev.files&lt;br /&gt;
patching file etc/mkinitfs/features.d/zfs.files&lt;br /&gt;
patching file usr/share/mkinitfs/initramfs-init&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Add eudev hook and rebuild ====&lt;br /&gt;
&lt;br /&gt;
Add &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;features=&amp;quot;ata base eudev ide scsi usb virtio nvme zfs&amp;quot;&#039; &amp;gt; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
# order of features is important! this order is tested&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkinitfs $(ls -1 /lib/modules/)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mount datasets at boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;rc-update add zfs-mount sysinit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Add user ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;adduser -s /bin/sh -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&amp;lt;/pre&amp;gt;&lt;br /&gt;
The root account is accessed via the &amp;lt;code&amp;gt;su&amp;lt;/code&amp;gt; command with the root password.&lt;br /&gt;
&lt;br /&gt;
=== Boot environment manager ===&lt;br /&gt;
&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. Should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
=== Optional: Enable encrypted swap partition ===&lt;br /&gt;
&lt;br /&gt;
Install &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add cryptsetup&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the front of zfs. Add relevant lines in &amp;lt;code&amp;gt;fstab&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;crypttab&amp;lt;/code&amp;gt;. Replace &amp;lt;code&amp;gt;$DISK&amp;lt;/code&amp;gt; with actual disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo swap   $DISK-part4 /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
echo /dev/mapper/swap   none     swap    defaults    0   0 &amp;gt;&amp;gt; /etc/fstab&amp;lt;/pre&amp;gt;&lt;br /&gt;
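The &amp;lt;code&amp;gt;mkinitfs.conf&amp;lt;/code&amp;gt; edit can be done with &amp;lt;code&amp;gt;sed&amp;lt;/code&amp;gt;, for example (a sketch, assuming the features line written earlier in this guide):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# insert cryptsetup in front of zfs in the features list&lt;br /&gt;
sed -i &#039;s|zfs&amp;quot;|cryptsetup zfs&amp;quot;|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
# features=&amp;quot;ata base eudev ide scsi usb virtio nvme cryptsetup zfs&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;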
Rebuild initramfs with &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt;.&lt;br /&gt;
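Using the same command as before:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkinitfs $(ls -1 /lib/modules/)&amp;lt;/pre&amp;gt;&lt;br /&gt;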
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;exit&lt;br /&gt;
zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
zfs snapshot -r bpool_$poolUUID/BOOT/default@install&amp;lt;/pre&amp;gt;&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
 xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;reboot&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
&lt;br /&gt;
Boot Live environment (extended release) and repeat [[#preparation|Preparation]]&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&lt;br /&gt;
ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import rpool without mounting datasets: &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; for not mounting all datasets; &amp;lt;code&amp;gt;-R&amp;lt;/code&amp;gt; for alternate root.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=abc123&lt;br /&gt;
zpool import -N -R $MOUNTPOINT rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Load encryption key&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zfs load-key -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
As &amp;lt;code&amp;gt;canmount=noauto&amp;lt;/code&amp;gt; is set for &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list rpool_$poolUUID/ROOT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount rpool_$poolUUID/ROOT/$dataset&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import bpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import -N -R $MOUNTPOINT bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find and mount the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset, same as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint&amp;lt;/pre&amp;gt;&lt;br /&gt;
Chroot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
chroot $MOUNTPOINT /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
After chroot, mount &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18511</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18511"/>
		<updated>2021-01-07T00:05:47Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Mount datasets at boot */ rm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt;, everything is encrypted: the root pool with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit &amp;lt;code&amp;gt;-O encryption -O keylocation -O keyformat&amp;lt;/code&amp;gt; when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preparation =&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as only this version ships with the ZFS kernel module; the live system cannot load a kernel module it does not ship with.&lt;br /&gt;
&lt;br /&gt;
Run the following command to set up the live environment; choose the default &amp;lt;code&amp;gt;none&amp;lt;/code&amp;gt; option when asked about disks.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;setup-alpine&amp;lt;/pre&amp;gt;&lt;br /&gt;
Settings given here will be copied to the target system later by &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
&lt;br /&gt;
Install and set up &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; (a port of systemd&#039;s &amp;lt;code&amp;gt;udev&amp;lt;/code&amp;gt; by Gentoo) to get persistent block device names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk update&lt;br /&gt;
apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
modprobe zfs&lt;br /&gt;
setup-udev&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Variables =&lt;br /&gt;
&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;DISK=/dev/disk/by-id/ata-HXY_120G_YS&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the unique disk path instead of &amp;lt;code&amp;gt;/dev/sda&amp;lt;/code&amp;gt; to ensure that ZFS finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
TARGET_USERPWD=&#039;user account password&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a unique suffix for the ZFS pools; this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Partitioning =&lt;br /&gt;
&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least three partitions:&lt;br /&gt;
&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
&lt;br /&gt;
Since GRUB only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk --zap-all $DISK&lt;br /&gt;
sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
sgdisk -n3:0:0 $DISK          # root pool&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
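For example, with two disks (hypothetical &amp;lt;code&amp;gt;by-id&amp;lt;/code&amp;gt; paths, matching the names used in the multi-disk pool commands below), a sketch of such a loop:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# placeholder disk paths, adjust to your hardware&lt;br /&gt;
for d in /dev/disk/by-id/target_disk1 /dev/disk/by-id/target_disk2; do&lt;br /&gt;
  sgdisk --zap-all $d&lt;br /&gt;
  sgdisk -n1:0:+512M -t1:EF00 $d&lt;br /&gt;
  sgdisk -n2:0:+2G $d         # boot pool&lt;br /&gt;
  sgdisk -n3:0:0 $d           # root pool&lt;br /&gt;
done&amp;lt;/pre&amp;gt;&lt;br /&gt;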
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
&lt;br /&gt;
Swap support on ZFS is problematic, so it is recommended to create a separate swap partition if needed; this guide covers that. (The swap partition cannot be used for hibernation, since its encryption key is discarded on power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
sgdisk -n4:0:0 $DISK          # swap partition&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Create boot and root pool =&lt;br /&gt;
&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no &amp;lt;code&amp;gt;feature@&amp;lt;/code&amp;gt; options are supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  -o ashift=12 -d \&lt;br /&gt;
  -o feature@async_destroy=enabled \&lt;br /&gt;
  -o feature@bookmarks=enabled \&lt;br /&gt;
  -o feature@embedded_data=enabled \&lt;br /&gt;
  -o feature@empty_bpobj=enabled \&lt;br /&gt;
  -o feature@enabled_txg=enabled \&lt;br /&gt;
  -o feature@extensible_dataset=enabled \&lt;br /&gt;
  -o feature@filesystem_limits=enabled \&lt;br /&gt;
  -o feature@hole_birth=enabled \&lt;br /&gt;
  -o feature@large_blocks=enabled \&lt;br /&gt;
  -o feature@lz4_compress=enabled \&lt;br /&gt;
  -o feature@spacemap_histogram=enabled \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
  -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
  bpool_$poolUUID $DISK-part2&amp;lt;/pre&amp;gt;&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence &amp;lt;code&amp;gt;canmount=off&amp;lt;/code&amp;gt;. The respective &amp;lt;code&amp;gt;mountpoint&amp;lt;/code&amp;gt; properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
  -o ashift=12 \&lt;br /&gt;
  -O encryption=aes-256-gcm \&lt;br /&gt;
  -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
  -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
  rpool_$poolUUID $DISK-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
== For multi-disk ==&lt;br /&gt;
&lt;br /&gt;
For mirror:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  bpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  rpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
For RAID-Z, replace &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;raidz&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;raidz2&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;raidz3&amp;lt;/code&amp;gt;.&lt;br /&gt;
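For instance, a three-disk RAID-Z root pool (a sketch; the elided options are the same as in the single-disk command above, and the disk paths are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  rpool_$poolUUID raidz \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk3-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;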
&lt;br /&gt;
= Create system datasets =&lt;br /&gt;
&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout for a description.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=legacy -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
mkdir $MOUNTPOINT/boot&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/default $MOUNTPOINT/boot&lt;br /&gt;
# ash, default with busybox, does not support array&lt;br /&gt;
# this is word splitting&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&amp;lt;/pre&amp;gt;&lt;br /&gt;
Depending on your application, separate datasets may need to be created for folders inside &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; (but not for &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; itself!).&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;lxc&amp;lt;/code&amp;gt; is for Linux containers, &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; is for storing virtual machine images, and so on.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
&lt;br /&gt;
Here we use &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt; as the mountpoint, which is the default for GRUB.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi # need to specify file system&amp;lt;/pre&amp;gt;&lt;br /&gt;
= System installation =&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
GRUB will not find the correct path of the root device unless &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH=1&amp;lt;/code&amp;gt; is set.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;export ZPOOL_VDEV_NAME_PATH=1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&amp;lt;/pre&amp;gt;&lt;br /&gt;
== setup-disk ==&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; to install system to target disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that grub-probe will still fail despite the &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH&amp;lt;/code&amp;gt; variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
== Chroot ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;m=&#039;dev proc sys&#039;&lt;br /&gt;
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Finish GRUB installation ===&lt;br /&gt;
&lt;br /&gt;
As the GRUB installation failed halfway in [[#setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply fix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
Reload&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;source /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== GRUB fails to detect the ZFS filesystem of /boot with BusyBox stat ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add coreutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Missing root pool ====&lt;br /&gt;
&lt;br /&gt;
GRUB will fail to detect the rpool if it has unsupported features; use the following workaround:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &amp;quot;s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E &#039;[[:blank:]]name&#039; \| cut -d\\\&#039; -f 2\`|&amp;quot;  /etc/grub.d/10_linux&amp;lt;/pre&amp;gt;&lt;br /&gt;
This replaces GRUB rpool name detection.&lt;br /&gt;
&lt;br /&gt;
==== Generate grub.cfg ====&lt;br /&gt;
&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;grub-mkconfig -o /boot/grub/grub.cfg&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Importing pools on boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;zpool.cache&amp;lt;/code&amp;gt; will be added to the initramfs, and the &amp;lt;code&amp;gt;zpool&amp;lt;/code&amp;gt; command will import the pools contained in this cache.&lt;br /&gt;
&lt;br /&gt;
System will fail to boot without this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Initramfs ===&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; included in stable Alpine Linux has bugs; until [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 1] and [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76 2] are merged, we need to patch it manually.&lt;br /&gt;
&lt;br /&gt;
==== Patch ====&lt;br /&gt;
&lt;br /&gt;
Ensure the &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; version is the following&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:/# apk info mkinitfs&lt;br /&gt;
mkinitfs-3.4.5-r3 description:&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then download [https://g.nu8.org/posts/bieaz/setup/alpine/guide/patch/eudev-zfs-mkinitfs-3.4.5.patch eudev-zfs-mkinitfs-3.4.5.patch], install &amp;lt;code&amp;gt;patch&amp;lt;/code&amp;gt; and apply it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:~# wget https://g.nu8.org/path-to-patch&lt;br /&gt;
foolive:~# apk add patch&lt;br /&gt;
foolive:~# cd / # must apply patch at root&lt;br /&gt;
foolive:/# patch -Np1 -i /root/eudev-zfs-mkinitfs-3.4.5.patch &lt;br /&gt;
patching file etc/mkinitfs/features.d/eudev.files&lt;br /&gt;
patching file etc/mkinitfs/features.d/zfs.files&lt;br /&gt;
patching file usr/share/mkinitfs/initramfs-init&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Add eudev hook and rebuild ====&lt;br /&gt;
&lt;br /&gt;
Add &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;features=&amp;quot;ata base eudev ide scsi usb virtio nvme zfs&amp;quot;&#039; &amp;gt; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
# order of features is important! this order is tested&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkinitfs $(ls -1 /lib/modules/)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mount datasets at boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;rc-update add zfs-mount sysinit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Add user ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;adduser -s /bin/sh -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&amp;lt;/pre&amp;gt;&lt;br /&gt;
The root account is accessed via the &amp;lt;code&amp;gt;su&amp;lt;/code&amp;gt; command with the root password.&lt;br /&gt;
&lt;br /&gt;
=== Boot environment manager ===&lt;br /&gt;
&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. Should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
=== Optional: Enable encrypted swap partition ===&lt;br /&gt;
&lt;br /&gt;
Install &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add cryptsetup&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and add &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; directly before &amp;lt;code&amp;gt;zfs&amp;lt;/code&amp;gt; in the features list. Add the relevant lines to &amp;lt;code&amp;gt;fstab&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;crypttab&amp;lt;/code&amp;gt;, replacing &amp;lt;code&amp;gt;$DISK&amp;lt;/code&amp;gt; with the actual disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo swap   $DISK-part4 /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
echo /dev/mapper/swap   none     swap    defaults    0   0 &amp;gt;&amp;gt; /etc/fstab&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;exit&lt;br /&gt;
zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
zfs snapshot -r bpool_$poolUUID/BOOT/default@install&amp;lt;/pre&amp;gt;&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
 xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;reboot&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
&lt;br /&gt;
Boot the Live environment (extended release) and repeat [[#Preparation|Preparation]].&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&lt;br /&gt;
ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import rpool without mounting datasets: &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; for not mounting all datasets; &amp;lt;code&amp;gt;-R&amp;lt;/code&amp;gt; for alternate root.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=abc123&lt;br /&gt;
zpool import -N -R $MOUNTPOINT rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Load encryption key&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zfs load-key -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
As &amp;lt;code&amp;gt;canmount=noauto&amp;lt;/code&amp;gt; is set for &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list rpool_$poolUUID/ROOT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount rpool_$poolUUID/ROOT/$dataset&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import bpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import -N -R $MOUNTPOINT bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find and mount the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset, same as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint&amp;lt;/pre&amp;gt;&lt;br /&gt;
Chroot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
chroot $MOUNTPOINT /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
After chroot, mount &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18510</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18510"/>
		<updated>2021-01-07T00:05:15Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Create system datasets */ mount boot&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt;, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit the &amp;lt;code&amp;gt;-O encryption -O keylocation -O keyformat&amp;lt;/code&amp;gt; options when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
&lt;br /&gt;
UEFI is required. Supports single disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preparation =&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as only this version ships with the ZFS kernel module; the live environment can not load kernel modules that the release does not include.&lt;br /&gt;
&lt;br /&gt;
Run the following command to set up the live environment; choose the default &amp;lt;code&amp;gt;none&amp;lt;/code&amp;gt; option when asked about disks.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;setup-alpine&amp;lt;/pre&amp;gt;&lt;br /&gt;
Settings given here will be copied to the target system later by &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
&lt;br /&gt;
Install and set up &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; (a standalone port of systemd&#039;s &amp;lt;code&amp;gt;udev&amp;lt;/code&amp;gt; maintained by Gentoo) to get persistent block device names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk update&lt;br /&gt;
apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
modprobe zfs&lt;br /&gt;
setup-udev&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Variables =&lt;br /&gt;
&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;DISK=/dev/disk/by-id/ata-HXY_120G_YS&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the unique disk path instead of &amp;lt;code&amp;gt;/dev/sda&amp;lt;/code&amp;gt; so that ZFS can reliably find the correct partition.&lt;br /&gt;
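Why by-id paths are stable can be illustrated without touching a real disk; the following is a minimal sketch using a temporary directory and the example device name from above (everything here is simulated, not a real device):

```shell
# Sketch: /dev/disk/by-id entries are stable symlinks pointing at volatile
# kernel names like sda3. Simulated in a temp dir; names are hypothetical.
d=$(mktemp -d)
mkdir "$d/by-id"
touch "$d/sda3"
ln -s ../sda3 "$d/by-id/ata-HXY_120G_YS-part3"
readlink -f "$d/by-id/ata-HXY_120G_YS-part3"   # resolves to .../sda3
rm -r "$d"
```

On a real system, "ls -l /dev/disk/by-id/" shows which stable name currently points at which kernel device.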
&lt;br /&gt;
Other variables&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
TARGET_USERPWD=&#039;user account password&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&amp;lt;/pre&amp;gt;&lt;br /&gt;
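The generated suffix can be sanity-checked; this sketch reads a larger chunk of urandom than the command above so that six matching characters survive the filter with near certainty (the suffix itself is random, so only its format can be checked):

```shell
# Generate a pool-name suffix: keep only [a-z0-9] bytes and take the
# first six. 1000 bytes are read so at least six survive the tr filter.
poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=1000 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
if echo "$poolUUID" | grep -Eq '^[a-z0-9]{6}$'; then echo format-ok; fi
# prints format-ok
```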
= Partitioning =&lt;br /&gt;
&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions: an EFI system partition, a boot pool partition and a root pool partition. Since GRUB only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk --zap-all $DISK&lt;br /&gt;
sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
sgdisk -n3:0:0 $DISK          # root pool&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
&lt;br /&gt;
Swap support on ZFS is problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It can not be used for hibernation, since the encryption key is discarded on power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
sgdisk -n4:0:0 $DISK          # swap partition&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Create boot and root pool =&lt;br /&gt;
&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no &amp;lt;code&amp;gt;feature@&amp;lt;/code&amp;gt; is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  -o ashift=12 -d \&lt;br /&gt;
  -o feature@async_destroy=enabled \&lt;br /&gt;
  -o feature@bookmarks=enabled \&lt;br /&gt;
  -o feature@embedded_data=enabled \&lt;br /&gt;
  -o feature@empty_bpobj=enabled \&lt;br /&gt;
  -o feature@enabled_txg=enabled \&lt;br /&gt;
  -o feature@extensible_dataset=enabled \&lt;br /&gt;
  -o feature@filesystem_limits=enabled \&lt;br /&gt;
  -o feature@hole_birth=enabled \&lt;br /&gt;
  -o feature@large_blocks=enabled \&lt;br /&gt;
  -o feature@lz4_compress=enabled \&lt;br /&gt;
  -o feature@spacemap_histogram=enabled \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
  -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
  bpool_$poolUUID $DISK-part2&amp;lt;/pre&amp;gt;&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence &amp;lt;code&amp;gt;canmount=off&amp;lt;/code&amp;gt;. The respective &amp;lt;code&amp;gt;mountpoint&amp;lt;/code&amp;gt; properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
  -o ashift=12 \&lt;br /&gt;
  -O encryption=aes-256-gcm \&lt;br /&gt;
  -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
  -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
  rpool_$poolUUID $DISK-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
== For multi-disk ==&lt;br /&gt;
&lt;br /&gt;
For mirror:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  bpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  rpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
For RAID-Z, replace &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;raidz&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;raidz2&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;raidz3&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Create system datasets =&lt;br /&gt;
&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout for a description.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=legacy -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
mkdir $MOUNTPOINT/boot&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/default $MOUNTPOINT/boot&lt;br /&gt;
# ash, default with busybox, does not support array&lt;br /&gt;
# this is word splitting&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&amp;lt;/pre&amp;gt;&lt;br /&gt;
Depending on your application, separate datasets need to be created for folders inside &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; (not for &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;lxc&amp;lt;/code&amp;gt; is for Linux containers, &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
&lt;br /&gt;
Here we use &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt; as the mountpoint, which is default for GRUB.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi # need to specify file system&amp;lt;/pre&amp;gt;&lt;br /&gt;
= System installation =&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
GRUB will not find the correct path of the root device without &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH=1&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;export ZPOOL_VDEV_NAME_PATH=1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; refuses to run on ZFS by default; we need to add ZFS to its supported filesystem list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&amp;lt;/pre&amp;gt;&lt;br /&gt;
== setup-disk ==&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; to install system to target disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that grub-probe will still fail despite &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH=YES&amp;lt;/code&amp;gt; variable set above. We will deal with this later inside chroot.&lt;br /&gt;
&lt;br /&gt;
== Chroot ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;m=&#039;dev proc sys&#039;&lt;br /&gt;
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Finish GRUB installation ===&lt;br /&gt;
&lt;br /&gt;
As GRUB installation failed halfway in [[#setup-disk|setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply fix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
Reload&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;source /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== GRUB fails to detect the ZFS filesystem of /boot with BusyBox stat ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add coreutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Missing root pool ====&lt;br /&gt;
&lt;br /&gt;
GRUB will fail to detect rpool if rpool has unsupported features, use the following workaround:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &amp;quot;s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E &#039;[[:blank:]]name&#039; \| cut -d\\\&#039; -f 2\`|&amp;quot;  /etc/grub.d/10_linux&amp;lt;/pre&amp;gt;&lt;br /&gt;
This replaces GRUB rpool name detection.&lt;br /&gt;
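What the substituted pipeline extracts can be traced on a sample zdb label line; the pool name below is illustrative, not captured from a real pool:

```shell
# zdb -l prints pool labels containing a line like "    name: 'rpool_abc123'".
# The grep/cut pipeline from the sed command above pulls out the pool name:
# grep keeps the indented "name" line, cut splits on single quotes.
sample="    name: 'rpool_abc123'"
printf '%s\n' "$sample" | grep -E '[[:blank:]]name' | cut -d\' -f2
# prints rpool_abc123
```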
&lt;br /&gt;
==== Generate grub.cfg ====&lt;br /&gt;
&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;grub-mkconfig -o /boot/grub/grub.cfg&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Importing pools on boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;zpool.cache&amp;lt;/code&amp;gt; will be added to initramfs and zpool command will import pools contained in this cache.&lt;br /&gt;
&lt;br /&gt;
System will fail to boot without this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Initramfs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; included in stable Alpine Linux has bugs; until [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 1] and [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76 2] are merged, we need to patch it manually.&lt;br /&gt;
&lt;br /&gt;
==== Patch ====&lt;br /&gt;
&lt;br /&gt;
Ensure the &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; version is the following&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:/# apk info mkinitfs&lt;br /&gt;
mkinitfs-3.4.5-r3 description:&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then download [https://g.nu8.org/posts/bieaz/setup/alpine/guide/patch/eudev-zfs-mkinitfs-3.4.5.patch eudev-zfs-mkinitfs-3.4.5.patch], install &amp;lt;code&amp;gt;patch&amp;lt;/code&amp;gt; and apply it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:~# wget https://g.nu8.org/path-to-patch&lt;br /&gt;
foolive:~# apk add patch&lt;br /&gt;
foolive:~# cd / # must apply patch at root&lt;br /&gt;
foolive:/# patch -Np1 -i /root/eudev-zfs-mkinitfs-3.4.5.patch &lt;br /&gt;
patching file etc/mkinitfs/features.d/eudev.files&lt;br /&gt;
patching file etc/mkinitfs/features.d/zfs.files&lt;br /&gt;
patching file usr/share/mkinitfs/initramfs-init&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Add eudev hook and rebuild ====&lt;br /&gt;
&lt;br /&gt;
Add &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;features=&amp;quot;ata base eudev ide scsi usb virtio nvme zfs&amp;quot;&#039; &amp;gt; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
# order of features is important! this order is tested&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkinitfs $(ls -1 /lib/modules/)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mount datasets at boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;rc-update add zfs-mount sysinit&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mounting the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset via fstab needs &amp;lt;code&amp;gt;mountpoint=legacy&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;umount /boot/efi&lt;br /&gt;
zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
mount /boot&lt;br /&gt;
mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Add user ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;adduser -s /bin/sh -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&amp;lt;/pre&amp;gt;&lt;br /&gt;
The root account is accessed via the &amp;lt;code&amp;gt;su&amp;lt;/code&amp;gt; command with the root password.&lt;br /&gt;
&lt;br /&gt;
=== Boot environment manager ===&lt;br /&gt;
&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. Should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
=== Optional: Enable encrypted swap partition ===&lt;br /&gt;
&lt;br /&gt;
Install &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add cryptsetup&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and add &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; directly before &amp;lt;code&amp;gt;zfs&amp;lt;/code&amp;gt; in the features list. Add the relevant lines to &amp;lt;code&amp;gt;fstab&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;crypttab&amp;lt;/code&amp;gt;, replacing &amp;lt;code&amp;gt;$DISK&amp;lt;/code&amp;gt; with the actual disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo swap   $DISK-part4 /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
echo /dev/mapper/swap   none     swap    defaults    0   0 &amp;gt;&amp;gt; /etc/fstab&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;exit&lt;br /&gt;
zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
zfs snapshot -r bpool_$poolUUID/BOOT/default@install&amp;lt;/pre&amp;gt;&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
 xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
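The text processing in the unmount pipeline can be traced on simulated mount output (the paths below are hypothetical): non-zfs mounts under the target are listed deepest-first, so children are unmounted before their parents, while zfs mounts are left for zpool export to release.

```shell
# Simulate `mount` output for a target at /tmp/target and extract the
# non-zfs mount points in reverse (deepest-first) order, as the pipeline does.
MOUNTPOINT=/tmp/target
printf '%s\n' \
  "proc on $MOUNTPOINT/proc type proc (rw)" \
  "sysfs on $MOUNTPOINT/sys type sysfs (rw)" \
  "rpool/ROOT on $MOUNTPOINT type zfs (rw)" |
  grep -v zfs | tac | grep "$MOUNTPOINT" | awk '{print $3}'
# prints /tmp/target/sys then /tmp/target/proc
```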
= Reboot =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;reboot&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
&lt;br /&gt;
Boot the Live environment (extended release) and repeat [[#Preparation|Preparation]].&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&lt;br /&gt;
ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import rpool without mounting datasets: &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; for not mounting all datasets; &amp;lt;code&amp;gt;-R&amp;lt;/code&amp;gt; for alternate root.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=abc123&lt;br /&gt;
zpool import -N -R $MOUNTPOINT rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Load encryption key&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zfs load-key -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
As &amp;lt;code&amp;gt;canmount=noauto&amp;lt;/code&amp;gt; is set for &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list rpool_$poolUUID/ROOT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount rpool_$poolUUID/ROOT/$dataset&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import bpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import -N -R $MOUNTPOINT bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find and mount the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset, same as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint&amp;lt;/pre&amp;gt;&lt;br /&gt;
Chroot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
chroot $MOUNTPOINT /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
After chroot, mount &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18509</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18509"/>
		<updated>2021-01-07T00:02:23Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Initramfs */ fix link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt;, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit the &amp;lt;code&amp;gt;-O encryption -O keylocation -O keyformat&amp;lt;/code&amp;gt; options when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
&lt;br /&gt;
UEFI is required. Supports single disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preparation =&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as only this version ships with the ZFS kernel module; the live environment can not load kernel modules that the release does not include.&lt;br /&gt;
&lt;br /&gt;
Run the following command to set up the live environment; choose the default &amp;lt;code&amp;gt;none&amp;lt;/code&amp;gt; option when asked about disks.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;setup-alpine&amp;lt;/pre&amp;gt;&lt;br /&gt;
Settings given here will be copied to the target system later by &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
&lt;br /&gt;
Install and set up &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; (a standalone port of systemd&#039;s &amp;lt;code&amp;gt;udev&amp;lt;/code&amp;gt; maintained by Gentoo) to get persistent block device names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk update&lt;br /&gt;
apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
modprobe zfs&lt;br /&gt;
setup-udev&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Variables =&lt;br /&gt;
&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;DISK=/dev/disk/by-id/ata-HXY_120G_YS&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the unique disk path instead of &amp;lt;code&amp;gt;/dev/sda&amp;lt;/code&amp;gt; so that ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
TARGET_USERPWD=&#039;user account password&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Partitioning =&lt;br /&gt;
&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions: an EFI system partition, a boot pool partition and a root pool partition. Since GRUB only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk --zap-all $DISK&lt;br /&gt;
sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
sgdisk -n3:0:0 $DISK          # root pool&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
&lt;br /&gt;
Swap support on ZFS is problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It can not be used for hibernation, since the encryption key is discarded on power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
sgdisk -n4:0:0 $DISK          # swap partition&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Create boot and root pool =&lt;br /&gt;
&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no &amp;lt;code&amp;gt;feature@&amp;lt;/code&amp;gt; is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  -o ashift=12 -d \&lt;br /&gt;
  -o feature@async_destroy=enabled \&lt;br /&gt;
  -o feature@bookmarks=enabled \&lt;br /&gt;
  -o feature@embedded_data=enabled \&lt;br /&gt;
  -o feature@empty_bpobj=enabled \&lt;br /&gt;
  -o feature@enabled_txg=enabled \&lt;br /&gt;
  -o feature@extensible_dataset=enabled \&lt;br /&gt;
  -o feature@filesystem_limits=enabled \&lt;br /&gt;
  -o feature@hole_birth=enabled \&lt;br /&gt;
  -o feature@large_blocks=enabled \&lt;br /&gt;
  -o feature@lz4_compress=enabled \&lt;br /&gt;
  -o feature@spacemap_histogram=enabled \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
  -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
  bpool_$poolUUID $DISK-part2&amp;lt;/pre&amp;gt;&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence &amp;lt;code&amp;gt;canmount=off&amp;lt;/code&amp;gt;. The respective &amp;lt;code&amp;gt;mountpoint&amp;lt;/code&amp;gt; properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
  -o ashift=12 \&lt;br /&gt;
  -O encryption=aes-256-gcm \&lt;br /&gt;
  -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
  -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
  rpool_$poolUUID $DISK-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
== For multi-disk ==&lt;br /&gt;
&lt;br /&gt;
For mirror:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  bpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  rpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
For RAID-Z, replace &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;raidz&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;raidz2&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;raidz3&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Create system datasets =&lt;br /&gt;
&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout for a description.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
# ash, default with busybox, does not support array&lt;br /&gt;
# this is word splitting&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&amp;lt;/pre&amp;gt;&lt;br /&gt;
Depending on your applications, separate datasets may need to be created for folders inside &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; (not for &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; itself!).&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;lxc&amp;lt;/code&amp;gt; is for Linux containers, &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; is for storing virtual machine images, and so on.&lt;br /&gt;
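Because BusyBox &amp;lt;code&amp;gt;ash&amp;lt;/code&amp;gt; has no arrays, the loops above rely on word splitting. A harmless dry-run of the same pattern, with a stand-in for &amp;lt;code&amp;gt;zfs create&amp;lt;/code&amp;gt; and an example &amp;lt;code&amp;gt;poolUUID&amp;lt;/code&amp;gt;, previews which datasets would be created:&lt;br /&gt;

```shell
# Dry-run sketch: substitute an echo for 'zfs create' to preview the
# dataset names the loops generate (poolUUID here is just an example).
poolUUID=abc123
zfs_create() { echo "would create: $1"; }   # stand-in, touches no pool
d='usr var var/lib'
out=$(for i in $d; do zfs_create "rpool_$poolUUID/ROOT/default/$i"; done)
echo "$out"
```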
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
&lt;br /&gt;
Here we use &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt; as the mountpoint, which is the default for GRUB.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi # need to specify file system&amp;lt;/pre&amp;gt;&lt;br /&gt;
= System installation =&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
GRUB will not find the correct path of the root device unless &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH=1&amp;lt;/code&amp;gt; is set.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;export ZPOOL_VDEV_NAME_PATH=1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&amp;lt;/pre&amp;gt;&lt;br /&gt;
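To see what the &amp;lt;code&amp;gt;sed&amp;lt;/code&amp;gt; one-liner does without touching &amp;lt;code&amp;gt;/sbin/setup-disk&amp;lt;/code&amp;gt;, apply it to a sample of the relevant line (the filesystem list shown is illustrative, not the exact contents of the script):&lt;br /&gt;

```shell
# Preview the substitution on a mock copy of the setup-disk line;
# the real script is left untouched.
line='supported="ext4 ext3 ext2 btrfs xfs vfat"'
patched=$(echo "$line" | sed 's|supported="ext|supported="zfs ext|g')
echo "$patched"
```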
== setup-disk ==&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; to install system to target disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that grub-probe will still fail despite the &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH&amp;lt;/code&amp;gt; variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
== Chroot ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;m=&#039;dev proc sys&#039;&lt;br /&gt;
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Finish GRUB installation ===&lt;br /&gt;
&lt;br /&gt;
As the GRUB installation failed half-way in [[#setup-disk|setup-disk]], we finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply fix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
Reload&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;source /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== GRUB fails to detect the ZFS filesystem of /boot with BusyBox stat ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add coreutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Missing root pool ====&lt;br /&gt;
&lt;br /&gt;
GRUB will fail to detect rpool if it has unsupported features; use the following workaround:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &amp;quot;s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E &#039;[[:blank:]]name&#039; \| cut -d\\\&#039; -f 2\`|&amp;quot;  /etc/grub.d/10_linux&amp;lt;/pre&amp;gt;&lt;br /&gt;
This replaces GRUB rpool name detection.&lt;br /&gt;
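The pipeline this injects can be exercised in isolation; here it is fed mock &amp;lt;code&amp;gt;zdb -l&amp;lt;/code&amp;gt; output (abridged and fabricated) instead of a real device:&lt;br /&gt;

```shell
# Sketch of the rpool-name extraction the workaround installs into
# /etc/grub.d/10_linux, driven by mock 'zdb -l' output.
mock_zdb() {
cat <<'EOF'
    version: 5000
    name: 'rpool_abc123'
    state: 0
EOF
}
rpool=$(mock_zdb | grep -E '[[:blank:]]name' | cut -d\' -f 2)
echo "$rpool"
```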
&lt;br /&gt;
==== Generate grub.cfg ====&lt;br /&gt;
&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;grub-mkconfig -o /boot/grub/grub.cfg&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Importing pools on boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;zpool.cache&amp;lt;/code&amp;gt; will be added to the initramfs, and the &amp;lt;code&amp;gt;zpool&amp;lt;/code&amp;gt; command will import the pools contained in this cache at boot.&lt;br /&gt;
&lt;br /&gt;
System will fail to boot without this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Initramfs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; included in stable Alpine Linux has bugs; until [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 1] and [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76 2] are merged, we need to patch it manually.&lt;br /&gt;
&lt;br /&gt;
==== Patch ====&lt;br /&gt;
&lt;br /&gt;
Ensure the &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; version is the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:/# apk info mkinitfs&lt;br /&gt;
mkinitfs-3.4.5-r3 description:&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then download [https://g.nu8.org/posts/bieaz/setup/alpine/guide/patch/eudev-zfs-mkinitfs-3.4.5.patch eudev-zfs-mkinitfs-3.4.5.patch], install &amp;lt;code&amp;gt;patch&amp;lt;/code&amp;gt; and apply it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:~# wget https://g.nu8.org/path-to-patch&lt;br /&gt;
foolive:~# apk add patch&lt;br /&gt;
foolive:~# cd / # must apply patch at root&lt;br /&gt;
foolive:/# patch -Np1 -i /root/eudev-zfs-mkinitfs-3.4.5.patch &lt;br /&gt;
patching file etc/mkinitfs/features.d/eudev.files&lt;br /&gt;
patching file etc/mkinitfs/features.d/zfs.files&lt;br /&gt;
patching file usr/share/mkinitfs/initramfs-init&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Add eudev hook and rebuild ====&lt;br /&gt;
&lt;br /&gt;
Add &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;features=&amp;quot;ata base eudev ide scsi usb virtio nvme zfs&amp;quot;&#039; &amp;gt; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
# order of features is important! this order is tested&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkinitfs $(ls -1 /lib/modules/)&amp;lt;/pre&amp;gt;&lt;br /&gt;
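&amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; takes the kernel version as its argument, and &amp;lt;code&amp;gt;ls -1 /lib/modules/&amp;lt;/code&amp;gt; supplies it, assuming exactly one installed kernel. A sketch on a mock directory tree shows the expansion (with several kernels installed the command would pass several words, so pick one explicitly in that case):&lt;br /&gt;

```shell
# Demonstrate how the kernel-version argument is derived, using a
# temporary mock of /lib/modules/ with a single (made-up) kernel.
mock=$(mktemp -d)
mkdir "$mock/5.10.0-lts"
kver=$(ls -1 "$mock")
echo "$kver"
rm -rf "$mock"
```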
&lt;br /&gt;
=== Mount datasets at boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;rc-update add zfs-mount sysinit&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mounting the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset via fstab requires &amp;lt;code&amp;gt;mountpoint=legacy&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;umount /boot/efi&lt;br /&gt;
zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
mount /boot&lt;br /&gt;
mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Add user ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;adduser -s /bin/sh -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&amp;lt;/pre&amp;gt;&lt;br /&gt;
The root account is accessed via the &amp;lt;code&amp;gt;su&amp;lt;/code&amp;gt; command with the root password.&lt;br /&gt;
&lt;br /&gt;
=== Boot environment manager ===&lt;br /&gt;
&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. Should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
=== Optional: Enable encrypted swap partition ===&lt;br /&gt;
&lt;br /&gt;
Install &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add cryptsetup&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and add the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature in front of &amp;lt;code&amp;gt;zfs&amp;lt;/code&amp;gt;. Then add the relevant lines to &amp;lt;code&amp;gt;fstab&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;crypttab&amp;lt;/code&amp;gt;, replacing &amp;lt;code&amp;gt;$DISK&amp;lt;/code&amp;gt; with the actual disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo swap   $DISK-part4 /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
echo /dev/mapper/swap   none     swap    defaults    0   0 &amp;gt;&amp;gt; /etc/fstab&amp;lt;/pre&amp;gt;&lt;br /&gt;
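A &amp;lt;code&amp;gt;crypttab&amp;lt;/code&amp;gt; entry has four whitespace-separated fields (name, device, key, options). A sketch that writes the same line to a temporary file, with a placeholder disk path, makes that easy to verify:&lt;br /&gt;

```shell
# Build the crypttab line into a temp file instead of /etc/crypttab;
# $DISK is a placeholder, not a real device.
DISK=/dev/disk/by-id/ata-EXAMPLE
tmp=$(mktemp)
echo swap   $DISK-part4 /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256 >> "$tmp"
fields=$(awk '{print NF}' "$tmp")   # count whitespace-separated fields
echo "$fields"
rm -f "$tmp"
```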
Rebuild initramfs with &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;exit&lt;br /&gt;
zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
zfs snapshot -r bpool_$poolUUID/BOOT/default@install&amp;lt;/pre&amp;gt;&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
 xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
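The unmount pipeline above can be dry-run against mock &amp;lt;code&amp;gt;mount&amp;lt;/code&amp;gt; output: &amp;lt;code&amp;gt;tac&amp;lt;/code&amp;gt; reverses the listing so mounts created later (nested deeper) are unmounted first. The mount lines below are fabricated for illustration:&lt;br /&gt;

```shell
# Dry-run of the umount pipeline with fabricated 'mount' output;
# only the path-extraction logic is exercised, nothing is unmounted.
MOUNTPOINT=/tmp/target
mock_mount() {
cat <<EOF
proc on /proc type proc (rw)
devtmpfs on $MOUNTPOINT/dev type devtmpfs (rw)
/dev/sda1 on $MOUNTPOINT/boot/efi type vfat (rw)
EOF
}
targets=$(mock_mount | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}')
echo "$targets"
```

Note how &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt;, mounted last, comes out first.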
= Reboot =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;reboot&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
&lt;br /&gt;
Boot Live environment (extended release) and repeat [[#preparation|Preparation]]&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&lt;br /&gt;
ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import rpool without mounting datasets: &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; for not mounting all datasets; &amp;lt;code&amp;gt;-R&amp;lt;/code&amp;gt; for alternate root.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=abc123&lt;br /&gt;
zpool import -N -R $MOUNTPOINT rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Load encryption key&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zfs load-key -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
As &amp;lt;code&amp;gt;canmount=noauto&amp;lt;/code&amp;gt; is set for the &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list rpool_$poolUUID/ROOT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount rpool_$poolUUID/ROOT/$dataset&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import bpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import -N -R $MOUNTPOINT bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find and mount the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset, same as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint&amp;lt;/pre&amp;gt;&lt;br /&gt;
Chroot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
chroot $MOUNTPOINT /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
After chroot, mount &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18508</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18508"/>
		<updated>2021-01-07T00:00:44Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Missing root pool */ fix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool (&amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt;), everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit the &amp;lt;code&amp;gt;-O encryption -O keylocation -O keyformat&amp;lt;/code&amp;gt; options when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
&lt;br /&gt;
UEFI is required. Single-disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preparation =&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;&#039;&#039;extended&#039;&#039;&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as only this version ships with the ZFS kernel module; Alpine Linux cannot load additional kernel modules in the live environment.&lt;br /&gt;
&lt;br /&gt;
Run the following command to set up the live environment; choose the default &amp;lt;code&amp;gt;none&amp;lt;/code&amp;gt; option when asked about disks.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;setup-alpine&amp;lt;/pre&amp;gt;&lt;br /&gt;
Settings given here will be copied to the target system later by &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
&lt;br /&gt;
Install and set up &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; (a standalone port of systemd&#039;s &amp;lt;code&amp;gt;udev&amp;lt;/code&amp;gt; by Gentoo) to get predictable block device names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk update&lt;br /&gt;
apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
modprobe zfs&lt;br /&gt;
setup-udev&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Variables =&lt;br /&gt;
&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;DISK=/dev/disk/by-id/ata-HXY_120G_YS&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the unique disk path instead of &amp;lt;code&amp;gt;/dev/sda&amp;lt;/code&amp;gt; to ensure ZFS always finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
TARGET_USERPWD=&#039;user account password&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&amp;lt;/pre&amp;gt;&lt;br /&gt;
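The suffix generation above filters random bytes down to lowercase alphanumerics. A variant wrapped in a small helper (the name &amp;lt;code&amp;gt;gen_suffix&amp;lt;/code&amp;gt; is hypothetical) that retries until six characters have accumulated makes the behaviour easy to check:&lt;br /&gt;

```shell
# Assumed helper: loop until at least 6 filtered characters have
# accumulated, then keep the first 6 (a single read nearly always
# suffices; the loop just makes it deterministic).
gen_suffix() {
  s=''
  while [ ${#s} -lt 6 ]; do
    s="$s$(dd if=/dev/urandom bs=32 count=1 2>/dev/null | tr -dc 'a-z0-9')"
  done
  printf '%s' "$s" | cut -c-6
}
poolUUID=$(gen_suffix)
echo "$poolUUID"
```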
= Partitioning =&lt;br /&gt;
&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions: an EFI system partition, a boot pool partition and a root pool partition. Since GRUB only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk --zap-all $DISK&lt;br /&gt;
sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
sgdisk -n3:0:0 $DISK          # root pool&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Swap&amp;lt;/code&amp;gt; support on ZFS is problematic, so it is recommended to create a separate swap partition if needed. This guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
sgdisk -n4:0:0 $DISK          # swap partition&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Create boot and root pool =&lt;br /&gt;
&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no &amp;lt;code&amp;gt;feature@&amp;lt;/code&amp;gt; is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only the features GRUB supports.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  -o ashift=12 -d \&lt;br /&gt;
  -o feature@async_destroy=enabled \&lt;br /&gt;
  -o feature@bookmarks=enabled \&lt;br /&gt;
  -o feature@embedded_data=enabled \&lt;br /&gt;
  -o feature@empty_bpobj=enabled \&lt;br /&gt;
  -o feature@enabled_txg=enabled \&lt;br /&gt;
  -o feature@extensible_dataset=enabled \&lt;br /&gt;
  -o feature@filesystem_limits=enabled \&lt;br /&gt;
  -o feature@hole_birth=enabled \&lt;br /&gt;
  -o feature@large_blocks=enabled \&lt;br /&gt;
  -o feature@lz4_compress=enabled \&lt;br /&gt;
  -o feature@spacemap_histogram=enabled \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
  -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
  bpool_$poolUUID $DISK-part2&amp;lt;/pre&amp;gt;&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence &amp;lt;code&amp;gt;canmount=off&amp;lt;/code&amp;gt;. The respective &amp;lt;code&amp;gt;mountpoint&amp;lt;/code&amp;gt; properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
  -o ashift=12 \&lt;br /&gt;
  -O encryption=aes-256-gcm \&lt;br /&gt;
  -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
  -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
  rpool_$poolUUID $DISK-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
== For multi-disk ==&lt;br /&gt;
&lt;br /&gt;
For mirror:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  bpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  rpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
For RAID-Z, replace &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;raidz&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;raidz2&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;raidz3&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Create system datasets =&lt;br /&gt;
&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout for a description.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
# ash, default with busybox, does not support array&lt;br /&gt;
# this is word splitting&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&amp;lt;/pre&amp;gt;&lt;br /&gt;
Depending on your applications, separate datasets may need to be created for folders inside &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; (not for &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; itself!).&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;lxc&amp;lt;/code&amp;gt; is for Linux containers, &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; is for storing virtual machine images, and so on.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
&lt;br /&gt;
Here we use &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt; as the mountpoint, which is the default for GRUB.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi # need to specify file system&amp;lt;/pre&amp;gt;&lt;br /&gt;
= System installation =&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
GRUB will not find the correct path of the root device unless &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH=1&amp;lt;/code&amp;gt; is set.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;export ZPOOL_VDEV_NAME_PATH=1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&amp;lt;/pre&amp;gt;&lt;br /&gt;
== setup-disk ==&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; to install system to target disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that grub-probe will still fail despite the &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH&amp;lt;/code&amp;gt; variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
== Chroot ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;m=&#039;dev proc sys&#039;&lt;br /&gt;
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Finish GRUB installation ===&lt;br /&gt;
&lt;br /&gt;
As the GRUB installation failed half-way in [[#setup-disk|setup-disk]], we finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply fix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
Reload&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;source /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== GRUB fails to detect the ZFS filesystem of /boot with BusyBox stat ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add coreutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Missing root pool ====&lt;br /&gt;
&lt;br /&gt;
GRUB will fail to detect rpool if it has unsupported features; use the following workaround:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &amp;quot;s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E &#039;[[:blank:]]name&#039; \| cut -d\\\&#039; -f 2\`|&amp;quot;  /etc/grub.d/10_linux&amp;lt;/pre&amp;gt;&lt;br /&gt;
This replaces GRUB rpool name detection.&lt;br /&gt;
&lt;br /&gt;
==== Generate grub.cfg ====&lt;br /&gt;
&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;grub-mkconfig -o /boot/grub/grub.cfg&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Importing pools on boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;zpool.cache&amp;lt;/code&amp;gt; will be added to the initramfs, and the &amp;lt;code&amp;gt;zpool&amp;lt;/code&amp;gt; command will import the pools contained in this cache at boot.&lt;br /&gt;
&lt;br /&gt;
System will fail to boot without this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Initramfs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; included in stable Alpine Linux has bugs; until [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 1] and [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76 2] are merged, we need to patch it manually.&lt;br /&gt;
&lt;br /&gt;
==== Patch ====&lt;br /&gt;
&lt;br /&gt;
Ensure the &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; version is the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:/# apk info mkinitfs&lt;br /&gt;
mkinitfs-3.4.5-r3 description:&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then download [[patch/eudev-zfs-mkinitfs-3.4.5.patch|eudev-zfs-mkinitfs-3.4.5.patch]], install &amp;lt;code&amp;gt;patch&amp;lt;/code&amp;gt; and patch it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:~# wget https://g.nu8.org/path-to-patch&lt;br /&gt;
foolive:~# apk add patch&lt;br /&gt;
foolive:~# cd / # must apply patch at root&lt;br /&gt;
foolive:/# patch -Np1 -i /root/eudev-zfs-mkinitfs-3.4.5.patch &lt;br /&gt;
patching file etc/mkinitfs/features.d/eudev.files&lt;br /&gt;
patching file etc/mkinitfs/features.d/zfs.files&lt;br /&gt;
patching file usr/share/mkinitfs/initramfs-init&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Add eudev hook and rebuild ====&lt;br /&gt;
&lt;br /&gt;
Add &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;features=&amp;quot;ata base eudev ide scsi usb virtio nvme zfs&amp;quot;&#039; &amp;gt; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
# order of features is important! this order is tested&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkinitfs $(ls -1 /lib/modules/)&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Mount datasets at boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;rc-update add zfs-mount sysinit&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mounting the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset via fstab requires &amp;lt;code&amp;gt;mountpoint=legacy&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;umount /boot/efi&lt;br /&gt;
zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
mount /boot&lt;br /&gt;
mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Add user ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;adduser -s /bin/sh -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&amp;lt;/pre&amp;gt;&lt;br /&gt;
The root account is accessed via the &amp;lt;code&amp;gt;su&amp;lt;/code&amp;gt; command with the root password.&lt;br /&gt;
&lt;br /&gt;
=== Boot environment manager ===&lt;br /&gt;
&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. Should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
=== Optional: Enable encrypted swap partition ===&lt;br /&gt;
&lt;br /&gt;
Install &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add cryptsetup&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and add the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature in front of &amp;lt;code&amp;gt;zfs&amp;lt;/code&amp;gt;. Then add the relevant lines to &amp;lt;code&amp;gt;fstab&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;crypttab&amp;lt;/code&amp;gt;, replacing &amp;lt;code&amp;gt;$DISK&amp;lt;/code&amp;gt; with the actual disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo swap   $DISK-part4 /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
echo /dev/mapper/swap   none     swap    defaults    0   0 &amp;gt;&amp;gt; /etc/fstab&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;exit&lt;br /&gt;
zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
zfs snapshot -r bpool_$poolUUID/BOOT/default@install&amp;lt;/pre&amp;gt;&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
 xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
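The unmount pipeline above can be illustrated on fabricated mount output; the /tmp/target paths below are hypothetical stand-ins for real mounts under $MOUNTPOINT:

```shell
# Demonstration of the unmount pipeline on fabricated mount(8) output;
# tac reverses the list so nested mounts are unmounted before their parents.
# In the real command, the resulting paths are fed to `xargs ... umount -lf`.
MOUNTPOINT=/tmp/target
paths=$(printf '%s\n' \
  'proc on /tmp/target/proc type proc (rw)' \
  'dev on /tmp/target/dev type devtmpfs (rw)' \
  'devpts on /tmp/target/dev/pts type devpts (rw)' \
  | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}')
echo "$paths"
```

Note how /tmp/target/dev/pts comes out before /tmp/target/dev: unmounting in reverse mount order avoids "target is busy" failures on nested mounts.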
= Reboot =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;reboot&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
&lt;br /&gt;
Boot Live environment (extended release) and repeat [[#preparation|Preparation]]&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&lt;br /&gt;
ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import rpool without mounting datasets: &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; for not mounting all datasets; &amp;lt;code&amp;gt;-R&amp;lt;/code&amp;gt; for alternate root.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=abc123&lt;br /&gt;
zpool import -N -R $MOUNTPOINT rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Load encryption key&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zfs load-key -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
As &amp;lt;code&amp;gt;canmount=noauto&amp;lt;/code&amp;gt; is set for &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list rpool_$poolUUID/ROOT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount rpool_$poolUUID/ROOT/$dataset&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import bpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import -N -R $MOUNTPOINT bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find and mount the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset, same as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint&amp;lt;/pre&amp;gt;&lt;br /&gt;
Chroot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
chroot $MOUNTPOINT /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
After chroot, mount &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18507</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18507"/>
		<updated>2021-01-06T23:59:38Z</updated>

		<summary type="html">&lt;p&gt;R3: update&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt;, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit &amp;lt;code&amp;gt;-O keylocation -O keyformat&amp;lt;/code&amp;gt; when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preparation =&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as only this release ships with the ZFS kernel module; a live system cannot install additional kernel modules.&lt;br /&gt;
&lt;br /&gt;
Run the following command to set up the live environment; choose the default &amp;lt;code&amp;gt;none&amp;lt;/code&amp;gt; option when asked which disk to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;setup-alpine&amp;lt;/pre&amp;gt;&lt;br /&gt;
Settings given here will be copied to the target system later by &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
&lt;br /&gt;
Install and set up &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; (a port of systemd&#039;s &amp;lt;code&amp;gt;udev&amp;lt;/code&amp;gt; by Gentoo) to get persistent block device names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk update&lt;br /&gt;
apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
modprobe zfs&lt;br /&gt;
setup-udev&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Variables =&lt;br /&gt;
&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;DISK=/dev/disk/by-id/ata-HXY_120G_YS&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use unique disk path instead of &amp;lt;code&amp;gt;/dev/sda&amp;lt;/code&amp;gt; to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
TARGET_USERPWD=&#039;user account password&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&amp;lt;/pre&amp;gt;&lt;br /&gt;
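To see what the suffix pipeline does, the same filter can be run on a fixed sample string instead of /dev/urandom (the input string here is arbitrary):

```shell
# Same tr/cut filter as the poolUUID command above, on fixed input so the
# result is reproducible: keep only lowercase letters and digits, take the
# first 6 characters.
suffix=$(printf 'Hello, ZFS! 1234567890abcdef' | tr -dc 'a-z0-9' | cut -c-6)
echo "$suffix"
```

With /dev/urandom as input, this yields a random 6-character suffix such as `x7k2m9`, giving pools unique names like `rpool_x7k2m9`.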
= Partitioning =&lt;br /&gt;
&lt;br /&gt;
For a single-disk, UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since GRUB only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk --zap-all $DISK&lt;br /&gt;
sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
sgdisk -n3:0:0 $DISK          # root pool&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;Swap&amp;lt;/code&amp;gt; support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
sgdisk -n4:0:0 $DISK          # swap partition&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Create boot and root pool =&lt;br /&gt;
&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no &amp;lt;code&amp;gt;feature@&amp;lt;/code&amp;gt; is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those that GRUB supports.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  -o ashift=12 -d \&lt;br /&gt;
  -o feature@async_destroy=enabled \&lt;br /&gt;
  -o feature@bookmarks=enabled \&lt;br /&gt;
  -o feature@embedded_data=enabled \&lt;br /&gt;
  -o feature@empty_bpobj=enabled \&lt;br /&gt;
  -o feature@enabled_txg=enabled \&lt;br /&gt;
  -o feature@extensible_dataset=enabled \&lt;br /&gt;
  -o feature@filesystem_limits=enabled \&lt;br /&gt;
  -o feature@hole_birth=enabled \&lt;br /&gt;
  -o feature@large_blocks=enabled \&lt;br /&gt;
  -o feature@lz4_compress=enabled \&lt;br /&gt;
  -o feature@spacemap_histogram=enabled \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
  -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
  bpool_$poolUUID $DISK-part2&amp;lt;/pre&amp;gt;&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence &amp;lt;code&amp;gt;canmount=off&amp;lt;/code&amp;gt;. The respective &amp;lt;code&amp;gt;mountpoint&amp;lt;/code&amp;gt; properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
  -o ashift=12 \&lt;br /&gt;
  -O encryption=aes-256-gcm \&lt;br /&gt;
  -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
  -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
  -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
  -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
  rpool_$poolUUID $DISK-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
== For multi-disk ==&lt;br /&gt;
&lt;br /&gt;
For mirror:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  bpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
zpool create \&lt;br /&gt;
  ... \&lt;br /&gt;
  rpool_$poolUUID mirror \&lt;br /&gt;
  /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
  /dev/disk/by-id/target_disk2-part3&amp;lt;/pre&amp;gt;&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Create system datasets =&lt;br /&gt;
&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout for a description.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
# ash, default with busybox, does not support array&lt;br /&gt;
# this is word splitting&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&amp;lt;/pre&amp;gt;&lt;br /&gt;
Depending on your applications, separate datasets need to be created for folders inside &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; (not &amp;lt;code&amp;gt;/var/lib&amp;lt;/code&amp;gt; itself!).&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for &amp;lt;code&amp;gt;/home&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;lxc&amp;lt;/code&amp;gt; is for Linux containers, &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
&lt;br /&gt;
Here we use &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt; as the mountpoint, which is default for GRUB.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi # need to specify file system&amp;lt;/pre&amp;gt;&lt;br /&gt;
= System installation =&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
GRUB will not find the correct path of the root device without &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH=1&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;export ZPOOL_VDEV_NAME_PATH=1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&amp;lt;/pre&amp;gt;&lt;br /&gt;
== setup-disk ==&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; to install system to target disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that grub-probe will still fail despite the &amp;lt;code&amp;gt;ZPOOL_VDEV_NAME_PATH&amp;lt;/code&amp;gt; variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
== Chroot ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;m=&#039;dev proc sys&#039;&lt;br /&gt;
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Finish GRUB installation ===&lt;br /&gt;
&lt;br /&gt;
As the GRUB installation failed half-way through [[#setup-disk|setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply fix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
Reload&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;source /etc/profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== GRUB fails to detect the ZFS filesystem of /boot with BusyBox stat ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add coreutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Missing root pool ====&lt;br /&gt;
&lt;br /&gt;
Until [https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html this patch] is merged, use the following workaround:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sed -i &amp;quot;s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E &#039;[[:blank:]]name&#039; \| cut -d\\\&#039; -f 2\`|&amp;quot;  /etc/grub.d/10_linux&amp;lt;/pre&amp;gt;&lt;br /&gt;
This replaces GRUB rpool name detection.&lt;br /&gt;
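The extraction the rewritten rpool= line performs can be checked against a sample `zdb -l` label; the pool name rpool_ab12cd and the label fields shown are hypothetical:

```shell
# Grep the "name" field out of sample zdb -l label output and cut out the
# quoted pool name, the same way the patched 10_linux line does.
label="    version: 5000
    name: 'rpool_ab12cd'
    state: 0"
name=$(printf '%s\n' "$label" | grep -E '[[:blank:]]name' | cut -d"'" -f2)
echo "$name"
```

This way GRUB learns the pool name straight from the on-disk label instead of relying on its own (broken) detection.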
&lt;br /&gt;
==== Generate grub.cfg ====&lt;br /&gt;
&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;grub-mkconfig -o /boot/grub/grub.cfg&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Importing pools on boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;zpool.cache&amp;lt;/code&amp;gt; will be added to the initramfs, and the &amp;lt;code&amp;gt;zpool&amp;lt;/code&amp;gt; command will import the pools contained in this cache.&lt;br /&gt;
&lt;br /&gt;
System will fail to boot without this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Initramfs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; included in stable Alpine Linux has bugs; until [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 1] and [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76 2] are merged, we need to patch it manually.&lt;br /&gt;
&lt;br /&gt;
==== Patch ====&lt;br /&gt;
&lt;br /&gt;
Ensure the &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt; version is the following&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:/# apk info mkinitfs&lt;br /&gt;
mkinitfs-3.4.5-r3 description:&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then download [[patch/eudev-zfs-mkinitfs-3.4.5.patch|eudev-zfs-mkinitfs-3.4.5.patch]], install &amp;lt;code&amp;gt;patch&amp;lt;/code&amp;gt; and apply it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;foolive:~# wget https://g.nu8.org/path-to-patch&lt;br /&gt;
foolive:~# apk add patch&lt;br /&gt;
foolive:~# cd / # must apply patch at root&lt;br /&gt;
foolive:/# patch -Np1 -i /root/eudev-zfs-mkinitfs-3.4.5.patch &lt;br /&gt;
patching file etc/mkinitfs/features.d/eudev.files&lt;br /&gt;
patching file etc/mkinitfs/features.d/zfs.files&lt;br /&gt;
patching file usr/share/mkinitfs/initramfs-init&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Add eudev hook and rebuild ====&lt;br /&gt;
&lt;br /&gt;
Add &amp;lt;code&amp;gt;eudev&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;features=&amp;quot;ata base eudev ide scsi usb virtio nvme zfs&amp;quot;&#039; &amp;gt; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
# order of features is important! this order is tested&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkinitfs $(ls -1 /lib/modules/)&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Mount datasets at boot ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;rc-update add zfs-mount sysinit&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mounting the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset via fstab requires &amp;lt;code&amp;gt;mountpoint=legacy&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;umount /boot/efi&lt;br /&gt;
zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
mount /boot&lt;br /&gt;
mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Add user ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;adduser -s /bin/sh -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&amp;lt;/pre&amp;gt;&lt;br /&gt;
The root account is accessed via the &amp;lt;code&amp;gt;su&amp;lt;/code&amp;gt; command with the root password.&lt;br /&gt;
&lt;br /&gt;
=== Boot environment manager ===&lt;br /&gt;
&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. Should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
=== Optional: Enable encrypted swap partition ===&lt;br /&gt;
&lt;br /&gt;
Install &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;apk add cryptsetup&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and add the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature in front of &amp;lt;code&amp;gt;zfs&amp;lt;/code&amp;gt;. Add the relevant lines to &amp;lt;code&amp;gt;fstab&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;crypttab&amp;lt;/code&amp;gt;, replacing &amp;lt;code&amp;gt;$DISK&amp;lt;/code&amp;gt; with the actual disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo swap   $DISK-part4 /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
echo /dev/mapper/swap   none     swap    defaults    0   0 &amp;gt;&amp;gt; /etc/fstab&amp;lt;/pre&amp;gt;&lt;br /&gt;
Rebuild initramfs with &amp;lt;code&amp;gt;mkinitfs&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;exit&lt;br /&gt;
zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
zfs snapshot -r bpool_$poolUUID/BOOT/default@install&amp;lt;/pre&amp;gt;&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
 xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;reboot&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
&lt;br /&gt;
Boot Live environment (extended release) and repeat [[#preparation|Preparation]]&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MOUNTPOINT=`mktemp -d`&lt;br /&gt;
ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import rpool without mounting datasets: &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; for not mounting all datasets; &amp;lt;code&amp;gt;-R&amp;lt;/code&amp;gt; for alternate root.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;poolUUID=abc123&lt;br /&gt;
zpool import -N -R $MOUNTPOINT rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Load encryption key&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo $ENCRYPTION_PWD | zfs load-key -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
As &amp;lt;code&amp;gt;canmount=noauto&amp;lt;/code&amp;gt; is set for &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list rpool_$poolUUID/ROOT&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; dataset&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount rpool_$poolUUID/ROOT/$dataset&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs mount -a&amp;lt;/pre&amp;gt;&lt;br /&gt;
Import bpool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool import -N -R $MOUNTPOINT bpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;br /&gt;
Find and mount the &amp;lt;code&amp;gt;/boot&amp;lt;/code&amp;gt; dataset, same as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint&amp;lt;/pre&amp;gt;&lt;br /&gt;
Chroot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
chroot $MOUNTPOINT /bin/sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
After chroot, mount &amp;lt;code&amp;gt;/boot/efi&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount /boot/efi&amp;lt;/pre&amp;gt;&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
xargs -i{} umount -lf {}&lt;br /&gt;
zpool export bpool_$poolUUID&lt;br /&gt;
zpool export rpool_$poolUUID&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18506</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18506"/>
		<updated>2021-01-06T23:49:20Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Missing root pool */ update&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Useful links =&lt;br /&gt;
*[https://openzfs.github.io/openzfs-docs/Getting%20Started/ OpenZFS Getting Started]&lt;br /&gt;
*[https://g.nu8.org/posts/bieaz/setup/alpine/guide/ Encrypted ZFS with boot environment support]&lt;br /&gt;
&lt;br /&gt;
= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via a systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password there. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB support of ZFS is read-only, you will need to boot into a live environment to unset this property if the {{ic|bootfs}} dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more info, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to setup the live environment, select default option {{ic|1=disk=none}} at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk, UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those that GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
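For example, a three-disk RAID-Z root pool would be created like this (the disk names are placeholders, following the pattern above):&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID raidz \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk3-part3&lt;br /&gt;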
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
Depending on your application, separate datasets need to be created for folders inside {{ic|/var/lib}} (not for {{ic|/var/lib}} itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for {{ic|/home}}.&lt;br /&gt;
 d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
 for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&lt;br /&gt;
{{ic|lxc}} is for Linux containers, {{ic|libvirt}} is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
For a multi-disk setup, a cron job needs to be configured to sync the ESP contents, similar to [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Multi-ESP this article].&lt;br /&gt;
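As a rough sketch (assuming a hypothetical second ESP mounted at {{ic|/boot/efi2}}, which this guide does not create), a daily cron job could sync the two:&lt;br /&gt;
 apk add rsync&lt;br /&gt;
 printf &#039;#!/bin/sh\nrsync -a --delete /boot/efi/ /boot/efi2/\n&#039; &amp;gt; /etc/periodic/daily/sync-esp&lt;br /&gt;
 chmod +x /etc/periodic/daily/sync-esp&lt;br /&gt;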
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend patching {{ic|/etc/grub.d/10_linux}} as follows:&lt;br /&gt;
 sed -i &amp;quot;s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E &#039;[[:blank:]]name&#039; \| cut -d\\\&#039; -f 2\`|&amp;quot;  /etc/grub.d/10_linux&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
&lt;br /&gt;
The system will fail to boot without this.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
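To confirm the cache file is set on both pools (an optional check, not part of the original steps):&lt;br /&gt;
 zpool get cachefile rpool_$poolUUID bpool_$poolUUID&lt;br /&gt;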
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in the merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 addgroup $TARGET_USERNAME video&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
The root account is accessed via the {{ic|su}} command with the root password.&lt;br /&gt;
&lt;br /&gt;
Optionally install {{ic|sudo}} to disable the root password and use the user&#039;s own password instead.&lt;br /&gt;
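A minimal sketch of that sudo setup (the drop-in file name is arbitrary; the user was already added to {{ic|wheel}} above):&lt;br /&gt;
 apk add sudo&lt;br /&gt;
 echo &#039;%wheel ALL=(ALL) ALL&#039; &amp;gt; /etc/sudoers.d/wheel&lt;br /&gt;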
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. It should be available in edge/testing soon.&lt;br /&gt;
&lt;br /&gt;
= Optional: Desktop Environment =&lt;br /&gt;
See [[#Wayland-based_lightweight_desktop]].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
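Optionally list the snapshots to confirm they were created:&lt;br /&gt;
 zfs list -t snapshot&lt;br /&gt;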
Pools must be exported before reboot, or they will fail to import on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
== Barebone ==&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
== Wayland-based lightweight desktop ==&lt;br /&gt;
This setup is based on Sway Window Manager and Qt apps.&lt;br /&gt;
&lt;br /&gt;
Encrypted swap&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Sway Window Manager and basic utilities&lt;br /&gt;
 apk add sway swayidle swaylock grim i3status&lt;br /&gt;
Terminal&lt;br /&gt;
 apk add alacritty&lt;br /&gt;
Sound&lt;br /&gt;
 apk add alsa-utils&lt;br /&gt;
Utilities&lt;br /&gt;
 apk add vim mutt isync lynx git p7zip proxychains-ng&lt;br /&gt;
Qt-based desktop environment, with dark theme, fdo keyring, file manager and PDF viewer&lt;br /&gt;
 apk add qt5-qtwayland kvantum keepassxc pcmanfm zathura-pdf-poppler&lt;br /&gt;
Play videos with hardware accelerated decoding&lt;br /&gt;
 apk add mpv youtube-dl libva-intel-driver&lt;br /&gt;
Firefox&lt;br /&gt;
 apk add firefox-esr&lt;br /&gt;
Add MTP (connect to Android phones) and samba support to file manager&lt;br /&gt;
 apk add gvfs-smb gvfs-mtp&lt;br /&gt;
Add dark GTK theme (Adwaita-dark), HiDPI mouse cursor for Sway, GTK icons&lt;br /&gt;
 apk add gnome-themes-extra&lt;br /&gt;
Stat&lt;br /&gt;
*rpool used 1.11G&lt;br /&gt;
*bpool used 26.6M&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev  # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
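Optionally verify that the key loaded; {{ic|keystatus}} should read {{ic|available}}:&lt;br /&gt;
 zfs get keystatus rpool_$poolUUID&lt;br /&gt;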
As {{ic|1=canmount=noauto}} is set for the {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18505</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18505"/>
		<updated>2021-01-06T23:46:10Z</updated>

		<summary type="html">&lt;p&gt;R3: links&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Useful links =&lt;br /&gt;
*[https://openzfs.github.io/openzfs-docs/Getting%20Started/ OpenZFS Getting Started]&lt;br /&gt;
*[https://g.nu8.org/posts/bieaz/setup/alpine/guide/ Encrypted ZFS with boot environment support]&lt;br /&gt;
&lt;br /&gt;
= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, therefore the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an OpenRC service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password there. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
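A minimal sketch of that approach (the mapper name is illustrative, and this guide does not take this route):&lt;br /&gt;
 cryptsetup luksFormat --type luks1 $DISK-part2&lt;br /&gt;
 cryptsetup open $DISK-part2 bpool_crypt&lt;br /&gt;
 # then create bpool on /dev/mapper/bpool_crypt instead of $DISK-part2&lt;br /&gt;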
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB&#039;s ZFS support is read-only, you will need to boot into a live environment to unset this property if the {{ic|bootfs}} dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more info, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select the default option {{ic|1=disk=none}} at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; {{ic|/dev/sda}}-style names for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use a unique disk path instead of {{ic|/dev/sda}} to ensure that ZFS can find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
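Optionally, print the resulting partition table as a sanity check (this verification step is not part of the original procedure):&lt;br /&gt;
 sgdisk -p $DISK&lt;br /&gt;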
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those that GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
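For example, a three-disk RAID-Z root pool would be created like this (the disk names are placeholders, following the pattern above):&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID raidz \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk3-part3&lt;br /&gt;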
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
Depending on your application, separate datasets need to be created for folders inside {{ic|/var/lib}} (not for {{ic|/var/lib}} itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for {{ic|/home}}.&lt;br /&gt;
 d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
 for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&lt;br /&gt;
{{ic|lxc}} is for Linux containers, {{ic|libvirt}} is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
For a multi-disk setup, a cron job needs to be configured to sync the ESP contents, similar to [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Multi-ESP this article].&lt;br /&gt;
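As a rough sketch (assuming a hypothetical second ESP mounted at {{ic|/boot/efi2}}, which this guide does not create), a daily cron job could sync the two:&lt;br /&gt;
 apk add rsync&lt;br /&gt;
 printf &#039;#!/bin/sh\nrsync -a --delete /boot/efi/ /boot/efi2/\n&#039; &amp;gt; /etc/periodic/daily/sync-esp&lt;br /&gt;
 chmod +x /etc/periodic/daily/sync-esp&lt;br /&gt;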
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Before the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
&lt;br /&gt;
The system will fail to boot without this.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
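To confirm the cache file is set on both pools (an optional check, not part of the original steps):&lt;br /&gt;
 zpool get cachefile rpool_$poolUUID bpool_$poolUUID&lt;br /&gt;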
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in the merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
The root account is accessed via the {{ic|su}} command with the root password.&lt;br /&gt;
&lt;br /&gt;
Optionally, install {{ic|sudo}} to disable the root password and use the user&#039;s own password instead.&lt;br /&gt;
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports; see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. It should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
= Optional: Desktop Environment =&lt;br /&gt;
See [[#Wayland-based_lightweight_desktop]].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
== Barebone ==&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
== Wayland-based lightweight desktop ==&lt;br /&gt;
This setup is based on Sway Window Manager and Qt apps.&lt;br /&gt;
&lt;br /&gt;
Encrypted swap&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Sway Window Manager and basic utilities&lt;br /&gt;
 apk add sway swayidle swaylock grim i3status&lt;br /&gt;
Terminal&lt;br /&gt;
 apk add alacritty&lt;br /&gt;
Sound&lt;br /&gt;
 apk add alsa-utils&lt;br /&gt;
Utilities&lt;br /&gt;
 apk add vim mutt isync lynx git p7zip proxychains-ng&lt;br /&gt;
Qt-based desktop environment, with dark theme, fdo keyring, file manager and PDF viewer&lt;br /&gt;
 apk add qt5-qtwayland kvantum keepassxc pcmanfm zathura-pdf-poppler&lt;br /&gt;
Play videos with hardware accelerated decoding&lt;br /&gt;
 apk add mpv youtube-dl libva-intel-driver&lt;br /&gt;
Firefox&lt;br /&gt;
 apk add firefox-esr&lt;br /&gt;
Add MTP (connect to Android phones) and samba support to file manager&lt;br /&gt;
 apk add gvfs-smb gvfs-mtp&lt;br /&gt;
Add dark GTK theme (Adwaita-dark), HiDPI mouse cursor for Sway, GTK icons&lt;br /&gt;
 apk add gnome-themes-extra&lt;br /&gt;
Stat&lt;br /&gt;
*rpool used 1.11G&lt;br /&gt;
*bpool used 26.6M&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for the {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18504</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18504"/>
		<updated>2021-01-06T23:35:43Z</updated>

		<summary type="html">&lt;p&gt;R3: Undo revision 18485 by R3 (talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption, and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt instead.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool via an init service after booting the system.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password for it at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override the {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render the boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB&#039;s ZFS support is read-only, you will need to boot into a live environment to unset this property if the {{ic|bootfs}} dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more information, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select the default option {{ic|1=disk=none}} at the last step, when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install {{ic|eudev}} to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure that the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
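The filter pipeline above can be sanity-checked on a fixed input instead of {{ic|/dev/urandom}}; this is a minimal sketch with a made-up input string, not part of the installation:&lt;br /&gt;

```shell
# Same filter as the poolUUID pipeline, fed a fixed string:
# tr -dc 'a-z0-9' deletes every character outside [a-z0-9],
# cut -c-6 keeps at most the first six characters.
suffix=$(printf 'abcDEF123xyz' | tr -dc 'a-z0-9' | cut -c-6)
echo "$suffix"   # abc123
```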
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. When no {{ic|feature@}} option is supplied, all available features are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only the features GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
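The loops above expand to one {{ic|zfs create}} per path. As a dry run (not part of the installation), prefixing {{ic|echo}} prints the commands that would be executed, using a made-up example suffix and no pool at all:&lt;br /&gt;

```shell
# Dry run: print the zfs create commands instead of executing them.
poolUUID=abc123   # example suffix, replace with your own
d='usr var var/lib'
for i in $d; do echo zfs create -o canmount=off rpool_$poolUUID/ROOT/default/$i; done
d='srv usr/local'
for i in $d; do echo zfs create rpool_$poolUUID/ROOT/default/$i; done
```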
Depending on your applications, separate datasets need to be created for folders inside {{ic|/var/lib}} (not for {{ic|/var/lib}} itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several datasets for persistent (shared) data, as we just did for {{ic|/home}}.&lt;br /&gt;
 d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
 for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&lt;br /&gt;
{{ic|lxc}} is for Linux containers, {{ic|libvirt}} is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
For a multi-disk setup, a cron job needs to be configured to keep the EFI system partitions in sync. It should be similar to [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Multi-ESP this article].&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
{{ic|setup-disk}} refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
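The effect of this substitution can be checked on a stand-in file first; the {{ic|1=supported=}} value below is a hypothetical example, not the real contents of {{ic|/sbin/setup-disk}}:&lt;br /&gt;

```shell
# Demonstrate the substitution on a copy, leaving the real setup-disk alone.
f=$(mktemp)
echo 'supported="ext4 ext3 btrfs xfs"' > "$f"   # hypothetical stand-in line
sed -i 's|supported="ext|supported="zfs ext|g' "$f"
cat "$f"   # supported="zfs ext4 ext3 btrfs xfs"
rm -f "$f"
```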
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that {{ic|grub-probe}} will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will emit an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, it is recommended to replace the following line in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
You must also install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
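Until the patch lands, the replacement can be scripted; this sketch applies the same substitution to a stand-in copy rather than the real {{ic|/etc/grub.d/10_linux}}:&lt;br /&gt;

```shell
# Swap the grub-probe label detection for blkid in a stand-in file.
f=$(mktemp)
# Stand-in for the stock line in /etc/grub.d/10_linux:
printf '%s\n' 'rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2>/dev/null || true`' > "$f"
sed -i 's|rpool=.*|rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`|' "$f"
cat "$f"   # rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`
rm -f "$f"
```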
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the {{ic|zpool}} command will import the pools listed in this cache.&lt;br /&gt;
&lt;br /&gt;
The system will fail to boot without it.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes from the merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
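The {{ic|sed}} one-liner above does a plain first-match substitution; on a stand-in features line (hypothetical contents, not your actual config) it behaves like this:&lt;br /&gt;

```shell
# Show the features line before and after adding eudev, using a copy.
f=$(mktemp)
echo 'features="ata base ide scsi usb virtio ext4 zfs"' > "$f"   # hypothetical
sed -i 's|zfs|zfs eudev|' "$f"
cat "$f"   # features="ata base ide scsi usb virtio ext4 zfs eudev"
rm -f "$f"
```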
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
The root account is accessed via the {{ic|su}} command with the root password.&lt;br /&gt;
&lt;br /&gt;
Optionally, install {{ic|sudo}} to disable the root password and use the user&#039;s own password instead.&lt;br /&gt;
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports; see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. It should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
= Optional: Desktop Environment =&lt;br /&gt;
See [[#Wayland-based_lightweight_desktop]].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
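The unmount pipeline above can be traced on canned {{ic|mount}} output: {{ic|grep -v zfs}} drops the ZFS lines (those are handled by {{ic|zpool export}}), {{ic|tac}} reverses the order so deeper mounts are unmounted first, and {{ic|awk}} extracts the mount point column. A sketch with made-up mount lines, doing no real unmounting:&lt;br /&gt;

```shell
# Trace the selection logic on fake mount output (no real unmounting).
MOUNTPOINT=/tmp/target   # example path
printf '%s\n' \
  'proc on /tmp/target/proc type proc (rw)' \
  'rpool/ROOT on /tmp/target type zfs (rw)' \
  'devtmpfs on /tmp/target/dev type devtmpfs (rw)' \
  | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}'
# prints /tmp/target/dev, then /tmp/target/proc
```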
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
== Barebone ==&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
== Wayland-based lightweight desktop ==&lt;br /&gt;
This setup is based on Sway Window Manager and Qt apps.&lt;br /&gt;
&lt;br /&gt;
Encrypted swap&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Sway Window Manager and basic utilities&lt;br /&gt;
 apk add sway swayidle swaylock grim i3status&lt;br /&gt;
Terminal&lt;br /&gt;
 apk add alacritty&lt;br /&gt;
Sound&lt;br /&gt;
 apk add alsa-utils&lt;br /&gt;
Utilities&lt;br /&gt;
 apk add vim mutt isync lynx git p7zip proxychains-ng&lt;br /&gt;
Qt-based desktop environment, with dark theme, fdo keyring, file manager and PDF viewer&lt;br /&gt;
 apk add qt5-qtwayland kvantum keepassxc pcmanfm zathura-pdf-poppler&lt;br /&gt;
Play videos with hardware accelerated decoding&lt;br /&gt;
 apk add mpv youtube-dl libva-intel-driver&lt;br /&gt;
Firefox&lt;br /&gt;
 apk add firefox-esr&lt;br /&gt;
Add MTP (connect to Android phones) and samba support to file manager&lt;br /&gt;
 apk add gvfs-smb gvfs-mtp&lt;br /&gt;
Add dark GTK theme (Adwaita-dark), HiDPI mouse cursor for Sway, GTK icons&lt;br /&gt;
 apk add gnome-themes-extra&lt;br /&gt;
Stat&lt;br /&gt;
*rpool used 1.11G&lt;br /&gt;
*bpool used 26.6M&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for the {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18485</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18485"/>
		<updated>2021-01-05T07:00:17Z</updated>

		<summary type="html">&lt;p&gt;R3: add links instead&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Setting up  Alpine Linux using ZFS with a pool that uses ZFS&#039; native encryption capabilities =&lt;br /&gt;
&lt;br /&gt;
== Useful links ==&lt;br /&gt;
*[https://openzfs.github.io/openzfs-docs/Getting%20Started/ OpenZFS Getting Started]&lt;br /&gt;
*[https://g.nu8.org/posts/bieaz/setup/alpine/guide/ Encrypted ZFS with boot environment support]&lt;br /&gt;
&lt;br /&gt;
== Download ==&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it is the only image that contains the ZFS kernel modules at the time of this writing (2020.07.10).&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Initial setup ==&lt;br /&gt;
&lt;br /&gt;
Run the following&lt;br /&gt;
&lt;br /&gt;
    setup-alpine&lt;br /&gt;
&lt;br /&gt;
Answer all the questions, and hit Ctrl-C when prompted for which disk you&#039;d like to use.&lt;br /&gt;
&lt;br /&gt;
== OPTIONAL ==&lt;br /&gt;
&lt;br /&gt;
This section is optional and assumes internet connectivity. You may enable sshd so you can ssh into the box and copy and paste the rest of the commands from these instructions into your terminal window.&lt;br /&gt;
&lt;br /&gt;
Edit `/etc/ssh/sshd_config` and search for `Permit`. Change the value after `PermitRootLogin` to read `yes`.&lt;br /&gt;
&lt;br /&gt;
Save and exit to the shell, then run `service sshd restart`.&lt;br /&gt;
&lt;br /&gt;
Now you can ssh in as root. Do not forget to go back and comment this line out when you&#039;re done since it will be enabled on your resulting machine. You will be reminded again at the end of this doc.&lt;br /&gt;
&lt;br /&gt;
== Add needed packages  ==&lt;br /&gt;
&lt;br /&gt;
    apk add zfs sfdisk e2fsprogs syslinux&lt;br /&gt;
&lt;br /&gt;
== Create our partitions ==&lt;br /&gt;
&lt;br /&gt;
We&#039;re assuming `/dev/sda` here and in the rest of the document, but you can use whatever you need to. To see a list, type `sfdisk -l`.&lt;br /&gt;
&lt;br /&gt;
    echo -e &amp;quot;/dev/sda1: start=1M,size=100M,bootable\n/dev/sda2: start=101M&amp;quot; | sfdisk --quiet --label dos /dev/sda&lt;br /&gt;
&lt;br /&gt;
== Create device nodes ==&lt;br /&gt;
&lt;br /&gt;
    mdev -s&lt;br /&gt;
&lt;br /&gt;
== Create the /boot filesystem ==&lt;br /&gt;
&lt;br /&gt;
    mkfs.ext4 /dev/sda1&lt;br /&gt;
&lt;br /&gt;
== Create the root filesystem using zfs ==&lt;br /&gt;
&lt;br /&gt;
    modprobe zfs&lt;br /&gt;
    zpool create -f -o ashift=12 \&lt;br /&gt;
        -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
        -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
        -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
        -O mountpoint=/ -R /mnt \&lt;br /&gt;
        rpool /dev/sda2&lt;br /&gt;
&lt;br /&gt;
You will have to enter your passphrase at this point. Choose wisely, as your passphrase is most likely [https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions#5-security-aspects the weakest link in this setup].&lt;br /&gt;
&lt;br /&gt;
A few notes on the options supplied to zpool:&lt;br /&gt;
&lt;br /&gt;
- `ashift=12` is recommended here because many drives today have 4KiB (or larger) physical sectors, even though they present 512B logical sectors&lt;br /&gt;
&lt;br /&gt;
- `acltype=posixacl` enables POSIX ACLs globally&lt;br /&gt;
&lt;br /&gt;
- `normalization=formD` eliminates some corner cases relating to UTF-8 filename normalization. It also enables `utf8only=on`, meaning that only files with valid UTF-8 filenames will be accepted.&lt;br /&gt;
&lt;br /&gt;
- `xattr=sa` vastly improves the performance of extended attributes, but is Linux-only. If you care about using this pool on other OpenZFS implementations, don&#039;t specify this option.&lt;br /&gt;
&lt;br /&gt;
After completing this, confirm that the pool has been created:&lt;br /&gt;
&lt;br /&gt;
    # zpool status&lt;br /&gt;
&lt;br /&gt;
Should return something like:&lt;br /&gt;
&lt;br /&gt;
      pool: rpool&lt;br /&gt;
     state: ONLINE&lt;br /&gt;
      scan: none requested&lt;br /&gt;
    config:&lt;br /&gt;
&lt;br /&gt;
        NAME        STATE     READ WRITE CKSUM&lt;br /&gt;
        rpool       ONLINE       0     0     0&lt;br /&gt;
          sda2      ONLINE       0     0     0&lt;br /&gt;
&lt;br /&gt;
    errors: No known data errors&lt;br /&gt;
&lt;br /&gt;
== Create the required datasets and mount root ==&lt;br /&gt;
&lt;br /&gt;
    zfs create -o mountpoint=none -o canmount=off rpool/ROOT&lt;br /&gt;
    zfs create -o mountpoint=legacy rpool/ROOT/alpine&lt;br /&gt;
    mount -t zfs rpool/ROOT/alpine /mnt/&lt;br /&gt;
&lt;br /&gt;
== Mount the `/boot` filesystem ==&lt;br /&gt;
&lt;br /&gt;
    mkdir /mnt/boot/&lt;br /&gt;
    mount -t ext4 /dev/sda1 /mnt/boot/&lt;br /&gt;
&lt;br /&gt;
=== Enable ZFS&#039; services ===&lt;br /&gt;
&lt;br /&gt;
    rc-update add zfs-import sysinit&lt;br /&gt;
    rc-update add zfs-mount sysinit&lt;br /&gt;
&lt;br /&gt;
== Install Alpine Linux ==&lt;br /&gt;
&lt;br /&gt;
    setup-disk /mnt&lt;br /&gt;
    dd if=/usr/share/syslinux/mbr.bin of=/dev/sda # write mbr so we can boot&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Reboot and enjoy! ==&lt;br /&gt;
&lt;br /&gt;
😉&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE:&#039;&#039;&#039;&lt;br /&gt;
If you went with the optional step, be sure to disable root login after you reboot.&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18476</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18476"/>
		<updated>2021-01-04T10:13:47Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Importing pools on boot */ warning&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition can not be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, therefore the boot pool and the swap partition can not both be LUKS encrypted. A possible workaround is to import and mount the boot pool via an init service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB&#039;s ZFS support is read-only, you will need to boot into a live environment to unset this property if the `bootfs` dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more info, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select the default option {{ic|1=disk=none}} at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
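The suffix pipeline can be sanity-checked on its own. This sketch regenerates it (using `head -c` in place of the `dd` invocation above, purely for brevity) and prints the result, which should be six lowercase alphanumeric characters:

```shell
# Draw random bytes, keep only a-z and 0-9, take the first six characters.
poolUUID=$(head -c 200 /dev/urandom | tr -dc 'a-z0-9' | cut -c-6)
echo "pool name suffix: $poolUUID"
```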
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if you need one. This guide will cover the creation of a separate swap partition. (It can not be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
Depending on your application, separate datasets need to be created for folders inside {{ic|/var/lib}} (not for {{ic|/var/lib}} itself!).&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for {{ic|/home}}.&lt;br /&gt;
 d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
 for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&lt;br /&gt;
{{ic|lxc}} is for Linux containers, {{ic|libvirt}} is for storing virtual machine images, etc.&lt;br /&gt;
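The loop pattern itself can be rehearsed with plain directories in a scratch location before pointing it at zfs; this sketch only mirrors the iteration and creates no datasets:

```shell
# Emulate the per-service dataset loop with mkdir -p under a temp directory.
root=$(mktemp -d)
d='libvirt lxc docker'
for i in $d; do mkdir -p "$root/var/lib/$i"; done
ls "$root/var/lib"   # should list docker, libvirt, lxc
```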
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
For a multi-disk setup, a cron job needs to be configured to keep the ESP contents in sync, similar to [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Multi-ESP this article].&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
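The substitution can be dry-run against a stand-in line before touching {{ic|/sbin/setup-disk}}; the sample value of `supported=` below is made up, and the real script may list different filesystems:

```shell
# Apply the same sed expression to a sample supported-filesystems line.
sample='supported="ext vfat"'
patched=$(printf '%s' "$sample" | sed 's|supported="ext|supported="zfs ext|g')
printf '%s\n' "$patched"   # prints: supported="zfs ext vfat"
```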
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool&#039;s features.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools listed in this cache.&lt;br /&gt;
&lt;br /&gt;
The system will fail to boot without this.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
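The `features` edit can likewise be dry-run on a sample line first; the feature list below is illustrative, and your {{ic|/etc/mkinitfs/mkinitfs.conf}} may differ:

```shell
# Dry-run the eudev addition on a stand-in mkinitfs.conf features line.
sample='features="ata base ide scsi usb virtio ext4 lvm zfs"'
printf '%s\n' "$sample" | sed 's|zfs|zfs eudev|'
# prints: features="ata base ide scsi usb virtio ext4 lvm zfs eudev"
```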
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
The root account is accessed via the {{ic|su}} command with the root password.&lt;br /&gt;
&lt;br /&gt;
Optionally, install {{ic|sudo}} so the user&#039;s own password can be used instead of the root password.&lt;br /&gt;
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. Should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
= Optional: Desktop Environment =&lt;br /&gt;
See [[#Wayland-based_lightweight_desktop]].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
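Before appending to the real files, the two entries above can be written to scratch files to check the field layout; the disk path here is a placeholder:

```shell
# Build the crypttab and fstab swap entries in scratch files for inspection.
DISK=/dev/disk/by-id/ata-EXAMPLE_DISK   # placeholder; use your real by-id path
tmp=$(mktemp -d)
printf 'swap %s-part4 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,size=256\n' "$DISK" | tee "$tmp/crypttab"
printf '/dev/mapper/swap none swap defaults 0 0\n' | tee "$tmp/fstab"
```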
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
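The umount pipeline above works by reversing the {{ic|mount}} output with {{ic|tac}} so that later, deeper mounts are unmounted first; here is the same filtering applied to canned output (the sample lines are made up):

```shell
# Simulate: drop zfs lines (zpool export handles those), reverse, keep targets.
MOUNTPOINT=/tmp/target
sample='rpool on /tmp/target type zfs (rw)
devfs on /tmp/target/dev type devtmpfs (rw)
rpool/home on /tmp/target/home type zfs (rw)
proc on /tmp/target/proc type proc (rw)'
printf '%s\n' "$sample" | grep -v zfs | tac | grep "$MOUNTPOINT" | awk '{print $3}'
# prints /tmp/target/proc then /tmp/target/dev: later mounts come out first
```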
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
== Barebone ==&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
== Wayland-based lightweight desktop ==&lt;br /&gt;
This setup is based on Sway Window Manager and Qt apps.&lt;br /&gt;
&lt;br /&gt;
Encrypted swap&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Sway Window Manager and basic utilities&lt;br /&gt;
 apk add sway swayidle swaylock grim i3status&lt;br /&gt;
Terminal&lt;br /&gt;
 apk add alacritty&lt;br /&gt;
Sound&lt;br /&gt;
 apk add alsa-utils&lt;br /&gt;
Utilities&lt;br /&gt;
 apk add vim mutt isync lynx git p7zip proxychains-ng&lt;br /&gt;
Qt-based desktop environment, with dark theme, fdo keyring, file manager and PDF viewer&lt;br /&gt;
 apk add qt5-qtwayland kvantum keepassxc pcmanfm zathura-pdf-poppler&lt;br /&gt;
Play videos with hardware accelerated decoding&lt;br /&gt;
 apk add mpv youtube-dl libva-intel-driver&lt;br /&gt;
Firefox&lt;br /&gt;
 apk add firefox-esr&lt;br /&gt;
Add MTP (connect to Android phones) and samba support to file manager&lt;br /&gt;
 apk add gvfs-smb gvfs-mtp&lt;br /&gt;
Add dark GTK theme (Adwaita-dark), HiDPI mouse cursor for Sway, GTK icons&lt;br /&gt;
 apk add gnome-themes-extra&lt;br /&gt;
Stat&lt;br /&gt;
*rpool used 1.11G&lt;br /&gt;
*bpool used 26.6M&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=User:R3/Root_on_ZFS_with_Native_Encryption&amp;diff=18475</id>
		<title>User:R3/Root on ZFS with Native Encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=User:R3/Root_on_ZFS_with_Native_Encryption&amp;diff=18475"/>
		<updated>2021-01-04T09:43:10Z</updated>

		<summary type="html">&lt;p&gt;R3: personal page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition can not be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, therefore the boot pool and the swap partition can not both be LUKS encrypted. A possible workaround is to import and mount the boot pool via an init service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB&#039;s ZFS support is read-only, you will need to boot into a live environment to unset this property if the `bootfs` dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more info, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select the default option {{ic|1=disk=none}} at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only the features that GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
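The loops above expand to one {{ic|zfs create}} per path; as a sanity check, the first loop generates these dataset names (using a hypothetical pool suffix):&lt;br /&gt;

```shell
# Preview the dataset names the first loop creates (abc123 is a hypothetical suffix)
poolUUID=abc123
d='usr var var/lib'
names=$(for i in $d; do echo "rpool_$poolUUID/ROOT/default/$i"; done)
echo "$names"
# rpool_abc123/ROOT/default/usr
# rpool_abc123/ROOT/default/var
# rpool_abc123/ROOT/default/var/lib
```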
Depending on your application, separate datasets need to be created for folders inside {{ic|/var/lib}} (not for {{ic|/var/lib}} itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for {{ic|/home}}.&lt;br /&gt;
 d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
 for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&lt;br /&gt;
{{ic|lxc}} is for Linux containers, {{ic|libvirt}} is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
For multi-disk setup, a cron job needs to be configured to sync contents. It should be similar to [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Multi-ESP this article].&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
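To see what this substitution does, here is the same {{ic|sed}} expression applied to a sample line (the actual list in {{ic|/sbin/setup-disk}} may differ):&lt;br /&gt;

```shell
# Hypothetical sample of the supported-filesystem line in /sbin/setup-disk
line='supported="ext4 ext3 btrfs xfs vfat"'
patched=$(echo "$line" | sed 's|supported="ext|supported="zfs ext|g')
echo "$patched"   # supported="zfs ext4 ext3 btrfs xfs vfat"
```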
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will return an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, it is recommended to replace the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes from the merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
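The {{ic|sed}} one-liner above simply appends {{ic|eudev}} after {{ic|zfs}} in the features list; applied to a sample line:&lt;br /&gt;

```shell
# Hypothetical features line from /etc/mkinitfs/mkinitfs.conf
features='features="ata base ide scsi usb virtio ext4 zfs"'
patched=$(echo "$features" | sed 's|zfs|zfs eudev|')
echo "$patched"   # features="ata base ide scsi usb virtio ext4 zfs eudev"
```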
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset with fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
The root account is accessed via the {{ic|su}} command with the root password.&lt;br /&gt;
&lt;br /&gt;
Optionally install {{ic|sudo}} to disable the root password and use the user&#039;s own password instead.&lt;br /&gt;
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. Should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
= Optional: Desktop Environment =&lt;br /&gt;
See [[#Wayland-based_lightweight_desktop]].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with the actual disk path.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
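The two {{ic|echo}} commands above append one line each; the intended entries look like this (the by-id path below is a placeholder, and the fields are tab-separated):&lt;br /&gt;

```shell
# $DISK is a placeholder; use your real /dev/disk/by-id path
DISK=/dev/disk/by-id/ata-EXAMPLE
crypttab_line="swap	${DISK}-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256"
fstab_line="/dev/mapper/swap	none	swap	defaults	0	0"
printf '%s\n%s\n' "$crypttab_line" "$fstab_line"
```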
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
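The unmount pipeline above lists the mounts under {{ic|$MOUNTPOINT}} and reverses them with {{ic|tac}}, so that nested mountpoints are unmounted before their parents. For example:&lt;br /&gt;

```shell
# tac reverses line order, so deeper mountpoints come first
reversed=$(printf '%s\n' /mnt /mnt/boot /mnt/boot/efi | tac)
echo "$reversed"
# /mnt/boot/efi
# /mnt/boot
# /mnt
```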
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space usage =&lt;br /&gt;
== Barebone ==&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
== Wayland-based lightweight desktop ==&lt;br /&gt;
This setup is based on Sway Window Manager and Qt apps.&lt;br /&gt;
&lt;br /&gt;
Encrypted swap&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Sway Window Manager and basic utilities&lt;br /&gt;
 apk add sway swayidle swaylock grim i3status&lt;br /&gt;
Terminal&lt;br /&gt;
 apk add alacritty&lt;br /&gt;
Sound&lt;br /&gt;
 apk add alsa-utils&lt;br /&gt;
Utilities&lt;br /&gt;
 apk add vim mutt isync lynx git p7zip proxychains-ng&lt;br /&gt;
Qt-based desktop environment, with dark theme, fdo keyring, file manager and PDF viewer&lt;br /&gt;
 apk add qt5-qtwayland kvantum keepassxc pcmanfm zathura-pdf-poppler&lt;br /&gt;
Play videos with hardware accelerated decoding&lt;br /&gt;
 apk add mpv youtube-dl libva-intel-driver&lt;br /&gt;
Firefox&lt;br /&gt;
 apk add firefox-esr&lt;br /&gt;
Add MTP (connect to Android phones) and samba support to file manager&lt;br /&gt;
 apk add gvfs-smb gvfs-mtp&lt;br /&gt;
Add dark GTK theme (Adwaita-dark), HiDPI mouse cursor for Sway, GTK icons&lt;br /&gt;
 apk add gnome-themes-extra&lt;br /&gt;
Disk usage after installation:&lt;br /&gt;
*rpool used 1.11G&lt;br /&gt;
*bpool used 26.6M&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18474</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18474"/>
		<updated>2021-01-04T09:42:30Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Boot environment manager */ note&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit the {{ic|-O encryption -O keylocation -O keyformat}} options when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an OpenRC service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers, so it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password there. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition to resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB&#039;s support of ZFS is read-only, you will need to boot into a live environment to unset this property if the {{ic|bootfs}} dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more information, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment, and select the default option {{ic|1=disk=none}} at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use a unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least three partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only the features that GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
Depending on your application, separate datasets need to be created for folders inside {{ic|/var/lib}} (not for {{ic|/var/lib}} itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for {{ic|/home}}.&lt;br /&gt;
 d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
 for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&lt;br /&gt;
{{ic|lxc}} is for Linux containers, {{ic|libvirt}} is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
For multi-disk setup, a cron job needs to be configured to sync contents. It should be similar to [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Multi-ESP this article].&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will return an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, it is recommended to replace the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes from the merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 addgroup $TARGET_USERNAME video&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
Root account is accessed via {{ic|su}} command with root password.&lt;br /&gt;
&lt;br /&gt;
Optionally, install {{ic|sudo}} to disable the root password and use the user&#039;s own password instead.&lt;br /&gt;
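For example, a minimal {{ic|sudo}} policy could be a drop-in file like the following (a sketch, not part of the original guide; it assumes {{ic|apk add sudo}} has been run and that the user is in the {{ic|wheel}} group from the adduser step above):&lt;br /&gt;
 # /etc/sudoers.d/wheel (mode 0440): members of wheel may run&lt;br /&gt;
 # any command with their own password; verify with visudo -c&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;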
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports (see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]) and should be available in edge/test soon.&lt;br /&gt;
&lt;br /&gt;
= Optional: Desktop Environment =&lt;br /&gt;
See [[#Wayland-based_lightweight_desktop]].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
== Barebone ==&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
== Wayland-based lightweight desktop ==&lt;br /&gt;
This setup is based on Sway Window Manager and Qt apps.&lt;br /&gt;
&lt;br /&gt;
Encrypted swap&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Sway Window Manager and basic utilities&lt;br /&gt;
 apk add sway swayidle swaylock grim i3status&lt;br /&gt;
Terminal&lt;br /&gt;
 apk add alacritty&lt;br /&gt;
Sound&lt;br /&gt;
 apk add alsa-utils&lt;br /&gt;
Utilities&lt;br /&gt;
 apk add vim mutt isync lynx git p7zip proxychains-ng&lt;br /&gt;
Qt-based desktop environment, with dark theme, fdo keyring, file manager and PDF viewer&lt;br /&gt;
 apk add qt5-qtwayland kvantum keepassxc pcmanfm zathura-pdf-poppler&lt;br /&gt;
Play videos with hardware accelerated decoding&lt;br /&gt;
 apk add mpv youtube-dl libva-intel-driver&lt;br /&gt;
Firefox&lt;br /&gt;
 apk add firefox-esr&lt;br /&gt;
Add MTP (connect to Android phones) and samba support to file manager&lt;br /&gt;
 apk add gvfs-smb gvfs-mtp&lt;br /&gt;
Add dark GTK theme (Adwaita-dark), HiDPI mouse cursor for Sway, GTK icons&lt;br /&gt;
 apk add gnome-themes-extra&lt;br /&gt;
Stat&lt;br /&gt;
*rpool used 1.11G&lt;br /&gt;
*bpool used 26.6M&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18473</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18473"/>
		<updated>2021-01-04T09:42:02Z</updated>

		<summary type="html">&lt;p&gt;R3: fix link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS-encrypted. A possible workaround is to import and mount the boot pool after booting the system, via an init service (OpenRC on Alpine).&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB&#039;s ZFS support is read-only, you will need to boot into a live environment to unset this property if the {{ic|bootfs}} dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more info, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select the default option {{ic|1=disk=none}} at the last step, when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use a unique disk path instead of {{ic|/dev/sda}} to ensure that ZFS finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, therefore it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
Depending on your applications, separate datasets need to be created for folders inside {{ic|/var/lib}} (not for {{ic|/var/lib}} itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for {{ic|/home}}.&lt;br /&gt;
 d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
 for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&lt;br /&gt;
{{ic|lxc}} is for Linux containers, {{ic|libvirt}} for virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
For multi-disk setup, a cron job needs to be configured to sync contents. It should be similar to [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Multi-ESP this article].&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
{{ic|setup-disk}} refuses to run on ZFS by default, so we need to add ZFS to its supported-filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later, inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will substitute an empty result if it does not support the root pool&#039;s features.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
You must also install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the {{ic|zpool}} command will import the pools recorded in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes from the merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 addgroup $TARGET_USERNAME video&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
Root account is accessed via {{ic|su}} command with root password.&lt;br /&gt;
&lt;br /&gt;
Optionally, install {{ic|sudo}} to disable the root password and use the user&#039;s own password instead.&lt;br /&gt;
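For example, a minimal {{ic|sudo}} policy could be a drop-in file like the following (a sketch, not part of the original guide; it assumes {{ic|apk add sudo}} has been run and that the user is in the {{ic|wheel}} group from the adduser step above):&lt;br /&gt;
 # /etc/sudoers.d/wheel (mode 0440): members of wheel may run&lt;br /&gt;
 # any command with their own password; verify with visudo -c&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;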
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request].&lt;br /&gt;
&lt;br /&gt;
= Optional: Desktop Environment =&lt;br /&gt;
See [[#Wayland-based_lightweight_desktop]].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
== Barebone ==&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
== Wayland-based lightweight desktop ==&lt;br /&gt;
This setup is based on Sway Window Manager and Qt apps.&lt;br /&gt;
&lt;br /&gt;
Encrypted swap&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Sway Window Manager and basic utilities&lt;br /&gt;
 apk add sway swayidle swaylock grim i3status&lt;br /&gt;
Terminal&lt;br /&gt;
 apk add alacritty&lt;br /&gt;
Sound&lt;br /&gt;
 apk add alsa-utils&lt;br /&gt;
Utilities&lt;br /&gt;
 apk add vim mutt isync lynx git p7zip proxychains-ng&lt;br /&gt;
Qt-based desktop environment, with dark theme, fdo keyring, file manager and PDF viewer&lt;br /&gt;
 apk add qt5-qtwayland kvantum keepassxc pcmanfm zathura-pdf-poppler&lt;br /&gt;
Play videos with hardware accelerated decoding&lt;br /&gt;
 apk add mpv youtube-dl libva-intel-driver&lt;br /&gt;
Firefox&lt;br /&gt;
 apk add firefox-esr&lt;br /&gt;
Add MTP (connect to Android phones) and samba support to file manager&lt;br /&gt;
 apk add gvfs-smb gvfs-mtp&lt;br /&gt;
Add dark GTK theme (Adwaita-dark), HiDPI mouse cursor for Sway, GTK icons&lt;br /&gt;
 apk add gnome-themes-extra&lt;br /&gt;
Stat&lt;br /&gt;
*rpool used 1.11G&lt;br /&gt;
*bpool used 26.6M&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18472</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18472"/>
		<updated>2021-01-04T09:41:25Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Boot environment manager */ de&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS-encrypted. A possible workaround is to import and mount the boot pool after booting the system, via an init service (OpenRC on Alpine).&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB&#039;s support for ZFS is read-only, you will need to boot into a live environment to unset this property if the {{ic|bootfs}} dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more information, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select the default option {{ic|1=disk=none}} at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
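The text-processing part of the suffix pipeline above is deterministic even though its input comes from {{ic|/dev/urandom}}. A minimal sketch using a fixed, made-up sample string instead of random bytes:&lt;br /&gt;

```shell
# Same filter as the poolUUID pipeline, fed a fixed sample string:
# keep only [a-z0-9] characters, then take the first 6.
suffix=$(printf 'AB3xy!9 zq#81k' | tr -dc 'a-z0-9' | cut -c-6)
echo "$suffix"
```

With 100 random bytes, roughly 14 characters survive the {{ic|tr}} filter on average, so 6 characters are almost always available.&lt;br /&gt;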
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS there.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] on ZFS is also problematic, so it is recommended to create a separate swap partition if swap is needed. This guide covers the creation of such a partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. When no {{ic|feature@}} options are supplied, all available features are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
Depending on your application, separate datasets may need to be created for folders inside {{ic|/var/lib}} (but not for {{ic|/var/lib}} itself!).&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for {{ic|/home}}.&lt;br /&gt;
 d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
 for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&lt;br /&gt;
{{ic|lxc}} is for Linux containers, {{ic|libvirt}} is for storing virtual machine images, etc.&lt;br /&gt;
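The {{ic|for}} loops used throughout this guide expand a space-separated list held in {{ic|d}}. A dry-run sketch (with an assumed example pool suffix) that prints the commands instead of executing them:&lt;br /&gt;

```shell
# Dry run of the dataset-creation loop: echo each zfs command
# instead of running it. poolUUID is an assumed example value.
poolUUID=abc123
d='libvirt lxc docker'
cmds=$(for i in $d; do
  echo zfs create rpool_$poolUUID/ROOT/default/var/lib/$i
done)
echo "$cmds"
```

Dropping the {{ic|echo}} turns the dry run into the real commands.&lt;br /&gt;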
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
For a multi-disk setup, a cron job needs to be configured to keep the ESP contents in sync across disks, similar to [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Multi-ESP this article].&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
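The {{ic|sed}} substitution can be checked against a sample line first; the filesystems listed here are a hypothetical example, and the real line in {{ic|/sbin/setup-disk}} may differ:&lt;br /&gt;

```shell
# Apply the same substitution to a sample supported= line
# instead of editing /sbin/setup-disk in place.
line='supported="ext4 ext3 btrfs xfs vfat"'
patched=$(echo "$line" | sed 's|supported="ext|supported="zfs ext|g')
echo "$patched"
```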
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through in [[#Run setup-disk]], we finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will silently produce an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
You must also install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools recorded in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
Root account is accessed via {{ic|su}} command with root password.&lt;br /&gt;
&lt;br /&gt;
Optionally install {{ic|sudo}} to disable root password and use user&#039;s own password instead.&lt;br /&gt;
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request].&lt;br /&gt;
&lt;br /&gt;
= Optional: Desktop Environment =&lt;br /&gt;
See [[#Wayland-based lightweight desktop]].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot for the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to import on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
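The {{ic|umount}} pipeline above relies on {{ic|tac}} to reverse the {{ic|mount}} output, so nested mountpoints are unmounted before their parents. A sketch with fixed sample input (hypothetical mount lines and an assumed mountpoint, printing paths instead of unmounting):&lt;br /&gt;

```shell
# Feed fake mount(8) output through the same filter chain;
# tac reverses it so the most recently mounted (deepest) paths come first.
MOUNTPOINT=/tmp/target   # assumed example path
order=$(printf '%s\n' \
  'dev on /tmp/target/dev type devtmpfs (rw)' \
  'proc on /tmp/target/proc type proc (rw)' \
  'sysfs on /tmp/target/sys type sysfs (rw)' \
  | grep $MOUNTPOINT | tac | awk '{print $3}')
echo "$order"
```

Piping the resulting paths to {{ic|xargs umount}} is what the real command does.&lt;br /&gt;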
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
== Barebone ==&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
== Wayland-based lightweight desktop ==&lt;br /&gt;
This setup is based on Sway Window Manager and Qt apps.&lt;br /&gt;
&lt;br /&gt;
Encrypted swap&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Sway Window Manager and basic utilities&lt;br /&gt;
 apk add sway swayidle swaylock grim i3status&lt;br /&gt;
Terminal&lt;br /&gt;
 apk add alacritty&lt;br /&gt;
Sound&lt;br /&gt;
 apk add alsa-utils&lt;br /&gt;
Utilities&lt;br /&gt;
 apk add vim mutt isync lynx git p7zip proxychains-ng&lt;br /&gt;
Qt-based desktop environment, with dark theme, fdo keyring, file manager and PDF viewer&lt;br /&gt;
 apk add qt5-qtwayland kvantum keepassxc pcmanfm zathura-pdf-poppler&lt;br /&gt;
Play videos with hardware accelerated decoding&lt;br /&gt;
 apk add mpv youtube-dl libva-intel-driver&lt;br /&gt;
Firefox&lt;br /&gt;
 apk add firefox-esr&lt;br /&gt;
Add MTP (connect to Android phones) and samba support to file manager&lt;br /&gt;
 apk add gvfs-smb gvfs-mtp&lt;br /&gt;
Add dark GTK theme (Adwaita-dark), HiDPI mouse cursor for Sway, GTK icons&lt;br /&gt;
 apk add gnome-themes-extra&lt;br /&gt;
Stat&lt;br /&gt;
*rpool used 1.11G&lt;br /&gt;
*bpool used 26.6M&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18471</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18471"/>
		<updated>2021-01-04T09:39:26Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Disk space stat */ sway&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
For an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS can cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide therefore sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The BusyBox initramfs supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format boot pool partition as LUKS-1 container and supply the encryption password here. Use keyfile for root pool and embed the keyfile in initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB&#039;s support for ZFS is read-only, you will need to boot into a live environment to unset this property if the {{ic|bootfs}} dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more information, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select the default option {{ic|1=disk=none}} at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS there.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] on ZFS is also problematic, so it is recommended to create a separate swap partition if swap is needed. This guide covers the creation of such a partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. When no {{ic|feature@}} options are supplied, all available features are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
Depending on your application, separate datasets may need to be created for folders inside {{ic|/var/lib}} (but not for {{ic|/var/lib}} itself!).&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for {{ic|/home}}.&lt;br /&gt;
 d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
 for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&lt;br /&gt;
{{ic|lxc}} is for Linux containers, {{ic|libvirt}} is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
For a multi-disk setup, a cron job needs to be configured to keep the ESP contents in sync across disks, similar to [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Multi-ESP this article].&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Before the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the {{ic|zpool}} command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes from the merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 addgroup $TARGET_USERNAME video&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
The root account is accessed via the {{ic|su}} command with the root password.&lt;br /&gt;
&lt;br /&gt;
Optionally install {{ic|sudo}} to disable the root password and use the user&#039;s own password instead.&lt;br /&gt;
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with the actual disk path.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
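The {{ic|mount &amp;#124; grep &amp;#124; tac}} pipeline above unmounts in reverse mount order, so nested mounts (like {{ic|/boot/efi}}) go away before their parents while ZFS datasets are left for {{ic|zpool export}}. A small self-contained illustration with fabricated {{ic|mount}} output (the paths below are made up for demonstration):&lt;br /&gt;

```shell
#!/bin/sh
# Illustration of the umount pipeline above, run on fabricated `mount`
# output: grep -v zfs drops ZFS datasets (zpool export handles those),
# tac reverses the list so nested mounts come first, the second grep
# keeps only mounts under $MOUNTPOINT, awk prints field 3 (mount point).
MOUNTPOINT=/tmp/target
fake_mount_output='proc on /proc type proc (rw)
/dev/sda1 on /tmp/target/boot/efi type vfat (rw)
rpool/ROOT/default on /tmp/target type zfs (rw)'
printf '%s\n' "$fake_mount_output" | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}'
```

On this input only {{ic|/tmp/target/boot/efi}} survives the filters, which is exactly what gets passed to {{ic|umount -lf}}.&lt;br /&gt;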
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
== Barebone ==&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
== Wayland-based lightweight desktop ==&lt;br /&gt;
This setup is based on Sway Window Manager and Qt apps.&lt;br /&gt;
&lt;br /&gt;
Encrypted swap&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Sway Window Manager and basic utilities&lt;br /&gt;
 apk add sway swayidle swaylock grim i3status&lt;br /&gt;
Terminal&lt;br /&gt;
 apk add alacritty&lt;br /&gt;
Sound&lt;br /&gt;
 apk add alsa-utils&lt;br /&gt;
Utilities&lt;br /&gt;
 apk add vim mutt isync lynx git p7zip proxychains-ng&lt;br /&gt;
Qt-based desktop environment, with dark theme, fdo keyring, file manager and PDF viewer&lt;br /&gt;
 apk add qt5-qtwayland kvantum keepassxc pcmanfm zathura-pdf-poppler&lt;br /&gt;
Play videos with hardware accelerated decoding&lt;br /&gt;
 apk add mpv youtube-dl libva-intel-driver&lt;br /&gt;
Firefox&lt;br /&gt;
 apk add firefox-esr&lt;br /&gt;
Add MTP (connect to Android phones) and samba support to file manager&lt;br /&gt;
 apk add gvfs-smb gvfs-mtp&lt;br /&gt;
Add dark GTK theme (Adwaita-dark), HiDPI mouse cursor for Sway, GTK icons&lt;br /&gt;
 apk add gnome-themes-extra&lt;br /&gt;
Stat&lt;br /&gt;
*rpool used 1.11G&lt;br /&gt;
*bpool used 26.6M&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18470</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18470"/>
		<updated>2021-01-04T08:59:31Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Optional: Enable encrypted swap partition */ fix path&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool via an init service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed that keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override the {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render the boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB&#039;s ZFS support is read-only, you will need to boot into a live environment to unset this property if the {{ic|bootfs}} dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more info, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select the default option {{ic|1=disk=none}} at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; names like /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands as above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} option is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those that GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
Depending on your application, separate datasets need to be created for folders inside {{ic|/var/lib}} (but not for {{ic|/var/lib}} itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for {{ic|/home}}.&lt;br /&gt;
 d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
 for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&lt;br /&gt;
{{ic|lxc}} is for Linux containers, {{ic|libvirt}} is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
For multi-disk setup, a cron job needs to be configured to sync contents. It should be similar to [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Multi-ESP this article].&lt;br /&gt;
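As a rough sketch of such a sync job (the secondary mount point {{ic|/boot/efi2}} and the helper name {{ic|sync_esp}} are assumptions for illustration, not something configured earlier in this guide):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of an ESP sync helper intended to be run from cron.
# Usage: sync_esp PRIMARY_MOUNTPOINT SECONDARY_MOUNTPOINT
sync_esp() {
    src="$1" dst="$2"
    # delete files on the secondary ESP that no longer exist on the primary
    ( cd "$dst" || exit 1
      find . -type f | while read -r f; do
          [ -e "$src/$f" ] || rm -f "$f"
      done )
    # then copy the current contents of the primary over the secondary
    cp -a "$src/." "$dst/"
}
```

Saved as a script, a crontab entry along the lines of {{ic|*/30 * * * * /usr/local/bin/sync-esp /boot/efi /boot/efi2}} (paths hypothetical) would keep the two copies aligned.&lt;br /&gt;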
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable being set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Before the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the {{ic|zpool}} command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes from the merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 addgroup $TARGET_USERNAME video&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
The root account is accessed via the {{ic|su}} command with the root password.&lt;br /&gt;
&lt;br /&gt;
Optionally install {{ic|sudo}} to disable the root password and use the user&#039;s own password instead.&lt;br /&gt;
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with the actual disk path.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Alpine_configuration_management_scripts&amp;diff=18469</id>
		<title>Alpine configuration management scripts</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Alpine_configuration_management_scripts&amp;diff=18469"/>
		<updated>2021-01-04T08:31:06Z</updated>

		<summary type="html">&lt;p&gt;R3: source from alpine conf&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Feature descriptions for available Alpine Linux setup scripts ({{Path|/sbin/setup-*}}).&lt;br /&gt;
&lt;br /&gt;
These scripts are from the alpine-conf package.&lt;br /&gt;
&lt;br /&gt;
(Some particular example usages can be seen at [[Alpine_newbie_install_manual#Ways_to_install_Alpine_into_machines_or_virtuals|Alpine for new users install manuals]].)&lt;br /&gt;
&lt;br /&gt;
== setup-alpine ==&lt;br /&gt;
&lt;br /&gt;
This is the main Alpine configuration and installation script.&lt;br /&gt;
&lt;br /&gt;
The script interactively walks the user through executing several auxiliary &amp;lt;code&amp;gt;setup-*&amp;lt;/code&amp;gt; scripts, in the order shown below.&lt;br /&gt;
&lt;br /&gt;
The bracketed options represent example configuration choices, formatted as they may be supplied when manually calling the auxiliary setup scripts, or using a &amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; &amp;quot;answerfile&amp;quot; (see below).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;code&amp;gt;setup-keymap&amp;lt;/code&amp;gt; [us us]&lt;br /&gt;
# [[#setup-hostname|setup-hostname]] [-n alpine-test]&lt;br /&gt;
# [[#setup-interfaces|setup-interfaces]] [-i &amp;lt; interfaces-file]&lt;br /&gt;
# &amp;lt;code&amp;gt;/etc/init.d/networking --quiet start &amp;amp;&amp;lt;/code&amp;gt;&lt;br /&gt;
# if none of the networking interfaces were configured using dhcp, then: &amp;lt;code&amp;gt;[[#setup-dns|setup-dns]]&amp;lt;/code&amp;gt; [-d example.com -n &amp;quot;192.168.0.1 [...]&amp;quot;]&lt;br /&gt;
# set the root password&lt;br /&gt;
# if not in quick mode, then: &amp;lt;code&amp;gt;[[#setup-timezone|setup-timezone]]&amp;lt;/code&amp;gt; [-z UTC | -z America/New_York | -p EST+5]&lt;br /&gt;
# enable the new hostname (&amp;lt;code&amp;gt;/etc/init.d/hostname --quiet restart&amp;lt;/code&amp;gt;)&lt;br /&gt;
# add &amp;lt;code&amp;gt;networking&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;urandom&amp;lt;/code&amp;gt; to the &#039;&#039;&#039;boot&#039;&#039;&#039; rc level, and &amp;lt;code&amp;gt;acpid&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cron&amp;lt;/code&amp;gt; to the &#039;&#039;&#039;default&#039;&#039;&#039; rc level, and start the &#039;&#039;&#039;boot&#039;&#039;&#039; and &#039;&#039;&#039;default&#039;&#039;&#039; rc services&lt;br /&gt;
# extract the fully-qualified domain name and hostname from {{Path|/etc/resolv.conf}} and &amp;lt;code&amp;gt;hostname&amp;lt;/code&amp;gt;, and update {{Path|/etc/hosts}}&lt;br /&gt;
# &amp;lt;code&amp;gt;[[#setup-proxy|setup-proxy]]&amp;lt;/code&amp;gt; [-q &amp;lt;nowiki&amp;gt;&amp;quot;http://webproxy:8080&amp;quot;&amp;lt;/nowiki&amp;gt;], and activate proxy if it was configured&lt;br /&gt;
# &amp;lt;code&amp;gt;setup-apkrepos&amp;lt;/code&amp;gt; [-r (to select a mirror randomly)]&lt;br /&gt;
# if not in quick mode, then: &amp;lt;code&amp;gt;[[#setup-sshd|setup-sshd]]&amp;lt;/code&amp;gt; [-c openssh | dropbear | none]&lt;br /&gt;
# if not in quick mode, then: &amp;lt;code&amp;gt;setup-ntp&amp;lt;/code&amp;gt; [-c chrony | openntpd | busybox | none]&lt;br /&gt;
# if not in quick mode, then: &amp;lt;code&amp;gt;DEFAULT_DISK=none&amp;lt;/code&amp;gt; &amp;lt;code&amp;gt;[[#setup-disk|setup-disk]]&amp;lt;/code&amp;gt; &amp;lt;code&amp;gt;-q&amp;lt;/code&amp;gt; [-m data /dev/sda] (see [[Installation#Installation_Overview]] about the disk modes)&lt;br /&gt;
# if installation mode selected during setup-disk was &amp;quot;data&amp;quot; instead of &amp;quot;sys&amp;quot;, then: &amp;lt;code&amp;gt;setup-lbu&amp;lt;/code&amp;gt; [/media/sdb1]&lt;br /&gt;
# if installation mode selected during setup-disk was &amp;quot;data&amp;quot; instead of &amp;quot;sys&amp;quot;, then: &amp;lt;code&amp;gt;setup-apkcache&amp;lt;/code&amp;gt; [/media/sdb1/cache | none]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; itself accepts the following command-line switches:&lt;br /&gt;
&lt;br /&gt;
{{Define|-h|Shows the up-to-date usage help message.}}&lt;br /&gt;
&lt;br /&gt;
{{Define|-a|Create an overlay file: this creates a temporary directory and saves its location in ROOT; however, the script doesn&#039;t export this variable, so this feature doesn&#039;t appear to be currently functional.}}&lt;br /&gt;
;-c &amp;lt;var&amp;gt;answerfile&amp;lt;/var&amp;gt;&lt;br /&gt;
:Create a new &amp;quot;answerfile&amp;quot;, with default choices. You can edit the file and then invoke &amp;lt;code&amp;gt;setup-alpine -f &amp;lt;var&amp;gt;answerfile&amp;lt;/var&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
;-f &amp;lt;var&amp;gt;answerfile&amp;lt;/var&amp;gt;&lt;br /&gt;
:Use an existing &amp;quot;answerfile&amp;quot;, which may override some or all of the interactive prompts.&lt;br /&gt;
{{Define|-q|Run in &amp;quot;quick mode&amp;quot;.}}&lt;br /&gt;
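For illustration, an answerfile is a plain file of shell variable assignments, one per auxiliary script. The excerpt below is a hypothetical sketch (the &amp;lt;code&amp;gt;*OPTS&amp;lt;/code&amp;gt; variable names should be verified against a file generated with &amp;lt;code&amp;gt;setup-alpine -c&amp;lt;/code&amp;gt; on your Alpine version):&lt;br /&gt;

```shell
# Hypothetical answerfile excerpt -- verify the variable names against one
# generated with `setup-alpine -c answerfile` on your own system
KEYMAPOPTS="us us"
HOSTNAMEOPTS="-n alpine-test"
DNSOPTS="-d example.com -n 192.168.0.1"
TIMEZONEOPTS="-z UTC"
SSHDOPTS="-c openssh"
```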
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== setup-hostname ==&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;code&amp;gt;setup-hostname&amp;lt;/code&amp;gt; [-h] [-n hostname]&lt;br /&gt;
&lt;br /&gt;
Options:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-h&#039;&#039;&#039; &amp;lt;var&amp;gt;Show help&amp;lt;/var&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-n&#039;&#039;&#039; &amp;lt;var&amp;gt;Specify hostname&amp;lt;/var&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This script allows quick and easy setup of the system hostname by writing it to {{Path|/etc/hostname}}. The script prevents you from writing an invalid hostname (such as one that uses invalid characters, starts with a &#039;-&#039;, or is too long).&lt;br /&gt;
The script can be invoked manually or is called as part of the &amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; script.&lt;br /&gt;
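As a rough illustration of the rules described above (this is a sketch, &#039;&#039;not&#039;&#039; the script&#039;s actual code):&lt;br /&gt;

```shell
# Sketch of hostname validation: reject empty names, a leading '-',
# invalid characters, and names longer than 63 characters
valid_hostname() {
    h=$1
    case $h in
        ''|-*) return 1 ;;               # empty or starts with '-'
        *[!a-zA-Z0-9-]*) return 1 ;;     # contains an invalid character
    esac
    [ "${#h}" -le 63 ]                   # length limit
}

valid_hostname alpine-test && echo "alpine-test is valid"
```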
&lt;br /&gt;
== setup-interfaces ==&lt;br /&gt;
{{Cmd|setup-interfaces [-i &amp;amp;lt; &amp;lt;var&amp;gt;interfaces-file&amp;lt;/var&amp;gt;]}}&lt;br /&gt;
&lt;br /&gt;
Note that the contents of &amp;lt;var&amp;gt;interfaces-file&amp;lt;/var&amp;gt; have to be supplied on stdin, rather than naming the file as an additional argument. The contents should have the format of {{Path|/etc/network/interfaces}}, such as:&lt;br /&gt;
&lt;br /&gt;
 auto lo&lt;br /&gt;
 iface lo inet loopback&lt;br /&gt;
 &lt;br /&gt;
 auto eth0&lt;br /&gt;
 iface eth0 inet dhcp&lt;br /&gt;
     hostname alpine-test&lt;br /&gt;
&lt;br /&gt;
== setup-dns ==&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;code&amp;gt;setup-dns&amp;lt;/code&amp;gt; [-h] [-d domain name] [-n name server]&lt;br /&gt;
&lt;br /&gt;
Options:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-h&#039;&#039;&#039; &amp;lt;var&amp;gt;Show help&amp;lt;/var&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-d&#039;&#039;&#039; &amp;lt;var&amp;gt;specify search domain name&amp;lt;/var&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-n&#039;&#039;&#039; &amp;lt;var&amp;gt;name server IP&amp;lt;/var&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The setup-dns script is stored in {{Path|/sbin/setup-dns}} and allows quick and simple setup of DNS servers (and a DNS search domain if required).  Simply running &amp;lt;code&amp;gt;setup-dns&amp;lt;/code&amp;gt; will allow interactive use of the script, or the options can be specified.&lt;br /&gt;
&lt;br /&gt;
The information fed to this script is written to {{Path|/etc/resolv.conf}}.&lt;br /&gt;
&lt;br /&gt;
Example usage: {{Cmd|setup-dns -d example.org -n 8.8.8.8}}&lt;br /&gt;
&lt;br /&gt;
Example {{Path|/etc/resolv.conf}}:&lt;br /&gt;
&lt;br /&gt;
 search example.org&lt;br /&gt;
 nameserver 8.8.8.8&lt;br /&gt;
&lt;br /&gt;
It can be run manually but is also invoked in the &amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; script unless interfaces are configured for DHCP.&lt;br /&gt;
&lt;br /&gt;
== setup-timezone ==&lt;br /&gt;
:&amp;lt;code&amp;gt;setup-timezone&amp;lt;/code&amp;gt; [-z UTC | -z America/New_York | -p EST+5]&lt;br /&gt;
&lt;br /&gt;
Can pre-select the timezone using either of these switches:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-z&#039;&#039;&#039; &amp;lt;var&amp;gt;subfolder of&amp;lt;/var&amp;gt; {{Path|/usr/share/zoneinfo}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-p&#039;&#039;&#039; &amp;lt;var&amp;gt;POSIX TZ format&amp;lt;/var&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== setup-proxy ==&lt;br /&gt;
:&amp;lt;code&amp;gt;setup-proxy&amp;lt;/code&amp;gt; [-hq] [PROXYURL]&lt;br /&gt;
&lt;br /&gt;
Options:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-h&#039;&#039;&#039; &amp;lt;var&amp;gt;Show help&amp;lt;/var&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-q&#039;&#039;&#039; &amp;lt;var&amp;gt;Quiet mode&amp;lt;/var&amp;gt; prevents changes from taking effect until after reboot&lt;br /&gt;
&lt;br /&gt;
This script requests the system proxy to use in the form &amp;lt;code&amp;gt;http://&amp;lt;proxyurl&amp;gt;:&amp;lt;port&amp;gt;&amp;lt;/code&amp;gt; for example:&lt;br /&gt;
&amp;lt;code&amp;gt;http://10.0.0.1:8080&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To set no system proxy use &amp;lt;code&amp;gt;none&amp;lt;/code&amp;gt;.&lt;br /&gt;
This script exports the following environment variables: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;http_proxy=$proxyurl&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;https_proxy=$proxyurl&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;ftp_proxy=$proxyurl&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;code&amp;gt;$proxyurl&amp;lt;/code&amp;gt; is the value you entered.&lt;br /&gt;
If &amp;lt;code&amp;gt;none&amp;lt;/code&amp;gt; was chosen, the variables are set to a blank value (and so no proxy is used).&lt;br /&gt;
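For example, after supplying &amp;lt;code&amp;gt;http://10.0.0.1:8080&amp;lt;/code&amp;gt;, the exported environment amounts to the following (illustrative sketch; the exact file the script writes may differ between releases):&lt;br /&gt;

```shell
# Illustrative result of setup-proxy for http://10.0.0.1:8080
export http_proxy=http://10.0.0.1:8080
export https_proxy=http://10.0.0.1:8080
export ftp_proxy=http://10.0.0.1:8080
```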
&lt;br /&gt;
== setup-sshd ==&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;code&amp;gt;setup-sshd&amp;lt;/code&amp;gt; [-h] [-c choice of SSH daemon]&lt;br /&gt;
&lt;br /&gt;
Options:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-h&#039;&#039;&#039; &amp;lt;var&amp;gt;Show help&amp;lt;/var&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-c&#039;&#039;&#039; &amp;lt;var&amp;gt;SSH daemon&amp;lt;/var&amp;gt; where SSH daemon can be one of the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;openssh&amp;lt;/code&amp;gt; Install the {{Pkg|openssh}} daemon&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;dropbear&amp;lt;/code&amp;gt; Install the {{Pkg|dropbear}} daemon&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;none&amp;lt;/code&amp;gt; Do not install an SSH daemon&lt;br /&gt;
&lt;br /&gt;
Example usage: {{Cmd|setup-sshd -c dropbear}}&lt;br /&gt;
&lt;br /&gt;
The setup-sshd script is stored in {{Path|/sbin/setup-sshd}} and allows quick and simple setup of either the OpenSSH or Dropbear SSH daemon &amp;amp; client. &lt;br /&gt;
It can be run manually but is also invoked in the &amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; script.&lt;br /&gt;
&lt;br /&gt;
== setup-apkrepos ==&lt;br /&gt;
:&amp;lt;code&amp;gt;setup-apkrepos&amp;lt;/code&amp;gt; [-fhr] [REPO...]&lt;br /&gt;
&lt;br /&gt;
Setup &amp;lt;code&amp;gt;apk&amp;lt;/code&amp;gt; repositories.&lt;br /&gt;
&lt;br /&gt;
options:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-f&#039;&#039;&#039;  &amp;lt;var&amp;gt;Detect and add fastest mirror&amp;lt;/var&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-r&#039;&#039;&#039;  &amp;lt;var&amp;gt;Add a random mirror and do not prompt&amp;lt;/var&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;-1&#039;&#039;&#039;  &amp;lt;var&amp;gt;Add first mirror on the list (normally a CDN)&amp;lt;/var&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is run as part of the &amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; script.&lt;br /&gt;
&lt;br /&gt;
== setup-disk ==&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;code&amp;gt;DEFAULT_DISK=none setup-disk -q&amp;lt;/code&amp;gt; [-m data | sys] [&amp;lt;var&amp;gt;mountpoint directory&amp;lt;/var&amp;gt; | /dev/sda ...]&lt;br /&gt;
&lt;br /&gt;
In &amp;quot;sys&amp;quot; mode it acts as a traditional installer, permanently installing Alpine on the disk; in &amp;quot;data&amp;quot; mode it provides a larger, persistent /var volume.&lt;br /&gt;
&lt;br /&gt;
This script accepts the following command-line switches:&lt;br /&gt;
&lt;br /&gt;
;-k &amp;lt;var&amp;gt;kernel flavor&amp;lt;/var&amp;gt;&lt;br /&gt;
;-o &amp;lt;var&amp;gt;apkovl file&amp;lt;/var&amp;gt;&lt;br /&gt;
:Restore system from &amp;lt;var&amp;gt;apkovl file&amp;lt;/var&amp;gt;&lt;br /&gt;
;-m data | sys&lt;br /&gt;
:Don&#039;t prompt for installation mode. With &#039;&#039;&#039;-m data&#039;&#039;&#039;, the supplied devices are formatted to use as a {{Path|/var}} volume.&lt;br /&gt;
{{Define|-r|Use RAID1 with a single disk (degraded mode)}}&lt;br /&gt;
{{Define|-L|Create and use volumes in an LVM group}}&lt;br /&gt;
;-s &amp;lt;var&amp;gt;swap size in MB&amp;lt;/var&amp;gt;&lt;br /&gt;
:Use 0 to disable swap&lt;br /&gt;
{{Define|-q|Exit quietly if no disks are found}}&lt;br /&gt;
{{Define|-v|Verbose mode}}&lt;br /&gt;
&lt;br /&gt;
The script also honors the following environment variables:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;BOOT_SIZE&amp;lt;/code&amp;gt;&lt;br /&gt;
:Size of the boot partition in MB; defaults to 100. Only used if &#039;&#039;&#039;-m sys&#039;&#039;&#039; is specified or interactively selected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;SWAP_SIZE&amp;lt;/code&amp;gt;&lt;br /&gt;
:Size of the swap volume in MB; set to 0 to disable swap. If not specified, it defaults to twice the RAM size, capped at 4096 MB and at 1/3 the size of the smallest disk; if the result would be less than 64 MB, swap is disabled (0). Only used if &#039;&#039;&#039;-m sys&#039;&#039;&#039; is specified or interactively selected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;ROOTFS&amp;lt;/code&amp;gt;&lt;br /&gt;
:Filesystem to use for the / volume; defaults to ext4. Only used if &#039;&#039;&#039;-m sys&#039;&#039;&#039; is specified or interactively selected. Supported filesystems are: ext2 ext3 ext4 btrfs xfs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;BOOTFS&amp;lt;/code&amp;gt;&lt;br /&gt;
:Filesystem to use for the /boot volume; defaults to ext4. Only used if &#039;&#039;&#039;-m sys&#039;&#039;&#039; is specified or interactively selected. Supported filesystems are: ext2 ext3 ext4 btrfs xfs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;VARFS&amp;lt;/code&amp;gt;&lt;br /&gt;
:Filesystem to use for the /var volume; defaults to ext4. Only used if &#039;&#039;&#039;-m data&#039;&#039;&#039; is specified or interactively selected. Supported filesystems are: ext2 ext3 ext4 btrfs xfs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;SYSROOT&amp;lt;/code&amp;gt;&lt;br /&gt;
:Mountpoint to use when creating volumes and doing traditional disk install (&#039;&#039;&#039;-m sys&#039;&#039;&#039;). Defaults to {{Path|/mnt}}.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;MBR&amp;lt;/code&amp;gt;&lt;br /&gt;
:Path of MBR binary code, defaults to {{Path|/usr/share/syslinux/mbr.bin}}.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;BOOTLOADER&amp;lt;/code&amp;gt;&lt;br /&gt;
:Bootloader to use, defaults to syslinux. Supported bootloaders are: grub syslinux zipl.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;DISKLABEL&amp;lt;/code&amp;gt;&lt;br /&gt;
:Disklabel to use, defaults to dos. Supported disklabels are: dos gpt eckd.&lt;br /&gt;
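The &amp;lt;code&amp;gt;SWAP_SIZE&amp;lt;/code&amp;gt; default described above can be sketched as follows (&amp;lt;code&amp;gt;ram_mb&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;smallest_disk_mb&amp;lt;/code&amp;gt; are illustrative names, not the script&#039;s own variables):&lt;br /&gt;

```shell
# Sketch of the default swap sizing: twice RAM, capped at 4096 MB and at
# one third of the smallest disk; below 64 MB, swap is disabled entirely
ram_mb=2048            # example RAM size in MB
smallest_disk_mb=9000  # example smallest-disk size in MB

swap=$(( ram_mb * 2 ))
if [ "$swap" -gt 4096 ]; then swap=4096; fi
limit=$(( smallest_disk_mb / 3 ))
if [ "$swap" -gt "$limit" ]; then swap=$limit; fi
if [ "$swap" -lt 64 ]; then swap=0; fi

echo "swap size: ${swap} MB"
```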
&lt;br /&gt;
&amp;lt;!-- Writes to /tmp/ovlfiles, /tmp/alpine-install-diskmode.out, and /tmp/sfdisk.out but that never seems to be used elsewhere. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Partitioning ===&lt;br /&gt;
&lt;br /&gt;
If you have complex partitioning needs, you can partition, format, and mount your volumes manually, then just supply the root mountpoint to &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt;. Doing so implicitly behaves as though &#039;&#039;&#039;-m sys&#039;&#039;&#039; had also been specified.&lt;br /&gt;
&lt;br /&gt;
See [[Setting up disks manually]] for more information.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== RAID ====&lt;br /&gt;
&amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; will automatically build a RAID array if you supply the &#039;&#039;&#039;-r&#039;&#039;&#039; switch, or if you specify more than one device. The array will always be [https://en.m.wikipedia.org/wiki/Standard_RAID_levels#RAID_1 RAID1] (and [https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#The_version-0.90_Superblock_Format --metadata=0.90]) for the /boot volumes, but will be [https://en.m.wikipedia.org/wiki/Standard_RAID_levels#RAID_5 RAID5] (and [https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#The_version-1_Superblock_Format --metadata=1.2]) for non-boot volumes when 3 or more devices are supplied.&lt;br /&gt;
&lt;br /&gt;
If you instead want to build your RAID array manually, see [[Setting up a software RAID array]]. Then format and mount the disks, and supply the root mountpoint to &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== LVM ====&lt;br /&gt;
&amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt; will automatically build and use volumes in an LVM group if you supply the &#039;&#039;&#039;-L&#039;&#039;&#039; switch. The group and volumes created by the script will have the following names:&lt;br /&gt;
&lt;br /&gt;
* volume group: &#039;&#039;&#039;vg0&#039;&#039;&#039;&lt;br /&gt;
* swap volume: &#039;&#039;&#039;lv_swap&#039;&#039;&#039; (only created when swap size &amp;gt; 0)&lt;br /&gt;
* root volume: &#039;&#039;&#039;lv_root&#039;&#039;&#039; (only created when &#039;&#039;&#039;-m sys&#039;&#039;&#039; is specified or interactively selected)&lt;br /&gt;
* var volume: &#039;&#039;&#039;lv_var&#039;&#039;&#039; (only created when &#039;&#039;&#039;-m data&#039;&#039;&#039; is specified or interactively selected)&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;lv_var&#039;&#039;&#039; or &#039;&#039;&#039;lv_root&#039;&#039;&#039; volumes are created to occupy all remaining space in the volume group.&lt;br /&gt;
&lt;br /&gt;
If you need to change any of these settings, you can use &amp;lt;code&amp;gt;vgrename&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lvrename&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;lvreduce&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;lvresize&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you instead want to build your LVM system manually, see [[Setting up Logical Volumes with LVM]]. Then format and mount the disks, and supply the root mountpoint to &amp;lt;code&amp;gt;setup-disk&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
=Setup-Disk Usage=&lt;br /&gt;
&lt;br /&gt;
usage: setup-disk [-hqr] [-k kernelflavor] [-m MODE] [-o apkovl] [-s SWAPSIZE]&lt;br /&gt;
		  [MOUNTPOINT | DISKDEV...]&lt;br /&gt;
&lt;br /&gt;
Install alpine on harddisk.&lt;br /&gt;
&lt;br /&gt;
If MOUNTPOINT is specified, then do a traditional disk install with MOUNTPOINT&lt;br /&gt;
as root.&lt;br /&gt;
&lt;br /&gt;
If DISKDEV is specified, then use the specified disk(s) without asking. If&lt;br /&gt;
multiple disks are specified then set them up in a RAID array. If there are&lt;br /&gt;
more than 2 disks, then use raid level 5 instead of raid level 1.&lt;br /&gt;
&lt;br /&gt;
options:&lt;br /&gt;
 -h  Show this help&lt;br /&gt;
 -m  Use disk for MODE without asking, where MODE is either &#039;data&#039; or &#039;root&#039;&lt;br /&gt;
 -o  Restore system from given apkovl file&lt;br /&gt;
 -k  Use kernelflavor instead of $KERNEL_FLAVOR&lt;br /&gt;
 -L  Use LVM to manage partitions&lt;br /&gt;
 -q  Exit quietly if no disks are found&lt;br /&gt;
 -r  Enable software RAID1 with single disk&lt;br /&gt;
 -s  Use SWAPSIZE MB instead of $SWAP_SIZE MB for swap (Use 0 to disable swap)&lt;br /&gt;
 -v  Be more verbose about what is happening&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Disk Install Styles==&lt;br /&gt;
&lt;br /&gt;
You can select between &#039;sys&#039; or &#039;data&#039;.&lt;br /&gt;
&lt;br /&gt;
sys:&lt;br /&gt;
  This mode is a traditional disk install. The following partitions will be&lt;br /&gt;
  created on the disk: /boot, / (filesystem root) and swap.&lt;br /&gt;
    &lt;br /&gt;
  This mode may be used for development boxes, desktops, virtual servers, etc.&lt;br /&gt;
&lt;br /&gt;
data:&lt;br /&gt;
  This mode uses your disk(s) for data storage, not for the operating system.&lt;br /&gt;
  The system itself will run from tmpfs (RAM).&lt;br /&gt;
&lt;br /&gt;
  Use this mode if you only want to use the disk(s) for a mailspool, databases,&lt;br /&gt;
  logs, etc.&lt;br /&gt;
&lt;br /&gt;
none:&lt;br /&gt;
  Run without installing to disk.&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== setup-lbu ==&lt;br /&gt;
&lt;br /&gt;
This script is only invoked by &amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; for the &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; installation type (ramdisk).&lt;br /&gt;
&lt;br /&gt;
In the previous step, we set up the disk used for the swap partition and for mounting &amp;lt;code&amp;gt;/var&amp;lt;/code&amp;gt;. Here we set up where &amp;lt;code&amp;gt;lbu commit&amp;lt;/code&amp;gt; will store its backup configuration. See [[Alpine local backup]] for more information.&lt;br /&gt;
&lt;br /&gt;
When started, &amp;lt;code&amp;gt;setup-lbu&amp;lt;/code&amp;gt; will prompt where to store your data. The options it will prompt for will be taken from the directories found in &amp;lt;code&amp;gt;/media&amp;lt;/code&amp;gt; (except for &amp;lt;code&amp;gt;cdrom&amp;lt;/code&amp;gt;). [not sure how these are mounted: are they automatically mounted by setup-lbu? Does the user have to manually mount using another tty?]&lt;br /&gt;
&lt;br /&gt;
== setup-apkcache ==&lt;br /&gt;
&lt;br /&gt;
This script is only invoked by &amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; for the &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; installation type (ramdisk).&lt;br /&gt;
&lt;br /&gt;
In the previous steps, we set up the disk partitions and told Alpine where it can save its configuration for ramdisk installations. Here we tell Alpine where to save the apk files that you want to persist across boots. The apk cache is where apk stores downloaded packages, so that the system does not need to fetch them at each boot and does not depend on the network. See [[Local APK cache]] for a detailed explanation.&lt;br /&gt;
&lt;br /&gt;
You should be able to use a partition that you set up in the previous steps.&lt;br /&gt;
&lt;br /&gt;
== setup-bootable ==&lt;br /&gt;
This is a standalone script; it&#039;s not invoked by &amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; but must be run manually.&lt;br /&gt;
&lt;br /&gt;
Its purpose is to create media that boots into tmpfs by copying the contents of an ISO onto a USB key, CF, or similar media.&lt;br /&gt;
&lt;br /&gt;
For a higher-level walkthrough, see [[Create a Bootable USB#Creating_a_bootable_Alpine_Linux_USB_Stick_from_the_command_line|Creating a bootable Alpine Linux USB Stick from the command line]].&lt;br /&gt;
&lt;br /&gt;
This script accepts the following arguments and command-line switches (you can run &amp;lt;code&amp;gt;setup-bootable -h&amp;lt;/code&amp;gt; to see a usage message).&lt;br /&gt;
&lt;br /&gt;
{{Cmd|setup-bootable &amp;lt;var&amp;gt;source&amp;lt;/var&amp;gt; [&amp;lt;var&amp;gt;dest&amp;lt;/var&amp;gt;]}}&lt;br /&gt;
&lt;br /&gt;
The argument &amp;lt;var&amp;gt;source&amp;lt;/var&amp;gt; can be a directory, an ISO (mounted at &amp;lt;code&amp;gt;MNT&amp;lt;/code&amp;gt;, default {{Path|/mnt}}), or a URL (downloaded with &amp;lt;code&amp;gt;WGET&amp;lt;/code&amp;gt;, default &amp;lt;code&amp;gt;wget&amp;lt;/code&amp;gt;). The argument &amp;lt;var&amp;gt;dest&amp;lt;/var&amp;gt; can be a directory mountpoint; it defaults to {{Path|/media/usb}} if not supplied.&lt;br /&gt;
&lt;br /&gt;
{{Define|-k|Keep alpine_dev in {{Path|syslinux.cfg}}; otherwise, replace with UUID.}}&lt;br /&gt;
{{Define|-u|Upgrade mode: keep existing {{Path|syslinux.cfg}} and don&#039;t run &amp;lt;code&amp;gt;syslinux&amp;lt;/code&amp;gt;}}&lt;br /&gt;
{{Define|-f|Overwrite {{Path|syslinux.cfg}} even if &#039;&#039;&#039;-u&#039;&#039;&#039; was specified.}}&lt;br /&gt;
{{Define|-s|Force the running of &amp;lt;code&amp;gt;syslinux&amp;lt;/code&amp;gt; even if &#039;&#039;&#039;-u&#039;&#039;&#039; was specified.}}&lt;br /&gt;
{{Define|-v|Verbose mode}}&lt;br /&gt;
&lt;br /&gt;
The script will ensure that &amp;lt;var&amp;gt;source&amp;lt;/var&amp;gt; and &amp;lt;var&amp;gt;dest&amp;lt;/var&amp;gt; are available; will copy the contents of &amp;lt;var&amp;gt;source&amp;lt;/var&amp;gt; to &amp;lt;var&amp;gt;dest&amp;lt;/var&amp;gt;, ensuring first that there&#039;s enough space; and unless &#039;&#039;&#039;-u&#039;&#039;&#039; was specified, will make &amp;lt;var&amp;gt;dest&amp;lt;/var&amp;gt; bootable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== setup-cryptswap ==&lt;br /&gt;
This is a standalone script; it&#039;s not invoked by &amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; but must be run manually.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;code&amp;gt;setup-cryptswap&amp;lt;/code&amp;gt; [&amp;lt;var&amp;gt;partition&amp;lt;/var&amp;gt; | none]&lt;br /&gt;
&lt;br /&gt;
{{Todo|Does this script still work? At what stage can it be run: only after setup-alpine?}}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== setup-xorg-base ==&lt;br /&gt;
This is a standalone script; it&#039;s not invoked by &amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; but must be run manually.&lt;br /&gt;
&lt;br /&gt;
Installs a basic Xorg configuration, including among other packages: &amp;lt;code&amp;gt;xorg-server xf86-video-vesa xf86-input-evdev xf86-input-mouse xf86-input-keyboard udev&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional packages may be supplied as arguments to &amp;lt;code&amp;gt;setup-xorg-base&amp;lt;/code&amp;gt;. You might need, for example, some of: &amp;lt;code&amp;gt;xf86-input-synaptics xf86-video-&amp;lt;var&amp;gt;something&amp;lt;/var&amp;gt; xinit&amp;lt;/code&amp;gt;. For Qemu, see [[Qemu#Using_Xorg_inside_Qemu|Qemu]]. For Intel GPUs, see [[Intel Video]].&lt;br /&gt;
&lt;br /&gt;
== Documentation needed ==&lt;br /&gt;
&lt;br /&gt;
=== setup-xen-dom0 ===&lt;br /&gt;
&lt;br /&gt;
=== setup-gparted-desktop ===&lt;br /&gt;
Uses openbox.&lt;br /&gt;
&lt;br /&gt;
This is a standalone script; it&#039;s not invoked by &amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; but must be run manually.&lt;br /&gt;
&lt;br /&gt;
=== setup-mta ===&lt;br /&gt;
Uses ssmtp.&lt;br /&gt;
&lt;br /&gt;
This is a standalone script; it&#039;s not invoked by &amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; but must be run manually.&lt;br /&gt;
&lt;br /&gt;
=== setup-acf ===&lt;br /&gt;
This is a standalone script; it&#039;s not invoked by &amp;lt;code&amp;gt;setup-alpine&amp;lt;/code&amp;gt; but must be run manually.&lt;br /&gt;
&lt;br /&gt;
This script was named &amp;lt;code&amp;gt;setup-webconf&amp;lt;/code&amp;gt; before Alpine 1.9 beta 4.&lt;br /&gt;
&lt;br /&gt;
See [[:Category:ACF|ACF pages]] for more information.&lt;br /&gt;
&lt;br /&gt;
=== setup-ntp ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= See also =&lt;br /&gt;
* [https://docs.alpinelinux.org/ beta.docs.alpinelinux.org]&lt;br /&gt;
&lt;br /&gt;
[[Category:Installation]]&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18468</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18468"/>
		<updated>2021-01-04T08:27:33Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Format and mount EFI partition */ multidisk&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the {{ic|/boot}} boot pool, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system, via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password when prompted. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB&#039;s ZFS support is read-only, you will need to boot into a live environment to unset this property if the {{ic|bootfs}} dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more info, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select the default option {{ic|1=disk=none}} at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure that ZFS can find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
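The same kind of suffix can be generated with a shorter pipeline; a minimal equivalent sketch (assumes {{ic|head -c}}, available in both BusyBox and coreutils):&lt;br /&gt;

```shell
# Sketch: generate a 6-character lowercase alphanumeric suffix without dd.
# Take 100 random bytes, keep only lowercase letters and digits,
# then truncate to the first 6 characters.
poolUUID=$(head -c 100 /dev/urandom | tr -dc a-z0-9 | cut -c-6)
echo $poolUUID
```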
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
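For example, a loop can print the per-disk partitioning commands so the layout can be reviewed before anything is destroyed (a dry-run sketch; the two by-id paths are placeholders, and removing the {{ic|echo}} makes it destructive):&lt;br /&gt;

```shell
# Dry-run sketch: print, rather than run, the sgdisk commands
# for each target disk. The disk paths below are placeholders.
for d in /dev/disk/by-id/target_disk1 /dev/disk/by-id/target_disk2; do
    echo sgdisk --zap-all $d
    echo sgdisk -n1:0:+512M -t1:EF00 $d
    echo sgdisk -n2:0:+2G $d
    echo sgdisk -n3:0:0 $d
done
```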
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
Depending on your application, separate datasets may need to be created for folders inside {{ic|/var/lib}} (but not for {{ic|/var/lib}} itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for {{ic|/home}}.&lt;br /&gt;
 d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
 for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&lt;br /&gt;
{{ic|lxc}} is for Linux containers, {{ic|libvirt}} is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
For multi-disk setup, a cron job needs to be configured to sync contents. It should be similar to [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Multi-ESP this article].&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will return an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes from both merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
The root account is accessed via the {{ic|su}} command with the root password.&lt;br /&gt;
&lt;br /&gt;
Optionally, install {{ic|sudo}} to disable the root password and use the user&#039;s own password instead.&lt;br /&gt;
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit {{ic|/etc/mkinitfs/mkinitfs.conf}} (we are still inside the chroot) and append the {{ic|cryptsetup}} module to the {{ic|features}} parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18467</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18467"/>
		<updated>2021-01-04T08:24:08Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Dataset creation */ different example&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption, and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit the {{ic|encryption}}, {{ic|keylocation}} and {{ic|keyformat}} properties when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of swap partition can not be stored in the unencrypted boot pool. BusyBox initramfs only supports unlocking exactly one LUKS container at boot, therefore boot pool and swap partition can not be both LUKS encrypted. A possible workaround is to import and mount boot pool after booting the system via systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply its encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB&#039;s ZFS support is read-only, you will need to boot into a live environment to unset this property if the {{ic|bootfs}} dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more info, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Supports single disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, which ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to setup the live environment, select default option {{ic|1=disk=none}} at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
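The same kind of suffix can be generated with a shorter pipeline; a minimal equivalent sketch (assumes {{ic|head -c}}, available in both BusyBox and coreutils):&lt;br /&gt;

```shell
# Sketch: generate a 6-character lowercase alphanumeric suffix without dd.
# Take 100 random bytes, keep only lowercase letters and digits,
# then truncate to the first 6 characters.
poolUUID=$(head -c 100 /dev/urandom | tr -dc a-z0-9 | cut -c-6)
echo $poolUUID
```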
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
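For example, a loop can print the per-disk partitioning commands so the layout can be reviewed before anything is destroyed (a dry-run sketch; the two by-id paths are placeholders, and removing the {{ic|echo}} makes it destructive):&lt;br /&gt;

```shell
# Dry-run sketch: print, rather than run, the sgdisk commands
# for each target disk. The disk paths below are placeholders.
for d in /dev/disk/by-id/target_disk1 /dev/disk/by-id/target_disk2; do
    echo sgdisk --zap-all $d
    echo sgdisk -n1:0:+512M -t1:EF00 $d
    echo sgdisk -n2:0:+2G $d
    echo sgdisk -n3:0:0 $d
done
```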
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
Depending on your application, separate datasets may need to be created for folders inside {{ic|/var/lib}} (but not for {{ic|/var/lib}} itself!)&lt;br /&gt;
&lt;br /&gt;
Here we create several folders for persistent (shared) data, like we just did for {{ic|/home}}.&lt;br /&gt;
 d=&#039;libvirt lxc docker&#039;&lt;br /&gt;
 for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&lt;br /&gt;
{{ic|lxc}} is for Linux containers, {{ic|libvirt}} is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will return an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes from both merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
The root account is accessed via the {{ic|su}} command with the root password.&lt;br /&gt;
&lt;br /&gt;
Optionally, install {{ic|sudo}} so that administrative commands can use the user&#039;s own password instead of the root password.&lt;br /&gt;
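A minimal sketch of that optional {{ic|sudo}} setup (the {{ic|/etc/sudoers.d/wheel}} path and the one-line policy are illustrative, not part of this guide). After {{ic|apk add sudo}}, a drop-in file lets {{ic|wheel}} members authenticate with their own password:&lt;br /&gt;

```shell
# Sketch: write a sudoers drop-in for the wheel group.
# On the real system the target would be /etc/sudoers.d/wheel;
# here we write to a temporary file so the snippet is safe to run.
f=$(mktemp)
echo '%wheel ALL=(ALL) ALL' > "$f"
cat "$f"
# prints: %wheel ALL=(ALL) ALL
```

Validate the real file with {{ic|visudo -c}} before relying on it.&lt;br /&gt;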
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the {{ic|/etc/mkinitfs/mkinitfs.conf}} file and append the {{ic|cryptsetup}} module to the {{ic|features}} parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18466</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18466"/>
		<updated>2021-01-04T08:22:50Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Dataset creation */ varlib&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting, via an OpenRC service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed that keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB&#039;s ZFS support is read-only, you will need to boot into a live environment to unset this property if the {{ic|bootfs}} dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more info, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select the default option {{ic|1=disk=none}} at the last step, when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
We must install {{ic|eudev}} here to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; names like {{ic|/dev/sda}} for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in the live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
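ZFS native encryption with {{ic|1=keyformat=passphrase}} requires at least 8 bytes; a quick sanity check you can run before continuing (a sketch, not part of the original guide):&lt;br /&gt;

```shell
# Fail early if the passphrase is too short for keyformat=passphrase (8 byte minimum)
ENCRYPTION_PWD='your root pool encryption password, 8 characters min'
if [ "${#ENCRYPTION_PWD}" -lt 8 ]; then
    echo "ENCRYPTION_PWD must be at least 8 characters"
    exit 1
fi
echo ok
```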
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is problematic, therefore it is recommended to create a separate swap partition if needed. It cannot be used for hibernation, since the encryption key is discarded at power-off.&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
Depending on your applications, separate datasets need to be created for folders inside {{ic|/var/lib}} (not for {{ic|/var/lib}} itself!).&lt;br /&gt;
&lt;br /&gt;
Here we create datasets for several folders holding persistent (shared) data, as we just did for {{ic|/home}}.&lt;br /&gt;
 for i in AccountsService NetworkManager libvirt lxc docker; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done&lt;br /&gt;
{{ic|AccountsService}} is for GNOME desktop. {{ic|libvirt}} is for storing virtual machine images, etc.&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
{{ic|setup-disk}} refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
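The {{ic|sed}} substitution above simply injects {{ic|zfs}} at the front of the {{ic|supported}} list. A harmless demonstration on a sample string (a sketch; it does not touch the real {{ic|/sbin/setup-disk}}, and the sample list is illustrative):&lt;br /&gt;

```shell
# Demonstrate the substitution on a copy of the line, not on /sbin/setup-disk
line='supported="ext4 btrfs xfs"'
echo "$line" | sed 's|supported="ext|supported="zfs ext|g'
# prints: supported="zfs ext4 btrfs xfs"
```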
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that {{ic|grub-probe}} will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later, inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed half-way through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
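To verify which {{ic|stat}} is active after installing {{ic|coreutils}}, you can query the filesystem type of {{ic|/}} directly (the output depends on your root filesystem; {{ic|zfs}} is expected on the installed system):&lt;br /&gt;

```shell
# GNU coreutils stat: -f queries the filesystem, %T prints its type name.
# BusyBox stat would print UNKNOWN here, which breaks grub-mkconfig's fallback.
stat -f -c %T /
```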
=== Missing root pool ===&lt;br /&gt;
2. {{ic|grub-probe}} will return an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the {{ic|zpool}} command will import the pools contained in this cache at boot.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
The root account is accessed via the {{ic|su}} command with the root password.&lt;br /&gt;
&lt;br /&gt;
Optionally, install {{ic|sudo}} so that administrative commands can use the user&#039;s own password instead of the root password.&lt;br /&gt;
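A minimal sketch of that optional {{ic|sudo}} setup (the {{ic|/etc/sudoers.d/wheel}} path and the one-line policy are illustrative, not part of this guide). After {{ic|apk add sudo}}, a drop-in file lets {{ic|wheel}} members authenticate with their own password:&lt;br /&gt;

```shell
# Sketch: write a sudoers drop-in for the wheel group.
# On the real system the target would be /etc/sudoers.d/wheel;
# here we write to a temporary file so the snippet is safe to run.
f=$(mktemp)
echo '%wheel ALL=(ALL) ALL' > "$f"
cat "$f"
# prints: %wheel ALL=(ALL) ALL
```

Validate the real file with {{ic|visudo -c}} before relying on it.&lt;br /&gt;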
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the {{ic|/etc/mkinitfs/mkinitfs.conf}} file and append the {{ic|cryptsetup}} module to the {{ic|features}} parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18465</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18465"/>
		<updated>2021-01-04T07:56:19Z</updated>

		<summary type="html">&lt;p&gt;R3: no-bootfs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting, via an OpenRC service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed that keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
== DO NOT set bootfs property! ==&lt;br /&gt;
Do not set {{ic|bootfs}} on any pool! &lt;br /&gt;
&lt;br /&gt;
It will override {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render boot environment menu in GRUB &#039;&#039;&#039;INVALID&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
As GRUB&#039;s ZFS support is read-only, you will need to boot into a live environment to unset this property if the {{ic|bootfs}} dataset is broken.&lt;br /&gt;
&lt;br /&gt;
The boot environment menu is currently only available for GRUB. For more info, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select the default option {{ic|1=disk=none}} at the last step, when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
We must install {{ic|eudev}} here to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; names like {{ic|/dev/sda}} for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in the live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
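ZFS native encryption with {{ic|1=keyformat=passphrase}} requires at least 8 bytes; a quick sanity check you can run before continuing (a sketch, not part of the original guide):&lt;br /&gt;

```shell
# Fail early if the passphrase is too short for keyformat=passphrase (8 byte minimum)
ENCRYPTION_PWD='your root pool encryption password, 8 characters min'
if [ "${#ENCRYPTION_PWD}" -lt 8 ]; then
    echo "ENCRYPTION_PWD must be at least 8 characters"
    exit 1
fi
echo ok
```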
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only the features GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} when {{ic|stat}} comes from BusyBox.&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
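A quick sanity check that the coreutils stat is now the one answering (a sketch; with {{ic|/}} on ZFS it should print zfs, while BusyBox stat would print UNKNOWN):

```shell
# stat -f -c %T reports the filesystem type; coreutils knows 'zfs',
# BusyBox does not. The type printed here depends on what backs /.
fstype=$(stat -f -c %T /)
echo "filesystem type of /: $fstype"
```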
=== Missing root pool ===&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 addgroup $TARGET_USERNAME video&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
Root account is accessed via {{ic|su}} command with root password.&lt;br /&gt;
&lt;br /&gt;
Optionally install {{ic|sudo}} so administrative commands use the user&#039;s own password instead of the root password.&lt;br /&gt;
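A minimal sketch of that optional step (the wheel sudoers rule and file path are assumptions, not from this guide; commands are echoed as a dry run, so remove the echo to apply them inside the chroot):

```shell
# Optional: let wheel-group members run sudo with their own password.
# Dry run only -- each command is echoed instead of executed.
echo apk add sudo
echo "echo '%wheel ALL=(ALL) ALL' > /etc/sudoers.d/wheel"
```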
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (we are still inside the chroot) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to import on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18464</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18464"/>
		<updated>2021-01-04T04:01:09Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Boot environment manager */ aports&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool ({{ic|/boot}}), everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption}}, {{ic|-O keylocation}} and {{ic|-O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS can cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVOL as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after boot via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password there. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, it&#039;s shipped with ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to setup the live environment, select default option {{ic|1=disk=none}} at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in the live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so ZFS can find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS there.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
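The per-disk repetition above can be scripted. A minimal dry-run sketch (the by-id paths are hypothetical placeholders; each command is echoed rather than executed, so drop the echo to actually partition):

```shell
# Dry run: print the sgdisk commands for each target disk.
# Remove the leading 'echo' to execute them for real.
for DISK in /dev/disk/by-id/target_disk1 /dev/disk/by-id/target_disk2; do
    echo sgdisk --zap-all "$DISK"
    echo sgdisk -n1:0:+512M -t1:EF00 "$DISK"
    echo sgdisk -n2:0:+2G "$DISK"        # boot pool
    echo sgdisk -n3:0:0 "$DISK"          # root pool
done
```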
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only the features GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} when {{ic|stat}} comes from BusyBox.&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
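A quick sanity check that the coreutils stat is now the one answering (a sketch; with {{ic|/}} on ZFS it should print zfs, while BusyBox stat would print UNKNOWN):

```shell
# stat -f -c %T reports the filesystem type; coreutils knows 'zfs',
# BusyBox does not. The type printed here depends on what backs /.
fstype=$(stat -f -c %T /)
echo "filesystem type of /: $fstype"
```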
=== Missing root pool ===&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 addgroup $TARGET_USERNAME video&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
Root account is accessed via {{ic|su}} command with root password.&lt;br /&gt;
&lt;br /&gt;
Optionally install {{ic|sudo}} so administrative commands use the user&#039;s own password instead of the root password.&lt;br /&gt;
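A minimal sketch of that optional step (the wheel sudoers rule and file path are assumptions, not from this guide; commands are echoed as a dry run, so remove the echo to apply them inside the chroot):

```shell
# Optional: let wheel-group members run sudo with their own password.
# Dry run only -- each command is echoed instead of executed.
echo apk add sudo
echo "echo '%wheel ALL=(ALL) ALL' > /etc/sudoers.d/wheel"
```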
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request].&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (we are still inside the chroot) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to import on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, AT LEAST 8 CHARACTERS&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for the {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18463</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18463"/>
		<updated>2021-01-03T19:07:18Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Boot environment manager */ denpendent&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS can cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide instead sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting, via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers, so it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password when prompted. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment. Select the default option {{ic|1=disk=none}} at the last step, when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; names like /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in the live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so that ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
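The suffix generation above can be sketched and sanity-checked in plain shell. A minimal sketch, assuming {{ic|head -c 512}} as an alternative way to read ample random bytes (the original&#039;s {{ic|dd}} with {{ic|1=count=100}} works the same way, but with fewer input bytes there is a small chance of ending up with fewer than six usable characters):&lt;br /&gt;

```shell
# Generate a short random pool-name suffix: keep only [a-z0-9] bytes
# and take the first six characters. 512 random bytes comfortably
# yield at least six characters in that set.
poolUUID=$(head -c 512 /dev/urandom | tr -dc 'a-z0-9' | cut -c-6)

# Sanity check: the suffix must contain only lowercase alphanumerics.
case "$poolUUID" in
  *[!a-z0-9]*) echo "unexpected character in suffix"; exit 1 ;;
esac
echo "suffix: $poolUUID"
```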
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least three partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] on ZFS is also problematic, so it is recommended to create a separate swap partition if needed; this guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded on power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only the features GRUB supports:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
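The {{ic|sed}} edit above simply prepends {{ic|zfs}} to the {{ic|supported=}} list inside the setup-disk script. Its effect can be previewed on a throwaway copy first (the temp file and its contents here are a stand-in, not the real {{ic|/sbin/setup-disk}}):&lt;br /&gt;

```shell
# Stand-in for /sbin/setup-disk with a sample supported= line.
f=$(mktemp)
printf 'supported="ext4 btrfs vfat"\n' | tee "$f"

# Same substitution as in the guide: prepend zfs to the supported list.
sed -i 's|supported="ext|supported="zfs ext|g' "$f"
line=$(cat "$f")
rm -f "$f"
echo "$line"
```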
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed half-way through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB fails to detect the ZFS filesystem of {{ic|/boot}} because of {{ic|stat}} from BusyBox.&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the {{ic|zpool}} command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes from the merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
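This {{ic|sed}} one-liner inserts {{ic|eudev}} right after the first occurrence of {{ic|zfs}} in the features list. A minimal sketch on a temp file (the sample features line is an assumption, shortened from the real config) shows the resulting line:&lt;br /&gt;

```shell
# Stand-in for /etc/mkinitfs/mkinitfs.conf with a sample features line.
f=$(mktemp)
printf 'features="ata base scsi usb virtio ext4 zfs"\n' | tee "$f"

# Same substitution as in the guide: append eudev after zfs.
sed -i 's|zfs|zfs eudev|' "$f"
conf=$(cat "$f")
rm -f "$f"
echo "$conf"
```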
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via {{ic|fstab}} requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
The root account is accessed via the {{ic|su}} command with the root password.&lt;br /&gt;
&lt;br /&gt;
Optionally, install {{ic|sudo}} to disable the root password and use the user&#039;s own password instead.&lt;br /&gt;
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It hasn&#039;t been packaged for Alpine Linux yet. To install, download the latest release, extract the files and move the scripts to the directories defined in the Makefile.&lt;br /&gt;
&lt;br /&gt;
It depends on {{ic|coreutils}}:&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit {{ic|/etc/mkinitfs/mkinitfs.conf}} and append the {{ic|cryptsetup}} module to the {{ic|features}} parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
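The crypttab line above has four whitespace-separated fields (name, device, key source, options); when copying from the wiki, tabs can silently turn into spaces, so a {{ic|printf}} sketch makes the fields explicit. This is a sketch only: the disk id is hypothetical and it writes to a temp file, not the real {{ic|/etc/crypttab}}:&lt;br /&gt;

```shell
DISK=/dev/disk/by-id/ata-EXAMPLE      # hypothetical disk id
f=$(mktemp)                           # stand-in for /etc/crypttab

# Emit the four crypttab fields explicitly, tab-separated:
# name, underlying device, key source, options.
printf '%s\t%s\t%s\t%s\n' \
    swap "$DISK-part4" /dev/urandom \
    'swap,cipher=aes-cbc-essiv:sha256,size=256' | tee -a "$f"

# Count the tab-separated fields on the first line.
fields=$(awk -F'\t' 'NR==1 {print NF}' "$f")
rm -f "$f"
echo "$fields"
```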
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
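The umount pipeline above works because {{ic|mount}} lists a parent mount point before its children, so reversing the list with {{ic|tac}} unmounts children first. The ordering idea can be sketched without touching real mounts (the paths below are illustrative):&lt;br /&gt;

```shell
# A parent mount always appears before its children in `mount` output;
# reversing the list with tac therefore unmounts children first.
mounts='/mnt
/mnt/boot
/mnt/boot/efi'
reversed=$(printf '%s\n' "$mounts" | tac)
first=$(printf '%s\n' "$reversed" | head -n 1)
echo "$first"   # deepest mount point comes out first
```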
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, AT LEAST 8 CHARACTERS&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for the {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18462</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18462"/>
		<updated>2021-01-03T18:30:16Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Add normal user account */ be manager&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS can cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide instead sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting, via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers, so it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password when prompted. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment. Select the default option {{ic|1=disk=none}} at the last step, when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; names like /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so that ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least three partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] on ZFS is also problematic, so it is recommended to create a separate swap partition if needed; this guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded on power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only the features GRUB supports:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless ZPOOL_VDEV_NAME_PATH=YES is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystems array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to initramfs and zpool command will import pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
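A quick way to see what the {{ic|sed}} above does is to run it against a sample features line (the sample contents here are assumed, not your actual config):&lt;br /&gt;

```shell
# Hypothetical mkinitfs features line; the sed only appends "eudev"
# right after "zfs", leaving the rest of the line untouched.
echo 'features="ata base ide scsi usb virtio ext4 zfs"' \
  | sed 's|zfs|zfs eudev|'
# → features="ata base ide scsi usb virtio ext4 zfs eudev"
```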
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
Root account is accessed via {{ic|su}} command with root password.&lt;br /&gt;
&lt;br /&gt;
Optionally install {{ic|sudo}} to disable root password and use user&#039;s own password instead.&lt;br /&gt;
&lt;br /&gt;
= Boot environment manager =&lt;br /&gt;
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.&lt;br /&gt;
&lt;br /&gt;
It hasn&#039;t been packaged for Alpine Linux yet. To install it, download the latest release, extract the files and move the scripts to the directories defined in the Makefile.&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/mnt/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
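The umount pipeline above simply reverses the mount list so that deeper mountpoints (mounted later) are unmounted first; a sketch with fabricated {{ic|mount}} output:&lt;br /&gt;

```shell
# Fabricated `mount` output for illustration; field 3 is the mountpoint.
# Reversing with tac unmounts /tmp/target/dev/shm before its parent /tmp/target/dev.
MOUNTPOINT=/tmp/target
printf '%s\n' \
  'devfs on /tmp/target/dev type devtmpfs (rw)' \
  'proc on /tmp/target/proc type proc (rw)' \
  'shm on /tmp/target/dev/shm type tmpfs (rw)' \
  | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}'
# → /tmp/target/dev/shm
#   /tmp/target/proc
#   /tmp/target/dev
```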
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18461</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18461"/>
		<updated>2021-01-03T16:25:51Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Add normal user account */ sudo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the swap partition&#039;s key can not be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition can not both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment, selecting the default option {{ic|1=disk=none}} at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
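As a sanity check, the generated suffix should always be six lowercase alphanumerics; this sketch uses a larger read so that enough characters survive the {{ic|tr}} filter:&lt;br /&gt;

```shell
# Draw random bytes, keep only [a-z0-9], take the first six characters.
# 1000 bytes makes coming up short after filtering vanishingly unlikely.
poolUUID=$(dd if=/dev/urandom bs=1 count=1000 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
echo "$poolUUID" | grep -Eq '^[a-z0-9]{6}$' && echo ok
# → ok
```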
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single disk, UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it can not be used for hibernation, since the encryption key is discarded on power off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those that GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless ZPOOL_VDEV_NAME_PATH=YES is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystems array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to initramfs and zpool command will import pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
Root account is accessed via {{ic|su}} command with root password.&lt;br /&gt;
&lt;br /&gt;
Optionally install {{ic|sudo}} to disable root password and use user&#039;s own password instead.&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/mnt/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot Live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18460</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18460"/>
		<updated>2021-01-03T16:02:08Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Recovery in Live environment */ desc&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the swap partition&#039;s key can not be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition can not both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment. Select the default option {{ic|1=disk=none}} at the last step, when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; names like /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in the live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use a unique disk path instead of {{ic|/dev/sda}} to ensure ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if swap is needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded on power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
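For example, a sketch of a three-disk RAID-Z root pool — the {{ic|...}} stands for the same options as in the single-disk command above, and the disk paths are placeholders:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID raidz \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk3-part3&lt;br /&gt;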
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later, inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will emit an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes from the merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/mnt/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
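As before, the rebuild command is along the lines of:&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;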
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the live environment (extended release) and install packages:&lt;br /&gt;
 setup-alpine      # basic settings: keyboard layout, timezone ...&lt;br /&gt;
 apk add zfs eudev # zfs-utils and persistent device name support&lt;br /&gt;
 setup-udev        # populate persistent names&lt;br /&gt;
 modprobe zfs      # load kernel module&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18459</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18459"/>
		<updated>2021-01-03T15:59:01Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Setup live environment */ wording&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption, and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide instead sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after the system has booted, via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from a swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers, so it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed that keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment. Select the default option {{ic|1=disk=none}} at the last step, when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; names like /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in the live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use a unique disk path instead of {{ic|/dev/sda}} to ensure ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if swap is needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded on power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later, inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will emit an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes from the merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit {{ic|/etc/mkinitfs/mkinitfs.conf}} and append {{ic|cryptsetup}} to the {{ic|features}} parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot for the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release and install the required packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18458</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18458"/>
		<updated>2021-01-03T15:57:45Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Initramfs fixes */ fixes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via a boot-time service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password when prompted at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for the disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
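The suffix pipeline can be checked on its own. A minimal sketch (assuming only dd, tr, and cut from a BusyBox or coreutils userland) that generates a suffix of at most six lowercase alphanumeric characters:&lt;br /&gt;

```shell
# Read 100 random bytes, keep only lowercase letters and digits,
# then take at most the first 6 characters as the pool suffix.
suffix=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
echo "$suffix"
```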
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can still utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
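The effect of this one-line patch can be sketched against a throwaway file (a stand-in for the supported-filesystem line in setup-disk, not the real script):&lt;br /&gt;

```shell
# Simulate the sed edit on a stand-in copy of the supported-filesystem line.
f=$(mktemp)
echo 'supported="ext2 ext3 ext4 btrfs xfs"' > "$f"
sed -i 's|supported="ext|supported="zfs ext|g' "$f"
cat "$f"   # supported="zfs ext2 ext3 ext4 btrfs xfs"
```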
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
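With coreutils installed, the fallback can be tried directly; on the installed system it prints {{ic|zfs}}, while on other systems it prints whatever filesystem {{ic|/}} lives on:&lt;br /&gt;

```shell
# coreutils stat: -f queries the filesystem of the path, %T prints its type name.
stat -f -c %T /
```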
=== Missing root pool ===&lt;br /&gt;
2. grub-mkconfig will produce an empty root pool name if GRUB does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the {{ic|zpool}} command will import the pools recorded in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via {{ic|fstab}} requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit {{ic|/etc/mkinitfs/mkinitfs.conf}} and append {{ic|cryptsetup}} to the {{ic|features}} parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
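As a sketch, the two appends above produce tab-separated entries like the following (written to a scratch directory here; $DISK is a made-up stand-in, and on the target the files are {{ic|/etc/crypttab}} and {{ic|/etc/fstab}}):&lt;br /&gt;

```shell
# Recreate the two entries in a temporary directory and show the result.
tmp=$(mktemp -d)
DISK=/dev/disk/by-id/ata-EXAMPLE   # stand-in for the real disk path
printf 'swap\t%s-part4\t/dev/urandom\tswap,cipher=aes-cbc-essiv:sha256,size=256\n' "$DISK" >> "$tmp/crypttab"
printf '/dev/mapper/swap\tnone\tswap\tdefaults\t0\t0\n' >> "$tmp/fstab"
cat "$tmp/crypttab" "$tmp/fstab"
```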
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot for the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
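The unmount pipeline above works because {{ic|mount}} lists parent mounts before the mounts nested inside them, so reversing the list with {{ic|tac}} yields a deepest-first unmount order. A toy illustration with canned mount-style lines (no real mounts are touched):&lt;br /&gt;

```shell
# Fake `mount` output in mount order; ZFS datasets are filtered out
# (zpool export handles them), the rest are reversed so the innermost
# mount (/boot/efi here) would be unmounted first.
MOUNTPOINT=/tmp/target
printf '%s\n' \
  'dev on /tmp/target/dev type devtmpfs (rw)' \
  'rpool/ROOT on /tmp/target type zfs (rw)' \
  'efi on /tmp/target/boot/efi type vfat (rw)' |
  grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}'
```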
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release and install the required packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18457</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18457"/>
		<updated>2021-01-03T15:55:43Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Finish GRUB installation */ typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via a boot-time service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password when prompted at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for the disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can still utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
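The loop pattern above can be dry-run safely by echoing each command instead of executing it; {{ic|rpool_example}} below is a placeholder pool name, not the real {{ic|rpool_$poolUUID}}:&lt;br /&gt;

```shell
# Dry run of the dataset-creation loop: print each zfs command
# instead of executing it. "rpool_example" is a placeholder.
d='log spool tmp'
for i in $d; do echo "zfs create rpool_example/ROOT/default/var/$i"; done
```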
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless ZPOOL_VDEV_NAME_PATH=YES is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
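The substitution can be sanity-checked on a sample line first; the line below is an illustrative stand-in, not the actual contents of {{ic|/sbin/setup-disk}}:&lt;br /&gt;

```shell
# Same sed expression as above, applied to a stand-in supported= line.
echo 'supported="ext4 btrfs xfs"' | sed 's|supported="ext|supported="zfs ext|g'
```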
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
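To preview what the sed edit does, you can run it on a stand-in features line (the real file is {{ic|/etc/mkinitfs/mkinitfs.conf}}):&lt;br /&gt;

```shell
# Same substitution on a stand-in features line: "zfs" gains "eudev" after it.
echo 'features="ata base zfs"' | sed 's|zfs|zfs eudev|'
```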
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (we are inside the chroot) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
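The pipeline works because {{ic|mount}} lists mounts in mount order, so reversing the list with {{ic|tac}} yields a deepest-first unmount order. A minimal illustration with placeholder paths:&lt;br /&gt;

```shell
# Reversing a mount-ordered list gives a safe deepest-first unmount order.
# The paths are placeholders for illustration only.
printf '%s\n' /mnt/x /mnt/x/boot /mnt/x/boot/efi | tac
```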
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release and install the required packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18454</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18454"/>
		<updated>2021-01-03T13:27:45Z</updated>

		<summary type="html">&lt;p&gt;R3: use correct name BusyBox&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
For an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting, via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password there. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so that ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
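The suffix pipeline keeps only lowercase letters and digits and truncates to 6 characters. Running the same filter on a fixed string shows the effect deterministically:&lt;br /&gt;

```shell
# Same filter as the poolUUID pipeline, on a fixed input:
# keep only [a-z0-9], then take the first 6 characters.
printf 'AB-cd_12!xyz99' | tr -dc 'a-z0-9' | cut -c-6
```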
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide will cover the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded on power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} option is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
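The loop pattern above can be dry-run safely by echoing each command instead of executing it; {{ic|rpool_example}} below is a placeholder pool name, not the real {{ic|rpool_$poolUUID}}:&lt;br /&gt;

```shell
# Dry run of the dataset-creation loop: print each zfs command
# instead of executing it. "rpool_example" is a placeholder.
d='log spool tmp'
for i in $d; do echo "zfs create rpool_example/ROOT/default/var/$i"; done
```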
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless ZPOOL_VDEV_NAME_PATH=YES is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
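The substitution can be sanity-checked on a sample line first; the line below is an illustrative stand-in, not the actual contents of {{ic|/sbin/setup-disk}}:&lt;br /&gt;

```shell
# Same sed expression as above, applied to a stand-in supported= line.
echo 'supported="ext4 btrfs xfs"' | sed 's|supported="ext|supported="zfs ext|g'
```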
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Apply the fixes in [[#GRUB fixes]].&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from BusyBox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
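To preview what the sed edit does, you can run it on a stand-in features line (the real file is {{ic|/etc/mkinitfs/mkinitfs.conf}}):&lt;br /&gt;

```shell
# Same substitution on a stand-in features line: "zfs" gains "eudev" after it.
echo 'features="ata base zfs"' | sed 's|zfs|zfs eudev|'
```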
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (we are inside the chroot) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
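The pipeline works because {{ic|mount}} lists mounts in mount order, so reversing the list with {{ic|tac}} yields a deepest-first unmount order. A minimal illustration with placeholder paths:&lt;br /&gt;

```shell
# Reversing a mount-ordered list gives a safe deepest-first unmount order.
# The paths are placeholders for illustration only.
printf '%s\n' /mnt/x /mnt/x/boot /mnt/x/boot/efi | tac
```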
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release and install the required packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18453</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18453"/>
		<updated>2021-01-03T13:26:27Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Missing root pool */ update&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
For an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting, via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password there. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
We must install eudev to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda-style names for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in the live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure ZFS can always find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
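The suffix generation can be sketched a little more simply; this equivalent one-liner (an illustration, not the guide&#039;s exact command) reads extra random bytes so that six characters always survive the filter:&lt;br /&gt;

```shell
# Equivalent sketch of the suffix generation: read random bytes,
# keep only lowercase alphanumerics, take the first six characters.
# head -c 4096 leaves a wide margin over the six characters needed.
poolUUID=$(head -c 4096 /dev/urandom | tr -dc 'a-z0-9' | cut -c-6)
echo "$poolUUID"
```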
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. When no {{ic|feature@}} option is supplied, all available features are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout separates the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
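The substitution can be checked on a sample line first; the {{ic|supported=}} value below is illustrative, not the actual contents of {{ic|/sbin/setup-disk}}:&lt;br /&gt;

```shell
# Demo of the sed substitution on an illustrative line
# (not the actual contents of /sbin/setup-disk).
line='supported="ext4 ext3 btrfs xfs"'
echo "$line" | sed 's|supported="ext|supported="zfs ext|g'
# prints: supported="zfs ext4 ext3 btrfs xfs"
```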
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload the profile&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Then apply the following fixes.&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} because {{ic|stat}} from BusyBox does not report it.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. grub-probe will return an empty result if it does not support the root pool.&lt;br /&gt;
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.&lt;br /&gt;
&lt;br /&gt;
Until the patch is merged, I recommend replacing the following in {{ic|/etc/grub.d/10_linux}}&lt;br /&gt;
 rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2&amp;gt;/dev/null || true`&lt;br /&gt;
with&lt;br /&gt;
 rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`&lt;br /&gt;
And you must install&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
since {{ic|blkid}} from BusyBox does not support ZFS.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the {{ic|zpool}} command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
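Note that this {{ic|sed}} expression replaces only the first {{ic|zfs}} on the line; a quick check on an illustrative features line (not the real {{ic|mkinitfs.conf}} contents):&lt;br /&gt;

```shell
# Illustrative features line, not the real mkinitfs.conf contents.
line='features="ata base ide scsi usb virtio ext4 zfs"'
echo "$line" | sed 's|zfs|zfs eudev|'
# prints: features="ata base ide scsi usb virtio ext4 zfs eudev"
```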
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit {{ic|/etc/mkinitfs/mkinitfs.conf}} and append the {{ic|cryptsetup}} feature to the {{ic|features}} parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with the actual disk path.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
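The {{ic|echo}} lines above depend on literal tab characters surviving copy and paste; {{ic|printf}} with explicit {{ic|\t}} escapes produces the same lines more reliably (the {{ic|$DISK}} value here is a placeholder):&lt;br /&gt;

```shell
# Same crypttab/fstab lines written with printf; \t makes the
# tab separators explicit. DISK here is a placeholder value.
DISK=/dev/disk/by-id/ata-EXAMPLE
printf 'swap\t%s-part4\t/dev/urandom\tswap,cipher=aes-cbc-essiv:sha256,size=256\n' "$DISK"
printf '/dev/mapper/swap\tnone\tswap\tdefaults\t0\t0\n'
```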
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to import on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
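The unmount pipeline above selects the non-ZFS mounts under {{ic|$MOUNTPOINT}} and reverses the list so nested mounts are unmounted first; a sketch with mock {{ic|mount}} output shows what it selects:&lt;br /&gt;

```shell
# Mock `mount` output demonstrating the selection: non-ZFS mounts
# under $MOUNTPOINT, in reverse (deepest-first) order.
MOUNTPOINT=/tmp/target
printf '%s\n' \
  'rpool/ROOT/default on /tmp/target type zfs (rw)' \
  'proc on /tmp/target/proc type proc (rw)' \
  '/dev/sda1 on /tmp/target/boot/efi type vfat (rw)' |
  grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}'
# prints /tmp/target/boot/efi, then /tmp/target/proc
```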
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space usage =&lt;br /&gt;
Without the optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release again and install the needed packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18433</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18433"/>
		<updated>2021-01-02T18:38:53Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Install system utilities */ details&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS can cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt instead.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool from a service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS1 encrypted containers, so it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
We must install eudev to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda-style names for ZFS pools.&lt;br /&gt;
&lt;br /&gt;
Now run the following command to populate persistent device names in the live system:&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure ZFS can always find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. When no {{ic|feature@}} option is supplied, all available features are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout separates the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload the profile&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Then apply the following fixes.&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} because {{ic|stat}} from BusyBox does not report it.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. grub-probe will return an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
For more detail, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html].&lt;br /&gt;
&lt;br /&gt;
As the pool name is stored as the file system label, it is possible to probe the label and use it as the root pool name.&lt;br /&gt;
&lt;br /&gt;
First, install util-linux:&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
blkid from BusyBox does not support the ZFS filesystem.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/etc/grub.d/10_linux}}, replace the line shown underlined below:&lt;br /&gt;
        fi;;&lt;br /&gt;
    xzfs)&lt;br /&gt;
        # ZFS pool name is stored as file system label&lt;br /&gt;
        # blkid from util-linux&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;blkid -s LABEL -o value ${GRUB_DEVICE}&amp;lt;/u&amp;gt;`&lt;br /&gt;
        bootfs=&amp;quot;`make_system_path_relative_to_its_root / | sed -e &amp;quot;s,@$,,&amp;quot;`&amp;quot;&lt;br /&gt;
Or with {{ic|lsblk}}, also from util-linux:&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;lsblk -no LABEL ${GRUB_DEVICE}&amp;lt;/u&amp;gt;`&lt;br /&gt;
Note that some versions of blkid, such as the one shipped with BusyBox, will return an empty result because they lack ZFS support.&lt;br /&gt;
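The {{ic|bootfs}} line above also strips a trailing {{ic|@}} from the relative path; the {{ic|sed}} expression can be checked in isolation (illustrative input; {{ic|make_system_path_relative_to_its_root}} is a GRUB helper):&lt;br /&gt;

```shell
# The sed expression from the bootfs line removes a trailing '@'.
# Illustrative input, not real grub-mkconfig output.
echo '/BOOT/default@' | sed -e "s,@$,,"
# prints: /BOOT/default
```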
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the {{ic|zpool}} command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with the actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release and install the required packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18431</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18431"/>
		<updated>2021-01-01T14:26:25Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Finish GRUB installation */ reload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z configurations are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system, via an init service.&lt;br /&gt;
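One OpenRC-based sketch of this workaround (Alpine ships OpenRC rather than systemd; the pool name {{ic|bpool_abc123}} and dataset are placeholders, not from the original guide):&lt;br /&gt;

```shell
# Hypothetical /etc/local.d start script that imports and mounts the
# boot pool after the system is up; bpool_abc123 is a placeholder name.
mkdir -p /etc/local.d
printf '%s\n' \
    '#!/bin/sh' \
    'zpool import -N bpool_abc123' \
    'zfs mount bpool_abc123/BOOT/default' \
    | tee /etc/local.d/bpool.start
chmod +x /etc/local.d/bpool.start
# Enable the local service so that *.start scripts run at boot:
#   rc-update add local default
```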
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS encrypted swap partition for resuming from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step, when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so that ZFS can always find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
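The suffix pipeline above can be checked on any POSIX shell without ZFS installed; a minimal sketch (the variable name {{ic|suffix}} and the larger byte count are illustrative, not from the guide):&lt;br /&gt;

```shell
# Same idea as the guide: take random bytes, keep only lowercase
# letters and digits, and use the first 6 characters as a pool suffix.
# 400 bytes gives ample headroom, since only ~14% of bytes survive tr.
suffix=$(head -c 400 /dev/urandom | tr -dc a-z0-9 | cut -c-6)
echo pool suffix: $suffix

# Sanity checks: exactly 6 characters, all from [a-z0-9].
[ ${#suffix} -eq 6 ] || exit 1
case $suffix in
    *[!a-z0-9]*) exit 1 ;;
esac
```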
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. By default, {{ic|zpool create}} enables all available features; the {{ic|-d}} flag disables them so that only the listed {{ic|feature@}} properties are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features that GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
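For example, a three-disk RAID-Z root pool would be created with the same options as above; only the vdev list changes (disk paths are placeholders, and {{ic|...}} stands for the unchanged options):&lt;br /&gt;

```shell
 zpool create \
    ... \
    rpool_$poolUUID raidz \
    /dev/disk/by-id/target_disk1-part3 \
    /dev/disk/by-id/target_disk2-part3 \
    /dev/disk/by-id/target_disk3-part3
```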
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
{{ic|setup-disk}} refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later, inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Reload&lt;br /&gt;
 source /etc/profile&lt;br /&gt;
Then apply the fixes below.&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB fails to detect the ZFS filesystem of {{ic|/boot}} when {{ic|stat}} comes from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will emit an empty root pool name if grub-probe does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
For more detail, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html].&lt;br /&gt;
&lt;br /&gt;
As the pool name is stored as the file system label, it is possible to probe the label and use it as the root pool name.&lt;br /&gt;
&lt;br /&gt;
First, install util-linux:&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
blkid from Busybox does not support the ZFS filesystem.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/etc/grub.d/10_linux}}, replace the corresponding lines with the underlined commands:&lt;br /&gt;
        fi;;&lt;br /&gt;
    xzfs)&lt;br /&gt;
        # ZFS pool name is stored as file system label&lt;br /&gt;
        # blkid from util-linux&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;blkid -s LABEL -o value ${GRUB_DEVICE}&amp;lt;/u&amp;gt;`&lt;br /&gt;
        bootfs=&amp;quot;`make_system_path_relative_to_its_root / | sed -e &amp;quot;s,@$,,&amp;quot;`&amp;quot;&lt;br /&gt;
Or with lsblk, also from util-linux&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;lsblk -no LABEL ${GRUB_DEVICE}&amp;lt;/u&amp;gt;`&lt;br /&gt;
Note that some versions of blkid, such as the one shipped with Busybox, return an empty result because they lack ZFS support.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the {{ic|zpool}} command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in the merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with the actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release and install the required packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18430</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18430"/>
		<updated>2021-01-01T14:25:54Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Finish GRUB installation */  add to profile&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z configurations are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system, via an init service.&lt;br /&gt;
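One OpenRC-based sketch of this workaround (Alpine ships OpenRC rather than systemd; the pool name {{ic|bpool_abc123}} and dataset are placeholders, not from the original guide):&lt;br /&gt;

```shell
# Hypothetical /etc/local.d start script that imports and mounts the
# boot pool after the system is up; bpool_abc123 is a placeholder name.
mkdir -p /etc/local.d
printf '%s\n' \
    '#!/bin/sh' \
    'zpool import -N bpool_abc123' \
    'zfs mount bpool_abc123/BOOT/default' \
    | tee /etc/local.d/bpool.start
chmod +x /etc/local.d/bpool.start
# Enable the local service so that *.start scripts run at boot:
#   rc-update add local default
```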
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS encrypted swap partition for resuming from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step, when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so that ZFS can always find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
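The suffix pipeline above can be checked on any POSIX shell without ZFS installed; a minimal sketch (the variable name {{ic|suffix}} and the larger byte count are illustrative, not from the guide):&lt;br /&gt;

```shell
# Same idea as the guide: take random bytes, keep only lowercase
# letters and digits, and use the first 6 characters as a pool suffix.
# 400 bytes gives ample headroom, since only ~14% of bytes survive tr.
suffix=$(head -c 400 /dev/urandom | tr -dc a-z0-9 | cut -c-6)
echo pool suffix: $suffix

# Sanity checks: exactly 6 characters, all from [a-z0-9].
[ ${#suffix} -eq 6 ] || exit 1
case $suffix in
    *[!a-z0-9]*) exit 1 ;;
esac
```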
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. By default, {{ic|zpool create}} enables all available features; the {{ic|-d}} flag disables them so that only the listed {{ic|feature@}} properties are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features that GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
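For illustration only (not part of the installation), the substitution above behaves like this when applied to a sample line; the sample string of filesystems is hypothetical:&lt;br /&gt;

```shell
# Demonstrate the sed substitution used on /sbin/setup-disk above,
# applied to a hypothetical sample line instead of the real script.
echo 'supported="ext4 btrfs xfs"' | sed 's|supported="ext|supported="zfs ext|g'
# -> supported="zfs ext4 btrfs xfs"
```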
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 echo &#039;export ZPOOL_VDEV_NAME_PATH=YES&#039; &amp;gt;&amp;gt; /etc/profile&lt;br /&gt;
Then apply the following fixes.&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will return an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
For more detail, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html].&lt;br /&gt;
&lt;br /&gt;
As the pool name is stored as the file system label, it is possible to probe the label and use it as the root pool name.&lt;br /&gt;
&lt;br /&gt;
First, install util-linux:&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
({{ic|blkid}} from Busybox does not support the ZFS filesystem.)&lt;br /&gt;
&lt;br /&gt;
Edit {{ic|/etc/grub.d/10_linux}}, replacing the underlined part:&lt;br /&gt;
        fi;;&lt;br /&gt;
    xzfs)&lt;br /&gt;
        # ZFS pool name is stored as file system label&lt;br /&gt;
        # blkid from util-linux&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;blkid -s LABEL -o value ${GRUB_DEVICE}&amp;lt;/u&amp;gt;`&lt;br /&gt;
        bootfs=&amp;quot;`make_system_path_relative_to_its_root / | sed -e &amp;quot;s,@$,,&amp;quot;`&amp;quot;&lt;br /&gt;
Or use lsblk, also from util-linux:&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;lsblk -no LABEL ${GRUB_DEVICE}&amp;lt;/u&amp;gt;`&lt;br /&gt;
Note that some versions of blkid, such as the one shipped with Busybox, will return an empty result because they lack ZFS support.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying the fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/mnt/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release again and install the needed packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18429</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18429"/>
		<updated>2021-01-01T14:21:20Z</updated>

		<summary type="html">&lt;p&gt;R3: order&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption, and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system, via an OpenRC service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password when prompted. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resuming from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
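A minimal sketch of the same idea, for illustration: reading a larger chunk of {{ic|/dev/urandom}} (here with {{ic|head -c 512}}, an illustrative variant rather than the command above) makes it practically certain that at least 6 matching characters survive the filter:&lt;br /&gt;

```shell
# Generate a 6-character lowercase alphanumeric suffix, like the dd pipeline
# above; head -c 512 is an illustrative variant, not the guide's exact command.
suffix=$(head -c 512 /dev/urandom | tr -dc 'a-z0-9' | cut -c-6)
echo "$suffix"
```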
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the following fixes.&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will return an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
For more detail, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html].&lt;br /&gt;
&lt;br /&gt;
As the pool name is stored as the file system label, it is possible to probe the label and use it as the root pool name.&lt;br /&gt;
&lt;br /&gt;
First, install util-linux:&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
({{ic|blkid}} from Busybox does not support the ZFS filesystem.)&lt;br /&gt;
&lt;br /&gt;
Edit {{ic|/etc/grub.d/10_linux}}, replacing the underlined part:&lt;br /&gt;
        fi;;&lt;br /&gt;
    xzfs)&lt;br /&gt;
        # ZFS pool name is stored as file system label&lt;br /&gt;
        # blkid from util-linux&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;blkid -s LABEL -o value ${GRUB_DEVICE}&amp;lt;/u&amp;gt;`&lt;br /&gt;
        bootfs=&amp;quot;`make_system_path_relative_to_its_root / | sed -e &amp;quot;s,@$,,&amp;quot;`&amp;quot;&lt;br /&gt;
Or use lsblk, also from util-linux:&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;lsblk -no LABEL ${GRUB_DEVICE}&amp;lt;/u&amp;gt;`&lt;br /&gt;
Note that some versions of blkid, such as the one shipped with Busybox, will return an empty result because they lack ZFS support.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying the fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/mnt/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
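For illustration, the {{ic|tac}} in the unmount pipeline above reverses the mount list so nested mount points are unmounted before their parents; the paths below are hypothetical:&lt;br /&gt;

```shell
# tac reverses line order: the deepest (last-listed) mounts come first,
# so /mnt/boot/efi would be unmounted before /mnt/boot and /mnt.
printf '/mnt\n/mnt/boot\n/mnt/boot/efi\n' | tac
# -> /mnt/boot/efi
#    /mnt/boot
#    /mnt
```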
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release again and install the needed packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18428</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18428"/>
		<updated>2021-01-01T02:35:29Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Missing root pool */ rm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption, and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS can cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVOL as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after the system has booted, via an init service (OpenRC on Alpine).&lt;br /&gt;
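If you do keep the boot pool LUKS encrypted, a minimal sketch of such an OpenRC service might look like the following. This is an illustrative config fragment only: the service name, pool suffix {{ic|abc123}}, disk path and keyfile location are all placeholders.&lt;br /&gt;

```shell
#!/sbin/openrc-run
# /etc/init.d/bpool-import -- illustrative sketch only; not part of this
# guide's main path. Pool suffix, disk path and keyfile are placeholders.

depend() {
	# wait until local file systems (including encrypted root) are mounted
	need localmount
}

start() {
	ebegin "Unlocking and importing boot pool"
	# unlock the LUKS container holding the boot pool with a keyfile
	# stored on the encrypted root
	cryptsetup open --key-file /root/bpool.key \
		/dev/disk/by-id/yourdisk-part2 cryptboot
	# import the pool without mounting, then mount /boot via fstab
	zpool import -N bpool_abc123
	mount /boot
	eend $?
}
```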
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool; see [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password when prompted by GRUB at boot. Use a keyfile for the root pool and embed that keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment. Select {{ic|1=disk=none}} at the last step, when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only the features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default; we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later, inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed half-way through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described below.&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. grub-mkconfig will produce an empty root pool name if GRUB does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
For more detail, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html].&lt;br /&gt;
&lt;br /&gt;
As the pool name is stored as the file system label, it is possible to probe the label and use it as the root pool name.&lt;br /&gt;
&lt;br /&gt;
First, install util-linux:&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
blkid from Busybox does not support ZFS filesystem.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/etc/grub.d/10_linux}}, replace the relevant lines with the underlined commands:&lt;br /&gt;
        fi;;&lt;br /&gt;
    xzfs)&lt;br /&gt;
        # ZFS pool name is stored as file system label&lt;br /&gt;
        # blkid from util-linux&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;blkid -s LABEL -o value ${GRUB_DEVICE}&amp;lt;/u&amp;gt;`&lt;br /&gt;
        bootfs=&amp;quot;`make_system_path_relative_to_its_root / | sed -e &amp;quot;s,@$,,&amp;quot;`&amp;quot;&lt;br /&gt;
Or with lsblk, also from util-linux&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;lsblk -no LABEL ${GRUB_DEVICE}&amp;lt;/u&amp;gt;`&lt;br /&gt;
Note that some versions of blkid, such as the one shipped with Busybox, will return an empty result because they lack ZFS support.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
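For reference, after switching {{ic|/boot}} to a legacy mountpoint, the relevant {{ic|/etc/fstab}} entries look roughly like this (a sketch only; the pool suffix {{ic|abc123}} and the ESP device are placeholders, and {{ic|setup-disk}} normally writes these for you):&lt;br /&gt;

```
# /etc/fstab (illustrative; pool suffix and ESP path are placeholders)
bpool_abc123/BOOT/default              /boot      zfs   defaults  0 0
/dev/disk/by-id/ata-HXY_120G_YS-part1  /boot/efi  vfat  defaults  0 0
```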
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 addgroup $TARGET_USERNAME video # busybox adduser accepts only one -G group&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (we are inside the chroot) and append &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild the initramfs:&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
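To see what the umount pipeline above does, here is a dry run against fabricated {{ic|mount}} output (all paths are made up): ZFS mounts are excluded (the pools are exported separately), {{ic|tac}} reverses the list so child mounts come before their parents, and awk extracts the mount-point column.&lt;br /&gt;

```shell
# Fabricated `mount` output, for illustration only.
MOUNTPOINT=/tmp/target
sample='rpool_abc123/ROOT/default on /tmp/target type zfs (rw)
proc on /tmp/target/proc type proc (rw)
/dev/sda2 on /tmp/target/boot type ext4 (rw)
/dev/sda1 on /tmp/target/boot/efi type vfat (rw)'

# Same filter chain as above, minus the final `xargs umount`:
printf '%s\n' "$sample" | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}'
```

The deepest mount points print first, so each gets unmounted before its parent.&lt;br /&gt;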
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space usage =&lt;br /&gt;
Without the optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release and install the required packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18427</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18427"/>
		<updated>2021-01-01T02:33:57Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Missing root pool */ blkid&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z layouts are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool ({{ic|/boot}}), everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS can cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVOL as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after the system has booted, via an init service (OpenRC on Alpine).&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool; see [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password when prompted by GRUB at boot. Use a keyfile for the root pool and embed that keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment. Select {{ic|1=disk=none}} at the last step, when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default; we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described below.&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. grub-mkconfig will produce an empty root pool name if GRUB does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
For more detail, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html].&lt;br /&gt;
&lt;br /&gt;
A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
This workaround uses {{ic|zdb}}, whose output is not stable, according to its manual page.&lt;br /&gt;
 zdb -l ${GRUB_DEVICE} | awk -F \&#039; &#039;/ name/ { print $2 }&#039;&lt;br /&gt;
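To see what the awk filter does, here is a sketch run against illustrative label text (not real {{ic|zdb}} output):&lt;br /&gt;

```shell
# Sketch: the field separator is a literal single quote, so the quoted pool
# name on the "name:" line becomes field $2. The sample text below only
# mimics a zdb label; it is not real zdb output.
sample="    version: 5000
    name: 'rpool_ab12cd'
    state: 0"
name=$(printf '%s\n' "$sample" | awk -F \' '/ name/ { print $2 }')
echo "$name"
```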
&lt;br /&gt;
As the pool name is stored as the file system label, it is possible to probe the label and use it as the root pool name.&lt;br /&gt;
&lt;br /&gt;
An alternative using blkid from util-linux is:&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
&lt;br /&gt;
In {{ic|/etc/grub.d/10_linux}}, replace the {{ic|rpool=}} line with the underlined command:&lt;br /&gt;
        fi;;&lt;br /&gt;
    xzfs)&lt;br /&gt;
        # ZFS pool name is stored as file system label&lt;br /&gt;
        # blkid from util-linux&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;blkid -s LABEL -o value ${GRUB_DEVICE}&amp;lt;/u&amp;gt;`&lt;br /&gt;
        bootfs=&amp;quot;`make_system_path_relative_to_its_root / | sed -e &amp;quot;s,@$,,&amp;quot;`&amp;quot;&lt;br /&gt;
Or with lsblk, also from util-linux&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;lsblk -no LABEL ${GRUB_DEVICE}&amp;lt;/u&amp;gt;`&lt;br /&gt;
Note that some versions of blkid, such as the one shipped with Busybox, will return an empty result because they lack ZFS support.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to initramfs and zpool command will import pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (we are inside the chroot) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with the actual disk path.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
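The unmount pipeline above can be illustrated offline; this sketch runs the same filter against illustrative {{ic|mount}} output (not a real mount table) to show that nested mount points come out deepest first, so children are unmounted before their parents:&lt;br /&gt;

```shell
# Sketch: `mount` lists mounts in mount order, so nested mounts appear later;
# `tac` reverses them, `grep` keeps only those under $MOUNTPOINT, and awk's
# third field is the mount point. Sample output is illustrative only.
MOUNTPOINT=/tmp/target
sample='proc on /proc type proc (rw)
/dev/sda1 on /tmp/target/boot/efi type vfat (rw)
devtmpfs on /tmp/target/dev type devtmpfs (rw)'
targets=$(printf '%s\n' "$sample" | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}')
echo "$targets"
```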
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release and install the required packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18426</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18426"/>
		<updated>2020-12-31T18:12:58Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Optional: Enable encrypted swap partition */ eudev&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via a systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations that are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure that ZFS finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
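As a sketch of what this command produces, the suffix can be sanity-checked: it keeps only lowercase letters and digits from 100 random bytes and truncates to 6 characters (almost always exactly 6).&lt;br /&gt;

```shell
# Sketch: generate the pool-name suffix exactly as the guide does. Of 100
# random bytes, only [a-z0-9] survive `tr -dc`, and `cut` keeps the first 6.
poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
echo "pool name suffix: $poolUUID"
```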
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
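To illustrate what the sed edit does, here is a sketch run against a line that merely mimics (does not quote) {{ic|/sbin/setup-disk}}:&lt;br /&gt;

```shell
# Sketch: the sed expression prepends "zfs " inside the supported-filesystem
# list. The input line below is illustrative, not the literal setup-disk source.
line='supported="ext4 ext3 btrfs xfs"'
patched=$(printf '%s\n' "$line" | sed 's|supported="ext|supported="zfs ext|g')
echo "$patched"
```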
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable being set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in the sections below.&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will return an empty root pool name if it does not support the root pool&#039;s features.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html].&lt;br /&gt;
&lt;br /&gt;
A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This needs to be re-applied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
This workaround uses {{ic|zdb}}, whose output is not stable, according to its manual page.&lt;br /&gt;
 zdb -l ${GRUB_DEVICE} | awk -F \&#039; &#039;/ name/ { print $2 }&#039;&lt;br /&gt;
&lt;br /&gt;
As the pool name is stored as the file system label, it is possible to probe the label and use it as the root pool name.&lt;br /&gt;
&lt;br /&gt;
An alternative using blkid from util-linux is:&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
Then, in {{ic|/etc/grub.d/10_linux}}, replace the {{ic|rpool=}} line with the underlined command:&lt;br /&gt;
        fi;;&lt;br /&gt;
    xzfs)&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;eval &amp;quot;$(blkid -o export ${GRUB_DEVICE})&amp;quot; &amp;amp;&amp;amp; echo $LABEL&amp;lt;/u&amp;gt;`&lt;br /&gt;
        bootfs=&amp;quot;`make_system_path_relative_to_its_root / | sed -e &amp;quot;s,@$,,&amp;quot;`&amp;quot;&lt;br /&gt;
The Busybox version of blkid will return an empty result, as it lacks ZFS support.&lt;br /&gt;
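To see how eval-ing export-format output recovers the pool name, here is a sketch using illustrative (not real) blkid output:&lt;br /&gt;

```shell
# Sketch: `blkid -o export` prints KEY=VALUE lines; eval-ing them makes the
# LABEL (the ZFS pool name) available as a shell variable. The sample text
# below is illustrative, not real blkid output.
sample='DEVNAME=/dev/sda3
LABEL=rpool_ab12cd
TYPE=zfs_member'
eval "$sample"
echo "$LABEL"
```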
&lt;br /&gt;
Choose one to your liking, and don&#039;t forget to re-apply it after every GRUB update.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to initramfs and zpool command will import pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (we are inside the chroot) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs eudev&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with the actual disk path.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release and install the required packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18425</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18425"/>
		<updated>2020-12-31T18:12:25Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Importing pools on boot */ details&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via a systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations that are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format boot pool partition as LUKS-1 container and supply the encryption password here. Use keyfile for root pool and embed the keyfile in initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure that ZFS finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those that GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable being set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described below.&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
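The fallback can be checked directly; with coreutils installed, {{ic|stat}} reports a real file system type name for {{ic|/}} (a minimal sketch, runnable on any system):&lt;br /&gt;

```shell
# Same query grub-mkconfig falls back to: coreutils stat prints the
# file system type (e.g. zfs, ext2/ext3); busybox stat prints UNKNOWN.
fstype="$(stat -f -c %T / || echo unknown)"
echo "$fstype"
```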
=== Missing root pool ===&lt;br /&gt;
2. grub-mkconfig will produce an empty root pool name if GRUB does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html].&lt;br /&gt;
&lt;br /&gt;
A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
This workaround uses {{ic|zdb}}, which does not have a stable output format, according to its manual page.&lt;br /&gt;
 zdb -l ${GRUB_DEVICE} | awk -F \&#039; &#039;/ name/ { print $2 }&#039;&lt;br /&gt;
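The awk invocation can be tested against a sample label: {{ic|-F \&#039;}} splits each line on single quotes, so the pool name is the second field of the {{ic|name}} line (the label text below is a hypothetical excerpt):&lt;br /&gt;

```shell
# Hypothetical excerpt of `zdb -l` output for a pool label; the awk
# splits each line on single quotes and prints field 2 of the " name" line.
label="    version: 5000
    name: 'rpool_ab12cd'
    state: 0"
printf '%s\n' "$label" | awk -F \' '/ name/ { print $2 }'
# prints: rpool_ab12cd
```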
&lt;br /&gt;
As the pool name is stored as the disk label, it is possible to probe the disk label and use it as the root pool name.&lt;br /&gt;
&lt;br /&gt;
An alternative using blkid from util-linux is:&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
and replace the relevant line in {{ic|/etc/grub.d/10_linux}} with the underlined text:&lt;br /&gt;
        fi;;&lt;br /&gt;
    xzfs)&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;eval &amp;quot;$(blkid -o export ${GRUB_DEVICE})&amp;quot; &amp;amp;&amp;amp; echo $LABEL&amp;lt;/u&amp;gt;`&lt;br /&gt;
        bootfs=&amp;quot;`make_system_path_relative_to_its_root / | sed -e &amp;quot;s,@$,,&amp;quot;`&amp;quot;&lt;br /&gt;
The Busybox version of blkid will return an empty result, which is why util-linux is needed.&lt;br /&gt;
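The underlined replacement works because {{ic|blkid -o export}} prints KEY=VALUE pairs that can be eval-ed into shell variables; a minimal sketch with hypothetical output:&lt;br /&gt;

```shell
# Hypothetical `blkid -o export` output for a ZFS member partition.
blkid_output="LABEL=rpool_ab12cd
TYPE=zfs_member"
# eval turns each KEY=VALUE line into a shell variable; $LABEL is the pool name.
eval "$blkid_output"
echo "$LABEL"
# prints: rpool_ab12cd
```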
&lt;br /&gt;
Choose one to your liking, and don&#039;t forget to reapply it after every GRUB update.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
{{ic|zpool.cache}} will be added to the initramfs, and the zpool command will import the pools contained in this cache.&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit {{ic|/etc/mkinitfs/mkinitfs.conf}} and append the {{ic|cryptsetup}} module to the {{ic|features}} parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
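The umount pipeline above filters out ZFS mounts (those are released by the pool export), reverses the remaining mount list with {{ic|tac}} so children are unmounted before their parents, and picks the mount point column; a sketch with mock {{ic|mount}} output:&lt;br /&gt;

```shell
# Mock `mount` output under a hypothetical $MOUNTPOINT.
MOUNTPOINT=/tmp/tmp.x1
mounts="devtmpfs on /tmp/tmp.x1/dev type devtmpfs (rw)
rpool/home on /tmp/tmp.x1/home type zfs (rw)
/dev/sda1 on /tmp/tmp.x1/boot/efi type vfat (rw)"
# Non-ZFS mounts only, reversed so deeper paths come first for safe unmounting.
printf '%s\n' "$mounts" | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}'
```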
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release and install the required packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18424</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18424"/>
		<updated>2020-12-31T17:24:12Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Missing root pool */ file name&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption, and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734] This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an init service (Alpine uses OpenRC).&lt;br /&gt;
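A minimal sketch of such a boot-time workaround on Alpine (which uses OpenRC rather than systemd) could be a {{ic|/etc/local.d}} start script; the pool name below is an assumption:&lt;br /&gt;

```shell
#!/bin/sh
# /etc/local.d/bpool.start -- hypothetical example: import the boot pool
# after the root file system is up. Enable the local service with:
#   rc-update add local default
zpool import bpool_ab12cd   # assumed pool name
mount /boot                 # /boot uses a legacy mountpoint in this guide
```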
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format boot pool partition as LUKS-1 container and supply the encryption password here. Use keyfile for root pool and embed the keyfile in initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, as it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use a unique disk path instead of {{ic|/dev/sda}} to ensure that ZFS can find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
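The suffix generation can be exercised on its own; it keeps the first six lowercase alphanumeric characters drawn from {{ic|/dev/urandom}} (a minimal sketch):&lt;br /&gt;

```shell
# Draw 100 random bytes, keep only [a-z0-9], truncate to 6 characters.
# 100 bytes yield roughly 14 such characters on average, so 6 are
# virtually always available.
poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
echo "$poolUUID"
```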
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those that GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable being set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described below.&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
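The fallback can be checked directly; with coreutils installed, {{ic|stat}} reports a real file system type name for {{ic|/}} (a minimal sketch, runnable on any system):&lt;br /&gt;

```shell
# Same query grub-mkconfig falls back to: coreutils stat prints the
# file system type (e.g. zfs, ext2/ext3); busybox stat prints UNKNOWN.
fstype="$(stat -f -c %T / || echo unknown)"
echo "$fstype"
```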
=== Missing root pool ===&lt;br /&gt;
2. grub-mkconfig will produce an empty root pool name if GRUB does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html].&lt;br /&gt;
&lt;br /&gt;
A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
This workaround uses {{ic|zdb}}, which does not have a stable output format, according to its manual page.&lt;br /&gt;
 zdb -l ${GRUB_DEVICE} | awk -F \&#039; &#039;/ name/ { print $2 }&#039;&lt;br /&gt;
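The awk invocation can be tested against a sample label: {{ic|-F \&#039;}} splits each line on single quotes, so the pool name is the second field of the {{ic|name}} line (the label text below is a hypothetical excerpt):&lt;br /&gt;

```shell
# Hypothetical excerpt of `zdb -l` output for a pool label; the awk
# splits each line on single quotes and prints field 2 of the " name" line.
label="    version: 5000
    name: 'rpool_ab12cd'
    state: 0"
printf '%s\n' "$label" | awk -F \' '/ name/ { print $2 }'
# prints: rpool_ab12cd
```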
&lt;br /&gt;
As the pool name is stored as the disk label, it is possible to probe the disk label and use it as the root pool name.&lt;br /&gt;
&lt;br /&gt;
An alternative using blkid from util-linux is:&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
and replace the relevant line in {{ic|/etc/grub.d/10_linux}} with the underlined text:&lt;br /&gt;
        fi;;&lt;br /&gt;
    xzfs)&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;eval &amp;quot;$(blkid -o export ${GRUB_DEVICE})&amp;quot; &amp;amp;&amp;amp; echo $LABEL&amp;lt;/u&amp;gt;`&lt;br /&gt;
        bootfs=&amp;quot;`make_system_path_relative_to_its_root / | sed -e &amp;quot;s,@$,,&amp;quot;`&amp;quot;&lt;br /&gt;
The Busybox version of blkid will return an empty result, which is why util-linux is needed.&lt;br /&gt;
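The underlined replacement works because {{ic|blkid -o export}} prints KEY=VALUE pairs that can be eval-ed into shell variables; a minimal sketch with hypothetical output:&lt;br /&gt;

```shell
# Hypothetical `blkid -o export` output for a ZFS member partition.
blkid_output="LABEL=rpool_ab12cd
TYPE=zfs_member"
# eval turns each KEY=VALUE line into a shell variable; $LABEL is the pool name.
eval "$blkid_output"
echo "$LABEL"
# prints: rpool_ab12cd
```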
&lt;br /&gt;
Choose one to your liking, and don&#039;t forget to reapply it after every GRUB update.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit {{ic|/etc/mkinitfs/mkinitfs.conf}} and append the {{ic|cryptsetup}} module to the {{ic|features}} parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release again and install the required packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for the {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18423</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18423"/>
		<updated>2020-12-31T17:23:35Z</updated>

		<summary type="html">&lt;p&gt;R3: /* Missing root pool */ procedure&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z configurations are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool mounted at {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit the {{ic|-O encryption}}, {{ic|-O keylocation}} and {{ic|-O keyformat}} options when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an init service (Alpine uses OpenRC rather than systemd).&lt;br /&gt;
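The workaround above can be sketched as an OpenRC {{ic|local.d}} start hook, since the OpenRC {{ic|local}} service runs such scripts at boot. The file name and pool suffix below are assumptions, not part of this guide:

```shell
# /etc/local.d/bpool.start -- hypothetical hook; make it executable (chmod +x)
# and enable the runner with: rc-update add local default
# Import the boot pool (left out of the cachefile in this scenario),
# then mount /boot via its legacy fstab entry.
zpool import -N bpool_abc123
mount /boot
mount /boot/efi
```

This is a config fragment for a real system, not something to run in the live environment.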
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed it in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select {{ic|1=disk=none}} at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install {{ic|eudev}} to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; {{ic|/dev/sda}} for ZFS pools.&lt;br /&gt;
 setup-udev&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so that ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least three partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can still utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
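The per-disk partitioning can be sketched as a loop. This is a dry run that only prints the sgdisk commands (the disk paths are placeholders); remove the leading {{ic|echo}} to execute them:

```shell
#!/bin/sh
# Dry run: print the identical partition layout for every member disk.
# DISKS is a placeholder list; substitute your real /dev/disk/by-id/ paths.
DISKS="/dev/disk/by-id/target_disk1 /dev/disk/by-id/target_disk2"
for d in $DISKS; do
    echo sgdisk --zap-all $d
    echo sgdisk -n1:0:+512M -t1:EF00 $d   # EFI system partition
    echo sgdisk -n2:0:+2G $d              # boot pool
    echo sgdisk -n3:0:0 $d                # root pool
done
```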
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] on ZFS is problematic, as noted above, so create a separate swap partition if you need swap. It cannot be used for hibernation, since the encryption key is discarded at power-off.&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. Without {{ic|-d}}, all available features are enabled; with {{ic|-d}}, only the {{ic|feature@}} options explicitly supplied are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable the features GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
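For example, a three-disk RAID-Z root pool would be laid out as below. The disk paths are placeholders, and {{ic|...}} stands for the same feature and property options shown earlier; this dry run only prints the command:

```shell
#!/bin/sh
# Dry run: vdev layout for a three-disk raidz root pool (placeholder paths).
# "..." stands for the -o/-O options from the zpool create commands above.
echo zpool create ... rpool_abc123 raidz \
    /dev/disk/by-id/target_disk1-part3 \
    /dev/disk/by-id/target_disk2-part3 \
    /dev/disk/by-id/target_disk3-part3
```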
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
{{ic|setup-disk}} refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that {{ic|grub-probe}} will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 m=&#039;dev proc sys&#039;&lt;br /&gt;
 for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes for the warnings below.&lt;br /&gt;
== GRUB fixes ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
=== Missing root pool ===&lt;br /&gt;
2. GRUB will emit an empty root pool name if it does not support the root pool&#039;s features.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html].&lt;br /&gt;
&lt;br /&gt;
A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
This workaround uses {{ic|zdb}}, whose output is not stable according to its manual page.&lt;br /&gt;
 zdb -l ${GRUB_DEVICE} | awk -F \&#039; &#039;/ name/ { print $2 }&#039;&lt;br /&gt;
&lt;br /&gt;
As the pool name is stored as the disk label, it is possible to probe the disk label and use that as the root pool name.&lt;br /&gt;
&lt;br /&gt;
An alternative using blkid from util-linux is:&lt;br /&gt;
 apk add util-linux&lt;br /&gt;
Then replace the underlined part in {{ic|/etc/grub.d/10_linux}}:&lt;br /&gt;
        fi;;&lt;br /&gt;
    xzfs)&lt;br /&gt;
        rpool=`&amp;lt;u&amp;gt;eval &amp;quot;$(blkid -o export ${GRUB_DEVICE})&amp;quot; &amp;amp;&amp;amp; echo $LABEL&amp;lt;/u&amp;gt;`&lt;br /&gt;
        bootfs=&amp;quot;`make_system_path_relative_to_its_root / | sed -e &amp;quot;s,@$,,&amp;quot;`&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
The Busybox version of {{ic|blkid}} will return an empty result, hence the util-linux package.&lt;br /&gt;
&lt;br /&gt;
Choose one to your liking, and don&#039;t forget to reapply it after every GRUB update.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76].&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 this merge request].&lt;br /&gt;
&lt;br /&gt;
With the changes in the merge request applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}.&lt;br /&gt;
 sed -i &#039;s|zfs|zfs eudev|&#039; /etc/mkinitfs/mkinitfs.conf&lt;br /&gt;
Rebuild initramfs with&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Mount datasets at boot =&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit # zfs monitoring&lt;br /&gt;
Mounting the {{ic|/boot}} dataset via fstab needs {{ic|1=mountpoint=legacy}}:&lt;br /&gt;
 umount /boot/efi&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
 mount /boot&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
&lt;br /&gt;
= Importing pools on boot =&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 adduser -s /bin/sh -G wheel -G video -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (we are inside the chroot) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with the actual disk path.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild the initramfs:&lt;br /&gt;
 mkinitfs $(ls -1 /lib/modules/)&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take snapshots of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
 reboot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
Boot the extended release again and install the required packages:&lt;br /&gt;
 setup-alpine&lt;br /&gt;
 apk add zfs eudev&lt;br /&gt;
 setup-udev&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for the {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 mount -t zfs bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039; $MOUNTPOINT/boot # legacy mountpoint&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R3</name></author>
	</entry>
</feed>