<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.alpinelinux.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=R2</id>
	<title>Alpine Linux - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.alpinelinux.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=R2"/>
	<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/wiki/Special:Contributions/R2"/>
	<updated>2026-05-03T19:22:39Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.40.0</generator>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18399</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18399"/>
		<updated>2020-12-31T04:13:25Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Fix zfs decrypt */ fix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z configurations are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit the {{ic|-O encryption -O keylocation -O keyformat}} options when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVOL as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide therefore sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool with an init service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool; see [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS1-encrypted containers, so it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS1 container and supply the encryption password to GRUB at boot. Use a keyfile for the root pool and embed that keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS-encrypted swap partition to resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select {{ic|1=disk=none}} at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
We must install eudev here to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; names like {{ic|/dev/sda}} for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure ZFS always finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
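The command above can be sanity-checked in the live shell. This hedged sketch runs the same pipeline and prints the resulting suffix:&lt;br /&gt;

```shell
# Generate the pool-name suffix as in the guide: 100 random bytes,
# keep only the characters [a-z0-9], take the first 6.
poolUUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
echo "$poolUUID"
```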
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least three partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] on ZFS is problematic (see above), so it is recommended to create a separate swap partition if needed. This guide covers creating such a partition. (It cannot be used for hibernation, since its encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool partition, replacing the {{ic|-n3}} command above:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features must be selectively enabled for GRUB. Normally all available features are enabled; the {{ic|-d}} flag disables them all, so that only the explicitly listed {{ic|feature@}} properties are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features that GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default (no {{ic|-d}} flag):&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
{{ic|setup-disk}} refuses to run on ZFS by default, so we need to add ZFS to its list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
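To see what this substitution does, here is a minimal sketch run against a sample line ({{ic|ext4 btrfs xfs vfat}} is a hypothetical filesystem list, not the real contents of {{ic|/sbin/setup-disk}}):&lt;br /&gt;

```shell
# Demonstrate the substitution on a sample line instead of the real file:
# "zfs" is prepended to the supported filesystem list.
line='supported="ext4 btrfs xfs vfat"'
patched=$(echo "$line" | sed 's|supported="ext|supported="zfs ext|g')
echo "$patched"
```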
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in the WARNING section below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB fails to detect the ZFS filesystem of {{ic|/boot}} when {{ic|stat}} comes from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. grub-mkconfig produces an empty root pool name if GRUB does not support every feature of the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB lags behind OpenZFS development; see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of the root pool with the method given in the patch:&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix must be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
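What the substituted command does: {{ic|zdb -l}} prints the pool label, which contains a line like {{ic|name: &#039;rpool_abc123&#039;}}, and the awk expression splits on single quotes to extract the name. A sketch with simulated label output (the pool name is hypothetical):&lt;br /&gt;

```shell
# Simulated `zdb -l` label line; the real command reads the label from
# ${GRUB_DEVICE}. awk splits fields on single quotes and prints the name.
label="    name: 'rpool_abc123'"
rpool=$(printf '%s\n' "$label" | awk -F"'" '/ name/ { print $2 }')
echo "$rpool"
```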
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
As of this writing, the initramfs script has a bug in the ZFS password prompt at boot: the root dataset fails to mount with {{ic|sh: `active`, unknown operand}} and the system drops into the emergency shell.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/usr/share/mkinitfs/initramfs-init}}:&lt;br /&gt;
 # BROKEN: if &amp;lt;u&amp;gt;[ $(zpool list -H -o feature@encryption $_root_pool) = &amp;quot;active&amp;quot; ]&amp;lt;/u&amp;gt;; then&lt;br /&gt;
 # add double quotes around $()&lt;br /&gt;
 if [ &amp;quot;$(zpool list -H -o feature@encryption $_root_pool)&amp;quot; = &amp;quot;active&amp;quot; ]; then&lt;br /&gt;
     local _encryption_root=$(zfs get -H -o value encryptionroot $_root_vol)&lt;br /&gt;
     if [ &amp;quot;$_encryption_root&amp;quot; != &amp;quot;-&amp;quot; ]; then&lt;br /&gt;
         eval zfs load-key $_encryption_root&lt;br /&gt;
     fi               &lt;br /&gt;
 fi&lt;br /&gt;
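Why the quotes matter: if the {{ic|zpool list}} output is empty or contains spaces, the unquoted command substitution leaves the {{ic|[}} builtin with a malformed expression; quoting turns it into an ordinary string comparison. A minimal demonstration:&lt;br /&gt;

```shell
feature=""
# Unquoted: expands to `[ = "active" ]`, which the test builtin rejects
# with an error instead of evaluating to false.
if [ $feature = "active" ] 2>/dev/null; then unquoted_rc=0; else unquoted_rc=$?; fi
# Quoted: a well-formed comparison that is simply false (exit status 1).
if [ "$feature" = "active" ]; then quoted_rc=0; else quoted_rc=$?; fi
echo "unquoted exit: $unquoted_rc, quoted exit: $quoted_rc"
```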
&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
Ensure {{ic|eudev}} is installed&lt;br /&gt;
 apk add eudev&lt;br /&gt;
Create {{ic|/etc/mkinitfs/features.d/eudev.files}} to add {{ic|eudev}} to initramfs.&lt;br /&gt;
 tee /etc/mkinitfs/features.d/eudev.files &amp;lt;&amp;lt; EOF&lt;br /&gt;
 /bin/udevadm&lt;br /&gt;
 /sbin/udevadm&lt;br /&gt;
 /sbin/udevd&lt;br /&gt;
 /etc/udev/*&lt;br /&gt;
 /lib/udev/*&lt;br /&gt;
 /usr/lib/libudev*&lt;br /&gt;
 EOF&lt;br /&gt;
Edit {{ic|/usr/share/mkinitfs/initramfs-init}}.&lt;br /&gt;
&lt;br /&gt;
Add functions from {{ic|/etc/init.d/udev*}} at the beginning of the file.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
# persistent device names from eudev, see /etc/init.d/udev*&lt;br /&gt;
eudev_start_pre() {&lt;br /&gt;
	# load unix domain sockets if built as module, Bug #221253&lt;br /&gt;
	# and not yet loaded, Bug #363549&lt;br /&gt;
	if [ ! -e /proc/net/unix ]; then&lt;br /&gt;
		if ! modprobe unix; then&lt;br /&gt;
			eerror &amp;quot;Cannot load the unix domain socket module&amp;quot;&lt;br /&gt;
			return 1&lt;br /&gt;
		fi&lt;br /&gt;
	fi&lt;br /&gt;
&lt;br /&gt;
	if [ -e /proc/sys/kernel/hotplug ]; then&lt;br /&gt;
		echo &amp;quot;&amp;quot; &amp;gt;/proc/sys/kernel/hotplug&lt;br /&gt;
	fi&lt;br /&gt;
&lt;br /&gt;
	return 0&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
eudev_dir_writeable()&lt;br /&gt;
{&lt;br /&gt;
        touch &amp;quot;$1&amp;quot;/.test.$$ 2&amp;gt;/dev/null &amp;amp;&amp;amp; rm &amp;quot;$1&amp;quot;/.test.$$&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# store persistent-rules that got created while booting&lt;br /&gt;
# when / was still read-only&lt;br /&gt;
eudev_store_persistent_rules()&lt;br /&gt;
{&lt;br /&gt;
	# create /etc/udev/rules.d if it does not exist and /etc/udev is writable&lt;br /&gt;
	[ -d /etc/udev/rules.d ] || \&lt;br /&gt;
		eudev_dir_writeable /etc/udev &amp;amp;&amp;amp; \&lt;br /&gt;
		mkdir -p /etc/udev/rules.d&lt;br /&gt;
&lt;br /&gt;
	# only continue if rules-directory is writable&lt;br /&gt;
	eudev_dir_writeable /etc/udev/rules.d || return 0&lt;br /&gt;
&lt;br /&gt;
	local file dest&lt;br /&gt;
	for file in /run/udev/tmp-rules--*; do&lt;br /&gt;
		dest=${file##*tmp-rules--}&lt;br /&gt;
		[ &amp;quot;$dest&amp;quot; = &#039;*&#039; ] &amp;amp;&amp;amp; break&lt;br /&gt;
		type=${dest##70-persistent-}&lt;br /&gt;
		type=${type%%.rules}&lt;br /&gt;
		cat &amp;quot;$file&amp;quot; &amp;gt;&amp;gt; /etc/udev/rules.d/&amp;quot;$dest&amp;quot; &amp;amp;&amp;amp; rm -f &amp;quot;$file&amp;quot;&lt;br /&gt;
	done&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
eudev_start()&lt;br /&gt;
{&lt;br /&gt;
    eudev_start_pre&lt;br /&gt;
    udevd -d&lt;br /&gt;
	# store persistent-rules that got created while booting&lt;br /&gt;
	# when / was still read-only&lt;br /&gt;
	eudev_store_persistent_rules&lt;br /&gt;
	# Populating /dev with existing devices through uevents&lt;br /&gt;
	udevadm trigger --type=subsystems --action=add&lt;br /&gt;
	udevadm trigger --type=devices --action=add&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
After the following section, which populates {{ic|/dev}}&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
mount -t devtmpfs -o exec,nosuid,mode=0755,size=2M devtmpfs /dev 2&amp;gt;/dev/null \&lt;br /&gt;
        || mount -t tmpfs -o exec,nosuid,mode=0755,size=2M tmpfs /dev&lt;br /&gt;
                          &lt;br /&gt;
# pty device nodes (later system will need it)&lt;br /&gt;
[ -c /dev/ptmx ] || mknod -m 666 /dev/ptmx c 5 2&lt;br /&gt;
[ -d /dev/pts ] || mkdir -m 755 /dev/pts&lt;br /&gt;
mount -t devpts -o gid=5,mode=0620,noexec,nosuid devpts /dev/pts&lt;br /&gt;
                                           &lt;br /&gt;
# shared memory area (later system will need it)&lt;br /&gt;
[ -d /dev/shm ] || mkdir /dev/shm&lt;br /&gt;
mount -t tmpfs -o nodev,nosuid,noexec shm /dev/shm&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
add this paragraph.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
# persistent device names from eudev          &lt;br /&gt;
if [ -f /sbin/udevadm ]; then&lt;br /&gt;
    eudev_start&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
To let {{ic|zpool import}} use persistent names, use {{ic|zpool.cache}} to import the pools:&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
 echo /etc/zfs/zpool.cache &amp;gt;&amp;gt; /etc/mkinitfs/features.d/zfs.files&lt;br /&gt;
In the same file, make {{ic|zpool import}} use the cache file. Find and replace&lt;br /&gt;
 -d /dev $_root_pool&lt;br /&gt;
with&lt;br /&gt;
 -c /etc/zfs/zpool.cache&lt;br /&gt;
and&lt;br /&gt;
 -d /dev -f $_root_pool&lt;br /&gt;
with&lt;br /&gt;
 -f -c /etc/zfs/zpool.cache&lt;br /&gt;
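The find-and-replace above can also be done with sed. This is a hedged sketch demonstrated on sample text rather than the real {{ic|/usr/share/mkinitfs/initramfs-init}}:&lt;br /&gt;

```shell
# Two sample lines standing in for the real initramfs-init contents.
sample='zpool import -N -d /dev -f $_root_pool
zpool import -N -d /dev $_root_pool'
# Apply both substitutions described in the guide.
patched=$(printf '%s\n' "$sample" | sed \
    -e 's|-d /dev -f \$_root_pool|-f -c /etc/zfs/zpool.cache|' \
    -e 's|-d /dev \$_root_pool|-c /etc/zfs/zpool.cache|')
printf '%s\n' "$patched"
```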
Rebuild initramfs with&lt;br /&gt;
 mkinitfs&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used to create a regular user account; the root account is accessed with sudo. The package for persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the {{ic|/etc/mkinitfs/mkinitfs.conf}} file and append the {{ic|cryptsetup}} module to the {{ic|features}} parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}, replacing {{ic|$DISK}} with the actual disk:&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a recursive snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
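The pipeline above lists the mounts under {{ic|$MOUNTPOINT}} and unmounts them in reverse mount order; {{ic|tac}} performs the reversal so that nested mounts are unmounted first. A toy sketch with hypothetical mount points:&lt;br /&gt;

```shell
# Reverse a list of nested mount points so children are unmounted
# before their parents (as the guide's umount pipeline does).
reversed=$(printf '%s\n' /mnt/a /mnt/a/boot /mnt/a/boot/efi | tac)
printf '%s\n' "$reversed"
```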
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
Until this is fixed upstream, we need to manually load the key and mount the root dataset from the emergency shell with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages in the live environment (see [[#Install system utilities]]), proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18398</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18398"/>
		<updated>2020-12-31T04:00:09Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Enable persistent device names */ find and replace&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z configurations are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit the {{ic|-O encryption -O keylocation -O keyformat}} options when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVOL as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide therefore sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool with an init service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool; see [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS1-encrypted containers, so it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS1 container and supply the encryption password to GRUB at boot. Use a keyfile for the root pool and embed that keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS-encrypted swap partition to resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select {{ic|1=disk=none}} at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
We must install eudev here to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; names like {{ic|/dev/sda}} for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure ZFS always finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least three partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] on ZFS is problematic (see above), so it is recommended to create a separate swap partition if needed. This guide covers creating such a partition. (It cannot be used for hibernation, since its encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool partition, replacing the {{ic|-n3}} command above:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features must be selectively enabled for GRUB. Normally all available features are enabled; the {{ic|-d}} flag disables them all, so that only the explicitly listed {{ic|feature@}} properties are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in the WARNING section below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fallback is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
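The awk expression in the sed command above extracts the pool name from {{ic|zdb -l}} label output by splitting on single quotes; a hedged illustration on a fabricated label excerpt (only the {{ic|name:}} line imitates real {{ic|zdb}} output):&lt;br /&gt;

```shell
# Fabricated zdb -l excerpt; only the "name:" line format matters here.
label="    name: 'rpool_abc123'
    state: 0"
# -F \' splits fields on single quotes, so $2 is the quoted pool name.
echo "$label" | awk -F \' '/ name/ { print $2 }'
# prints: rpool_abc123
```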
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying the fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
As of this writing, the initramfs has a bug in the ZFS password prompt at boot. When booting the system, the root dataset will fail to mount with {{ic|sh: `active`, unknown operand}} and drop into the emergency shell.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/usr/share/mkinitfs/initramfs-init}}:&lt;br /&gt;
 # Ask for encryption password&lt;br /&gt;
 if &amp;lt;u&amp;gt;[ $(zpool list -H -o feature@encryption $_root_pool) = &amp;quot;active&amp;quot; ]&amp;lt;/u&amp;gt;; then&lt;br /&gt;
 # replacing the underlined test with &amp;lt;u&amp;gt;true&amp;lt;/u&amp;gt; will fix it&lt;br /&gt;
 # if true; then&lt;br /&gt;
     local _encryption_root=$(zfs get -H -o value encryptionroot $_root_vol)&lt;br /&gt;
     if [ &amp;quot;$_encryption_root&amp;quot; != &amp;quot;-&amp;quot; ]; then&lt;br /&gt;
         eval zfs load-key $_encryption_root&lt;br /&gt;
     fi               &lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
Ensure {{ic|eudev}} is installed&lt;br /&gt;
 apk add eudev&lt;br /&gt;
Create {{ic|/etc/mkinitfs/features.d/eudev.files}} to add {{ic|eudev}} to initramfs.&lt;br /&gt;
 tee /etc/mkinitfs/features.d/eudev.files &amp;lt;&amp;lt; EOF&lt;br /&gt;
 /bin/udevadm&lt;br /&gt;
 /sbin/udevadm&lt;br /&gt;
 /sbin/udevd&lt;br /&gt;
 /etc/udev/*&lt;br /&gt;
 /lib/udev/*&lt;br /&gt;
 /usr/lib/libudev*&lt;br /&gt;
 EOF&lt;br /&gt;
Edit {{ic|/usr/share/mkinitfs/initramfs-init}}.&lt;br /&gt;
&lt;br /&gt;
Add functions from {{ic|/etc/init.d/udev*}} at the beginning of the file.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
# persistent device names from eudev, see /etc/init.d/udev*&lt;br /&gt;
eudev_start_pre() {&lt;br /&gt;
	# load unix domain sockets if built as module, Bug #221253&lt;br /&gt;
	# and not yet loaded, Bug #363549&lt;br /&gt;
	if [ ! -e /proc/net/unix ]; then&lt;br /&gt;
		if ! modprobe unix; then&lt;br /&gt;
			eerror &amp;quot;Cannot load the unix domain socket module&amp;quot;&lt;br /&gt;
			return 1&lt;br /&gt;
		fi&lt;br /&gt;
	fi&lt;br /&gt;
&lt;br /&gt;
	if [ -e /proc/sys/kernel/hotplug ]; then&lt;br /&gt;
		echo &amp;quot;&amp;quot; &amp;gt;/proc/sys/kernel/hotplug&lt;br /&gt;
	fi&lt;br /&gt;
&lt;br /&gt;
	return 0&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
eudev_dir_writeable()&lt;br /&gt;
{&lt;br /&gt;
        touch &amp;quot;$1&amp;quot;/.test.$$ 2&amp;gt;/dev/null &amp;amp;&amp;amp; rm &amp;quot;$1&amp;quot;/.test.$$&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# store persistent-rules that got created while booting&lt;br /&gt;
# when / was still read-only&lt;br /&gt;
eudev_store_persistent_rules()&lt;br /&gt;
{&lt;br /&gt;
	# create /etc/udev/rules.d if it does not exist and /etc/udev is writable&lt;br /&gt;
	[ -d /etc/udev/rules.d ] || \&lt;br /&gt;
		eudev_dir_writeable /etc/udev &amp;amp;&amp;amp; \&lt;br /&gt;
		mkdir -p /etc/udev/rules.d&lt;br /&gt;
&lt;br /&gt;
	# only continue if rules-directory is writable&lt;br /&gt;
	eudev_dir_writeable /etc/udev/rules.d || return 0&lt;br /&gt;
&lt;br /&gt;
	local file dest&lt;br /&gt;
	for file in /run/udev/tmp-rules--*; do&lt;br /&gt;
		dest=${file##*tmp-rules--}&lt;br /&gt;
		[ &amp;quot;$dest&amp;quot; = &#039;*&#039; ] &amp;amp;&amp;amp; break&lt;br /&gt;
		type=${dest##70-persistent-}&lt;br /&gt;
		type=${type%%.rules}&lt;br /&gt;
		cat &amp;quot;$file&amp;quot; &amp;gt;&amp;gt; /etc/udev/rules.d/&amp;quot;$dest&amp;quot; &amp;amp;&amp;amp; rm -f &amp;quot;$file&amp;quot;&lt;br /&gt;
	done&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
eudev_start()&lt;br /&gt;
{&lt;br /&gt;
    eudev_start_pre&lt;br /&gt;
    udevd -d&lt;br /&gt;
	# store persistent-rules that got created while booting&lt;br /&gt;
	# when / was still read-only&lt;br /&gt;
	eudev_store_persistent_rules&lt;br /&gt;
	# Populating /dev with existing devices through uevents&amp;quot;&lt;br /&gt;
	udevadm trigger --type=subsystems --action=add&lt;br /&gt;
	udevadm trigger --type=devices --action=add&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
After the following section, which populates {{ic|/dev}}&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
mount -t devtmpfs -o exec,nosuid,mode=0755,size=2M devtmpfs /dev 2&amp;gt;/dev/null \&lt;br /&gt;
        || mount -t tmpfs -o exec,nosuid,mode=0755,size=2M tmpfs /dev&lt;br /&gt;
                          &lt;br /&gt;
# pty device nodes (later system will need it)&lt;br /&gt;
[ -c /dev/ptmx ] || mknod -m 666 /dev/ptmx c 5 2&lt;br /&gt;
[ -d /dev/pts ] || mkdir -m 755 /dev/pts&lt;br /&gt;
mount -t devpts -o gid=5,mode=0620,noexec,nosuid devpts /dev/pts&lt;br /&gt;
                                           &lt;br /&gt;
# shared memory area (later system will need it)&lt;br /&gt;
[ -d /dev/shm ] || mkdir /dev/shm&lt;br /&gt;
mount -t tmpfs -o nodev,nosuid,noexec shm /dev/shm&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
add this paragraph.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
# persistent device names from eudev          &lt;br /&gt;
if [ -f /sbin/udevadm ]; then&lt;br /&gt;
    eudev_start&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
To let {{ic|zpool import}} use persistent names, use the zpool.cache file to import the pools:&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
 echo /etc/zfs/zpool.cache &amp;gt;&amp;gt; /etc/mkinitfs/features.d/zfs.files&lt;br /&gt;
Then make {{ic|zpool import}} use the cache file: in the same init script, find and replace&lt;br /&gt;
 -d /dev $_root_pool&lt;br /&gt;
with&lt;br /&gt;
 -c /etc/zfs/zpool.cache&lt;br /&gt;
and&lt;br /&gt;
 -d /dev -f $_root_pool&lt;br /&gt;
with&lt;br /&gt;
 -f -c /etc/zfs/zpool.cache&lt;br /&gt;
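The find-and-replace above can be scripted; a hedged sed sketch ({{ic|fix_zpool_import}} is an illustrative helper name, and the patterns assume the exact strings shown above, so inspect the output before overwriting the init script):&lt;br /&gt;

```shell
# Hedged sketch: perform the two replacements above with sed.
# Prints the modified file to stdout; review it before overwriting.
fix_zpool_import() {
    sed -e 's|-d /dev -f \$_root_pool|-f -c /etc/zfs/zpool.cache|' \
        -e 's|-d /dev \$_root_pool|-c /etc/zfs/zpool.cache|' "$1"
}
# Usage: fix_zpool_import /usr/share/mkinitfs/initramfs-init
```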
Rebuild the initramfs with&lt;br /&gt;
 mkinitfs&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a common user account. The root account is accessed with sudo. The package for persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in the community repository. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with the actual disk path.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
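The unmount pipeline above lists the mountpoints under {{ic|$MOUNTPOINT}} deepest-first while skipping ZFS datasets (which the pool export handles); a hedged illustration with fabricated {{ic|mount}} output, where {{ic|/tmp/target}} stands in for {{ic|$MOUNTPOINT}}:&lt;br /&gt;

```shell
# Fabricated mount output; /tmp/target stands in for $MOUNTPOINT.
fake_mount="proc on /tmp/target/proc type proc (rw)
rpool_x/ROOT/default on /tmp/target type zfs (rw)
/dev/sda1 on /tmp/target/boot/efi type vfat (rw)"
# grep -v zfs drops ZFS datasets, tac reverses the list so children
# unmount before parents, awk prints the mountpoint column ($3).
echo "$fake_mount" | grep -v zfs | tac | grep /tmp/target | awk '{print $3}'
# prints /tmp/target/boot/efi, then /tmp/target/proc
```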
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
Until this is fixed upstream, we need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the zfs packages in a live environment, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store the encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} skips mounting datasets; {{ic|-R}} sets an alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18397</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18397"/>
		<updated>2020-12-31T03:54:19Z</updated>

		<summary type="html">&lt;p&gt;R2: order&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of swap partition can not be stored in the unencrypted boot pool. Busybox initramfs only supports unlocking exactly one LUKS container at boot, therefore boot pool and swap partition can not be both LUKS encrypted. A possible workaround is to import and mount boot pool after booting the system via systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password there. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single disk, UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Apply fixes in WARNING.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Initramfs fixes =&lt;br /&gt;
== Fix zfs decrypt ==&lt;br /&gt;
As of this writing, the initramfs has a bug in the ZFS password prompt at boot. When booting the system, the root dataset will fail to mount with {{ic|sh: `active`, unknown operand}} and drop into the emergency shell.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/usr/share/mkinitfs/initramfs-init}}:&lt;br /&gt;
 # Ask for encryption password&lt;br /&gt;
 if &amp;lt;u&amp;gt;[ $(zpool list -H -o feature@encryption $_root_pool) = &amp;quot;active&amp;quot; ]&amp;lt;/u&amp;gt;; then&lt;br /&gt;
 # replace underline with &amp;lt;u&amp;gt;true&amp;lt;/u&amp;gt; will fix it&lt;br /&gt;
 # if true; then&lt;br /&gt;
     local _encryption_root=$(zfs get -H -o value encryptionroot $_root_vol)&lt;br /&gt;
     if [ &amp;quot;$_encryption_root&amp;quot; != &amp;quot;-&amp;quot; ]; then&lt;br /&gt;
         eval zfs load-key $_encryption_root&lt;br /&gt;
     fi               &lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
== Enable persistent device names ==&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
Ensure {{ic|eudev}} is installed&lt;br /&gt;
 apk add eudev&lt;br /&gt;
Create {{ic|/etc/mkinitfs/features.d/eudev.files}} to add {{ic|eudev}} to initramfs.&lt;br /&gt;
 tee /etc/mkinitfs/features.d/eudev.files &amp;lt;&amp;lt; EOF&lt;br /&gt;
 /bin/udevadm&lt;br /&gt;
 /sbin/udevadm&lt;br /&gt;
 /sbin/udevd&lt;br /&gt;
 /etc/udev/*&lt;br /&gt;
 /lib/udev/*&lt;br /&gt;
 /usr/lib/libudev*&lt;br /&gt;
 EOF&lt;br /&gt;
Edit {{ic|/usr/share/mkinitfs/initramfs-init}}.&lt;br /&gt;
&lt;br /&gt;
Add functions from {{ic|/etc/init.d/udev*}} at the beginning of the file.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
# persistent device names from eudev, see /etc/init.d/udev*&lt;br /&gt;
eudev_start_pre() {&lt;br /&gt;
	# load unix domain sockets if built as module, Bug #221253&lt;br /&gt;
	# and not yet loaded, Bug #363549&lt;br /&gt;
	if [ ! -e /proc/net/unix ]; then&lt;br /&gt;
		if ! modprobe unix; then&lt;br /&gt;
			eerror &amp;quot;Cannot load the unix domain socket module&amp;quot;&lt;br /&gt;
			return 1&lt;br /&gt;
		fi&lt;br /&gt;
	fi&lt;br /&gt;
&lt;br /&gt;
	if [ -e /proc/sys/kernel/hotplug ]; then&lt;br /&gt;
		echo &amp;quot;&amp;quot; &amp;gt;/proc/sys/kernel/hotplug&lt;br /&gt;
	fi&lt;br /&gt;
&lt;br /&gt;
	return 0&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
eudev_dir_writeable()&lt;br /&gt;
{&lt;br /&gt;
        touch &amp;quot;$1&amp;quot;/.test.$$ 2&amp;gt;/dev/null &amp;amp;&amp;amp; rm &amp;quot;$1&amp;quot;/.test.$$&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# store persistent-rules that got created while booting&lt;br /&gt;
# when / was still read-only&lt;br /&gt;
eudev_store_persistent_rules()&lt;br /&gt;
{&lt;br /&gt;
	# create /etc/udev/rules.d if it does not exist and /etc/udev is writable&lt;br /&gt;
	[ -d /etc/udev/rules.d ] || \&lt;br /&gt;
		eudev_dir_writeable /etc/udev &amp;amp;&amp;amp; \&lt;br /&gt;
		mkdir -p /etc/udev/rules.d&lt;br /&gt;
&lt;br /&gt;
	# only continue if rules-directory is writable&lt;br /&gt;
	eudev_dir_writeable /etc/udev/rules.d || return 0&lt;br /&gt;
&lt;br /&gt;
	local file dest&lt;br /&gt;
	for file in /run/udev/tmp-rules--*; do&lt;br /&gt;
		dest=${file##*tmp-rules--}&lt;br /&gt;
		[ &amp;quot;$dest&amp;quot; = &#039;*&#039; ] &amp;amp;&amp;amp; break&lt;br /&gt;
		type=${dest##70-persistent-}&lt;br /&gt;
		type=${type%%.rules}&lt;br /&gt;
		cat &amp;quot;$file&amp;quot; &amp;gt;&amp;gt; /etc/udev/rules.d/&amp;quot;$dest&amp;quot; &amp;amp;&amp;amp; rm -f &amp;quot;$file&amp;quot;&lt;br /&gt;
	done&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
eudev_start()&lt;br /&gt;
{&lt;br /&gt;
    eudev_start_pre&lt;br /&gt;
    udevd -d&lt;br /&gt;
	# store persistent-rules that got created while booting&lt;br /&gt;
	# when / was still read-only&lt;br /&gt;
	eudev_store_persistent_rules&lt;br /&gt;
	# Populating /dev with existing devices through uevents&amp;quot;&lt;br /&gt;
	udevadm trigger --type=subsystems --action=add&lt;br /&gt;
	udevadm trigger --type=devices --action=add&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
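The parameter expansions in {{ic|eudev_store_persistent_rules}} can be checked in isolation. A minimal sketch (the file name is a made-up example, not a file that necessarily exists) showing how the destination rules file and the rule type are derived:

```shell
# Hypothetical temp-rules file name, as udev would create it at boot
file=/run/udev/tmp-rules--70-persistent-net.rules
# Strip everything up to and including "tmp-rules--"
dest=${file##*tmp-rules--}
# Strip the "70-persistent-" prefix and ".rules" suffix to get the rule type
type=${dest##70-persistent-}
type=${type%%.rules}
echo "$dest"   # 70-persistent-net.rules
echo "$type"   # net
```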
After the following section, which populates {{ic|/dev}}&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
mount -t devtmpfs -o exec,nosuid,mode=0755,size=2M devtmpfs /dev 2&amp;gt;/dev/null \&lt;br /&gt;
        || mount -t tmpfs -o exec,nosuid,mode=0755,size=2M tmpfs /dev&lt;br /&gt;
                          &lt;br /&gt;
# pty device nodes (later system will need it)&lt;br /&gt;
[ -c /dev/ptmx ] || mknod -m 666 /dev/ptmx c 5 2&lt;br /&gt;
[ -d /dev/pts ] || mkdir -m 755 /dev/pts&lt;br /&gt;
mount -t devpts -o gid=5,mode=0620,noexec,nosuid devpts /dev/pts&lt;br /&gt;
                                           &lt;br /&gt;
# shared memory area (later system will need it)&lt;br /&gt;
[ -d /dev/shm ] || mkdir /dev/shm&lt;br /&gt;
mount -t tmpfs -o nodev,nosuid,noexec shm /dev/shm&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
add this paragraph.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
# persistent device names from eudev          &lt;br /&gt;
if [ -f /sbin/udevadm ]; then&lt;br /&gt;
    eudev_start&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
To make {{ic|zpool import}} use persistent device names, import the pools via {{ic|zpool.cache}}:&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
 echo /etc/zfs/zpool.cache &amp;gt;&amp;gt; /etc/mkinitfs/features.d/zfs.files&lt;br /&gt;
Replace {{ic|zpool import}} with {{ic|zpool import -c /etc/zfs/zpool.cache}} in the init script:&lt;br /&gt;
 sed -i &#039;s|-d /dev|-c /etc/zfs/zpool.cache|g&#039; /usr/share/mkinitfs/initramfs-init&lt;br /&gt;
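The substitution can be sanity-checked on a sample line before touching the real init script. A minimal sketch (the sample string is hypothetical, not the actual initramfs-init content):

```shell
# Hypothetical import line resembling what initramfs-init contains
line='zpool import -N -d /dev rpool'
# Same substitution that the sed command applies to initramfs-init
fixed=$(echo "$line" | sed 's|-d /dev|-c /etc/zfs/zpool.cache|g')
echo "$fixed"   # zpool import -N -c /etc/zfs/zpool.cache rpool
```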
Rebuild initramfs with&lt;br /&gt;
 mkinitfs&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account; the root account is accessed with sudo. The package providing persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
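The features line can also be edited non-interactively. A sketch assuming the default features string shown above (the string is copied from this guide, not read from a real system):

```shell
# Hypothetical copy of the features line from mkinitfs.conf
conf='features="ata base ide scsi usb virtio ext4 lvm zfs"'
# Insert cryptsetup before zfs, mirroring the manual edit above
conf=$(echo "$conf" | sed 's|lvm zfs|lvm cryptsetup zfs|')
echo "$conf"   # features="ata base ide scsi usb virtio ext4 lvm cryptsetup zfs"
```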
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with the actual disk path.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
&lt;br /&gt;
If the initramfs fix above was not applied, manually load the key and mount the root dataset from the emergency shell:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages in the live environment, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18396</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18396"/>
		<updated>2020-12-31T03:47:50Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Enable persistent device names in initramfs */ import from cache&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734] This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of swap partition can not be stored in the unencrypted boot pool. Busybox initramfs only supports unlocking exactly one LUKS container at boot, therefore boot pool and swap partition can not be both LUKS encrypted. A possible workaround is to import and mount boot pool after booting the system via systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for the disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
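The suffix generation can be tried out on its own. This sketch reuses the same pipeline; note that, in rare cases, filtering 100 random bytes could yield fewer than six matching characters, so the check below only requires one to six characters:

```shell
# Generate a short unique suffix for pool names, same pipeline as above
poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
# Random each run, e.g. a value like "k3x9q2"
echo "$poolUUID"
```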
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least three partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can use the full feature set of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless ZPOOL_VDEV_NAME_PATH=YES is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default; we need to add ZFS to its list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
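The substitution can be previewed on a sample string first. A minimal sketch (the sample is hypothetical, not the actual setup-disk source):

```shell
# Hypothetical line resembling the one in /sbin/setup-disk
line='supported="ext4 ext3 ext2 btrfs xfs vfat"'
# Same substitution as above, prepending zfs to the supported list
fixed=$(echo "$line" | sed 's|supported="ext|supported="zfs ext|g')
echo "$fixed"   # supported="zfs ext4 ext3 ext2 btrfs xfs vfat"
```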
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in the WARNING section below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
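The difference is easy to observe directly. This sketch runs the same fallback command that grub-mkconfig uses; with coreutils installed it reports the real filesystem type of {{ic|/}} (e.g. {{ic|zfs}} on the installed system), while Busybox's {{ic|stat}} would print {{ic|UNKNOWN}}. The output depends on the machine it runs on, so none is shown here:

```shell
# Print the filesystem type of / the same way grub-mkconfig's fallback does;
# coreutils stat names the real filesystem, Busybox stat prints UNKNOWN
fstype=$(stat -f -c %T / || echo unknown)
echo "$fstype"
```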
&lt;br /&gt;
2. grub-mkconfig will produce an empty root pool name if GRUB does not support the root pool&#039;s features.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind OpenZFS development, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the rpool detection with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix must be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account; the root account is accessed with sudo. The package providing persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with the actual disk path.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs has a bug that breaks entering the ZFS password at boot. When booting the system, the root dataset will fail to mount with {{ic|sh: `active`, unknown operand}} and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/usr/share/mkinitfs/initramfs-init}}:&lt;br /&gt;
 # Ask for encryption password&lt;br /&gt;
 if &amp;lt;u&amp;gt;[ $(zpool list -H -o feature@encryption $_root_pool) = &amp;quot;active&amp;quot; ]&amp;lt;/u&amp;gt;; then&lt;br /&gt;
 # replacing the underlined test with &amp;lt;u&amp;gt;true&amp;lt;/u&amp;gt; fixes it&lt;br /&gt;
 # if true; then&lt;br /&gt;
     local _encryption_root=$(zfs get -H -o value encryptionroot $_root_vol)&lt;br /&gt;
     if [ &amp;quot;$_encryption_root&amp;quot; != &amp;quot;-&amp;quot; ]; then&lt;br /&gt;
         eval zfs load-key $_encryption_root&lt;br /&gt;
     fi               &lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
Until this is fixed, we need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
= Enable persistent device names in initramfs =&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
Ensure {{ic|eudev}} is installed&lt;br /&gt;
 apk add eudev&lt;br /&gt;
Create {{ic|/etc/mkinitfs/features.d/eudev.files}} to add {{ic|eudev}} to initramfs.&lt;br /&gt;
 tee /etc/mkinitfs/features.d/eudev.files &amp;lt;&amp;lt; EOF&lt;br /&gt;
 /bin/udevadm&lt;br /&gt;
 /sbin/udevadm&lt;br /&gt;
 /sbin/udevd&lt;br /&gt;
 /etc/udev/*&lt;br /&gt;
 /lib/udev/*&lt;br /&gt;
 /usr/lib/libudev*&lt;br /&gt;
 EOF&lt;br /&gt;
Edit {{ic|/usr/share/mkinitfs/initramfs-init}}.&lt;br /&gt;
&lt;br /&gt;
Add functions from {{ic|/etc/init.d/udev*}} at the beginning of the file.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
# persistent device names from eudev, see /etc/init.d/udev*&lt;br /&gt;
eudev_start_pre() {&lt;br /&gt;
	# load unix domain sockets if built as module, Bug #221253&lt;br /&gt;
	# and not yet loaded, Bug #363549&lt;br /&gt;
	if [ ! -e /proc/net/unix ]; then&lt;br /&gt;
		if ! modprobe unix; then&lt;br /&gt;
			eerror &amp;quot;Cannot load the unix domain socket module&amp;quot;&lt;br /&gt;
			return 1&lt;br /&gt;
		fi&lt;br /&gt;
	fi&lt;br /&gt;
&lt;br /&gt;
	if [ -e /proc/sys/kernel/hotplug ]; then&lt;br /&gt;
		echo &amp;quot;&amp;quot; &amp;gt;/proc/sys/kernel/hotplug&lt;br /&gt;
	fi&lt;br /&gt;
&lt;br /&gt;
	return 0&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
eudev_dir_writeable()&lt;br /&gt;
{&lt;br /&gt;
        touch &amp;quot;$1&amp;quot;/.test.$$ 2&amp;gt;/dev/null &amp;amp;&amp;amp; rm &amp;quot;$1&amp;quot;/.test.$$&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# store persistent-rules that got created while booting&lt;br /&gt;
# when / was still read-only&lt;br /&gt;
eudev_store_persistent_rules()&lt;br /&gt;
{&lt;br /&gt;
	# create /etc/udev/rules.d if it does not exist and /etc/udev is writable&lt;br /&gt;
	[ -d /etc/udev/rules.d ] || \&lt;br /&gt;
		eudev_dir_writeable /etc/udev &amp;amp;&amp;amp; \&lt;br /&gt;
		mkdir -p /etc/udev/rules.d&lt;br /&gt;
&lt;br /&gt;
	# only continue if rules-directory is writable&lt;br /&gt;
	eudev_dir_writeable /etc/udev/rules.d || return 0&lt;br /&gt;
&lt;br /&gt;
	local file dest&lt;br /&gt;
	for file in /run/udev/tmp-rules--*; do&lt;br /&gt;
		dest=${file##*tmp-rules--}&lt;br /&gt;
		[ &amp;quot;$dest&amp;quot; = &#039;*&#039; ] &amp;amp;&amp;amp; break&lt;br /&gt;
		type=${dest##70-persistent-}&lt;br /&gt;
		type=${type%%.rules}&lt;br /&gt;
		cat &amp;quot;$file&amp;quot; &amp;gt;&amp;gt; /etc/udev/rules.d/&amp;quot;$dest&amp;quot; &amp;amp;&amp;amp; rm -f &amp;quot;$file&amp;quot;&lt;br /&gt;
	done&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
eudev_start()&lt;br /&gt;
{&lt;br /&gt;
    eudev_start_pre&lt;br /&gt;
    udevd -d&lt;br /&gt;
	# store persistent-rules that got created while booting&lt;br /&gt;
	# when / was still read-only&lt;br /&gt;
	eudev_store_persistent_rules&lt;br /&gt;
	# Populating /dev with existing devices through uevents&amp;quot;&lt;br /&gt;
	udevadm trigger --type=subsystems --action=add&lt;br /&gt;
	udevadm trigger --type=devices --action=add&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
After the following section, which populates {{ic|/dev}}&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
mount -t devtmpfs -o exec,nosuid,mode=0755,size=2M devtmpfs /dev 2&amp;gt;/dev/null \&lt;br /&gt;
        || mount -t tmpfs -o exec,nosuid,mode=0755,size=2M tmpfs /dev&lt;br /&gt;
                          &lt;br /&gt;
# pty device nodes (later system will need it)&lt;br /&gt;
[ -c /dev/ptmx ] || mknod -m 666 /dev/ptmx c 5 2&lt;br /&gt;
[ -d /dev/pts ] || mkdir -m 755 /dev/pts&lt;br /&gt;
mount -t devpts -o gid=5,mode=0620,noexec,nosuid devpts /dev/pts&lt;br /&gt;
                                           &lt;br /&gt;
# shared memory area (later system will need it)&lt;br /&gt;
[ -d /dev/shm ] || mkdir /dev/shm&lt;br /&gt;
mount -t tmpfs -o nodev,nosuid,noexec shm /dev/shm&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
add this paragraph.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
# persistent device names from eudev          &lt;br /&gt;
if [ -f /sbin/udevadm ]; then&lt;br /&gt;
    eudev_start&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
To make {{ic|zpool import}} use persistent device names, import the pools via {{ic|zpool.cache}}:&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID&lt;br /&gt;
 zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID&lt;br /&gt;
 echo /etc/zfs/zpool.cache &amp;gt;&amp;gt; /etc/mkinitfs/features.d/zfs.files&lt;br /&gt;
Replace {{ic|zpool import}} with {{ic|zpool import -c /etc/zfs/zpool.cache}} in the init script:&lt;br /&gt;
 sed -i &#039;s|-d /dev|-c /etc/zfs/zpool.cache|g&#039; /usr/share/mkinitfs/initramfs-init&lt;br /&gt;
&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages in the live environment, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount the EFI partition at {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18395</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18395"/>
		<updated>2020-12-31T03:34:44Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Enable persistent device names in initramfs */ zpool import fix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVOL as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers, so it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password to GRUB at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB drive and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for the disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
eudev must be installed here to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; {{ic|/dev/sda}}-style names for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
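The pipeline above can be sketched equivalently with {{ic|head}}; this is only an illustration of what the suffix looks like, not part of the installation:

```shell
# Sketch: generate a 6-character lowercase alphanumeric pool suffix,
# equivalent to the dd | tr | cut pipeline above. Reading 1000 random
# bytes comfortably yields at least 6 matching characters.
poolUUID=$(head -c 1000 /dev/urandom | tr -dc 'a-z0-9' | cut -c-6)
echo "$poolUUID"        # e.g. k3x9qa (random each run)
```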
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. Passing {{ic|-d}} disables all features, so only the explicitly listed {{ic|1=feature@...=enabled}} options are enabled; without it, all available features are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
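To see what the substitution does, it can be run against a stand-in line first (the sample assignment below is illustrative; the real list in {{ic|/sbin/setup-disk}} may differ):

```shell
# Sketch: apply the same sed to a stand-in copy of the supported-
# filesystem assignment instead of the real /sbin/setup-disk.
tmp=$(mktemp)
echo 'supported="ext2 ext3 ext4 btrfs xfs"' > "$tmp"
sed -i 's|supported="ext|supported="zfs ext|g' "$tmp"
result=$(cat "$tmp")
echo "$result"          # supported="zfs ext2 ext3 ext4 btrfs xfs"
rm -f "$tmp"
```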
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through in [[#Run setup-disk]], we finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Apply fixes in WARNING.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. grub-mkconfig will produce an empty root pool name if GRUB does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB lags behind OpenZFS development; see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This must be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
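How the patched detection works can be seen in isolation. The sample label text below is illustrative (real {{ic|zdb -l}} output contains many more fields); the awk expression is the one substituted into {{ic|10_linux}}:

```shell
# Sketch: the awk expression pulls the pool name (the quoted value on
# the "name:" line) out of zdb label output. Sample text is illustrative.
sample='    name: '\''rpool_abc123'\''
    state: 0'
rpool=$(printf '%s\n' "$sample" | awk -F \' '/ name/ { print $2 }')
echo "$rpool"           # rpool_abc123
```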
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account. The root account is accessed with sudo. The package for persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit {{ic|/etc/mkinitfs/mkinitfs.conf}} and append the {{ic|cryptsetup}} module to the {{ic|features}} parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|crypttab}} and {{ic|fstab}}. Replace {{ic|$DISK}} with the actual disk path.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild the initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
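The unmount pipeline above can be illustrated on sample {{ic|mount}} output (the paths below are made up): ZFS lines are skipped, because exporting the pools unmounts their datasets, and {{ic|tac}} reverses the remaining lines so child mounts are unmounted before their parents:

```shell
# Sketch: how the pipeline selects and orders mount points.
MOUNTPOINT=/tmp/target
paths=$(printf '%s\n' \
  'rpool/ROOT on /tmp/target type zfs (rw)' \
  'proc on /tmp/target/proc type proc (rw)' \
  'devtmpfs on /tmp/target/dev type devtmpfs (rw)' \
  | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}')
echo "$paths"           # /tmp/target/dev then /tmp/target/proc
```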
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs has a bug in the ZFS password prompt at boot. When booting the system, the root dataset will fail to mount with {{ic|sh: `active`, unknown operand}} and drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/usr/share/mkinitfs/initramfs-init}}:&lt;br /&gt;
 # Ask for encryption password&lt;br /&gt;
 if &amp;lt;u&amp;gt;[ $(zpool list -H -o feature@encryption $_root_pool) = &amp;quot;active&amp;quot; ]&amp;lt;/u&amp;gt;; then&lt;br /&gt;
 # replacing the underlined test with &amp;lt;u&amp;gt;true&amp;lt;/u&amp;gt; fixes it&lt;br /&gt;
 # if true; then&lt;br /&gt;
     local _encryption_root=$(zfs get -H -o value encryptionroot $_root_vol)&lt;br /&gt;
     if [ &amp;quot;$_encryption_root&amp;quot; != &amp;quot;-&amp;quot; ]; then&lt;br /&gt;
         eval zfs load-key $_encryption_root&lt;br /&gt;
     fi               &lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
Until this is fixed, we need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
= Enable persistent device names in initramfs =&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
Ensure {{ic|eudev}} is installed&lt;br /&gt;
 apk add eudev&lt;br /&gt;
Create {{ic|/etc/mkinitfs/features.d/eudev.files}} to add {{ic|eudev}} to initramfs.&lt;br /&gt;
 tee /etc/mkinitfs/features.d/eudev.files &amp;lt;&amp;lt; EOF&lt;br /&gt;
 /bin/udevadm&lt;br /&gt;
 /sbin/udevadm&lt;br /&gt;
 /sbin/udevd&lt;br /&gt;
 /etc/udev/*&lt;br /&gt;
 /lib/udev/*&lt;br /&gt;
 /usr/lib/libudev*&lt;br /&gt;
 EOF&lt;br /&gt;
Edit {{ic|/usr/share/mkinitfs/initramfs-init}}.&lt;br /&gt;
&lt;br /&gt;
Add functions from {{ic|/etc/init.d/udev*}} at the beginning of the file.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
# persistent device names from eudev, see /etc/init.d/udev*&lt;br /&gt;
eudev_start_pre() {&lt;br /&gt;
	# load unix domain sockets if built as module, Bug #221253&lt;br /&gt;
	# and not yet loaded, Bug #363549&lt;br /&gt;
	if [ ! -e /proc/net/unix ]; then&lt;br /&gt;
		if ! modprobe unix; then&lt;br /&gt;
			eerror &amp;quot;Cannot load the unix domain socket module&amp;quot;&lt;br /&gt;
			return 1&lt;br /&gt;
		fi&lt;br /&gt;
	fi&lt;br /&gt;
&lt;br /&gt;
	if [ -e /proc/sys/kernel/hotplug ]; then&lt;br /&gt;
		echo &amp;quot;&amp;quot; &amp;gt;/proc/sys/kernel/hotplug&lt;br /&gt;
	fi&lt;br /&gt;
&lt;br /&gt;
	return 0&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
eudev_dir_writeable()&lt;br /&gt;
{&lt;br /&gt;
        touch &amp;quot;$1&amp;quot;/.test.$$ 2&amp;gt;/dev/null &amp;amp;&amp;amp; rm &amp;quot;$1&amp;quot;/.test.$$&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# store persistent-rules that got created while booting&lt;br /&gt;
# when / was still read-only&lt;br /&gt;
eudev_store_persistent_rules()&lt;br /&gt;
{&lt;br /&gt;
	# create /etc/udev/rules.d if it does not exist and /etc/udev is writable&lt;br /&gt;
	[ -d /etc/udev/rules.d ] || \&lt;br /&gt;
		eudev_dir_writeable /etc/udev &amp;amp;&amp;amp; \&lt;br /&gt;
		mkdir -p /etc/udev/rules.d&lt;br /&gt;
&lt;br /&gt;
	# only continue if rules-directory is writable&lt;br /&gt;
	eudev_dir_writeable /etc/udev/rules.d || return 0&lt;br /&gt;
&lt;br /&gt;
	local file dest&lt;br /&gt;
	for file in /run/udev/tmp-rules--*; do&lt;br /&gt;
		dest=${file##*tmp-rules--}&lt;br /&gt;
		[ &amp;quot;$dest&amp;quot; = &#039;*&#039; ] &amp;amp;&amp;amp; break&lt;br /&gt;
		type=${dest##70-persistent-}&lt;br /&gt;
		type=${type%%.rules}&lt;br /&gt;
		cat &amp;quot;$file&amp;quot; &amp;gt;&amp;gt; /etc/udev/rules.d/&amp;quot;$dest&amp;quot; &amp;amp;&amp;amp; rm -f &amp;quot;$file&amp;quot;&lt;br /&gt;
	done&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
eudev_start()&lt;br /&gt;
{&lt;br /&gt;
    eudev_start_pre&lt;br /&gt;
    udevd -d&lt;br /&gt;
	# store persistent-rules that got created while booting&lt;br /&gt;
	# when / was still read-only&lt;br /&gt;
	eudev_store_persistent_rules&lt;br /&gt;
	# Populating /dev with existing devices through uevents&lt;br /&gt;
	udevadm trigger --type=subsystems --action=add&lt;br /&gt;
	udevadm trigger --type=devices --action=add&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
After the following section, which populates {{ic|/dev}}&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
mount -t devtmpfs -o exec,nosuid,mode=0755,size=2M devtmpfs /dev 2&amp;gt;/dev/null \&lt;br /&gt;
        || mount -t tmpfs -o exec,nosuid,mode=0755,size=2M tmpfs /dev&lt;br /&gt;
                          &lt;br /&gt;
# pty device nodes (later system will need it)&lt;br /&gt;
[ -c /dev/ptmx ] || mknod -m 666 /dev/ptmx c 5 2&lt;br /&gt;
[ -d /dev/pts ] || mkdir -m 755 /dev/pts&lt;br /&gt;
mount -t devpts -o gid=5,mode=0620,noexec,nosuid devpts /dev/pts&lt;br /&gt;
                                           &lt;br /&gt;
# shared memory area (later system will need it)&lt;br /&gt;
[ -d /dev/shm ] || mkdir /dev/shm&lt;br /&gt;
mount -t tmpfs -o nodev,nosuid,noexec shm /dev/shm&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
add this paragraph.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
# persistent device names from eudev          &lt;br /&gt;
if [ -f /sbin/udevadm ]; then&lt;br /&gt;
    eudev_start&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
To let {{ic|zpool import}} use persistent names, replace {{ic|/dev}} with {{ic|/dev/disk/by-id}}:&lt;br /&gt;
 sed -i &#039;s|zpool import -N -d /dev|zpool import -N -d /dev/disk/by-id|g&#039; /usr/share/mkinitfs/initramfs-init&lt;br /&gt;
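As a sanity check, the substitution can be applied to a stand-in line first (the temporary file imitates the relevant line of {{ic|initramfs-init}}):

```shell
# Sketch: the by-id substitution tried on a stand-in copy of the
# "zpool import" line instead of the real initramfs-init.
tmp=$(mktemp)
echo 'zpool import -N -d /dev $rootpool' > "$tmp"
sed -i 's|zpool import -N -d /dev|zpool import -N -d /dev/disk/by-id|g' "$tmp"
result=$(cat "$tmp")
echo "$result"          # zpool import -N -d /dev/disk/by-id $rootpool
rm -f "$tmp"
```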
&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount the EFI partition at {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18394</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18394"/>
		<updated>2020-12-31T03:29:04Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Enable persistent device names in initramfs */ update&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVOL as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers, so it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password to GRUB at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB drive and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for the disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
eudev must be installed here to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; {{ic|/dev/sda}}-style names for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. Passing {{ic|-d}} disables all features, so only the explicitly listed {{ic|1=feature@...=enabled}} options are enabled; without it, all available features are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
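As a sketch of what this substitution does (the filesystem list below is hypothetical; the actual value of {{ic|supported}} in {{ic|/sbin/setup-disk}} varies by release), it simply prepends {{ic|zfs}} to the script's list of supported filesystems:

```shell
# Hypothetical example value of the "supported" variable in /sbin/setup-disk;
# the sed expression prepends "zfs" to the filesystem list.
line='supported="ext4 ext3 ext2 btrfs xfs vfat"'
printf '%s\n' "$line" | sed 's|supported="ext|supported="zfs ext|g'
# → supported="zfs ext4 ext3 ext2 btrfs xfs vfat"
```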
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed half-way through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in the WARNING section below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fallback is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
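To make the quoting less opaque, here is a sketch of the extraction that the patched {{ic|10_linux}} line performs, run against a hypothetical excerpt of {{ic|zdb -l}} label output (the pool name is made up):

```shell
# `zdb -l` prints the vdev label, which includes a line of the form
#     name: 'rpool_abc123'
# Splitting on single quotes, awk prints the pool name field.
zdb_label="    version: 5000
    name: 'rpool_abc123'
    state: 0"
rpool=$(printf '%s\n' "$zdb_label" | awk -F \' '/ name/ { print $2 }')
echo "$rpool"
# → rpool_abc123
```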
This fix needs to be reapplied upon every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a common user account. The root account is accessed with sudo. A package providing persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (from inside the chroot) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild the initramfs with {{ic|mkinitfs}} so the new feature takes effect.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to import on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
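A brief sketch of why the pipeline above reverses the mount list with {{ic|tac}}: {{ic|mount}} prints parent mountpoints before the filesystems nested inside them, and unmounting must happen child-first (the paths below are made up for illustration):

```shell
# `mount` lists parents before children; reversing with `tac`
# yields a safe child-first unmount order.
printf '%s\n' /tmp/target /tmp/target/boot /tmp/target/boot/efi | tac
# → /tmp/target/boot/efi
#   /tmp/target/boot
#   /tmp/target
```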
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs has a bug in handling the ZFS encryption password at boot. When booting the system, the root dataset will fail to mount with {{ic|sh: `active`, unknown operand}} and drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/usr/share/mkinitfs/initramfs-init}}:&lt;br /&gt;
 # Ask for encryption password&lt;br /&gt;
 if &amp;lt;u&amp;gt;[ $(zpool list -H -o feature@encryption $_root_pool) = &amp;quot;active&amp;quot; ]&amp;lt;/u&amp;gt;; then&lt;br /&gt;
 # replacing the underlined test with &amp;lt;u&amp;gt;true&amp;lt;/u&amp;gt; will fix it:&lt;br /&gt;
 # if true; then&lt;br /&gt;
     local _encryption_root=$(zfs get -H -o value encryptionroot $_root_vol)&lt;br /&gt;
     if [ &amp;quot;$_encryption_root&amp;quot; != &amp;quot;-&amp;quot; ]; then&lt;br /&gt;
         eval zfs load-key $_encryption_root&lt;br /&gt;
     fi               &lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
Until this is fixed, we need to manually load the key and mount the root dataset from the emergency shell:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
= Enable persistent device names in initramfs =&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
Ensure {{ic|eudev}} is installed&lt;br /&gt;
 apk add eudev&lt;br /&gt;
Create {{ic|/etc/mkinitfs/features.d/eudev.files}} to add {{ic|eudev}} to initramfs.&lt;br /&gt;
 tee /etc/mkinitfs/features.d/eudev.files &amp;lt;&amp;lt; EOF&lt;br /&gt;
 /bin/udevadm&lt;br /&gt;
 /sbin/udevadm&lt;br /&gt;
 /sbin/udevd&lt;br /&gt;
 /etc/udev/*&lt;br /&gt;
 /lib/udev/*&lt;br /&gt;
 /usr/lib/libudev*&lt;br /&gt;
 EOF&lt;br /&gt;
Edit {{ic|/usr/share/mkinitfs/initramfs-init}}. &lt;br /&gt;
Add functions from {{ic|/etc/init.d/udev*}} at the beginning of the file.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
# persistent device names from eudev, see /etc/init.d/udev*&lt;br /&gt;
eudev_start_pre() {&lt;br /&gt;
	# load unix domain sockets if built as module, Bug #221253&lt;br /&gt;
	# and not yet loaded, Bug #363549&lt;br /&gt;
	if [ ! -e /proc/net/unix ]; then&lt;br /&gt;
		if ! modprobe unix; then&lt;br /&gt;
			eerror &amp;quot;Cannot load the unix domain socket module&amp;quot;&lt;br /&gt;
			return 1&lt;br /&gt;
		fi&lt;br /&gt;
	fi&lt;br /&gt;
&lt;br /&gt;
	if [ -e /proc/sys/kernel/hotplug ]; then&lt;br /&gt;
		echo &amp;quot;&amp;quot; &amp;gt;/proc/sys/kernel/hotplug&lt;br /&gt;
	fi&lt;br /&gt;
&lt;br /&gt;
	return 0&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
eudev_dir_writeable()&lt;br /&gt;
{&lt;br /&gt;
        touch &amp;quot;$1&amp;quot;/.test.$$ 2&amp;gt;/dev/null &amp;amp;&amp;amp; rm &amp;quot;$1&amp;quot;/.test.$$&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# store persistent-rules that got created while booting&lt;br /&gt;
# when / was still read-only&lt;br /&gt;
eudev_store_persistent_rules()&lt;br /&gt;
{&lt;br /&gt;
	# create /etc/udev/rules.d if it does not exist and /etc/udev is writable&lt;br /&gt;
	[ -d /etc/udev/rules.d ] || \&lt;br /&gt;
		eudev_dir_writeable /etc/udev &amp;amp;&amp;amp; \&lt;br /&gt;
		mkdir -p /etc/udev/rules.d&lt;br /&gt;
&lt;br /&gt;
	# only continue if rules-directory is writable&lt;br /&gt;
	eudev_dir_writeable /etc/udev/rules.d || return 0&lt;br /&gt;
&lt;br /&gt;
	local file dest&lt;br /&gt;
	for file in /run/udev/tmp-rules--*; do&lt;br /&gt;
		dest=${file##*tmp-rules--}&lt;br /&gt;
		[ &amp;quot;$dest&amp;quot; = &#039;*&#039; ] &amp;amp;&amp;amp; break&lt;br /&gt;
		type=${dest##70-persistent-}&lt;br /&gt;
		type=${type%%.rules}&lt;br /&gt;
		cat &amp;quot;$file&amp;quot; &amp;gt;&amp;gt; /etc/udev/rules.d/&amp;quot;$dest&amp;quot; &amp;amp;&amp;amp; rm -f &amp;quot;$file&amp;quot;&lt;br /&gt;
	done&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
eudev_start()&lt;br /&gt;
{&lt;br /&gt;
    eudev_start_pre&lt;br /&gt;
    udevd -d&lt;br /&gt;
	# store persistent-rules that got created while booting&lt;br /&gt;
	# when / was still read-only&lt;br /&gt;
	eudev_store_persistent_rules&lt;br /&gt;
	# Populating /dev with existing devices through uevents&lt;br /&gt;
	udevadm trigger --type=subsystems --action=add&lt;br /&gt;
	udevadm trigger --type=devices --action=add&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
After the following section, which populates {{ic|/dev}}&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
mount -t devtmpfs -o exec,nosuid,mode=0755,size=2M devtmpfs /dev 2&amp;gt;/dev/null \&lt;br /&gt;
        || mount -t tmpfs -o exec,nosuid,mode=0755,size=2M tmpfs /dev&lt;br /&gt;
                          &lt;br /&gt;
# pty device nodes (later system will need it)&lt;br /&gt;
[ -c /dev/ptmx ] || mknod -m 666 /dev/ptmx c 5 2&lt;br /&gt;
[ -d /dev/pts ] || mkdir -m 755 /dev/pts&lt;br /&gt;
mount -t devpts -o gid=5,mode=0620,noexec,nosuid devpts /dev/pts&lt;br /&gt;
                                           &lt;br /&gt;
# shared memory area (later system will need it)&lt;br /&gt;
[ -d /dev/shm ] || mkdir /dev/shm&lt;br /&gt;
mount -t tmpfs -o nodev,nosuid,noexec shm /dev/shm&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
add this paragraph.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
# persistent device names from eudev          &lt;br /&gt;
if [ -f /sbin/udevadm ]; then&lt;br /&gt;
    eudev_start&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Disk space usage =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18393</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18393"/>
		<updated>2020-12-31T02:40:44Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Persistent device names not available in initramfs */ update&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key can not be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, therefore the boot pool and the swap partition can not both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via a boot-time service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from a swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password there. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB drive and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; {{ic|/dev/sda}}-style names for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least three partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, therefore it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It can not be used for hibernation, since the encryption key is discarded on power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed half-way through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in the WARNING section below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fallback is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied upon every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a common user account. The root account is accessed with sudo. A package providing persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (from inside the chroot) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild the initramfs with {{ic|mkinitfs}} so the new feature takes effect.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
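The umount pipeline above lists the mounts under $MOUNTPOINT, reverses them with {{ic|tac}} so nested mounts are unmounted before their parents, and extracts the mount path with awk. A minimal sketch of that ordering logic on mock {{ic|mount}} output (the paths are hypothetical):&lt;br /&gt;

```shell
# Mock `mount` output for a target at /tmp/target; on the real system
# this text comes from running `mount` itself.
MOUNTPOINT=/tmp/target
printf '%s\n' \
  "dev on $MOUNTPOINT/dev type devtmpfs (rw)" \
  "proc on $MOUNTPOINT/proc type proc (rw)" \
  "shm on $MOUNTPOINT/dev/shm type tmpfs (rw)" |
  tac | grep $MOUNTPOINT | awk '{print $3}'
# later mounts come out first, so children are unmounted before parents
```

On the real system the resulting paths are piped to xargs umount, as shown above.&lt;br /&gt;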
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs has a bug in the ZFS passphrase prompt at boot: the root dataset will fail to mount with {{ic|sh: `active`, unknown operand}} and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/usr/share/mkinitfs/initramfs-init}}:&lt;br /&gt;
 # Ask for encryption password&lt;br /&gt;
 if &amp;lt;u&amp;gt;[ $(zpool list -H -o feature@encryption $_root_pool) = &amp;quot;active&amp;quot; ]&amp;lt;/u&amp;gt;; then&lt;br /&gt;
 # replacing the underlined test with &amp;lt;u&amp;gt;true&amp;lt;/u&amp;gt; will fix it&lt;br /&gt;
 # if true; then&lt;br /&gt;
     local _encryption_root=$(zfs get -H -o value encryptionroot $_root_vol)&lt;br /&gt;
     if [ &amp;quot;$_encryption_root&amp;quot; != &amp;quot;-&amp;quot; ]; then&lt;br /&gt;
         eval zfs load-key $_encryption_root&lt;br /&gt;
     fi               &lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
Until this is fixed, manually load the key and mount the root dataset from the emergency shell:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
&lt;br /&gt;
= Enable persistent device names in initramfs =&lt;br /&gt;
Special modifications need to be made to populate {{ic|/dev/disk/by-*}} in initramfs.&lt;br /&gt;
&lt;br /&gt;
Ensure {{ic|eudev}} is installed&lt;br /&gt;
 apk add eudev&lt;br /&gt;
Create {{ic|/etc/mkinitfs/features.d/eudev.files}} to add {{ic|eudev}} to initramfs.&lt;br /&gt;
 tee /etc/mkinitfs/features.d/eudev.files &amp;lt;&amp;lt; EOF&lt;br /&gt;
 /bin/udevadm&lt;br /&gt;
 /sbin/udevadm&lt;br /&gt;
 /sbin/udevd&lt;br /&gt;
 /etc/udev/*&lt;br /&gt;
 /lib/udev/*&lt;br /&gt;
 /usr/lib/libudev*&lt;br /&gt;
 EOF&lt;br /&gt;
Edit {{ic|/usr/share/mkinitfs/initramfs-init}}. After the following section, which populates {{ic|/dev}}&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
mount -t devtmpfs -o exec,nosuid,mode=0755,size=2M devtmpfs /dev 2&amp;gt;/dev/null \&lt;br /&gt;
        || mount -t tmpfs -o exec,nosuid,mode=0755,size=2M tmpfs /dev&lt;br /&gt;
                          &lt;br /&gt;
# pty device nodes (later system will need it)&lt;br /&gt;
[ -c /dev/ptmx ] || mknod -m 666 /dev/ptmx c 5 2&lt;br /&gt;
[ -d /dev/pts ] || mkdir -m 755 /dev/pts&lt;br /&gt;
mount -t devpts -o gid=5,mode=0620,noexec,nosuid devpts /dev/pts&lt;br /&gt;
                                           &lt;br /&gt;
# shared memory area (later system will need it)&lt;br /&gt;
[ -d /dev/shm ] || mkdir /dev/shm&lt;br /&gt;
mount -t tmpfs -o nodev,nosuid,noexec shm /dev/shm&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
add this paragraph.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
# persistent device names from eudev          &lt;br /&gt;
if [ -f /sbin/udevadm ]; then                      &lt;br /&gt;
        udevadm trigger --type=subsystems --action=add&lt;br /&gt;
        udevadm trigger --type=devices --action=add&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages in the live environment, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18392</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18392"/>
		<updated>2020-12-31T01:01:17Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Persistent device names not available in initramfs */ grammar&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z configurations are supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlocks ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool from a service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for the disk mode in the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
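As a sketch, the same filter can be reproduced with {{ic|head}} instead of {{ic|dd}}: keep only lowercase letters and digits and truncate to six characters (the suffix is random on every run):&lt;br /&gt;

```shell
# Draw random bytes (1000 so at least 6 characters survive the filter),
# keep only [a-z0-9], take the first 6 characters.
suffix=$(head -c 1000 /dev/urandom | tr -dc 'a-z0-9' | cut -c-6)
echo "rpool_$suffix"   # e.g. rpool_k3x90q (random each run)
```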
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if swap is needed. This guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless ZPOOL_VDEV_NAME_PATH=YES is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Apply the fixes described in the WARNING below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} when {{ic|stat}} comes from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
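The difference is easy to check directly: with coreutils installed, {{ic|stat -f -c %T /}} prints the filesystem type name ({{ic|zfs}} on the installed system; the value depends on wherever it runs):&lt;br /&gt;

```shell
# coreutils stat: -f queries the filesystem (not the file),
# %T prints the filesystem type name.
# The same invocation with busybox stat prints UNKNOWN instead.
fstype=$(stat -f -c %T /)
echo "$fstype"
```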
&lt;br /&gt;
2. GRUB will produce an empty pool name if it does not support the root pool&#039;s features.&lt;br /&gt;
&lt;br /&gt;
GRUB lags behind OpenZFS development, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the rpool detection with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This must be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
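The substituted command pulls the pool name out of the {{ic|zdb -l}} label dump with awk, splitting lines on single quotes. A sketch on mock label output (the label text is abbreviated; a real label has many more fields):&lt;br /&gt;

```shell
# Mock three lines of `zdb -l` output; only the "name" line matters.
# awk splits on single quotes, so $2 is the quoted pool name.
printf '%s\n' \
  "    version: 5000" \
  "    name: 'rpool_abc123'" \
  "    state: 0" |
  awk -F \' '/ name/ { print $2 }'
# prints rpool_abc123
```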
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account. The root account is accessed with sudo. A package for persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -m -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
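The umount pipeline above lists the mounts under $MOUNTPOINT, reverses them with {{ic|tac}} so nested mounts are unmounted before their parents, and extracts the mount path with awk. A minimal sketch of that ordering logic on mock {{ic|mount}} output (the paths are hypothetical):&lt;br /&gt;

```shell
# Mock `mount` output for a target at /tmp/target; on the real system
# this text comes from running `mount` itself.
MOUNTPOINT=/tmp/target
printf '%s\n' \
  "dev on $MOUNTPOINT/dev type devtmpfs (rw)" \
  "proc on $MOUNTPOINT/proc type proc (rw)" \
  "shm on $MOUNTPOINT/dev/shm type tmpfs (rw)" |
  tac | grep $MOUNTPOINT | awk '{print $3}'
# later mounts come out first, so children are unmounted before parents
```

On the real system the resulting paths are piped to xargs umount, as shown above.&lt;br /&gt;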
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs has a bug in the ZFS passphrase prompt at boot: the root dataset will fail to mount with {{ic|sh: `active`, unknown operand}} and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/usr/share/mkinitfs/initramfs-init}}:&lt;br /&gt;
 # Ask for encryption password&lt;br /&gt;
 if &amp;lt;u&amp;gt;[ $(zpool list -H -o feature@encryption $_root_pool) = &amp;quot;active&amp;quot; ]&amp;lt;/u&amp;gt;; then&lt;br /&gt;
 # replacing the underlined test with &amp;lt;u&amp;gt;true&amp;lt;/u&amp;gt; will fix it&lt;br /&gt;
 # if true; then&lt;br /&gt;
     local _encryption_root=$(zfs get -H -o value encryptionroot $_root_vol)&lt;br /&gt;
     if [ &amp;quot;$_encryption_root&amp;quot; != &amp;quot;-&amp;quot; ]; then&lt;br /&gt;
         eval zfs load-key $_encryption_root&lt;br /&gt;
     fi               &lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
Until this is fixed, manually load the key and mount the root dataset from the emergency shell:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
&lt;br /&gt;
= Persistent device names not available in initramfs =&lt;br /&gt;
Currently, unstable block device names such as {{ic|/dev/sda}} are still used in the initramfs; {{ic|/dev/disk/by-*}} is not populated until the system is up.&lt;br /&gt;
&lt;br /&gt;
A way to add eudev to the initramfs still needs to be found.&lt;br /&gt;
&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages in the live environment, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18391</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18391"/>
		<updated>2020-12-31T01:00:44Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Reboot */ missing eudev to initramfs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z configurations are supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlocks ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool from a service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for the disk mode in the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
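As a sketch, the same filter can be reproduced with {{ic|head}} instead of {{ic|dd}}: keep only lowercase letters and digits and truncate to six characters (the suffix is random on every run):&lt;br /&gt;

```shell
# Draw random bytes (1000 so at least 6 characters survive the filter),
# keep only [a-z0-9], take the first 6 characters.
suffix=$(head -c 1000 /dev/urandom | tr -dc 'a-z0-9' | cut -c-6)
echo "rpool_$suffix"   # e.g. rpool_k3x90q (random each run)
```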
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if swap is needed. This guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
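For example, a three-disk RAID-Z root pool would look like this (disk paths are placeholders, and `...` stands for the pool options shown earlier):

```
zpool create \
   ... \
   rpool_$poolUUID raidz \
   /dev/disk/by-id/target_disk1-part3 \
   /dev/disk/by-id/target_disk2-part3 \
   /dev/disk/by-id/target_disk3-part3
```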
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes in [[#WARNING]].&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a common user account. The root account is accessed with sudo. A package providing persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
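The rebuild step can be sketched as follows, printed as a dry run (remove the `echo` to actually rebuild). It assumes exactly one installed kernel directory under {{ic|/lib/modules}}; adjust the version argument if several are installed:

```shell
# mkinitfs takes the kernel version as its argument; derive it from
# /lib/modules, assuming a single installed kernel.
KERNEL_VERSION=$(ls /lib/modules 2>/dev/null)
echo mkinitfs $KERNEL_VERSION
```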
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs has a bug in entering the ZFS password at boot. When booting the system, the root dataset will fail to mount with {{ic|sh: `active`, unknown operand}} and drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/usr/share/mkinitfs/initramfs-init}}:&lt;br /&gt;
 # Ask for encryption password&lt;br /&gt;
 if &amp;lt;u&amp;gt;[ $(zpool list -H -o feature@encryption $_root_pool) = &amp;quot;active&amp;quot; ]&amp;lt;/u&amp;gt;; then&lt;br /&gt;
 # replacing the underlined test with &amp;lt;u&amp;gt;true&amp;lt;/u&amp;gt; will fix it&lt;br /&gt;
 # if true; then&lt;br /&gt;
     local _encryption_root=$(zfs get -H -o value encryptionroot $_root_vol)&lt;br /&gt;
     if [ &amp;quot;$_encryption_root&amp;quot; != &amp;quot;-&amp;quot; ]; then&lt;br /&gt;
         eval zfs load-key $_encryption_root&lt;br /&gt;
     fi               &lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
Until this is fixed, we need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
= Persistent device names not available in initramfs =&lt;br /&gt;
Currently, unstable block device names such as {{ic|/dev/sda}} are still used in the initramfs; {{ic|/dev/disk/by-*}} is not populated until the system has booted.&lt;br /&gt;
&lt;br /&gt;
We are still looking for a way to add eudev to the initramfs.&lt;br /&gt;
&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages, run the following commands.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for the {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18390</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18390"/>
		<updated>2020-12-31T00:57:46Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Enable ZFS services */ handled by fstab, not used&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key for the swap partition can not be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition can not both be LUKS-encrypted. A possible workaround is to import and mount the boot pool after booting the system via a systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select {{ic|1=disk=none}} at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure that the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk, UEFI installation, we need to create at least three partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
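The per-disk commands above can be wrapped in a loop over all target disks. A minimal sketch with two hypothetical by-id paths, printed as a dry run (drop the `echo` prefix to actually partition):

```shell
# Dry run: print the sgdisk invocations for every target disk.
# Disk paths below are placeholders -- substitute your own by-id paths.
DISKS="/dev/disk/by-id/target_disk1 /dev/disk/by-id/target_disk2"
for DISK in $DISKS; do
    echo sgdisk --zap-all $DISK
    echo sgdisk -n1:0:+512M -t1:EF00 $DISK
    echo sgdisk -n2:0:+2G $DISK        # boot pool
    echo sgdisk -n3:0:0 $DISK          # root pool
done
```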
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide will cover the creation of one. (It can not be used for hibernation, since the encryption key is discarded at power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable the features that GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
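For example, a three-disk RAID-Z root pool would look like this (disk paths are placeholders, and `...` stands for the pool options shown earlier):

```
zpool create \
   ... \
   rpool_$poolUUID raidz \
   /dev/disk/by-id/target_disk1-part3 \
   /dev/disk/by-id/target_disk2-part3 \
   /dev/disk/by-id/target_disk3-part3
```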
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes in [[#WARNING]].&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a common user account. The root account is accessed with sudo. A package providing persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
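The rebuild step can be sketched as follows, printed as a dry run (remove the `echo` to actually rebuild). It assumes exactly one installed kernel directory under {{ic|/lib/modules}}; adjust the version argument if several are installed:

```shell
# mkinitfs takes the kernel version as its argument; derive it from
# /lib/modules, assuming a single installed kernel.
KERNEL_VERSION=$(ls /lib/modules 2>/dev/null)
echo mkinitfs $KERNEL_VERSION
```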
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs has a bug in entering the ZFS password at boot. When booting the system, the root dataset will fail to mount with {{ic|sh: `active`, unknown operand}} and drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/usr/share/mkinitfs/initramfs-init}}:&lt;br /&gt;
 # Ask for encryption password&lt;br /&gt;
 if &amp;lt;u&amp;gt;[ $(zpool list -H -o feature@encryption $_root_pool) = &amp;quot;active&amp;quot; ]&amp;lt;/u&amp;gt;; then&lt;br /&gt;
 # replacing the underlined test with &amp;lt;u&amp;gt;true&amp;lt;/u&amp;gt; will fix it&lt;br /&gt;
 # if true; then&lt;br /&gt;
     local _encryption_root=$(zfs get -H -o value encryptionroot $_root_vol)&lt;br /&gt;
     if [ &amp;quot;$_encryption_root&amp;quot; != &amp;quot;-&amp;quot; ]; then&lt;br /&gt;
         eval zfs load-key $_encryption_root&lt;br /&gt;
     fi               &lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
Until this is fixed, we need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages, run the following commands.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for the {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18389</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18389"/>
		<updated>2020-12-31T00:50:49Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Reboot */ fix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool ({{ic|/boot}}), everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via a boot-time service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply its encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select {{ic|1=disk=none}} when asked for the disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
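The suffix generation above can be sketched and sanity-checked in isolation. This is a minimal sketch, not part of the installation; it reads more random bytes than the command above (an arbitrary choice) so that six {{ic|a-z0-9}} characters are virtually guaranteed:&lt;br /&gt;

```shell
# Draw random bytes, keep only lowercase letters and digits,
# and truncate to a 6-character pool-name suffix.
suffix=$(head -c 1000 /dev/urandom | tr -dc 'a-z0-9' | cut -c-6)
echo "$suffix"
```

Appending this suffix gives pool names like {{ic|rpool_ab12cd}} that stay unique across machines.&lt;br /&gt;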
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can still use the full feature set of ZFS there.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes in the WARNING below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
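The fallback path can be reproduced in miniature. A hedged sketch, independent of GRUB, that mimics what grub-mkconfig does when both grub-probe calls have failed:&lt;br /&gt;

```shell
# Simulate grub-probe having failed: GRUB_FS starts as "unknown",
# so the script falls back to stat. With coreutils stat this prints
# the real filesystem type of /; busybox stat would print UNKNOWN.
GRUB_FS=unknown
if [ "x$GRUB_FS" = xunknown ]; then
    GRUB_FS="$(stat -f -c %T / || echo unknown)"
fi
echo "$GRUB_FS"
```

On the target system this must print {{ic|zfs}}, which is why coreutils is required.&lt;br /&gt;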
&lt;br /&gt;
2. grub-mkconfig will produce an empty root pool name if GRUB does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
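To see what the patched detection extracts, the awk expression can be run on a fabricated excerpt of {{ic|zdb -l}} label output (the pool name below is made up for illustration):&lt;br /&gt;

```shell
# The patch reads the pool name from `zdb -l` label output.
# Feed the awk expression a fabricated two-line excerpt and
# extract the quoted value of the "name" field.
printf "    name: 'rpool_ab12cd'\n    state: 0\n" |
    awk -F \' '/ name/ { print $2 }'
# prints: rpool_ab12cd
```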
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account; the root account is accessed with sudo. The package for persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (we are inside the chroot) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
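If you prefer a non-interactive edit, the change can be sketched as a sed substitution. The features list below is copied from this guide; check it against your actual file before editing in place:&lt;br /&gt;

```shell
# Insert cryptsetup before zfs in the features list.
# Against the real file this would be:
#   sed -i 's/ zfs"/ cryptsetup zfs"/' /etc/mkinitfs/mkinitfs.conf
echo 'features="ata base ide scsi usb virtio ext4 lvm zfs"' |
    sed 's/ zfs"/ cryptsetup zfs"/'
# prints: features="ata base ide scsi usb virtio ext4 lvm cryptsetup zfs"
```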
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs has a bug in the ZFS password prompt at boot. When booting the system, the root dataset will fail to mount with {{ic|sh: `active`, unknown operand}} and drop you into the emergency shell.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/usr/share/mkinitfs/initramfs-init}}:&lt;br /&gt;
 # Ask for encryption password&lt;br /&gt;
 if &amp;lt;u&amp;gt;[ $(zpool list -H -o feature@encryption $_root_pool) = &amp;quot;active&amp;quot; ]&amp;lt;/u&amp;gt;; then&lt;br /&gt;
 # replacing the underlined condition with &amp;lt;u&amp;gt;true&amp;lt;/u&amp;gt; will fix it&lt;br /&gt;
 # if true; then&lt;br /&gt;
     local _encryption_root=$(zfs get -H -o value encryptionroot $_root_vol)&lt;br /&gt;
     if [ &amp;quot;$_encryption_root&amp;quot; != &amp;quot;-&amp;quot; ]; then&lt;br /&gt;
         eval zfs load-key $_encryption_root&lt;br /&gt;
     fi               &lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
Until this is fixed, we need to manually load the key and mount the root dataset:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After booting the live environment and installing the ZFS packages, recover the system as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount the EFI partition at {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18388</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18388"/>
		<updated>2020-12-31T00:43:07Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Reboot */ fixes and bugs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool ({{ic|/boot}}), everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via a boot-time service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply its encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select {{ic|1=disk=none}} when asked for the disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can still use the full feature set of ZFS there.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes in the WARNING below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty root pool name if it does not support all features enabled on the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB lags behind OpenZFS development, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the rpool detection with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
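The {{ic|awk}} in this fix pulls the pool name out of the {{ic|zdb}} label output. Shown here on a sample (hypothetical) label fragment:&lt;br /&gt;

```shell
# zdb -l prints lines like "    name: 'rpool_abc123'"; split on single
# quotes and print the second field for lines containing " name".
printf "    name: 'rpool_abc123'\n    state: 0\n" \
  | awk -F "'" '/ name/ { print $2 }'
# → rpool_abc123
```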
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account; the root account is accessed with sudo. A package for persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
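The {{ic|tac}} in the unmount pipeline reverses the mount list so that child mounts are unmounted before their parents, e.g. (hypothetical mountpoints):&lt;br /&gt;

```shell
# mount(8) lists parents before children; tac reverses that order
# so umount sees the deepest mountpoints first.
printf '%s\n' /mnt/x /mnt/x/boot /mnt/x/boot/efi | tac
# → /mnt/x/boot/efi
#   /mnt/x/boot
#   /mnt/x
```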
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs has a bug when asking for the ZFS password at boot: the root dataset will fail to mount with {{ic|sh: `active`, unknown operand}} and drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
In {{ic|/usr/share/mkinitfs/initramfs-init}}:&lt;br /&gt;
 # Ask for encryption password&lt;br /&gt;
 if &amp;lt;u&amp;gt;[ $(zpool list -H -o feature@encryption $_root_pool) = &amp;quot;active&amp;quot; ]&amp;lt;/u&amp;gt;; then&lt;br /&gt;
     local _encryption_root=$(zfs get -H -o value encryptionroot $_root_vol)&lt;br /&gt;
     if [ &amp;quot;$_encryption_root&amp;quot; != &amp;quot;-&amp;quot; ]; then&lt;br /&gt;
         eval zfs load-key $_encryption_root&lt;br /&gt;
     fi               &lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
Until this is fixed, we need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages in the live environment, recover the system as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18387</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18387"/>
		<updated>2020-12-30T17:25:36Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Optional: Enable encrypted swap partition */ mkinitfs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734] This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of swap partition can not be stored in the unencrypted boot pool. Busybox initramfs only supports unlocking exactly one LUKS container at boot, therefore boot pool and swap partition can not be both LUKS encrypted. A possible workaround is to import and mount boot pool after booting the system via systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for the disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
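The resulting pool names then look like this (the suffix below is only an example of the random output):&lt;br /&gt;

```shell
# Draw 100 random bytes, keep only [a-z0-9], truncate to 6 characters.
poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null \
  | tr -dc 'a-z0-9' | cut -c-6)
echo "rpool_$poolUUID bpool_$poolUUID"   # e.g. rpool_k3x9q2 bpool_k3x9q2
```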
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk, UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] on ZFS is also problematic, therefore it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features that GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable being set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed half-way through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Apply fixes in WARNING.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty root pool name if it does not support all features enabled on the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB lags behind OpenZFS development, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the rpool detection with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account; the root account is accessed with sudo. A package for persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; feature to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs with {{ic|mkinitfs}}.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting the system, the root dataset will simply fail to mount and drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs $(cat /proc/cmdline | sed &#039;s|.*ZFS=||&#039; | awk &#039;{ print $1 }&#039;) /sysroot&lt;br /&gt;
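The {{ic|sed}}/{{ic|awk}} pair extracts the root dataset from the {{ic|ZFS=}} kernel parameter. Shown here on a sample (hypothetical) command line:&lt;br /&gt;

```shell
# Strip everything up to "ZFS=", then keep the first whitespace-separated word.
echo 'ro quiet ZFS=rpool_abc123/ROOT/default rootfstype=zfs' \
  | sed 's|.*ZFS=||' | awk '{ print $1 }'
# → rpool_abc123/ROOT/default
```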
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;br /&gt;
&lt;br /&gt;
Note that initramfs will attempt to import all visible pools on boot, even exported ones. Be sure only pools you want are connected. See {{ic|/etc/init.d/zfs-import}}.&lt;br /&gt;
&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages in the live environment, recover the system as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18386</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18386"/>
		<updated>2020-12-30T17:17:39Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Reboot */ note for initramfs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734] This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of swap partition can not be stored in the unencrypted boot pool. Busybox initramfs only supports unlocking exactly one LUKS container at boot, therefore boot pool and swap partition can not be both LUKS encrypted. A possible workaround is to import and mount boot pool after booting the system via systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for the disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk, UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] on ZFS is also problematic, therefore it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. When no {{ic|-d}} or {{ic|feature@}} options are supplied, all available features are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
{{ic|setup-disk}} refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
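To see what this substitution does, here is the same {{ic|sed}} expression applied to a sample line; the filesystem list below is a made-up placeholder, the real one lives in {{ic|/sbin/setup-disk}}:&lt;br /&gt;

```shell
# Illustration only: the substitution from above applied to a sample
# 'supported=' line; the actual list in /sbin/setup-disk may differ.
line='supported="ext4 btrfs xfs"'
echo "$line" | sed 's|supported="ext|supported="zfs ext|g'
# prints: supported="zfs ext4 btrfs xfs"
```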
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in the WARNING section below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty pool name if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB lags behind the development of OpenZFS; see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
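To illustrate the replacement command, here is the injected {{ic|awk}} expression run against fabricated {{ic|zdb -l}} label output; the label text below is made up for demonstration:&lt;br /&gt;

```shell
# Illustration only: extract the pool name field from sample zdb -l output.
# The single-quote field separator splits the quoted name out of the label.
printf '%s\n' "    version: 5000" "    name: 'rpool_abc123'" "    state: 0" | awk -F "'" '/ name/ { print $2 }'
# prints: rpool_abc123
```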
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying the fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a normal user account; the root account is accessed with sudo. A package providing persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in the community repository. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit {{ic|/etc/mkinitfs/mkinitfs.conf}} and append {{ic|cryptsetup}} to the {{ic|features}} parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with the actual disk path.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild the initramfs.&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to import on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
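The unmount pipeline above works by listing non-ZFS mounts in reverse order, so children are unmounted before their parents, while ZFS datasets are left to {{ic|zpool export}}. A dry-run sketch on fabricated {{ic|mount}} output (all paths below are made up):&lt;br /&gt;

```shell
# Illustration only: the same filter chain on fabricated mount output.
# grep -v zfs drops ZFS datasets; tac puts deeper mountpoints first.
MOUNTPOINT=/tmp/tmp.ab12
printf '%s\n' \
  "proc on /proc type proc (rw)" \
  "rpool_ab12 on /tmp/tmp.ab12 type zfs (rw)" \
  "bpool on /tmp/tmp.ab12/boot type ext4 (rw)" \
  "efi on /tmp/tmp.ab12/boot/efi type vfat (rw)" \
  | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}'
# prints /tmp/tmp.ab12/boot/efi, then /tmp/tmp.ab12/boot
```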
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting, the root dataset will simply fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs $(cat /proc/cmdline | sed &#039;s|.*ZFS=||&#039; | awk &#039;{ print $1 }&#039;) /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;br /&gt;
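The {{ic|sed}}/{{ic|awk}} pipeline above extracts the dataset name from the {{ic|1=root=ZFS=...}} kernel parameter. A sketch against a sample command line (the dataset and parameters below are made up):&lt;br /&gt;

```shell
# Illustration only: parse the root dataset out of a sample kernel
# command line instead of the real /proc/cmdline.
cmdline='BOOT_IMAGE=/vmlinuz-lts root=ZFS=rpool_abc123/ROOT/default rootfstype=zfs modules=sd-mod,usb-storage,zfs'
echo "$cmdline" | sed 's|.*ZFS=||' | awk '{ print $1 }'
# prints: rpool_abc123/ROOT/default
```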
&lt;br /&gt;
Note that the initramfs will attempt to import all visible pools on boot, even exported ones. Make sure only the pools you want are connected. See {{ic|/etc/init.d/zfs-import}}.&lt;br /&gt;
&lt;br /&gt;
= Disk space stat =&lt;br /&gt;
Without optional swap or cryptsetup:&lt;br /&gt;
*bpool used 25.2M&lt;br /&gt;
*rpool used 491M&lt;br /&gt;
*efi used 416K&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for the {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount the {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18384</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18384"/>
		<updated>2020-12-30T16:56:18Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Reboot */ use command&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting, via an init service (Alpine uses OpenRC rather than systemd).&lt;br /&gt;
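One possible shape for this workaround on Alpine (which uses OpenRC) is a {{ic|local.d}} start script; the pool name suffix below is hypothetical and must match your system, and the {{ic|local}} service must be enabled:&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical /etc/local.d/bpool.start -- runs at boot when the "local"
# service is enabled (rc-update add local default).
# Import the boot pool without mounting, then mount its /boot dataset.
zpool import -N bpool_abc123
zfs mount bpool_abc123/BOOT/default
```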
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password here. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so that ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded on power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. By default, {{ic|zpool create}} enables all available features; the {{ic|-d}} switch disables them all, so that only the explicitly listed {{ic|feature@}} properties are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those that GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are left enabled (no {{ic|-d}} switch):&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless ZPOOL_VDEV_NAME_PATH=YES is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default; we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable being set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in the WARNING section below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a normal user account; the root account is accessed with sudo. The package for persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in the community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs.&lt;br /&gt;
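A sketch of the rebuild step, assuming a single installed kernel whose release name can be read from {{ic|/lib/modules}} (run inside the chroot):&lt;br /&gt;

```shell
# Rebuild the initramfs so the newly added cryptsetup feature is included.
# Assumes exactly one kernel release directory exists under /lib/modules;
# substitute the actual release name if it differs.
mkinitfs $(ls /lib/modules)
```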
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -I{} umount -lf {}&lt;br /&gt;
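The pipeline above can be illustrated with fabricated {{ic|mount}} output: ZFS mounts are filtered out (they are handled by {{ic|zpool export}}), {{ic|tac}} reverses the list so child mounts come before their parents, and {{ic|awk}} prints the mount path:&lt;br /&gt;

```shell
# Demonstration with fabricated `mount` output lines (not a real system state).
MOUNTPOINT=/tmp/target
printf '%s\n' \
  'proc on /proc type proc (rw)' \
  '/dev/sda3 on /tmp/target type ext4 (rw)' \
  'rpool/ROOT on /tmp/target/home type zfs (rw)' \
  '/dev/sda1 on /tmp/target/boot/efi type vfat (rw)' \
  | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}'
# prints /tmp/target/boot/efi first, then /tmp/target
```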
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS encryption password at boot. The root dataset will fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs $(cat /proc/cmdline | sed &#039;s|.*ZFS=||&#039; | awk &#039;{ print $1 }&#039;) /sysroot&lt;br /&gt;
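The dataset extraction in the mount command above can be demonstrated with a hypothetical kernel command line (the dataset name here is a fabricated example):&lt;br /&gt;

```shell
# Extract the root dataset from the ZFS= kernel parameter, exactly as the
# emergency-shell mount command does (fabricated example cmdline).
cmdline='BOOT_IMAGE=vmlinuz-lts modules=zfs ZFS=rpool_abc123/ROOT/default rootfstype=zfs'
echo "$cmdline" | sed 's|.*ZFS=||' | awk '{ print $1 }'
# prints rpool_abc123/ROOT/default
```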
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -I{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18383</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18383"/>
		<updated>2020-12-30T16:49:51Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Enable encrypted swap partition */ optional&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting, via an init service (Alpine uses OpenRC rather than systemd).&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format boot pool partition as LUKS-1 container and supply the encryption password here. Use keyfile for root pool and embed the keyfile in initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to setup the live environment, select disk=none at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded on power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For root pool all available features are enabled by default&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default; we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Apply fixes in WARNING.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will stuff an empty result if it does not support root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace detection of rpool with the method given in patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a normal user account; the root account is accessed with sudo. The package for persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Optional: Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs.&lt;br /&gt;
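The rebuild command is not shown above; a minimal sketch, assuming {{ic|mkinitfs}} takes the kernel version (a directory name under {{ic|/lib/modules}}) as its argument:&lt;br /&gt;

```shell
# Print the initramfs rebuild command for each installed kernel;
# drop the `echo` to actually rebuild. Looping over /lib/modules
# covers setups with more than one kernel flavor installed.
for k in /lib/modules/*; do
    [ -d "$k" ] || continue           # no kernels installed: do nothing
    echo mkinitfs "$(basename "$k")"
done
```
&lt;br /&gt;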
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS encryption password at boot. When booting the system, the root dataset will fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18381</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18381"/>
		<updated>2020-12-30T16:49:21Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Add normal user account */ use ash&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool from an init service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers, so it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply its encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure ZFS can find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers creating a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} option is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Apply fixes in WARNING.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty root pool name if it does not support the features enabled on the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind OpenZFS development; see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying the fixes, run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account. The root account is accessed with sudo. A package providing persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/sh -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild initramfs.&lt;br /&gt;
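The rebuild command is not shown above; a minimal sketch, assuming {{ic|mkinitfs}} takes the kernel version (a directory name under {{ic|/lib/modules}}) as its argument:&lt;br /&gt;

```shell
# Print the initramfs rebuild command for each installed kernel;
# drop the `echo` to actually rebuild. Looping over /lib/modules
# covers setups with more than one kernel flavor installed.
for k in /lib/modules/*; do
    [ -d "$k" ] || continue           # no kernels installed: do nothing
    echo mkinitfs "$(basename "$k")"
done
```
&lt;br /&gt;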
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS encryption password at boot. When booting the system, the root dataset will fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18379</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18379"/>
		<updated>2020-12-30T16:48:23Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Install packages */ enable community&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the swap partition&#039;s key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool from an init service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers, so it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply its encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure ZFS can find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers creating a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} option is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
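The loops above run one {{ic|zfs create}} per listed path. A dry-run sketch of the first loop that only echoes the commands it would run (the suffix abc123 is a hypothetical example):&lt;br /&gt;

```shell
# Echo the zfs create commands the first loop would run, without
# touching any pool (abc123 stands in for the generated suffix).
d='usr var var/lib'
for i in $d; do
  echo "zfs create -o canmount=off rpool_abc123/ROOT/default/$i"
done
```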
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above; we will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in the WARNING section below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty pool name if it does not support all features of the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind OpenZFS development, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
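The patched line reads the pool name out of the on-disk label. A minimal sketch of what the substituted zdb/awk pipeline does, fed with hypothetical {{ic|zdb -l}} label output instead of a real disk:&lt;br /&gt;

```shell
# Hypothetical sample of `zdb -l` label output; the awk filter picks the
# quoted pool name, as the patched 10_linux line does with the real label.
printf "    version: 5000\n    name: 'rpool_abc123'\n    state: 0\n" |
  awk -F "'" '/ name/ { print $2 }'
```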
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account. The root account is accessed with sudo. A package providing persistent block device names must also be installed.&lt;br /&gt;
&lt;br /&gt;
{{ic|shadow}} is available in community repo. Enable it first:&lt;br /&gt;
 vi /etc/apk/repositories&lt;br /&gt;
 # uncomment community line&lt;br /&gt;
Install&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
= Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (we are inside the chroot, so there is no /mnt prefix) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild the initramfs.&lt;br /&gt;
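Alpine rebuilds the initramfs with {{ic|mkinitfs}}; inside the chroot it needs the installed kernel version as an argument, which can be taken from /lib/modules. A sketch, simulated with a temporary directory and a hypothetical kernel version so nothing is actually rebuilt:&lt;br /&gt;

```shell
# Simulated /lib/modules with one hypothetical kernel version; the real
# command inside the chroot would be: mkinitfs $(ls /lib/modules)
moddir=$(mktemp -d)
mkdir "$moddir/5.10.61-0-lts"
kver=$(ls "$moddir")
echo "mkinitfs $kver"
```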
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
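The umount pipeline above works because {{ic|tac}} reverses the mount list, so the most recently mounted (deepest) filesystems are unmounted first. A sketch with simulated {{ic|mount}} output and hypothetical paths:&lt;br /&gt;

```shell
# Three fake mount lines in mount order; after tac the deepest mount
# (/boot/efi) comes out first, which is the safe unmount order.
MOUNTPOINT=/tmp/target
printf "dev on /tmp/target/dev type devtmpfs (rw)\nproc on /tmp/target/proc type proc (rw)\nefi on /tmp/target/boot/efi type vfat (rw)\n" |
  tac | grep $MOUNTPOINT | awk '{print $3}'
```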
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting the system, the root dataset will simply fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount the EFI partition at {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18378</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18378"/>
		<updated>2020-12-30T16:43:37Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Enable encrypted swap partition */ rebuild&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734] This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of swap partition can not be stored in the unencrypted boot pool. Busybox initramfs only supports unlocking exactly one LUKS container at boot, therefore boot pool and swap partition can not be both LUKS encrypted. A possible workaround is to import and mount boot pool after booting the system via systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password there. Use a keyfile for the root pool and embed it in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, it&#039;s shipped with ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
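The generated suffix is six random lowercase alphanumeric characters, so the pool names end up looking like rpool_abc123. A sketch of the same idea (using head -c instead of dd, an equivalent substitution):&lt;br /&gt;

```shell
# Draw random bytes, keep only [a-z0-9], truncate to 6 characters.
suffix=$(head -c 1000 /dev/urandom | tr -dc 'a-z0-9' | cut -c-6)
echo "rpool_$suffix"
```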
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers creating one (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
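The loops above run one {{ic|zfs create}} per listed path. A dry-run sketch of the first loop that only echoes the commands it would run (the suffix abc123 is a hypothetical example):&lt;br /&gt;

```shell
# Echo the zfs create commands the first loop would run, without
# touching any pool (abc123 stands in for the generated suffix).
d='usr var var/lib'
for i in $d; do
  echo "zfs create -o canmount=off rpool_abc123/ROOT/default/$i"
done
```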
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above; we will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in the WARNING section below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty pool name if it does not support all features of the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind OpenZFS development, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
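The patched line reads the pool name out of the on-disk label. A minimal sketch of what the substituted zdb/awk pipeline does, fed with hypothetical {{ic|zdb -l}} label output instead of a real disk:&lt;br /&gt;

```shell
# Hypothetical sample of `zdb -l` label output; the awk filter picks the
# quoted pool name, as the patched 10_linux line does with the real label.
printf "    version: 5000\n    name: 'rpool_abc123'\n    state: 0\n" |
  awk -F "'" '/ name/ { print $2 }'
```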
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account. The root account is accessed with sudo. A package providing persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
= Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (we are inside the chroot, so there is no /mnt prefix) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
Rebuild the initramfs.&lt;br /&gt;
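Alpine rebuilds the initramfs with {{ic|mkinitfs}}; inside the chroot it needs the installed kernel version as an argument, which can be taken from /lib/modules. A sketch, simulated with a temporary directory and a hypothetical kernel version so nothing is actually rebuilt:&lt;br /&gt;

```shell
# Simulated /lib/modules with one hypothetical kernel version; the real
# command inside the chroot would be: mkinitfs $(ls /lib/modules)
moddir=$(mktemp -d)
mkdir "$moddir/5.10.61-0-lts"
kver=$(ls "$moddir")
echo "mkinitfs $kver"
```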
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
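The umount pipeline above works because {{ic|tac}} reverses the mount list, so the most recently mounted (deepest) filesystems are unmounted first. A sketch with simulated {{ic|mount}} output and hypothetical paths:&lt;br /&gt;

```shell
# Three fake mount lines in mount order; after tac the deepest mount
# (/boot/efi) comes out first, which is the safe unmount order.
MOUNTPOINT=/tmp/target
printf "dev on /tmp/target/dev type devtmpfs (rw)\nproc on /tmp/target/proc type proc (rw)\nefi on /tmp/target/boot/efi type vfat (rw)\n" |
  tac | grep $MOUNTPOINT | awk '{print $3}'
```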
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting the system, the root dataset will simply fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount the EFI partition at {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18377</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18377"/>
		<updated>2020-12-30T16:41:16Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Format and mount EFI partition */ fs type&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734] This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of swap partition can not be stored in the unencrypted boot pool. Busybox initramfs only supports unlocking exactly one LUKS container at boot, therefore boot pool and swap partition can not be both LUKS encrypted. A possible workaround is to import and mount boot pool after booting the system via systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password there. Use a keyfile for the root pool and embed it in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure ZFS finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing the pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
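As a quick sanity check (the pattern match below is just an illustration), verify that the suffix came out as six lowercase alphanumeric characters; sampling only 100 random bytes can, very rarely, yield fewer:

```shell
# Re-generate the suffix and check its shape; re-run if it came out short.
poolUUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
case "$poolUUID" in
    [a-z0-9][a-z0-9][a-z0-9][a-z0-9][a-z0-9][a-z0-9])
        echo "suffix ok: $poolUUID" ;;
    *)
        echo "suffix too short: $poolUUID (re-run)" ;;
esac
```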
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can still utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
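The repetition can be scripted. The following is a dry-run sketch: the by-id paths are hypothetical, and each command is prefixed with echo so nothing is touched; drop the leading echo to actually partition (destructive).

```shell
# Print the sgdisk commands that would be run for each target disk.
DISKS='/dev/disk/by-id/target_disk1 /dev/disk/by-id/target_disk2'  # hypothetical
for d in $DISKS; do
    echo sgdisk --zap-all "$d"
    echo sgdisk -n1:0:+512M -t1:EF00 "$d"
    echo sgdisk -n2:0:+2G "$d"      # boot pool
    echo sgdisk -n3:0:0 "$d"        # root pool
done
```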
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in the WARNING section below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
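A quick check of the fallback (assuming the coreutils stat now shadows the Busybox applet in PATH); on the installed system this should print zfs:

```shell
# `stat -f -c %T /` prints the filesystem type name of /. grub-mkconfig's
# final fallback relies on it: coreutils prints the real type (e.g. zfs),
# while the Busybox applet prints UNKNOWN.
fstype=$(stat -f -c %T /)
echo "detected filesystem type: $fstype"
```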
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied upon every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a common user account; the root account is accessed with sudo. The package providing persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
= Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (we are still inside the chroot) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|crypttab}} and {{ic|fstab}}. Replace {{ic|$DISK}} with the actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to import on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS encryption password at boot. When booting the system, the root dataset will fail to mount and the system will drop into the emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After booting the live environment and installing the ZFS packages as described above, recover the system as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18376</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18376"/>
		<updated>2020-12-30T16:39:24Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Variables */ 8 char min&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734] This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations that are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Supports single disk &amp;amp; multi-disk (stripe, mirror, RAID-Z) installation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/, it&#039;s shipped with ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to setup the live environment, select disk=none at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password, 8 characters min&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can still utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For root pool all available features are enabled by default&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Apply fixes in WARNING.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied upon every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a common user account; the root account is accessed with sudo. The package providing persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
= Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (we are still inside the chroot) and append the &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; module to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS encryption password at boot. When booting the system, the root dataset will fail to mount and the system will drop into the emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
ArchZFS project solved this with a sh script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After booting the live environment and installing the ZFS packages as described above, recover the system as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} skips mounting the datasets; {{ic|-R}} sets an alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for the {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18375</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18375"/>
		<updated>2020-12-30T16:16:22Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Add normal user account */ swap instructions&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool from an init service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password when prompted. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resuming from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure that ZFS finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
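The suffix pipeline above can be previewed in any POSIX shell. This sketch uses head in place of dd (an equivalent I am assuming here, with a larger byte count so six matching characters are practically guaranteed) and shows the resulting pool names:

```shell
# Read random bytes, keep only [a-z0-9], truncate to six characters.
suffix=$(head -c 1000 /dev/urandom | tr -dc 'a-z0-9' | cut -c-6)
echo "rpool_$suffix bpool_$suffix"
```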
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, therefore it is recommended to create a separate swap partition if needed. This guide will cover the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. With the {{ic|-d}} flag, all features are disabled and only the {{ic|feature@}} properties explicitly supplied are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
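The sed edit can be previewed on a sample line. This is only an illustration of the substitution, not the actual contents of {{ic|/sbin/setup-disk}}:

```shell
# Show what the substitution does to a hypothetical supported= line.
echo 'supported="ext4 btrfs xfs vfat"' | sed 's|supported="ext|supported="zfs ext|g'
# prints: supported="zfs ext4 btrfs xfs vfat"
```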
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed partway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in the WARNING below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
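To check which stat you now have, query the filesystem type of any path; the coreutils version prints a real type name (on the installed system, zfs for pool datasets), while the Busybox version prints UNKNOWN here:

```shell
# coreutils stat -f -c %T reports the filesystem type name of the given
# path (e.g. "zfs", "tmpfs"); Busybox stat cannot resolve it for ZFS.
stat -f -c %T /
```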
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB is lagging behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This needs to be reapplied upon every GRUB update until the patch is merged.&lt;br /&gt;
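The awk program embedded in the sed command above extracts the pool name from zdb label output. A sketch on a sample label line (assuming typical zdb -l output; the real command runs it on ${GRUB_DEVICE}):

```shell
# zdb -l prints a label containing a line like:  name: 'rpool_abc123'
# Splitting on single quotes and matching / name/ yields the pool name.
printf "    name: 'rpool_abc123'\n" | awk -F \' '/ name/ { print $2 }'
# prints: rpool_abc123
```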
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account; the root account is accessed with sudo. The package for persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
= Enable encrypted swap partition =&lt;br /&gt;
Install {{ic|cryptsetup}}&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
Edit the &amp;lt;code&amp;gt;/etc/mkinitfs/mkinitfs.conf&amp;lt;/code&amp;gt; file (the path inside the chroot) and append &amp;lt;code&amp;gt;cryptsetup&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;features&amp;lt;/code&amp;gt; parameter:&lt;br /&gt;
 features=&amp;quot;ata base ide scsi usb virtio ext4 lvm &amp;lt;u&amp;gt;cryptsetup&amp;lt;/u&amp;gt; zfs&amp;quot;&lt;br /&gt;
Add the relevant lines to {{ic|crypttab}} and {{ic|fstab}}. Replace {{ic|$DISK}} with the actual disk path.&lt;br /&gt;
 echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 &amp;gt;&amp;gt; /etc/crypttab&lt;br /&gt;
 echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
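A note on the echo lines above: crypttab expects four whitespace-separated fields (name, device, key, options). This sketch builds the same line with printf so the tab separators are explicit, using a placeholder disk path:

```shell
# Hypothetical disk path; the real one comes from /dev/disk/by-id/.
DISK=/dev/disk/by-id/ata-EXAMPLE
# fields: name  device  keyfile  options
printf 'swap\t%s-part4\t/dev/urandom\tswap,cipher=aes-cbc-essiv:sha256,size=256\n' "$DISK"
```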
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting the system, the root dataset will simply fail to mount and you will be dropped into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After booting the live environment and installing the ZFS packages, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} skips mounting the datasets; {{ic|-R}} sets an alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for the {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to unmount everything and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18374</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18374"/>
		<updated>2020-12-30T16:05:44Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Recovery in Live environment */ remove reference to archlinux&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool from an init service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password when prompted. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resuming from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure that ZFS finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, therefore it is recommended to create a separate swap partition if needed. This guide will cover the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. With the {{ic|-d}} flag, all features are disabled and only the {{ic|feature@}} properties explicitly supplied are enabled.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem list.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed partway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in the WARNING below.&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB lags behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the rpool detection with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account; the root account is accessed with sudo. A package providing persistent block device names (eudev) must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS encryption password at boot. When the system boots, the root dataset will fail to mount and you will be dropped into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages in the live environment, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store the encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
  xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18373</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18373"/>
		<updated>2020-12-30T16:03:35Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Reboot */ recovery&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS can cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool with an init service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password to it. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for the disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use a unique disk path instead of {{ic|/dev/sda}} to ensure that ZFS finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least three partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can still utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
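The swap partition itself is activated later with plain dm-crypt and a throwaway random key. As a minimal sketch (an assumption of this edit, not part of the original guide: it reuses the {{ic|$DISK}} variable from [[#Variables]] and is meant to be run as root on the installed system), activation could look like:&lt;br /&gt;

```shell
# Map the swap partition with plain dm-crypt, keyed from /dev/urandom.
# The key is never stored anywhere, so swap contents are unrecoverable
# after power-off -- which is also why hibernation/resume cannot work.
cryptsetup open --type plain --key-file /dev/urandom "$DISK-part4" swap

# Create and enable swap on the mapped device.
mkswap /dev/mapper/swap
swapon /dev/mapper/swap
```

Since the key is random on every boot, the mapping must be recreated at each boot (for example from a local init script) rather than listed as a persistent device.&lt;br /&gt;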
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
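As a sketch, a three-disk RAID-Z root pool would be created like this, where {{ic|...}} stands for the same options used in the single-disk {{ic|zpool create}} command above and the disk names are placeholders:&lt;br /&gt;

```shell
zpool create \
    ... \
    rpool_$poolUUID raidz \
    /dev/disk/by-id/target_disk1-part3 \
    /dev/disk/by-id/target_disk2-part3 \
    /dev/disk/by-id/target_disk3-part3
```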
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless ZPOOL_VDEV_NAME_PATH=YES is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
Since the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in [[#WARNING]].&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB lags behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the rpool detection with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account; the root account is accessed with sudo. A package providing persistent block device names (eudev) must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS encryption password at boot. When the system boots, the root dataset will fail to mount and you will be dropped into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;br /&gt;
&lt;br /&gt;
= Recovery in Live environment =&lt;br /&gt;
After installing the ZFS packages in the live environment, proceed as follows.&lt;br /&gt;
&lt;br /&gt;
Create a mount point and store the encryption password in a variable:&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
 ENCRYPTION_PWD=&#039;YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM&#039;&lt;br /&gt;
Find the unique UUID of your pool with&lt;br /&gt;
 zpool import&lt;br /&gt;
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.&lt;br /&gt;
 poolUUID=abc123&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT rpool_$poolUUID&lt;br /&gt;
Load encryption key&lt;br /&gt;
 echo $ENCRYPTION_PWD | zfs load-key -a&lt;br /&gt;
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use&lt;br /&gt;
 zfs list rpool_$poolUUID/ROOT&lt;br /&gt;
Mount {{ic|/}} dataset&lt;br /&gt;
 zfs mount rpool_$poolUUID/ROOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Mount other datasets&lt;br /&gt;
 zfs mount -a&lt;br /&gt;
Import bpool&lt;br /&gt;
 zpool import -N -R $MOUNTPOINT bpool_$poolUUID&lt;br /&gt;
Find and mount the {{ic|/boot}} dataset, same as above.&lt;br /&gt;
 zfs list bpool_$poolUUID/BOOT&lt;br /&gt;
 zfs mount bpool_$poolUUID/BOOT/&#039;&#039;$dataset&#039;&#039;&lt;br /&gt;
Chroot&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /bin/sh&lt;br /&gt;
After chroot, mount {{ic|/boot/efi}}&lt;br /&gt;
 mount /boot/efi&lt;br /&gt;
After fixing the system, don&#039;t forget to umount and export the pools:&lt;br /&gt;
 umount $MOUNTPOINT/boot/efi&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18372</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18372"/>
		<updated>2020-12-30T15:56:28Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Finish GRUB installation */ fix order&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS can cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool with an init service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password to it. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for the disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use a unique disk path instead of {{ic|/dev/sda}} to ensure that ZFS finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least three partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can still utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
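The swap partition itself is activated later with plain dm-crypt and a throwaway random key. As a minimal sketch (an assumption of this edit, not part of the original guide: it reuses the {{ic|$DISK}} variable from [[#Variables]] and is meant to be run as root on the installed system), activation could look like:&lt;br /&gt;

```shell
# Map the swap partition with plain dm-crypt, keyed from /dev/urandom.
# The key is never stored anywhere, so swap contents are unrecoverable
# after power-off -- which is also why hibernation/resume cannot work.
cryptsetup open --type plain --key-file /dev/urandom "$DISK-part4" swap

# Create and enable swap on the mapped device.
mkswap /dev/mapper/swap
swapon /dev/mapper/swap
```

Since the key is random on every boot, the mapping must be recreated at each boot (for example from a local init script) rather than listed as a persistent device.&lt;br /&gt;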
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
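As a sketch, a three-disk RAID-Z root pool would be created like this, where {{ic|...}} stands for the same options used in the single-disk {{ic|zpool create}} command above and the disk names are placeholders:&lt;br /&gt;

```shell
zpool create \
    ... \
    rpool_$poolUUID raidz \
    /dev/disk/by-id/target_disk1-part3 \
    /dev/disk/by-id/target_disk2-part3 \
    /dev/disk/by-id/target_disk3-part3
```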
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
{{ic|setup-disk}} refuses to run on ZFS by default, so we need to add ZFS to its list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Then apply the fixes described in [[#WARNING]].&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fallback is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB lags behind OpenZFS development, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the rpool detection with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying the fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a normal user account; the root account is accessed via sudo. The package providing persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting, the root dataset will fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18370</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18370"/>
		<updated>2020-12-30T15:48:05Z</updated>

		<summary type="html">&lt;p&gt;R2: /* WARNING */ missing file name&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool via an init service after the system boots.&lt;br /&gt;
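That workaround could be sketched with OpenRC&#039;s {{ic|local}} service (a hypothetical example, not part of this guide; the pool suffix is an assumption):&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # /etc/local.d/bpool.start -- hypothetical: import and mount the boot pool after boot&lt;br /&gt;
 zpool import bpool_abc123&lt;br /&gt;
 zfs mount bpool_abc123/BOOT/default&lt;br /&gt;
Make the script executable and enable it with {{ic|rc-update add local default}}.&lt;br /&gt;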
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password here. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
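For reference, the formatting step described above might look like the following (a hypothetical sketch; this guide does not use an encrypted boot pool):&lt;br /&gt;
 # hypothetical: format the boot pool partition as LUKS-1 and open it&lt;br /&gt;
 cryptsetup luksFormat --type luks1 $DISK-part2&lt;br /&gt;
 cryptsetup open $DISK-part2 bpool-crypt&lt;br /&gt;
 # the boot pool would then be created on /dev/mapper/bpool-crypt instead of $DISK-part2&lt;br /&gt;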
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. This guide supports single-disk and multi-disk (stripe, mirror, RAID-Z) installations.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select {{ic|1=disk=none}} at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; {{ic|/dev/sda}} for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure ZFS always finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if swap is needed. This guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
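The swap partition itself can later be set up with plain dm-crypt and a throwaway key (a hypothetical sketch, not part of the original steps; the mapper name {{ic|swap}} is an assumption):&lt;br /&gt;
 # hypothetical: plain dm-crypt swap keyed from /dev/urandom&lt;br /&gt;
 cryptsetup open --type plain --key-file /dev/urandom $DISK-part4 swap&lt;br /&gt;
 mkswap /dev/mapper/swap&lt;br /&gt;
 swapon /dev/mapper/swap&lt;br /&gt;
Because the key is random and discarded at power-off, swap contents are unrecoverable after shutdown.&lt;br /&gt;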
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} option is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only the features GRUB supports:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
{{ic|setup-disk}} refuses to run on ZFS by default, so we need to add ZFS to its list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Generate grub.cfg&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fallback is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB lags behind OpenZFS development, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the rpool detection with the method given in the patch.&lt;br /&gt;
 sed -i &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot; /etc/grub.d/10_linux&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying the fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a normal user account; the root account is accessed via sudo. The package providing persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting, the root dataset will fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18369</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18369"/>
		<updated>2020-12-30T15:47:01Z</updated>

		<summary type="html">&lt;p&gt;R2: /* WARNING */ fix format&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool via an init service after the system boots.&lt;br /&gt;
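That workaround could be sketched with OpenRC&#039;s {{ic|local}} service (a hypothetical example, not part of this guide; the pool suffix is an assumption):&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # /etc/local.d/bpool.start -- hypothetical: import and mount the boot pool after boot&lt;br /&gt;
 zpool import bpool_abc123&lt;br /&gt;
 zfs mount bpool_abc123/BOOT/default&lt;br /&gt;
Make the script executable and enable it with {{ic|rc-update add local default}}.&lt;br /&gt;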
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password here. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
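For reference, the formatting step described above might look like the following (a hypothetical sketch; this guide does not use an encrypted boot pool):&lt;br /&gt;
 # hypothetical: format the boot pool partition as LUKS-1 and open it&lt;br /&gt;
 cryptsetup luksFormat --type luks1 $DISK-part2&lt;br /&gt;
 cryptsetup open $DISK-part2 bpool-crypt&lt;br /&gt;
 # the boot pool would then be created on /dev/mapper/bpool-crypt instead of $DISK-part2&lt;br /&gt;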
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. This guide supports single-disk and multi-disk (stripe, mirror, RAID-Z) installations.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select {{ic|1=disk=none}} at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; {{ic|/dev/sda}} for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure ZFS always finds the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if swap is needed. This guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
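The swap partition itself can later be set up with plain dm-crypt and a throwaway key (a hypothetical sketch, not part of the original steps; the mapper name {{ic|swap}} is an assumption):&lt;br /&gt;
 # hypothetical: plain dm-crypt swap keyed from /dev/urandom&lt;br /&gt;
 cryptsetup open --type plain --key-file /dev/urandom $DISK-part4 swap&lt;br /&gt;
 mkswap /dev/mapper/swap&lt;br /&gt;
 swapon /dev/mapper/swap&lt;br /&gt;
Because the key is random and discarded at power-off, swap contents are unrecoverable after shutdown.&lt;br /&gt;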
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} option is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only the features GRUB supports:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Generate grub.cfg&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} when {{ic|stat}} comes from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fallback is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
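With {{ic|coreutils}} installed, the fallback used by grub-mkconfig can be sanity-checked directly (this check is an addition to the guide, not part of the original steps):

```shell
# Print the filesystem type name for /, the same query grub-mkconfig
# falls back to; coreutils stat reports the real type (e.g. "zfs").
stat -f -c %T /
```

On a ZFS root this prints {{ic|zfs}}, while the Busybox applet prints {{ic|UNKNOWN}}.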
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool&#039;s features.&lt;br /&gt;
&lt;br /&gt;
GRUB lags behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with the method given in the patch.&lt;br /&gt;
 sed &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot;&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
&lt;br /&gt;
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account; the root account is accessed with sudo. A package providing persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS encryption password at boot. When the system boots, the root dataset will fail to mount and you will be dropped into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset with:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18368</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18368"/>
		<updated>2020-12-30T15:46:06Z</updated>

		<summary type="html">&lt;p&gt;R2: /* WARNING */ update&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS-encrypted. A possible workaround is to import and mount the boot pool from an init service after the system has booted.&lt;br /&gt;
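Alpine uses OpenRC, so such a workaround would take the form of an init script; a minimal sketch (the service, pool and dataset names here are illustrative, not from the original guide):

```shell
#!/sbin/openrc-run
# /etc/init.d/bpool-mount -- hypothetical service that imports and
# mounts the boot pool once local filesystems are up.
description="Import and mount the boot pool"

depend() {
        need localmount
}

start() {
        zpool import bpool_ab12cd
        zfs mount bpool_ab12cd/BOOT/default
}
```

It would be enabled with {{ic|rc-update add bpool-mount default}}.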
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password when prompted. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select {{ic|1=disk=none}} at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure that ZFS can find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can still utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} option is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device unless {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} is set.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to its list of supported filesystems.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Generate grub.cfg&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} when {{ic|stat}} comes from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fallback is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, causing the `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
&lt;br /&gt;
2. GRUB will produce an empty result if it does not support the root pool&#039;s features.&lt;br /&gt;
&lt;br /&gt;
GRUB lags behind the development of OpenZFS, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is to replace the detection of rpool with {{ic|zdb -l ${GRUB_DEVICE} | awk -F \&#039; &#039;/ name/ { print $2 }&#039;}}.&lt;br /&gt;
 sed &amp;quot;s/rpool=.*/rpool=\`zdb -l \${GRUB_DEVICE} \| awk -F \\\&#039; &#039;\/ name\/ { print \$2 }&#039;\`/&amp;quot;&lt;br /&gt;
This fix needs to be reapplied after every GRUB update until the patch is merged.&lt;br /&gt;
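To see why this replacement works: {{ic|zdb -l}} prints the pool label, which contains a {{ic|name:}} line, and the awk program splits that line on single quotes to extract the pool name. A small illustration with a canned label line (the sample pool name is made up):

```shell
# A zdb -l label contains a line like the one below; splitting on the
# single-quote character, field 2 is the pool name.
label_line="    name: 'rpool_ab12cd'"
echo "$label_line" | awk -F \' '/ name/ { print $2 }'   # prints rpool_ab12cd
```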
== Generate grub.cfg ==&lt;br /&gt;
After applying fixes, finally run&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a regular user account; the root account is accessed with sudo. A package providing persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use, then export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS encryption password at boot. When the system boots, the root dataset will fail to mount and you will be dropped into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset with:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18367</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18367"/>
		<updated>2020-12-30T15:25:27Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Finish GRUB installation */ warning&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS-encrypted. A possible workaround is to import and mount the boot pool from an init service after the system has booted.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password when prompted. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select {{ic|1=disk=none}} at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} to ensure that ZFS can find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
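The pipeline above reads random bytes, keeps only lowercase letters and digits, and truncates the result to six characters. The same idea can be checked in isolation (a sanity check added here, not part of the original guide):

```shell
# Generate a candidate suffix and show it; it is at most six
# characters drawn from [a-z0-9].
u=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
echo "$u"
```

The original command also passes {{ic|1=of=/dev/stdout}}, which is equivalent to dd&#039;s default of writing to standard output.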
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can still utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} option is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only those features GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed half-way through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Generate grub.cfg&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
== WARNING ==&lt;br /&gt;
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from Busybox.&lt;br /&gt;
&lt;br /&gt;
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:&lt;br /&gt;
 GRUB_DEVICE=&amp;quot;`${grub_probe} --target=device /`&amp;quot;&lt;br /&gt;
 # will fail with `grub-probe: error: unknown filesystem.`&lt;br /&gt;
 GRUB_FS=&amp;quot;`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2&amp;gt; /dev/null || echo unknown`&amp;quot;&lt;br /&gt;
 # will also fail. The final fall back is&lt;br /&gt;
 if [ x&amp;quot;$GRUB_FS&amp;quot; = xunknown ]; then&lt;br /&gt;
     GRUB_FS=&amp;quot;$(stat -f -c %T / || echo unknown)&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
 # `stat` from coreutils will return `zfs`, the correct answer&lt;br /&gt;
 # `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail&lt;br /&gt;
Therefore we need to install {{ic|coreutils}}.&lt;br /&gt;
 apk add coreutils&lt;br /&gt;
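A quick way to check which stat you are running; the coreutils version prints the real filesystem type of / (for example zfs), while the Busybox applet reports UNKNOWN on ZFS:

```shell
# Print the filesystem type of / via the same fallback grub-mkconfig
# uses; with coreutils stat this is the real type, never UNKNOWN on zfs.
stat -f -c %T /
```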
&lt;br /&gt;
2. GRUB will write an empty pool name into the root device entry if it does not support the root pool.&lt;br /&gt;
&lt;br /&gt;
GRUB lags behind OpenZFS development, see [https://lists.gnu.org/archive/html/grub-devel/2020-12/msg00239.html]. A temporary fix is&lt;br /&gt;
 grub-mkconfig | sed &amp;quot;s|root=ZFS=/ROOT|root=ZFS=rpool_$poolUUID/ROOT|g&amp;quot; &amp;gt; /boot/grub/grub.cfg&lt;br /&gt;
This fix needs to be re-applied after every {{ic|grub-mkconfig}} command.&lt;br /&gt;
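The substitution can be sketched on a sample grub.cfg kernel line (the vmlinuz path and poolUUID value here are hypothetical):

```shell
# Demonstrate the sed fix on one hypothetical grub.cfg line: the empty
# pool name in root=ZFS=/ROOT is replaced with rpool_$poolUUID.
poolUUID=abc123
echo 'linux /BOOT/default@/vmlinuz-lts root=ZFS=/ROOT/default' \
  | sed "s|root=ZFS=/ROOT|root=ZFS=rpool_$poolUUID/ROOT|g"
# prints: linux /BOOT/default@/vmlinuz-lts root=ZFS=rpool_abc123/ROOT/default
```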
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are needed to create a regular user account; the root account is accessed with sudo. A package providing persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
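The unmount pipeline above works because mount lists parent mounts before the mounts nested inside them, so tac reverses the order for safe unmounting. A sketch with a fake mount table (hypothetical /tmp/target mountpoint):

```shell
# Fake `mount` output for a chroot at /tmp/target (hypothetical path);
# tac reverses it so nested mounts are unmounted before their parents.
printf '%s\n' \
  'devtmpfs on /tmp/target/dev type devtmpfs (rw)' \
  'tmpfs on /tmp/target/dev/shm type tmpfs (rw)' \
  'proc on /tmp/target/proc type proc (rw)' \
  | tac | grep /tmp/target | awk '{print $3}'
# prints:
#   /tmp/target/proc
#   /tmp/target/dev/shm
#   /tmp/target/dev
```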
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS encryption password at boot. When booting, the root dataset will simply fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18366</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18366"/>
		<updated>2020-12-30T14:09:40Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Finish GRUB installation */ warning&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, therefore the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via a systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password here. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
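The pipeline reads 100 random bytes, keeps only the characters a-z and 0-9, and truncates to 6 characters, so the suffix is always short lowercase alphanumeric:

```shell
# Generate a short lowercase-alphanumeric pool suffix, as above:
# random bytes -> keep only [a-z0-9] -> first 6 characters.
poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
echo "$poolUUID"
```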
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, therefore it is recommended to create a separate swap partition if needed. This guide will cover the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those that GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed half-way through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Generate grub.cfg&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
WARNING: as of 3.12.3, the Alpine Linux build of GRUB (not the upstream version) cannot properly detect the ZFS root device: the correct root device, rpool_$poolUUID/ROOT/default, is missing from grub.cfg. As a temporary fix, patch it with a sed command:&lt;br /&gt;
 sed -i &amp;quot;s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g&amp;quot; /boot/grub/grub.cfg&lt;br /&gt;
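The effect of that sed command can be sketched on a sample kernel line (the PARTUUID and poolUUID values are hypothetical):

```shell
# Replace a wrong root=PARTUUID=... argument with the ZFS root dataset,
# mirroring the in-place fix applied to /boot/grub/grub.cfg above. Note
# that `.*` also consumes any trailing kernel arguments on the line.
poolUUID=abc123
echo 'linux /boot/vmlinuz-lts root=PARTUUID=0000-1111 rw quiet' \
  | sed "s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g"
# prints: linux /boot/vmlinuz-lts root=ZFS=rpool_abc123/ROOT/default
```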
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are needed to create a regular user account; the root account is accessed with sudo. A package providing persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS encryption password at boot. When booting, the root dataset will simply fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18365</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18365"/>
		<updated>2020-12-30T13:35:55Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Dataset creation */ not needed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, therefore the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via a systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password here. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, therefore it is recommended to create a separate swap partition if needed. This guide will cover the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those that GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed half-way through [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Generate grub.cfg&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
The correct root device, rpool_$poolUUID/ROOT/default, is missing from grub.cfg; fix it with a sed command:&lt;br /&gt;
 sed -i &amp;quot;s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g&amp;quot; /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are needed to create a regular user account; the root account is accessed with sudo. A package providing persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting the system, the root dataset will simply fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18364</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18364"/>
		<updated>2020-12-30T13:11:03Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Dataset creation */ note for legacy mountpoints&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations that are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password there. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
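A minimal sketch of the LUKS-1 step (assuming {{ic|$DISK-part2}} is the boot pool partition; the mapper name is illustrative):&lt;br /&gt;
 cryptsetup luksFormat --type luks1 $DISK-part2&lt;br /&gt;
 cryptsetup open $DISK-part2 bpool_crypt&lt;br /&gt;
The boot pool would then be created on {{ic|/dev/mapper/bpool_crypt}} instead of the raw partition.&lt;br /&gt;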
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
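Plain dm-crypt swap with a throwaway key can later be configured on the target system; a minimal sketch (the service and config file assume Alpine&#039;s cryptsetup/dmcrypt packages; the partition path is illustrative):&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
 rc-update add dmcrypt boot&lt;br /&gt;
Then declare the swap mapping in {{ic|/etc/conf.d/dmcrypt}}:&lt;br /&gt;
 swap=swap0&lt;br /&gt;
 source=&#039;/dev/disk/by-id/ata-HXY_120G_YS-part4&#039;&lt;br /&gt;
A fresh random key is generated on every boot, so swap contents are discarded at power-off.&lt;br /&gt;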
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
To be able to mount datasets with {{ic|/etc/fstab}} instead of native ZFS handling, set datasets to {{ic|1=mountpoint=legacy}}. Child datasets will inherit this property.&lt;br /&gt;
 zfs set mountpoint=legacy rpool_$poolUUID/ROOT/default&lt;br /&gt;
 zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default&lt;br /&gt;
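With legacy mountpoints, the corresponding {{ic|/etc/fstab}} entries would look roughly like this (a sketch, where xxxxxx stands for your $poolUUID suffix; setup-disk may generate these for you):&lt;br /&gt;
 rpool_xxxxxx/ROOT/default  /      zfs  rw,relatime  0 0&lt;br /&gt;
 bpool_xxxxxx/BOOT/default  /boot  zfs  rw,relatime  0 0&lt;br /&gt;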
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Generate grub.cfg&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
The correct root device, rpool_$poolUUID/ROOT/default, is missing from grub.cfg; fix it with a sed command:&lt;br /&gt;
 sed -i &amp;quot;s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g&amp;quot; /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are needed for creating a common user account; the root account is accessed with sudo. A package for persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting the system, the root dataset will simply fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18363</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18363"/>
		<updated>2020-12-30T13:00:35Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Preparations */ missing subtitle&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resuming from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an init service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support the freeze/thaw operations that are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password there. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
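A minimal sketch of the LUKS-1 step (assuming {{ic|$DISK-part2}} is the boot pool partition; the mapper name is illustrative):&lt;br /&gt;
 cryptsetup luksFormat --type luks1 $DISK-part2&lt;br /&gt;
 cryptsetup open $DISK-part2 bpool_crypt&lt;br /&gt;
The boot pool would then be created on {{ic|/dev/mapper/bpool_crypt}} instead of the raw partition.&lt;br /&gt;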
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none when asked for disk mode at the last step. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
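Plain dm-crypt swap with a throwaway key can later be configured on the target system; a minimal sketch (the service and config file assume Alpine&#039;s cryptsetup/dmcrypt packages; the partition path is illustrative):&lt;br /&gt;
 apk add cryptsetup&lt;br /&gt;
 rc-update add dmcrypt boot&lt;br /&gt;
Then declare the swap mapping in {{ic|/etc/conf.d/dmcrypt}}:&lt;br /&gt;
 swap=swap0&lt;br /&gt;
 source=&#039;/dev/disk/by-id/ata-HXY_120G_YS-part4&#039;&lt;br /&gt;
A fresh random key is generated on every boot, so swap contents are discarded at power-off.&lt;br /&gt;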
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those GRUB can support.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Generate grub.cfg&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
The correct root device, rpool_$poolUUID/ROOT/default, is missing from grub.cfg; fix it with a sed command:&lt;br /&gt;
 sed -i &amp;quot;s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g&amp;quot; /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are needed for creating a common user account; the root account is accessed with sudo. A package for persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting the system, the root dataset will simply fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount root dataset with&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18362</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18362"/>
		<updated>2020-12-30T12:45:28Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Dataset creation */ details for dataset layout&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.&lt;br /&gt;
&lt;br /&gt;
Except EFI system partition and boot pool {{ic|/boot}}, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause dead lock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of swap partition can not be stored in the unencrypted boot pool. Busybox initramfs only supports unlocking exactly one LUKS container at boot, therefore boot pool and swap partition can not be both LUKS encrypted. A possible workaround is to import and mount boot pool after booting the system via systemd service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB drive and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; {{ic|/dev/sda}} for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so that ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
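The pipeline draws 100 random bytes, strips everything except lowercase letters and digits, and keeps at most the first six characters. A quick sanity check of the resulting suffix (a sketch; the chance of fewer than six usable characters surviving is negligible):&lt;br /&gt;

```shell
# generate the suffix exactly as above
poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
# verify it is non-empty and contains only a-z0-9
case "$poolUUID" in
  ''|*[!a-z0-9]*) echo invalid ;;
  *) echo ok ;;
esac
```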
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can still utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded on power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only the features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For root pool all available features are enabled by default&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
= Preparations =&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Generate grub.cfg&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
The correct root device, rpool_$poolUUID/ROOT/default, is missing from grub.cfg; fix it with a sed command:&lt;br /&gt;
 sed -i &amp;quot;s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g&amp;quot; /boot/grub/grub.cfg&lt;br /&gt;
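To see what this substitution does, here is a dry run on a hypothetical grub.cfg kernel line (the PARTUUID value and module list are made up for illustration):&lt;br /&gt;

```shell
# hypothetical values for illustration only
poolUUID=ab12cd
echo 'linux /BOOT/default@/vmlinuz-lts root=PARTUUID=0000-0000 ro modules=sd-mod' |
  sed "s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g"
# prints: linux /BOOT/default@/vmlinuz-lts root=ZFS=rpool_ab12cd/ROOT/default
```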
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are needed to create a regular user account; the root account is accessed with sudo. The package for persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
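The unmount pipeline above filters {{ic|mount}} output: it drops ZFS datasets (the exports handle those), reverses the order so nested mounts are released first, and keeps only paths under $MOUNTPOINT. A dry run with hypothetical mount lines shows which paths it would select:&lt;br /&gt;

```shell
MOUNTPOINT=/tmp/target
# hypothetical mount(8) output lines for illustration
printf '%s\n' \
  'proc on /tmp/target/proc type proc (rw)' \
  'rpool/ROOT/default on /tmp/target type zfs (rw)' \
  'dev on /tmp/target/dev type devtmpfs (rw)' |
  grep -v zfs | tac | grep "$MOUNTPOINT" | awk '{print $3}'
# prints /tmp/target/dev then /tmp/target/proc
```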
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS encryption password at boot. When booting the system, the root dataset will fail to mount and the boot process will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18361</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18361"/>
		<updated>2020-12-30T12:40:54Z</updated>

		<summary type="html">&lt;p&gt;R2: /* Setup live environment */ inline code&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z configurations are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlocks ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an OpenRC service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB drive and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by {{ic|setup-disk}}.&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; {{ic|/dev/sda}} for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so that ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
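The pipeline draws 100 random bytes, strips everything except lowercase letters and digits, and keeps at most the first six characters. A quick sanity check of the resulting suffix (a sketch; the chance of fewer than six usable characters surviving is negligible):&lt;br /&gt;

```shell
# generate the suffix exactly as above
poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
# verify it is non-empty and contains only a-z0-9
case "$poolUUID" in
  ''|*[!a-z0-9]*) echo invalid ;;
  *) echo ok ;;
esac
```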
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can still utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers creating one. (It cannot be used for hibernation, since the encryption key is discarded on power off.)&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of the disk when creating the root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable only the features GRUB supports.&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For root pool all available features are enabled by default&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
= Preparations =&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default, so we need to add ZFS to the supported filesystem array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
As the GRUB installation failed halfway through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Generate grub.cfg&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
The correct root device, rpool_$poolUUID/ROOT/default, is missing from grub.cfg; fix it with a sed command:&lt;br /&gt;
 sed -i &amp;quot;s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g&amp;quot; /boot/grub/grub.cfg&lt;br /&gt;
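To see what this substitution does, here is a dry run on a hypothetical grub.cfg kernel line (the PARTUUID value and module list are made up for illustration):&lt;br /&gt;

```shell
# hypothetical values for illustration only
poolUUID=ab12cd
echo 'linux /BOOT/default@/vmlinuz-lts root=PARTUUID=0000-0000 ro modules=sd-mod' |
  sed "s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g"
# prints: linux /BOOT/default@/vmlinuz-lts root=ZFS=rpool_ab12cd/ROOT/default
```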
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are needed to create a regular user account; the root account is accessed with sudo. The package for persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
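The unmount pipeline above filters {{ic|mount}} output: it drops ZFS datasets (the exports handle those), reverses the order so nested mounts are released first, and keeps only paths under $MOUNTPOINT. A dry run with hypothetical mount lines shows which paths it would select:&lt;br /&gt;

```shell
MOUNTPOINT=/tmp/target
# hypothetical mount(8) output lines for illustration
printf '%s\n' \
  'proc on /tmp/target/proc type proc (rw)' \
  'rpool/ROOT/default on /tmp/target type zfs (rw)' \
  'dev on /tmp/target/dev type devtmpfs (rw)' |
  grep -v zfs | tac | grep "$MOUNTPOINT" | awk '{print $3}'
# prints /tmp/target/dev then /tmp/target/proc
```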
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS encryption password at boot. When booting the system, the root dataset will fail to mount and the boot process will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18360</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18360"/>
		<updated>2020-12-30T12:40:04Z</updated>

		<summary type="html">&lt;p&gt;R2: /* = Preparations */ fix title&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z configurations are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS will cause deadlocks ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key for the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an OpenRC service.&lt;br /&gt;
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery.&lt;br /&gt;
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB drive and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by&lt;br /&gt;
 setup-disk&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to get persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; {{ic|/dev/sda}} for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use the unique disk path instead of {{ic|/dev/sda}} so that ZFS can reliably find the correct partition.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this prevents name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
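The pipeline draws 100 random bytes, strips everything except lowercase letters and digits, and keeps at most the first six characters. A quick sanity check of the resulting suffix (a sketch; the chance of fewer than six usable characters surviving is negligible):&lt;br /&gt;

```shell
# generate the suffix exactly as above
poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
# verify it is non-empty and contains only a-z0-9
case "$poolUUID" in
  ''|*[!a-z0-9]*) echo invalid ;;
  *) echo ok ;;
esac
```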
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single disk, UEFI installation, we need to create at lease 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, therefore it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded on power off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
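&lt;br /&gt;
After boot, a plain dm-crypt swap with a one-time random key can be set up roughly as follows (a hedged sketch; the mapping name {{ic|swap}} is arbitrary, and because the key is random and never stored, the swap cannot be used for resume):&lt;br /&gt;
 cryptsetup open --type plain --key-file /dev/urandom $DISK-part4 swap&lt;br /&gt;
 mkswap /dev/mapper/swap&lt;br /&gt;
 swapon /dev/mapper/swap&lt;br /&gt;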
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those that GRUB can support:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default; we need to add ZFS to the supported filesystems array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
Since the GRUB installation failed halfway through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Generate grub.cfg&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
The correct root device, rpool_$poolUUID/ROOT/default, is missing from grub.cfg; fix it with a sed command:&lt;br /&gt;
 sed -i &amp;quot;s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g&amp;quot; /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a common user account; the root account is accessed with sudo. The package for persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
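Optionally check the syntax of the new file ({{ic|visudo}} ships with the sudo package):&lt;br /&gt;
 visudo -c&lt;br /&gt;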
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting the system, the root dataset will simply fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Template:Text_art&amp;diff=18359</id>
		<title>Template:Text art</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Template:Text_art&amp;diff=18359"/>
		<updated>2020-12-30T12:38:46Z</updated>

		<summary type="html">&lt;p&gt;R2: copy from archwiki&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;&lt;br /&gt;
{{Template}}&lt;br /&gt;
&lt;br /&gt;
Block code that does not wrap on narrow screens, intended for [[Wikipedia:Text art|Text art]].&lt;br /&gt;
&lt;br /&gt;
Use [[Template:bc]] for regular block codes.&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&lt;br /&gt;
{{bc|&amp;lt;nowiki&amp;gt;{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
Alpine Linux&lt;br /&gt;
&amp;amp;lt;/nowiki&amp;gt;}}&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
Alpine Linux&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&amp;lt;includeonly&amp;gt;&amp;lt;pre&amp;lt;!----&amp;gt; style=&amp;quot;white-space: pre !important; font-variant-ligatures: no-common-ligatures;&amp;quot;&amp;gt;{{{code|{{{1|{{META Error}}}}}}}}&amp;lt;/pre&amp;gt;&amp;lt;/includeonly&amp;gt;&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Template:Ic&amp;diff=18358</id>
		<title>Template:Ic</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Template:Ic&amp;diff=18358"/>
		<updated>2020-12-30T12:36:59Z</updated>

		<summary type="html">&lt;p&gt;R2: inline code&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;{{Template}} {{DISPLAYTITLE:Template:ic}}&lt;br /&gt;
&lt;br /&gt;
Inline code.&lt;br /&gt;
&lt;br /&gt;
* Use [[Template:bc]] for block code without header.&lt;br /&gt;
* Use [[Template:hc]] for block code with header.&lt;br /&gt;
&lt;br /&gt;
==Usage==&lt;br /&gt;
&lt;br /&gt;
{{bc|&amp;lt;nowiki&amp;gt;{{ic|code}}&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
{{Tip|When representing keyboard keys, you can use [[Wikipedia:List of XML and HTML character entity references|HTML entities]] {{ic|&amp;amp;amp;uarr;}}, {{ic|&amp;amp;amp;rarr;}}, {{ic|&amp;amp;amp;darr;}} and {{ic|&amp;amp;amp;larr;}} to depict arrow keys: {{ic|&amp;amp;uarr;}}, {{ic|&amp;amp;rarr;}}, {{ic|&amp;amp;darr;}}, {{ic|&amp;amp;larr;}}}}&lt;br /&gt;
&lt;br /&gt;
==Example==&lt;br /&gt;
&lt;br /&gt;
{{ic|code}}&amp;lt;/noinclude&amp;gt;&amp;lt;includeonly&amp;gt;&amp;lt;code&amp;gt;{{{code|{{{1|{{META Error}}}}}}}}&amp;lt;/code&amp;gt;&amp;lt;/includeonly&amp;gt;&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
	<entry>
		<id>https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18357</id>
		<title>Root on ZFS with native encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.alpinelinux.org/w/index.php?title=Root_on_ZFS_with_native_encryption&amp;diff=18357"/>
		<updated>2020-12-30T12:35:38Z</updated>

		<summary type="html">&lt;p&gt;R2: update&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Objectives =&lt;br /&gt;
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z are supported.&lt;br /&gt;
&lt;br /&gt;
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.&lt;br /&gt;
&lt;br /&gt;
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.&lt;br /&gt;
&lt;br /&gt;
= Notes =&lt;br /&gt;
== Swap on ZFS can cause deadlock ==&lt;br /&gt;
You shouldn&#039;t use a ZVol as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.&lt;br /&gt;
&lt;br /&gt;
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The busybox initramfs only supports unlocking exactly one LUKS container at boot, therefore the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an init service.&lt;br /&gt;
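&lt;br /&gt;
On Alpine, which uses OpenRC, such a workaround could be sketched with a {{ic|local.d}} script (hypothetical; the pool name suffix abc123 is a placeholder):&lt;br /&gt;
 cat &amp;gt; /etc/local.d/bpool.start &amp;lt;&amp;lt; &#039;EOF&#039;&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 zpool import bpool_abc123&lt;br /&gt;
 zfs mount bpool_abc123/BOOT/default&lt;br /&gt;
 EOF&lt;br /&gt;
 chmod +x /etc/local.d/bpool.start&lt;br /&gt;
 rc-update add local default&lt;br /&gt;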
&lt;br /&gt;
== Resume from ZFS will corrupt the pool ==&lt;br /&gt;
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS &#039;&#039;&#039;WILL&#039;&#039;&#039; corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].&lt;br /&gt;
== Encrypted boot pool ==&lt;br /&gt;
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.&lt;br /&gt;
&lt;br /&gt;
To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.&lt;br /&gt;
&lt;br /&gt;
Since there isn&#039;t any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.&lt;br /&gt;
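&lt;br /&gt;
If you nevertheless want an encrypted boot pool, the partition could be prepared roughly as follows before pool creation (a hedged sketch, not part of this guide&#039;s main path; the mapping name bpool-crypt is arbitrary):&lt;br /&gt;
 cryptsetup luksFormat --type luks1 $DISK-part2&lt;br /&gt;
 cryptsetup open $DISK-part2 bpool-crypt&lt;br /&gt;
Then create the boot pool on {{ic|/dev/mapper/bpool-crypt}} instead of {{ic|$DISK-part2}}.&lt;br /&gt;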
&lt;br /&gt;
= Pre-installation =&lt;br /&gt;
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Existing data on target disk(s) will be destroyed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Download the &#039;&#039;&#039;extended&#039;&#039;&#039; release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.&lt;br /&gt;
&lt;br /&gt;
Write it to a USB and boot from it.&lt;br /&gt;
&lt;br /&gt;
== Setup live environment ==&lt;br /&gt;
Run the following command to set up the live environment; select disk=none at the last step when asked for disk mode. See [[Installation#Questions_asked_by_setup-alpine]].&lt;br /&gt;
 setup-alpine&lt;br /&gt;
The settings given here will be copied to the target system later by&lt;br /&gt;
 setup-disk&lt;br /&gt;
&lt;br /&gt;
== Install system utilities ==&lt;br /&gt;
 apk update&lt;br /&gt;
 apk add eudev sgdisk grub-efi zfs&lt;br /&gt;
 modprobe zfs&lt;br /&gt;
Here we must install eudev to have persistent block device names. &#039;&#039;&#039;Do not use&#039;&#039;&#039; /dev/sda for ZFS pools.&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
 /etc/init.d/udev-trigger start&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
In this step, we will set some variables to make our installation process easier.&lt;br /&gt;
 DISK=/dev/disk/by-id/ata-HXY_120G_YS&lt;br /&gt;
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.&lt;br /&gt;
&lt;br /&gt;
Other variables&lt;br /&gt;
 TARGET_USERNAME=&#039;your username&#039;&lt;br /&gt;
 ENCRYPTION_PWD=&#039;your root pool encryption password&#039;&lt;br /&gt;
 TARGET_USERPWD=&#039;user account password&#039;&lt;br /&gt;
Create a mountpoint&lt;br /&gt;
 MOUNTPOINT=`mktemp -d`&lt;br /&gt;
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.&lt;br /&gt;
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2&amp;gt;/dev/null |tr -dc &#039;a-z0-9&#039; | cut -c-6)&lt;br /&gt;
&lt;br /&gt;
= Partitioning =&lt;br /&gt;
For a single-disk UEFI installation, we need to create at least 3 partitions:&lt;br /&gt;
* EFI system partition&lt;br /&gt;
* Boot pool partition&lt;br /&gt;
* Root pool partition&lt;br /&gt;
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.&lt;br /&gt;
&lt;br /&gt;
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:&lt;br /&gt;
 sgdisk --zap-all $DISK&lt;br /&gt;
 sgdisk -n1:0:+512M -t1:EF00 $DISK&lt;br /&gt;
 sgdisk -n2:0:+2G $DISK        # boot pool&lt;br /&gt;
 sgdisk -n3:0:0 $DISK          # root pool&lt;br /&gt;
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.&lt;br /&gt;
&lt;br /&gt;
== Optional: Swap partition ==&lt;br /&gt;
[[Swap]] support on ZFS is also problematic, therefore it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded on power off).&lt;br /&gt;
&lt;br /&gt;
If you want to use swap, reserve some space at the end of disk when creating root pool:&lt;br /&gt;
 sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk&lt;br /&gt;
 sgdisk -n4:0:0 $DISK          # swap partition&lt;br /&gt;
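&lt;br /&gt;
After boot, a plain dm-crypt swap with a one-time random key can be set up roughly as follows (a hedged sketch; the mapping name {{ic|swap}} is arbitrary, and because the key is random and never stored, the swap cannot be used for resume):&lt;br /&gt;
 cryptsetup open --type plain --key-file /dev/urandom $DISK-part4 swap&lt;br /&gt;
 mkswap /dev/mapper/swap&lt;br /&gt;
 swapon /dev/mapper/swap&lt;br /&gt;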
&lt;br /&gt;
= Boot and root pool creation =&lt;br /&gt;
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.&lt;br /&gt;
&lt;br /&gt;
Here we explicitly enable those that GRUB can support:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    -o ashift=12 -d \&lt;br /&gt;
    -o feature@async_destroy=enabled \&lt;br /&gt;
    -o feature@bookmarks=enabled \&lt;br /&gt;
    -o feature@embedded_data=enabled \&lt;br /&gt;
    -o feature@empty_bpobj=enabled \&lt;br /&gt;
    -o feature@enabled_txg=enabled \&lt;br /&gt;
    -o feature@extensible_dataset=enabled \&lt;br /&gt;
    -o feature@filesystem_limits=enabled \&lt;br /&gt;
    -o feature@hole_birth=enabled \&lt;br /&gt;
    -o feature@large_blocks=enabled \&lt;br /&gt;
    -o feature@lz4_compress=enabled \&lt;br /&gt;
    -o feature@spacemap_histogram=enabled \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \&lt;br /&gt;
    -O mountpoint=/boot -R $MOUNTPOINT \&lt;br /&gt;
    bpool_$poolUUID $DISK-part2&lt;br /&gt;
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.&lt;br /&gt;
&lt;br /&gt;
For the root pool, all available features are enabled by default:&lt;br /&gt;
 echo $ENCRYPTION_PWD | zpool create \&lt;br /&gt;
    -o ashift=12 \&lt;br /&gt;
    -O encryption=aes-256-gcm \&lt;br /&gt;
    -O keylocation=prompt -O keyformat=passphrase \&lt;br /&gt;
    -O acltype=posixacl -O canmount=off -O compression=lz4 \&lt;br /&gt;
    -O dnodesize=auto -O normalization=formD -O relatime=on \&lt;br /&gt;
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \&lt;br /&gt;
    rpool_$poolUUID $DISK-part3&lt;br /&gt;
&lt;br /&gt;
== Notes for multi-disk ==&lt;br /&gt;
For mirror:&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    bpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part2 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part2&lt;br /&gt;
 zpool create \&lt;br /&gt;
    ... \&lt;br /&gt;
    rpool_$poolUUID mirror \&lt;br /&gt;
    /dev/disk/by-id/target_disk1-part3 \&lt;br /&gt;
    /dev/disk/by-id/target_disk2-part3&lt;br /&gt;
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.&lt;br /&gt;
&lt;br /&gt;
= Dataset creation =&lt;br /&gt;
{{Text art|&amp;lt;nowiki&amp;gt;&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT&lt;br /&gt;
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT&lt;br /&gt;
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default&lt;br /&gt;
zfs mount rpool_$poolUUID/ROOT/default&lt;br /&gt;
zfs mount bpool_$poolUUID/BOOT/default&lt;br /&gt;
d=&#039;usr var var/lib&#039;&lt;br /&gt;
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;srv usr/local&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done&lt;br /&gt;
d=&#039;log spool tmp&#039;&lt;br /&gt;
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done&lt;br /&gt;
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default&lt;br /&gt;
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root&lt;br /&gt;
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Format and mount EFI partition =&lt;br /&gt;
 mkfs.vfat -n EFI $DISK-part1&lt;br /&gt;
 mkdir $MOUNTPOINT/boot/efi&lt;br /&gt;
 mount $DISK-part1 $MOUNTPOINT/boot/efi&lt;br /&gt;
&lt;br /&gt;
= Install Alpine Linux to target disk =&lt;br /&gt;
== Preparations ==&lt;br /&gt;
GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
setup-disk refuses to run on ZFS by default; we need to add ZFS to the supported filesystems array.&lt;br /&gt;
 sed -i &#039;s|supported=&amp;quot;ext|supported=&amp;quot;zfs ext|g&#039; /sbin/setup-disk&lt;br /&gt;
&lt;br /&gt;
== Run setup-disk ==&lt;br /&gt;
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT&lt;br /&gt;
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.&lt;br /&gt;
&lt;br /&gt;
= Chroot into new system =&lt;br /&gt;
 mount --rbind /dev  $MOUNTPOINT/dev&lt;br /&gt;
 mount --rbind /proc $MOUNTPOINT/proc&lt;br /&gt;
 mount --rbind /sys  $MOUNTPOINT/sys&lt;br /&gt;
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh&lt;br /&gt;
&lt;br /&gt;
= Finish GRUB installation =&lt;br /&gt;
Since the GRUB installation failed halfway through in [[#Run setup-disk]], we will finish it here.&lt;br /&gt;
&lt;br /&gt;
Apply GRUB ZFS fix:&lt;br /&gt;
 export ZPOOL_VDEV_NAME_PATH=YES&lt;br /&gt;
Generate grub.cfg&lt;br /&gt;
 grub-mkconfig -o /boot/grub/grub.cfg&lt;br /&gt;
The correct root device, rpool_$poolUUID/ROOT/default, is missing from grub.cfg; fix it with a sed command:&lt;br /&gt;
 sed -i &amp;quot;s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g&amp;quot; /boot/grub/grub.cfg&lt;br /&gt;
&lt;br /&gt;
= Install packages =&lt;br /&gt;
These packages are used for creating a common user account; the root account is accessed with sudo. The package for persistent block device names must also be installed.&lt;br /&gt;
 apk add shadow sudo eudev&lt;br /&gt;
&lt;br /&gt;
= Enable ZFS services =&lt;br /&gt;
 rc-update add zfs-import sysinit&lt;br /&gt;
 rc-update add zfs-mount sysinit&lt;br /&gt;
 rc-update add zfs-zed sysinit&lt;br /&gt;
 rc-update add udev-trigger sysinit&lt;br /&gt;
&lt;br /&gt;
= Enable sudo access for wheel group =&lt;br /&gt;
 mv /etc/sudoers /etc/sudoers.original&lt;br /&gt;
 tee /etc/sudoers &amp;lt;&amp;lt; EOF&lt;br /&gt;
 root ALL=(ALL) ALL&lt;br /&gt;
 %wheel ALL=(ALL) ALL&lt;br /&gt;
 EOF&lt;br /&gt;
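Optionally check the syntax of the new file ({{ic|visudo}} ships with the sudo package):&lt;br /&gt;
 visudo -c&lt;br /&gt;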
&lt;br /&gt;
= Add normal user account =&lt;br /&gt;
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME&lt;br /&gt;
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME&lt;br /&gt;
 echo &amp;quot;$TARGET_USERNAME:$TARGET_USERPWD&amp;quot; | chpasswd&lt;br /&gt;
&lt;br /&gt;
= Finish installation =&lt;br /&gt;
Take a snapshot of the clean installation for future use and export all pools.&lt;br /&gt;
 exit&lt;br /&gt;
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install&lt;br /&gt;
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install&lt;br /&gt;
Pools must be exported before reboot, or they will fail to be imported on boot.&lt;br /&gt;
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk &#039;{print $3}&#039; | \&lt;br /&gt;
   xargs -i{} umount -lf {}&lt;br /&gt;
 zpool export bpool_$poolUUID&lt;br /&gt;
 zpool export rpool_$poolUUID&lt;br /&gt;
&lt;br /&gt;
= Reboot =&lt;br /&gt;
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting the system, the root dataset will simply fail to mount and the system will drop into an emergency shell.&lt;br /&gt;
&lt;br /&gt;
We need to manually load the key and mount the root dataset:&lt;br /&gt;
 zfs load-key -a&lt;br /&gt;
 # enter password&lt;br /&gt;
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot&lt;br /&gt;
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].&lt;/div&gt;</summary>
		<author><name>R2</name></author>
	</entry>
</feed>