Root on ZFS with native encryption

From Alpine Linux
= Objectives =
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.


Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.


To do an unencrypted setup, simply omit the {{ic|-O encryption -O keylocation -O keyformat}} options when creating the root pool.


= Notes =
== Swap on ZFS can cause deadlock ==
You shouldn't use a ZVOL as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.


Resume from swap is not possible, because the swap partition's key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool with a service after the system has booted.


== Resume from ZFS will corrupt the pool ==
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS '''WILL''' corrupt the pool. See [https://github.com/openzfs/zfs/issues/260]
== Encrypted boot pool ==
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.


To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.


Since there is no sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery.


= Pre-installation =
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.


'''Existing data on target disk(s) will be destroyed.'''


Download the '''extended''' release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.


Write it to a USB and boot from it.
 
== Setup live environment ==
Run the following command to set up the live environment; select {{ic|1=disk=none}} at the last step, when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].
setup-alpine
The settings given here will be copied to the target system later by
setup-disk


== Install system utilities ==
apk update
apk add eudev sgdisk grub-efi zfs
modprobe zfs
We install eudev here to get persistent block device names. '''Do not use''' names like /dev/sda for ZFS pools; they are not stable across reboots.
rc-update add udev-trigger sysinit
/etc/init.d/udev-trigger start


= Variables =
In this step, we set some variables to make the installation process easier.
DISK=/dev/disk/by-id/ata-HXY_120G_YS
Use the unique disk path instead of {{ic|/dev/sda}} to ensure ZFS finds the correct partitions.


Other variables
TARGET_USERNAME='your username'
ENCRYPTION_PWD='your root pool encryption password'
TARGET_USERPWD='user account password'
Create a mountpoint
MOUNTPOINT=`mktemp -d`
Create a unique suffix for the ZFS pools; this prevents name conflicts when importing the pools on another Root on ZFS system.
poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null |tr -dc 'a-z0-9' | cut -c-6)
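The suffix can be sanity-checked in the live shell. This sketch (illustrative only) regenerates one the same way and verifies it is six lowercase alphanumerics:

```shell
# Regenerate a suffix as above and check its shape; 100 random bytes
# virtually always yield at least 6 characters after the tr filter
poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
echo "$poolUUID" | grep -Eq '^[a-z0-9]{6}$' && echo ok
```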


= Partitioning =
For a single-disk UEFI installation, we need to create at least 3 partitions:
* EFI system partition
* Boot pool partition
* Root pool partition
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can use the full feature set of ZFS on it.


Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:
sgdisk --zap-all $DISK
sgdisk -n1:0:+512M -t1:EF00 $DISK
sgdisk -n2:0:+2G $DISK        # boot pool
sgdisk -n3:0:0 $DISK          # root pool
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.


== Optional: Swap partition ==
[[Swap]] support on ZFS is problematic, so it is recommended to create a separate swap partition if you need one. This guide covers creating a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)


If you want swap, reserve some space at the end of the disk when creating the root pool:
sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk
sgdisk -n4:0:0 $DISK          # swap partition
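To activate this partition later as plain dm-crypt swap with a fresh random key on every boot, the dmcrypt OpenRC service shipped with Alpine's cryptsetup packages can be used. A sketch of {{ic|/etc/conf.d/dmcrypt}} follows; the target name and device path are illustrative assumptions, and the exact option syntax is documented in the comments shipped in that file:

```text
# /etc/conf.d/dmcrypt (sketch; target name and device path are assumptions)
swap=swap0
source='/dev/disk/by-id/ata-HXY_120G_YS-part4'
```

Enable the service with {{ic|rc-update add dmcrypt boot}} and list {{ic|/dev/mapper/swap0}} as swap in fstab. Because the key is random on each boot, this swap cannot be used for hibernation.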


= Boot and root pool creation =
As mentioned above, ZFS features need to be selectively enabled for GRUB: when no {{ic|feature@}} options are supplied, all available features are enabled.


Here we explicitly enable only the features GRUB supports.
zpool create \
    -o ashift=12 -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@filesystem_limits=enabled \
    -o feature@hole_birth=enabled \
    -o feature@large_blocks=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/boot -R $MOUNTPOINT \
    bpool_$poolUUID $DISK-part2
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.


For the root pool, all available features are enabled by default:
echo $ENCRYPTION_PWD | zpool create \
    -o ashift=12 \
    -O encryption=aes-256-gcm \
    -O keylocation=prompt -O keyformat=passphrase \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O dnodesize=auto -O normalization=formD -O relatime=on \
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \
     rpool_$poolUUID $DISK-part3


== Notes for multi-disk ==
For mirror:
zpool create \
    ... \
    bpool_$poolUUID mirror \
    /dev/disk/by-id/target_disk1-part2 \
    /dev/disk/by-id/target_disk2-part2
zpool create \
    ... \
    rpool_$poolUUID mirror \
    /dev/disk/by-id/target_disk1-part3 \
    /dev/disk/by-id/target_disk2-part3
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.


= Dataset creation =
{{Text art|<nowiki>
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default
zfs mount rpool_$poolUUID/ROOT/default
zfs mount bpool_$poolUUID/BOOT/default
d='usr var var/lib'
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done
d='srv usr/local'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done
d='log spool tmp'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME
</nowiki>}}


= Format and mount EFI partition =
mkfs.vfat -n EFI $DISK-part1
mkdir $MOUNTPOINT/boot/efi
mount $DISK-part1 $MOUNTPOINT/boot/efi


= Install Alpine Linux to target disk =
== Preparations ==
GRUB will not find the correct path of the root device unless ZPOOL_VDEV_NAME_PATH=YES is set.
export ZPOOL_VDEV_NAME_PATH=YES
setup-disk refuses to run on ZFS by default; we need to add ZFS to its supported-filesystem list.
sed -i 's|supported="ext|supported="zfs ext|g' /sbin/setup-disk
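The substitution simply prepends zfs to setup-disk's list of supported filesystems. A quick illustration on a sample line (the real contents of the variable in /sbin/setup-disk may differ):

```shell
# Show the effect of the sed expression on a sample line (illustrative only)
echo 'supported="ext4 ext3 btrfs"' | sed 's|supported="ext|supported="zfs ext|g'
# → supported="zfs ext4 ext3 btrfs"
```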


== Run setup-disk ==
BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT
Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later, inside the chroot.


= Chroot into new system =
mount --rbind /dev  $MOUNTPOINT/dev
mount --rbind /proc $MOUNTPOINT/proc
mount --rbind /sys  $MOUNTPOINT/sys
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh
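chroot starts a plain shell, so the variables are handed over explicitly through /usr/bin/env. The mechanism in isolation (the variable name here is purely illustrative):

```shell
# env VAR=value CMD runs CMD with VAR added to its environment
/usr/bin/env DEMO_USER=alice /bin/sh -c 'echo "hello $DEMO_USER"'
# → hello alice
```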


= Finish GRUB installation =
As the GRUB installation failed half-way through [[#Run setup-disk]], we finish it here.


Apply GRUB ZFS fix:
export ZPOOL_VDEV_NAME_PATH=YES
Generate grub.cfg:
grub-mkconfig -o /boot/grub/grub.cfg
The correct root device, rpool_$poolUUID/ROOT/default, is missing from grub.cfg; fix it with a sed command:
sed -i "s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g" /boot/grub/grub.cfg
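The expression replaces everything from root=PARTUUID to the end of the line. A sketch of its effect on a fabricated grub.cfg line (real grub-mkconfig output differs in detail):

```shell
# Fabricated example line; note that .* consumes the rest of the line,
# so any arguments after root= (e.g. ro, quiet) are dropped as well
poolUUID=ab12cd
echo 'linux /BOOT/default@/vmlinuz-lts root=PARTUUID=1234-abcd ro quiet' | \
  sed "s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g"
# → linux /BOOT/default@/vmlinuz-lts root=ZFS=rpool_ab12cd/ROOT/default
```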


= Install packages =
These packages are needed to create a regular user account; the root account will be accessed via sudo. A package providing persistent block device names (eudev) must also be installed.
apk add shadow sudo eudev


= Enable ZFS services =
rc-update add zfs-import sysinit
rc-update add zfs-mount sysinit
rc-update add zfs-zed sysinit
rc-update add udev-trigger sysinit


= Enable sudo access for wheel group =
mv /etc/sudoers /etc/sudoers.original
tee /etc/sudoers << EOF
root ALL=(ALL) ALL
%wheel ALL=(ALL) ALL
EOF


= Add normal user account =
useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME
chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME
echo "$TARGET_USERNAME:$TARGET_USERPWD" | chpasswd


= Finish installation =
Take a snapshot of the clean installation for future use, then export all pools.
exit
zfs snapshot -r rpool_$poolUUID/ROOT/default@install
zfs snapshot -r bpool_$poolUUID/BOOT/default@install
Pools must be exported before reboot, or they will fail to be imported on boot.
mount | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}' | \
  xargs -i{} umount -lf {}
zpool export bpool_$poolUUID
zpool export rpool_$poolUUID
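The umount pipeline above unmounts the deepest paths first by reversing mount's output with tac. A sketch of the ordering logic on fabricated mount output (paths are illustrative):

```shell
# Fabricated `mount` output: nested mounts appear after their parents,
# so reversing with tac yields an unmount-safe order (deepest first)
printf '%s\n' \
  'rpool_xyz/ROOT/default on /tmp/target type zfs (rw)' \
  'devtmpfs on /tmp/target/dev type devtmpfs (rw)' \
  'proc on /tmp/target/proc type proc (rw)' | \
  grep -v zfs | tac | grep /tmp/target | awk '{print $3}'
# → /tmp/target/proc
# → /tmp/target/dev
```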


= Reboot =
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting the system, the root dataset will simply fail to mount and you will be dropped into the emergency shell.


We need to manually load the key and mount the root dataset:
zfs load-key -a
# enter password
mount -t zfs rpool_$poolUUID/ROOT/default /sysroot
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].

Revision as of 12:35, 30 December 2020
