Root on ZFS with native encryption

From Alpine Linux
= Useful links =
*[https://openzfs.github.io/openzfs-docs/Getting%20Started/ OpenZFS Getting Started]
*[https://g.nu8.org/posts/bieaz/setup/alpine/guide/ Encrypted ZFS with boot environment support]


= Objectives =
This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.


Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.


To do an unencrypted setup, simply omit the {{ic|-O encryption -O keylocation -O keyformat}} options when creating the root pool.


= Notes =
== Swap on ZFS will cause dead lock ==
You shouldn't use a zvol as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide will set up swap on a separate partition with plain dm-crypt.


Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, therefore the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via a boot-time service.
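A minimal sketch of such a service as an OpenRC init script (the script name, device path, mapper name, key file and pool name are all hypothetical placeholders):

```shell
#!/sbin/openrc-run
# Hypothetical /etc/init.d/bpool-import sketch: unlock and import a
# LUKS-encrypted boot pool after the root filesystem is available.
# Device path, mapper name, key file and pool name are placeholders.

description="Unlock and import the encrypted boot pool"

depend() {
    need localmount
}

start() {
    ebegin "Importing boot pool"
    cryptsetup open /dev/disk/by-id/CHANGEME-part2 bpool_crypt \
        --key-file /root/bpool.key
    zpool import -d /dev/mapper bpool_CHANGEME
    mount /boot   # via the fstab entry
    eend $?
}
```

Remember to register the service with {{ic|rc-update add bpool-import default}} so it runs at boot.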


== Resume from ZFS will corrupt the pool ==
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS '''WILL''' corrupt the pool. See [https://github.com/openzfs/zfs/issues/260].
== Encrypted boot pool ==
GRUB supports booting from LUKS-1 encrypted containers. Therefore, it is possible to encrypt both boot pool and root pool to achieve full disk encryption.


To do this, format the boot pool partition as a LUKS-1 container and supply the encryption password to GRUB at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.
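A rough sketch of the LUKS-1 container setup, before the boot pool is created (this follows the guide's {{ic|$DISK}} variable; the mapper name is an assumption):

```shell
# Hypothetical sketch: wrap the boot pool partition in LUKS-1
# (GRUB can only unlock LUKS version 1).
cryptsetup luksFormat --type luks1 $DISK-part2
cryptsetup open $DISK-part2 bpool_crypt
# Create the boot pool on the mapped device instead of $DISK-part2:
#   zpool create ... bpool_$poolUUID /dev/mapper/bpool_crypt
# Tell GRUB to unlock the container at boot:
echo 'GRUB_ENABLE_CRYPTODISK=y' >> /etc/default/grub
```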


Since there isn't any sensitive information in {{ic|/boot}} anyway (unless you want to use a persistent LUKS encrypted swap partition for resume from hibernation), encrypting boot pool provides no meaningful benefit and complicates the installation and recovery process.


== DO NOT set bootfs property! ==
Do not set {{ic|bootfs}} on any pool!


It will override the {{ic|1=root=ZFS=rpool/ROOT/dataset}} kernel parameter and render the boot environment menu in GRUB '''INVALID'''.


As GRUB's support of ZFS is read-only, you will need to boot into a live environment to unset this property if the {{ic|bootfs}} dataset is broken.
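From a live environment, clearing the property might look like this (the pool suffix {{ic|abc123}} is a placeholder):

```shell
# Hypothetical recovery sketch: clear a stray bootfs property.
zpool import -N rpool_abc123       # import without mounting
zpool set bootfs= rpool_abc123     # an empty value unsets bootfs
zpool export rpool_abc123
```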


The boot environment menu is currently only available for GRUB. For more information, see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].


= Pre-installation =
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.


'''Existing data on target disk(s) will be destroyed.'''


Download the '''extended''' release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.


Write it to a USB and boot from it.


== Setup live environment ==
Run the following command to set up the live environment; select the default option {{ic|1=disk=none}} at the last step, when asked for the disk mode. See [[Installation#Questions_asked_by_setup-alpine]].
setup-alpine
The settings given here will be copied to the target system later by {{ic|setup-disk}}.


== Install system utilities ==
apk update
apk add eudev sgdisk grub-efi zfs
modprobe zfs
Here we must install eudev to have persistent block device names. '''Do not use''' {{ic|/dev/sda}}-style names for ZFS pools.


Now run the following command to populate persistent device names in live system:
setup-udev


= Variables =
In this step, we will set some variables to make our installation process easier.
DISK=/dev/disk/by-id/ata-HXY_120G_YS
Use unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.


Other variables
TARGET_USERNAME='your username'
ENCRYPTION_PWD='your root pool encryption password, 8 characters min'
TARGET_USERPWD='user account password'
Create a mountpoint
MOUNTPOINT=`mktemp -d`
Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.
poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null |tr -dc 'a-z0-9' | cut -c-6)


= Partitioning =
For a single-disk UEFI installation, we need to create at least 3 partitions:
* EFI system partition
* Boot pool partition
* Root pool partition
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS.


Clear the partition table on the target disk and create EFI, boot and root pool partitions:
sgdisk --zap-all $DISK
sgdisk -n1:0:+512M -t1:EF00 $DISK
sgdisk -n2:0:+2G $DISK        # boot pool
sgdisk -n3:0:0 $DISK          # root pool
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.
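For example, a loop like the following would apply the same layout to two disks (the by-id paths are placeholders):

```shell
# Hypothetical sketch: partition every target disk identically.
for DISK in /dev/disk/by-id/ata-target_disk1 /dev/disk/by-id/ata-target_disk2; do
    sgdisk --zap-all $DISK
    sgdisk -n1:0:+512M -t1:EF00 $DISK   # EFI system partition
    sgdisk -n2:0:+2G $DISK              # boot pool
    sgdisk -n3:0:0 $DISK                # root pool
done
```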


== Optional: Swap partition ==
[[Swap]] on ZFS is also problematic, therefore it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition (it cannot be used for hibernation, since the encryption key is discarded at power-off).


If you want to use swap, reserve some space at the end of disk when creating root pool:
sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk
sgdisk -n4:0:0 $DISK          # swap partition


= Boot and root pool creation =
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} is supplied.


Here we explicitly enable only the features GRUB supports.
zpool create \
    -o ashift=12 -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@filesystem_limits=enabled \
    -o feature@hole_birth=enabled \
    -o feature@large_blocks=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/boot -R $MOUNTPOINT \
    bpool_$poolUUID $DISK-part2
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.


For the root pool, all available features are enabled by default.
echo $ENCRYPTION_PWD | zpool create \
    -o ashift=12 \
    -O encryption=aes-256-gcm \
    -O keylocation=prompt -O keyformat=passphrase \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O dnodesize=auto -O normalization=formD -O relatime=on \
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \
    rpool_$poolUUID $DISK-part3


== Notes for multi-disk ==
For mirror:
zpool create \
    ... \
    bpool_$poolUUID mirror \
    /dev/disk/by-id/target_disk1-part2 \
    /dev/disk/by-id/target_disk2-part2
zpool create \
    ... \
    rpool_$poolUUID mirror \
    /dev/disk/by-id/target_disk1-part3 \
    /dev/disk/by-id/target_disk2-part3
For RAID-Z, replace mirror with raidz, raidz2 or raidz3.
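For instance, a three-disk RAID-Z1 root pool could be created like this (the disk paths are placeholders; {{ic|...}} stands for the same options as above):

```shell
# Hypothetical RAID-Z1 example; reuse the full option list from the
# rpool creation command above in place of '...'.
zpool create \
    ... \
    rpool_$poolUUID raidz \
    /dev/disk/by-id/target_disk1-part3 \
    /dev/disk/by-id/target_disk2-part3 \
    /dev/disk/by-id/target_disk3-part3
```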


= Dataset creation =
This layout is intended to separate root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout] for a description.
{{Text art|<nowiki>
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default
zfs mount rpool_$poolUUID/ROOT/default
zfs mount bpool_$poolUUID/BOOT/default
d='usr var var/lib'
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done
d='srv usr/local'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done
d='log spool tmp'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME
</nowiki>}}
Depending on your application, separate datasets need to be created for folders inside {{ic|/var/lib}} (not {{ic|/var/lib}} itself!).


Here we create several folders for persistent (shared) data, like we just did for {{ic|/home}}.
d='libvirt lxc docker'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done
{{ic|lxc}} is for Linux containers, {{ic|libvirt}} is for storing virtual machine images, etc.


= Format and mount EFI partition =
mkfs.vfat -n EFI $DISK-part1
mkdir $MOUNTPOINT/boot/efi
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi
For a multi-disk setup, a cron job needs to be configured to keep the ESP contents in sync. It should be similar to [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Multi-ESP this article].
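A minimal sketch of such a job, assuming a backup ESP is mounted at {{ic|/boot/efi2}} and {{ic|rsync}} is installed (both are assumptions):

```shell
# Hypothetical daily cron job: mirror the primary ESP to a backup ESP.
# Assumes the second ESP is mounted at /boot/efi2 and rsync is installed.
cat > /etc/periodic/daily/sync-esp <<'EOF'
#!/bin/sh
rsync -a --delete /boot/efi/ /boot/efi2/
EOF
chmod +x /etc/periodic/daily/sync-esp
```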


= Install Alpine Linux to target disk =
== Preparations ==
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}:
export ZPOOL_VDEV_NAME_PATH=YES
{{ic|setup-disk}} refuses to run on ZFS by default; we need to add ZFS to the supported filesystem array:
sed -i 's|supported="ext|supported="zfs ext|g' /sbin/setup-disk


== Run setup-disk ==
BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.


= Chroot into new system =
m='dev proc sys'
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh


= Finish GRUB installation =
As the GRUB installation failed halfway through [[#Run setup-disk]], we will finish it here.


Apply GRUB ZFS fix:
echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile
Reload
source /etc/profile
Apply fixes in [[#GRUB fixes]].
== GRUB fixes ==
1. GRUB will fail to detect the ZFS filesystem of {{ic|/boot}} with {{ic|stat}} from BusyBox.
See [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig], the problem is:
GRUB_DEVICE="`${grub_probe} --target=device /`"
# will fail with `grub-probe: error: unknown filesystem.`
GRUB_FS="`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2> /dev/null || echo unknown`"
# will also fail. The final fall back is
if [ x"$GRUB_FS" = xunknown ]; then
    GRUB_FS="$(stat -f -c %T / || echo unknown)"
fi
# `stat` from coreutils will return `zfs`, the correct answer
# `stat` from BusyBox  will return `UNKNOWN`, cause `10_linux` script to fail
Therefore we need to install {{ic|coreutils}}.
apk add coreutils
=== Missing root pool ===
2. GRUB will produce an empty result if it does not support the root pool.
[https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow customized detection method.


Before the patch is merged, I recommend replacing the rpool detection in {{ic|/etc/grub.d/10_linux}}:
sed -i "s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E '[[:blank:]]name' \| cut -d\\\' -f 2\`|"  /etc/grub.d/10_linux


== Generate grub.cfg ==
After applying fixes, finally run
grub-mkconfig -o /boot/grub/grub.cfg


= Importing pools on boot =
{{ic|zpool.cache}} will be added to the initramfs, and the {{ic|zpool}} command will import the pools contained in this cache.


System will fail to boot without this.
zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID


= Initramfs fixes =
== Fix zfs decrypt ==
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76/diffs this merge request].
== Enable persistent device names ==
Apply fixes in [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77/diffs this merge request].


With the changes from the merge requests applied, add {{ic|eudev}} to {{ic|/etc/mkinitfs/mkinitfs.conf}}:
sed -i 's|zfs|zfs eudev|' /etc/mkinitfs/mkinitfs.conf
Rebuild initramfs with
mkinitfs $(ls -1 /lib/modules/)


= Mount datasets at boot =
rc-update add zfs-mount sysinit
rc-update add zfs-zed sysinit # zfs monitoring
Mounting the {{ic|/boot}} dataset via fstab requires {{ic|1=mountpoint=legacy}}:
umount /boot/efi
zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default
mount /boot
mount /boot/efi


= Add normal user account =
adduser -s /bin/sh -G wheel -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME
addgroup $TARGET_USERNAME video
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME
echo "$TARGET_USERNAME:$TARGET_USERPWD" | chpasswd
The root account is accessed via the {{ic|su}} command with the root password.


Optionally, install {{ic|sudo}} to disable the root password and use the user's own password instead.
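A common sketch of that setup (the drop-in file name is arbitrary):

```shell
# Sketch: allow members of wheel to use sudo with their own password,
# then optionally lock the root password.
apk add sudo
echo '%wheel ALL=(ALL) ALL' > /etc/sudoers.d/wheel
passwd -l root   # optional: lock direct root password login
```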


= Boot environment manager =
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.
 
It has been submitted to aports; see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. It should be available in edge/testing soon.
 
= Optional: Desktop Environment =
See [[#Wayland-based_lightweight_desktop]].
 
= Optional: Enable encrypted swap partition =
Install {{ic|cryptsetup}}
apk add cryptsetup
Edit the <code>/etc/mkinitfs/mkinitfs.conf</code> file and append the <code>cryptsetup</code> module to the <code>features</code> parameter:
features="ata base ide scsi usb virtio ext4 lvm <u>cryptsetup</u> zfs eudev"
Add relevant lines in {{ic|fstab}} and {{ic|crypttab}}. Replace {{ic|$DISK}} with actual disk.
echo swap $DISK-part4 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,size=256 >> /etc/crypttab
echo /dev/mapper/swap  none swap defaults 0 0 >> /etc/fstab
Rebuild initramfs with {{ic|mkinitfs}}.
 
= Finish installation =
Take a snapshot of the clean installation for future use, then export all pools.
exit
zfs snapshot -r rpool_$poolUUID/ROOT/default@install
zfs snapshot -r bpool_$poolUUID/BOOT/default@install
Pools must be exported before reboot, or they will fail to be imported on boot.
mount | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}' | \
  xargs -i{} umount -lf {}
zpool export bpool_$poolUUID
zpool export rpool_$poolUUID
 
= Reboot =
 
reboot
= Disk space stat =
== Barebone ==
Without optional swap or cryptsetup:
*bpool used 25.2M
*rpool used 491M
*efi used 416K
== Wayland-based lightweight desktop ==
This setup is based on Sway Window Manager and Qt apps.
 
Encrypted swap
apk add cryptsetup
Sway Window Manager and basic utilities
apk add sway swayidle swaylock grim i3status
Terminal
apk add alacritty
Sound
apk add alsa-utils
Utilities
apk add vim mutt isync lynx git p7zip proxychains-ng
Qt-based desktop environment, with dark theme, fdo keyring, file manager and PDF viewer
apk add qt5-qtwayland kvantum keepassxc pcmanfm zathura-pdf-poppler
Play videos with hardware accelerated decoding
apk add mpv youtube-dl libva-intel-driver
Firefox
apk add firefox-esr
Add MTP (connect to Android phones) and samba support to file manager
apk add gvfs-smb gvfs-mtp
Add dark GTK theme (Adwaita-dark), HiDPI mouse cursor for Sway, GTK icons
apk add gnome-themes-extra
Stat
*rpool used 1.11G
*bpool used 26.6M
 
= Recovery in Live environment =
Boot Live environment (extended release) and install packages:
setup-alpine      # basic settings: keyboard layout, timezone ...
apk add zfs eudev # zfs-utils and persistent device name support
setup-udev        # populate persistent names
modprobe zfs      # load kernel module
Create a mount point and store encryption password in a variable:
MOUNTPOINT=`mktemp -d`
ENCRYPTION_PWD='YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM'
Find the unique UUID of your pool with
zpool import
Import rpool without mounting datasets: {{ic|-N}} for not mounting all datasets; {{ic|-R}} for alternate root.
poolUUID=abc123
zpool import -N -R $MOUNTPOINT rpool_$poolUUID
Load encryption key
echo $ENCRYPTION_PWD | zfs load-key -a
As {{ic|1=canmount=noauto}} is set for {{ic|/}} dataset, we have to mount it manually. To find the dataset, use
zfs list rpool_$poolUUID/ROOT
Mount {{ic|/}} dataset
zfs mount rpool_$poolUUID/ROOT/''$dataset''
Mount other datasets
zfs mount -a
Import bpool
zpool import -N -R $MOUNTPOINT bpool_$poolUUID
Find and mount the {{ic|/boot}} dataset, same as above.
zfs list bpool_$poolUUID/BOOT
mount -t zfs bpool_$poolUUID/BOOT/''$dataset'' $MOUNTPOINT/boot # legacy mountpoint
Chroot
mount --rbind /dev  $MOUNTPOINT/dev
mount --rbind /proc $MOUNTPOINT/proc
mount --rbind /sys  $MOUNTPOINT/sys
chroot $MOUNTPOINT /bin/sh
 
After chroot, mount {{ic|/boot/efi}}:
mount /boot/efi
After fixing the system, don't forget to umount and export the pools:
mount | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}' | \
  xargs -i{} umount -lf {}
zpool export bpool_$poolUUID
zpool export rpool_$poolUUID

Revision as of 04:27, 5 July 2021
