Root on ZFS with native encryption

Objectives

This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.

Except for the EFI system partition and the boot pool (/boot), everything is encrypted: the root pool with ZFS native encryption and the swap partition with dm-crypt.

For an unencrypted setup, simply omit the -O encryption, -O keylocation and -O keyformat options when creating the root pool, as sketched below.
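For reference, a minimal unencrypted variant of the root pool command used later in this guide (same options, minus the encryption properties) would look like:

zpool create \
   -o ashift=12 \
   -O acltype=posixacl -O canmount=off -O compression=lz4 \
   -O dnodesize=auto -O normalization=formD -O relatime=on \
   -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \
   rpool_$poolUUID $DISK-part3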

Notes

Swap on ZFS can cause deadlocks

You shouldn't use a zvol as a swap device, as it can deadlock under memory pressure. See [1]. This guide sets up swap on a separate partition with plain dm-crypt.

Resume from swap is not possible, because the swap partition's key cannot be stored in the unencrypted boot pool. The Busybox initramfs only supports unlocking exactly one LUKS container at boot, so the boot pool and the swap partition cannot both be LUKS-encrypted. A possible workaround is to import and mount the boot pool after the system has booted, for example via an init service as sketched below.
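A minimal sketch of that workaround, assuming OpenRC's local service and a placeholder pool suffix abc123:

cat > /etc/local.d/bpool.start <<'EOF'
#!/bin/sh
# Import and mount the boot pool once the system is up.
zpool import bpool_abc123
mount /boot    # /boot uses a legacy mountpoint via fstab in this guide
EOF
chmod +x /etc/local.d/bpool.start
rc-update add local default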

Resume from ZFS will corrupt the pool

ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from swap on ZFS WILL corrupt the pool. See [2].

Encrypted boot pool

GRUB supports booting from LUKS1-encrypted containers. Therefore, it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.

To do this, format the boot pool partition as a LUKS1 container and unlock it with a password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.
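A rough sketch of that approach (not used in this guide; bpool-crypt is a placeholder mapping name):

cryptsetup luksFormat --type luks1 $DISK-part2
cryptsetup open $DISK-part2 bpool-crypt
# Create the boot pool on /dev/mapper/bpool-crypt instead of $DISK-part2,
# then have GRUB unlock the container at boot:
echo 'GRUB_ENABLE_CRYPTODISK=y' >> /etc/default/grub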

Since there isn't any sensitive information in /boot anyway (unless you want a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates installation and recovery. This guide leaves it unencrypted.

Pre-installation

UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.

Existing data on target disk(s) will be destroyed.

Download the extended release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.

Write it to a USB and boot from it.

Setup live environment

Run the following command to set up the live environment. Select disk=none when asked for the disk mode at the last step. See Installation#Questions_asked_by_setup-alpine.

setup-alpine

The settings given here will be copied to the target system later by setup-disk.

Install system utilities

apk update
apk add eudev sgdisk grub-efi zfs
modprobe zfs

Here we must install eudev to have persistent block device names. Do not use names like /dev/sda for ZFS pools, as they may change between boots.

Now run the following command to populate persistent device names in the live system:

setup-udev
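You can then list the persistent names to find your target disk:

ls -l /dev/disk/by-id/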

Variables

In this step, we will set some variables to make our installation process easier.

DISK=/dev/disk/by-id/ata-HXY_120G_YS

Use the unique disk path instead of /dev/sda to ensure ZFS always finds the correct partitions.

Other variables

TARGET_USERNAME='your username'
ENCRYPTION_PWD='your root pool encryption password, 8 characters min'
TARGET_USERPWD='user account password'

Create a mountpoint

MOUNTPOINT=`mktemp -d`

Create a unique suffix for the ZFS pools: this prevents name conflicts when importing the pools on another Root on ZFS system.

poolUUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)

Partitioning

For a single-disk UEFI installation, we need to create at least 3 partitions:

  • EFI system partition
  • Boot pool partition
  • Root pool partition

Since GRUB only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can utilize the full potential of ZFS there.

Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:

sgdisk --zap-all $DISK
sgdisk -n1:0:+512M -t1:EF00 $DISK
sgdisk -n2:0:+2G $DISK        # boot pool
sgdisk -n3:0:0 $DISK          # root pool

If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above, for example with the loop sketched below.
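For example, a sketch for two disks (the disk paths are placeholders):

for d in /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2; do
    sgdisk --zap-all $d
    sgdisk -n1:0:+512M -t1:EF00 $d
    sgdisk -n2:0:+2G $d        # boot pool
    sgdisk -n3:0:0 $d          # root pool
done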

Optional: Swap partition

Swap support on ZFS is problematic, so it is recommended to create a separate swap partition if you need one. This guide covers creating such a partition. It cannot be used for hibernation, since its encryption key is discarded at power-off.

If you want to use swap, reserve some space at the end of the disk when creating the root pool partition (replacing the -n3 command above):

sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk
sgdisk -n4:0:0 $DISK          # swap partition

Boot and root pool creation

As mentioned above, ZFS features need to be selectively enabled for GRUB. When no feature@ option is supplied, all available features are enabled.

Here we explicitly enable the features GRUB supports.

zpool create \
   -o ashift=12 -d \
   -o feature@async_destroy=enabled \
   -o feature@bookmarks=enabled \
   -o feature@embedded_data=enabled \
   -o feature@empty_bpobj=enabled \
   -o feature@enabled_txg=enabled \
   -o feature@extensible_dataset=enabled \
   -o feature@filesystem_limits=enabled \
   -o feature@hole_birth=enabled \
   -o feature@large_blocks=enabled \
   -o feature@lz4_compress=enabled \
   -o feature@spacemap_histogram=enabled \
   -O acltype=posixacl -O canmount=off -O compression=lz4 \
   -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
   -O mountpoint=/boot -R $MOUNTPOINT \
   bpool_$poolUUID $DISK-part2

Nothing is stored directly under bpool and rpool, hence canmount=off. The respective mountpoint properties are more symbolic than practical.

For the root pool, all available features are enabled by default (no -d flag):

echo $ENCRYPTION_PWD | zpool create \
   -o ashift=12 \
   -O encryption=aes-256-gcm \
   -O keylocation=prompt -O keyformat=passphrase \
   -O acltype=posixacl -O canmount=off -O compression=lz4 \
   -O dnodesize=auto -O normalization=formD -O relatime=on \
   -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \
   rpool_$poolUUID $DISK-part3
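Optionally, verify that native encryption is active on the root pool:

zfs get encryption,keyformat,keylocation rpool_$poolUUID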

Notes for multi-disk

For mirror:

zpool create \
   ... \
   bpool_$poolUUID mirror \
   /dev/disk/by-id/target_disk1-part2 \
   /dev/disk/by-id/target_disk2-part2
zpool create \
   ... \
   rpool_$poolUUID mirror \
   /dev/disk/by-id/target_disk1-part3 \
   /dev/disk/by-id/target_disk2-part3

For RAID-Z, replace mirror with raidz, raidz2 or raidz3.
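For example, a three-disk RAID-Z1 root pool (options elided as above; the boot pool is analogous):

zpool create \
   ... \
   rpool_$poolUUID raidz \
   /dev/disk/by-id/target_disk1-part3 \
   /dev/disk/by-id/target_disk2-part3 \
   /dev/disk/by-id/target_disk3-part3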

Dataset creation

This layout is intended to separate the root file system from persistent files. See [3] for a description.

zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default
zfs mount rpool_$poolUUID/ROOT/default
zfs mount bpool_$poolUUID/BOOT/default
d='usr var var/lib'
for i in $d; do zfs create -o canmount=off rpool_$poolUUID/ROOT/default/$i; done
d='srv usr/local'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done
d='log spool tmp'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME
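Optionally, review the resulting layout before proceeding:

zfs list -o name,canmount,mountpoint -r rpool_$poolUUID bpool_$poolUUID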

Format and mount EFI partition

mkfs.vfat -n EFI $DISK-part1
mkdir $MOUNTPOINT/boot/efi
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi

Install Alpine Linux to target disk

Preparations

GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=YES.

export ZPOOL_VDEV_NAME_PATH=YES

setup-disk refuses to run on ZFS by default, so we need to add ZFS to its list of supported filesystems.

sed -i 's|supported="ext|supported="zfs ext|g' /sbin/setup-disk
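To confirm the substitution took effect, check that zfs now appears in the list:

grep 'supported="zfs' /sbin/setup-disk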

Run setup-disk

BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT

Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH=YES variable set above. We will deal with this later inside the chroot.

Chroot into new system

m='dev proc sys'
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh

Finish GRUB installation

As the GRUB installation failed halfway through in #Run setup-disk, we will finish it here.

Apply GRUB ZFS fix:

echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile

Reload the profile:

source /etc/profile

Then apply the GRUB fixes described below.

GRUB fixes

1. GRUB will fail to detect the ZFS filesystem of /boot when stat comes from Busybox.

See the source file of grub-mkconfig; the problem is:

GRUB_DEVICE="`${grub_probe} --target=device /`"
# will fail with `grub-probe: error: unknown filesystem.`
GRUB_FS="`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2> /dev/null || echo unknown`"
# will also fail. The final fall back is
if [ x"$GRUB_FS" = xunknown ]; then
    GRUB_FS="$(stat -f -c %T / || echo unknown)"
fi
# `stat` from coreutils will return `zfs`, the correct answer
# `stat` from busybox   will return `UNKNOWN`, cause `10_linux` script to fail

Therefore we need to install coreutils.

apk add coreutils

Missing root pool

2. GRUB will produce an empty root pool name if it does not support the root pool.

For more detail, see [4].

As the pool name is stored as the file system label, it is possible to probe the label and use it as the root pool name.

First, install util-linux:

apk add util-linux

blkid from Busybox does not support the ZFS filesystem and will return an empty result; the version from util-linux works.

Edit /etc/grub.d/10_linux and modify the xzfs case so that the pool name is probed from the file system label:

       fi;;
   xzfs)
       # ZFS pool name is stored as file system label
       # blkid from util-linux
       rpool=`blkid -s LABEL -o value ${GRUB_DEVICE}`
       bootfs="`make_system_path_relative_to_its_root / | sed -e "s,@$,,"`"

Or with lsblk, also from util-linux:

       rpool=`lsblk -no LABEL ${GRUB_DEVICE}`


Generate grub.cfg

After applying the fixes, finally run:

grub-mkconfig -o /boot/grub/grub.cfg

Importing pools on boot

zpool.cache will be added to the initramfs, and the zpool command will import the pools contained in this cache.

zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID
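Optionally, confirm the property is set on both pools:

zpool get cachefile rpool_$poolUUID bpool_$poolUUID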

Initramfs fixes

Fix zfs decrypt

See [5].

Enable persistent device names

Special modifications need to be made to populate /dev/disk/by-* in initramfs.

See this merge request.

With the changes from the merge request applied, add eudev to /etc/mkinitfs/mkinitfs.conf:

sed -i 's|zfs|zfs eudev|' /etc/mkinitfs/mkinitfs.conf

Rebuild the initramfs with:

mkinitfs $(ls -1 /lib/modules/)

Mount datasets at boot

rc-update add zfs-mount sysinit
rc-update add zfs-zed sysinit # zfs monitoring

Mounting the /boot dataset via fstab requires mountpoint=legacy:

umount /boot/efi
zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default
mount /boot
mount /boot/efi

Add normal user account

adduser -s /bin/sh -G wheel -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME
addgroup $TARGET_USERNAME video    # Busybox adduser accepts only one -G
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME
echo "$TARGET_USERNAME:$TARGET_USERPWD" | chpasswd

Optional: Enable encrypted swap partition

Install cryptsetup

apk add cryptsetup

Edit the /etc/mkinitfs/mkinitfs.conf file (we are still inside the chroot) and append the cryptsetup module to the features parameter:

features="ata base ide scsi usb virtio ext4 lvm cryptsetup zfs eudev"

Add the relevant lines to fstab and crypttab. Replace $DISK with the actual disk path; the variable is not set inside the chroot.

echo swap	$DISK-part4	/dev/urandom	swap,cipher=aes-cbc-essiv:sha256,size=256 >> /etc/crypttab
echo /dev/mapper/swap   none	 swap	 defaults	 0	 0 >> /etc/fstab

Rebuild the initramfs as before:
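mkinitfs $(ls -1 /lib/modules/)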

Finish installation

Take snapshots of the clean installation for future use, then export all pools.

exit    # leave the chroot
zfs snapshot -r rpool_$poolUUID/ROOT/default@install
zfs snapshot -r bpool_$poolUUID/BOOT/default@install

Pools must be exported before reboot, or they will fail to be imported on boot.

mount | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}' | \
  xargs -i{} umount -lf {}
zpool export bpool_$poolUUID
zpool export rpool_$poolUUID

Reboot

reboot

Disk space usage

Without optional swap or cryptsetup:

  • bpool: 25.2M used
  • rpool: 491M used
  • EFI partition: 416K used

Recovery in Live environment

Boot the extended release and install the required packages:

setup-alpine
apk add zfs eudev
setup-udev
modprobe zfs

Create a mount point and store encryption password in a variable:

MOUNTPOINT=`mktemp -d`
ENCRYPTION_PWD='YOUR DISK ENCRYPTION PASSWORD, 8 CHARACTERS MINIMUM'

Find the unique UUID suffix of your pool with:

zpool import

Import rpool without mounting any datasets: -N skips mounting; -R sets an alternate root.

poolUUID=abc123
zpool import -N -R $MOUNTPOINT rpool_$poolUUID

Load encryption key

echo $ENCRYPTION_PWD | zfs load-key -a

As canmount=noauto is set for the / dataset, we have to mount it manually. To find the dataset, use:

zfs list rpool_$poolUUID/ROOT

Mount / dataset

zfs mount rpool_$poolUUID/ROOT/$dataset

Mount other datasets

zfs mount -a

Import bpool

zpool import -N -R $MOUNTPOINT bpool_$poolUUID

Find and mount the /boot dataset, same as above.

zfs list bpool_$poolUUID/BOOT
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint

Chroot

mount --rbind /dev  $MOUNTPOINT/dev
mount --rbind /proc $MOUNTPOINT/proc
mount --rbind /sys  $MOUNTPOINT/sys
chroot $MOUNTPOINT /bin/sh

After chroot, mount /boot/efi:

mount /boot/efi

After fixing the system, don't forget to unmount everything and export the pools:

mount | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}' | \
  xargs -i{} umount -lf {}
zpool export bpool_$poolUUID
zpool export rpool_$poolUUID