Root on ZFS with native encryption
= Objectives =
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z configurations are supported.
Except for the EFI system partition and the boot pool {{ic|/boot}}, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition with dm-crypt.
To do an unencrypted setup, simply omit {{ic|-O encryption -O keylocation -O keyformat}} when creating the root pool.
= Notes =
== Swap on ZFS will cause deadlock ==
You shouldn't use a ZVOL as a swap device, as it can deadlock under memory pressure; see [https://github.com/openzfs/zfs/issues/7734]. This guide sets up swap on a separate partition with plain dm-crypt.
Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The Busybox initramfs supports unlocking exactly one LUKS container at boot, therefore the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an init service (OpenRC on Alpine).
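Since Alpine uses OpenRC as its service manager, that workaround could be sketched as an init script. Everything below (the service name, pool name and dataset name) is a hypothetical illustration, not part of the original guide:

```shell
#!/sbin/openrc-run
# Hypothetical /etc/init.d/import-bpool sketch: import and mount the
# boot pool after the root filesystem is up, instead of unlocking it
# in the initramfs. Adapt the pool/dataset names to your setup.

depend() {
	need localmount
}

start() {
	ebegin "Importing boot pool"
	# If the boot pool sits in a LUKS container, unlock it first, e.g.:
	#   cryptsetup open /dev/disk/by-id/...-part2 bpool-crypt
	zpool import -N bpool && zfs mount bpool/BOOT/default
	eend $?
}
```

It would then be enabled with {{ic|rc-update add import-bpool default}}.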
== Resume from ZFS will corrupt the pool ==
ZFS does not support the freeze/thaw operations required for resuming from hibernation (suspend to disk). Attempting to resume from a swap on ZFS '''WILL''' corrupt the pool; see [https://github.com/openzfs/zfs/issues/260].
== Encrypted boot pool ==
GRUB supports booting from LUKS1-encrypted containers. Therefore, it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.
To do this, format the boot pool partition as a LUKS1 container and let GRUB prompt for its passphrase at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.
Since there isn't any sensitive information in {{ic|/boot}} anyway (unless you want a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.
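For reference, formatting the boot pool partition as LUKS1 could look like the sketch below. This is outside the guide's main flow; the mapper name is an arbitrary placeholder:

```shell
# Sketch only: LUKS1 (not LUKS2), so that GRUB can unlock the container.
cryptsetup luksFormat --type luks1 "$DISK-part2"
cryptsetup open "$DISK-part2" bpool-crypt
# The boot pool would then be created on /dev/mapper/bpool-crypt instead
# of $DISK-part2, and GRUB_ENABLE_CRYPTODISK=y set in /etc/default/grub.
```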
= Pre-installation =
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.
'''Existing data on target disk(s) will be destroyed.'''
Download the '''extended''' release from https://www.alpinelinux.org/downloads/; it ships with the ZFS kernel module.
Write it to a USB drive and boot from it.
== Setup live environment ==
Run the following command to set up the live environment. When asked for the disk mode at the last step, select disk=none. See [[Installation#Questions_asked_by_setup-alpine]].
 setup-alpine
The settings given here will be copied to the target system later by
 setup-disk
== Install system utilities ==
 apk update
 apk add eudev sgdisk grub-efi zfs
 modprobe zfs
Here we must install eudev to have persistent block device names. '''Do not use''' {{ic|/dev/sda}} for ZFS pools.
 rc-update add udev-trigger sysinit
 /etc/init.d/udev-trigger start
= Variables =
In this step, we set some variables to make the installation process easier.
 DISK=/dev/disk/by-id/ata-HXY_120G_YS
Use the unique disk path instead of {{ic|/dev/sda}} to ensure the correct partition can be found by ZFS.
Other variables:
 TARGET_USERNAME='your username'
 ENCRYPTION_PWD='your root pool encryption password'
 TARGET_USERPWD='user account password'
Create a mountpoint:
 MOUNTPOINT=$(mktemp -d)
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.
 poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
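An equivalent, slightly shorter way to generate the same kind of suffix (shown for illustration; it is not the command the guide uses):

```shell
# Keep only lowercase letters and digits from a stream of random bytes,
# then take the first 6 characters as the pool-name suffix.
poolUUID=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)
echo "$poolUUID"
```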
= Partitioning =
For a single-disk UEFI installation, we need to create at least 3 partitions:
* EFI system partition
* Boot pool partition
* Root pool partition
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.
Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:
 sgdisk --zap-all $DISK
 sgdisk -n1:0:+512M -t1:EF00 $DISK
 sgdisk -n2:0:+2G $DISK    # boot pool
 sgdisk -n3:0:0 $DISK      # root pool
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.
== Optional: Swap partition ==
[[Swap]] support on ZFS is problematic, therefore it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)
If you want to use swap, reserve some space at the end of the disk when creating the root pool:
 sgdisk -n3:0:-8G $DISK    # root pool, reserve 8GB for swap at the end of the disk
 sgdisk -n4:0:0 $DISK      # swap partition
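The guide encrypts swap with plain dm-crypt. As a hedged sketch (to be run on the installed system; the mapper name and cipher parameters are choices of this example, not prescribed by the guide), activation could look like:

```shell
# Sketch only: plain dm-crypt swap keyed from /dev/urandom, so the key
# never touches disk and is discarded at power-off (no hibernation).
cryptsetup open --type plain --cipher aes-xts-plain64 --key-size 512 \
    --key-file /dev/urandom "$DISK-part4" swap
mkswap /dev/mapper/swap
swapon /dev/mapper/swap
```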
= Boot and root pool creation =
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no {{ic|feature@}} option is supplied.
Here we explicitly enable those GRUB can support:
 zpool create \
     -o ashift=12 -d \
     -o feature@async_destroy=enabled \
     -o feature@bookmarks=enabled \
     -o feature@embedded_data=enabled \
     -o feature@empty_bpobj=enabled \
     -o feature@enabled_txg=enabled \
     -o feature@extensible_dataset=enabled \
     -o feature@filesystem_limits=enabled \
     -o feature@hole_birth=enabled \
     -o feature@large_blocks=enabled \
     -o feature@lz4_compress=enabled \
     -o feature@spacemap_histogram=enabled \
     -O acltype=posixacl -O canmount=off -O compression=lz4 \
     -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
     -O mountpoint=/boot -R $MOUNTPOINT \
     bpool_$poolUUID $DISK-part2
Nothing is stored directly under bpool and rpool, hence {{ic|1=canmount=off}}. The respective {{ic|mountpoint}} properties are more symbolic than practical.
For the root pool, all available features are enabled by default:
 echo "$ENCRYPTION_PWD" | zpool create \
     -o ashift=12 \
     -O encryption=aes-256-gcm \
     -O keylocation=prompt -O keyformat=passphrase \
     -O acltype=posixacl -O canmount=off -O compression=lz4 \
     -O dnodesize=auto -O normalization=formD -O relatime=on \
     -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \
     rpool_$poolUUID $DISK-part3
== Notes for multi-disk ==
For mirror:
 zpool create \
     ... \
     bpool_$poolUUID mirror \
     /dev/disk/by-id/target_disk1-part2 \
     /dev/disk/by-id/target_disk2-part2
 zpool create \
     ... \
     rpool_$poolUUID mirror \
     /dev/disk/by-id/target_disk1-part3 \
     /dev/disk/by-id/target_disk2-part3
For RAID-Z, replace {{ic|mirror}} with {{ic|raidz}}, {{ic|raidz2}} or {{ic|raidz3}}.
= Dataset creation =
{{Text art|<nowiki>
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default
zfs mount rpool_$poolUUID/ROOT/default
zfs mount bpool_$poolUUID/BOOT/default
d='usr var var/lib'
for i in $d; do zfs create -o canmount=off rpool_$poolUUID/ROOT/default/$i; done
d='srv usr/local'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done
d='log spool tmp'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME
</nowiki>}}
= Format and mount EFI partition =
 mkfs.vfat -n EFI $DISK-part1
 mkdir $MOUNTPOINT/boot/efi
 mount $DISK-part1 $MOUNTPOINT/boot/efi
= Install Alpine Linux to target disk =
== Preparations ==
GRUB will not find the correct path of the root device without {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}}:
 export ZPOOL_VDEV_NAME_PATH=YES
setup-disk refuses to run on ZFS by default; we need to add ZFS to its supported filesystem list:
 sed -i 's|supported="ext|supported="zfs ext|g' /sbin/setup-disk
== Run setup-disk ==
 BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT
Note that grub-probe will still fail despite the {{ic|1=ZPOOL_VDEV_NAME_PATH=YES}} variable set above. We will deal with this later inside the chroot.
= Chroot into new system =
 mount --rbind /dev $MOUNTPOINT/dev
 mount --rbind /proc $MOUNTPOINT/proc
 mount --rbind /sys $MOUNTPOINT/sys
 chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD="$TARGET_USERPWD" TARGET_USERNAME="$TARGET_USERNAME" poolUUID=$poolUUID /bin/sh
= Finish GRUB installation =
As the GRUB installation failed half-way in [[#Run setup-disk]], we finish it here.
Apply the GRUB ZFS fix:
 export ZPOOL_VDEV_NAME_PATH=YES
Generate grub.cfg:
 grub-mkconfig -o /boot/grub/grub.cfg
The correct root device, {{ic|rpool_$poolUUID/ROOT/default}}, is missing from grub.cfg; fix it with sed:
 sed -i "s|root=PARTUUID.*|root=ZFS=rpool_$poolUUID/ROOT/default|g" /boot/grub/grub.cfg
= Install packages =
These packages are used for creating a regular user account; the root account is accessed with sudo. The package for persistent block device names must also be installed.
 apk add shadow sudo eudev bash   # bash provides the login shell used below
= Enable ZFS services =
 rc-update add zfs-import sysinit
 rc-update add zfs-mount sysinit
 rc-update add zfs-zed sysinit
 rc-update add udev-trigger sysinit
= Enable sudo access for wheel group =
 mv /etc/sudoers /etc/sudoers.original
 tee /etc/sudoers << EOF
 root ALL=(ALL) ALL
 %wheel ALL=(ALL) ALL
 EOF
= Add normal user account =
 useradd -s /bin/bash -U -G wheel,video -d /home/$TARGET_USERNAME $TARGET_USERNAME
 chown -R $TARGET_USERNAME:$TARGET_USERNAME /home/$TARGET_USERNAME
 echo "$TARGET_USERNAME:$TARGET_USERPWD" | chpasswd
= Finish installation =
Take a snapshot of the clean installation for future use and export all pools.
 exit
 zfs snapshot -r rpool_$poolUUID/ROOT/default@install
 zfs snapshot -r bpool_$poolUUID/BOOT/default@install
Pools must be exported before reboot, or they will fail to import on boot.
 mount | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}' | \
     xargs -i{} umount -lf {}
 zpool export bpool_$poolUUID
 zpool export rpool_$poolUUID
= Reboot =
As of this writing, the initramfs lacks support for entering the ZFS password at boot. When booting the system, the root dataset will simply fail to mount and you will be dropped into an emergency shell.
We need to manually load the key and mount the root dataset with:
 zfs load-key -a
 # enter password
 mount -t zfs rpool_$poolUUID/ROOT/default /sysroot
The ArchZFS project solved this with a shell script, available [https://github.com/archzfs/archzfs/blob/master/src/zfs-utils/zfs-utils.initcpio.hook here].
Revision as of 12:35, 30 December 2020