Root on ZFS with native encryption

= Useful links =
*[https://openzfs.github.io/openzfs-docs/Getting%20Started/ OpenZFS Getting Started]
*[https://g.nu8.org/posts/bieaz/setup/alpine/guide/ Encrypted ZFS with boot environment support]
= Objectives =
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.


Except for the EFI system partition and the boot pool <code>/boot</code>, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.


To do an unencrypted setup, simply omit <code>-O keylocation -O keyformat</code> when creating the root pool.


= Notes =
== Swap on ZFS will cause deadlock ==
You shouldn't use a ZVOL as a swap device, as it can deadlock under memory pressure. See [https://github.com/openzfs/zfs/issues/7734 this issue]. This guide sets up swap on a separate partition with plain dm-crypt.


Resume from swap is not possible, because the key of the swap partition cannot be stored in the unencrypted boot pool. The BusyBox initramfs only supports unlocking exactly one LUKS container at boot, therefore the boot pool and the swap partition cannot both be LUKS encrypted. A possible workaround is to import and mount the boot pool after booting the system via an init service.
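One way to sketch that workaround is an OpenRC <code>local.d</code> start script (the pool name suffix is illustrative, and the <code>local</code> service must be enabled with <code>rc-update add local default</code>):

<pre>#!/bin/sh
# /etc/local.d/bpool.start - import and mount the boot pool after boot
# (illustrative sketch; replace abc123 with your pool suffix)
zpool import -N bpool_abc123 2>/dev/null
mount /boot       # legacy mountpoint, listed in fstab
mount /boot/efi</pre>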


== Resume from ZFS will corrupt the pool ==
ZFS does not support freeze/thaw operations, which are required for resuming from hibernation (suspend to disk). Attempting to resume from a swap on ZFS '''WILL''' corrupt the pool. See [https://github.com/openzfs/zfs/issues/260 this issue].
== Encrypted boot pool ==
GRUB supports booting from LUKS1-encrypted containers. Therefore, it is possible to encrypt both the boot pool and the root pool to achieve full disk encryption.


To do this, format the boot pool partition as a LUKS1 container and supply the encryption password at boot. Use a keyfile for the root pool and embed the keyfile in the initramfs.


Since there isn't any sensitive information in <code>/boot</code> anyway (unless you want to use a persistent LUKS-encrypted swap partition for resume from hibernation), encrypting the boot pool provides no meaningful benefit and complicates the installation and recovery process.
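If you nevertheless choose an encrypted boot pool, the LUKS1 container would be created roughly like this before pool creation (a sketch, not part of this guide's main path; <code>bpool-luks</code> is an illustrative mapper name):

<pre># GRUB can only open LUKS1, hence --type luks1
cryptsetup luksFormat --type luks1 $DISK-part2
cryptsetup open $DISK-part2 bpool-luks
# then create the boot pool on /dev/mapper/bpool-luks instead of $DISK-part2</pre>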
== DO NOT set bootfs property! ==
Do not set <code>bootfs</code> on any pool!

It will override the <code>root=ZFS=rpool/ROOT/dataset</code> kernel parameter and render the boot environment menu in GRUB '''INVALID'''.

As GRUB support for ZFS is read-only, you will need to boot into a live environment to unset this property if the <code>bootfs</code> dataset is broken.

The boot environment menu is currently only available for GRUB. For more info see the [https://gitlab.com/m_zhou/bieaz bieaz boot environment manager readme].
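Clearing the property from a live environment could look like this (a sketch; the pool name suffix is illustrative):

<pre># import the pool without mounting anything, then clear bootfs
zpool import -N rpool_abc123
zpool set bootfs= rpool_abc123
zpool export rpool_abc123</pre>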
 
= Pre-installation =
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.
 
'''Existing data on target disk(s) will be destroyed.'''


Download the '''''extended''''' release from https://www.alpinelinux.org/downloads/, as it is the only release that ships with the ZFS kernel module; the live environment cannot load a kernel module it does not ship.


Write it to a USB drive and boot from it.


== Setup live environment ==
Run the following command to set up the live environment. Select the default <code>none</code> option when asked about disks; see [[Installation#Questions_asked_by_setup-alpine]].

<pre>setup-alpine</pre>
The settings given here will be copied to the target system later by <code>setup-disk</code>.


== Install system utilities ==
Install and set up <code>eudev</code> (a port of systemd's <code>udev</code> by Gentoo) to get persistent block device names. '''Do not use''' <code>/dev/sda</code>-style names for ZFS pools.


<pre>apk update
apk add eudev sgdisk grub-efi zfs
modprobe zfs
setup-udev</pre>
= Variables =
In this step, we set some variables to make the installation process easier.
<pre>DISK=/dev/disk/by-id/ata-HXY_120G_YS</pre>
Use the unique disk path instead of <code>/dev/sda</code> to ensure the correct partition can be found by ZFS.


Other variables

<pre>TARGET_USERNAME='your username'
TARGET_USERPWD='user account password'
ENCRYPTION_PWD='your root pool encryption password, 8 characters min'</pre>
Create a mountpoint

<pre>MOUNTPOINT=`mktemp -d`</pre>
Create a unique suffix for the ZFS pools; this prevents name conflicts when importing pools on another Root on ZFS system.

<pre>poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)</pre>
= Partitioning =
For a single-disk UEFI installation, we need to create at least 3 partitions:
* EFI system partition
* Boot pool partition
* Root pool partition
Since [[GRUB]] only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.


Clear the partition table on the target disk and create the EFI, boot pool and root pool partitions:

<pre>sgdisk --zap-all $DISK
sgdisk -n1:0:+512M -t1:EF00 $DISK
sgdisk -n2:0:+2G $DISK        # boot pool
sgdisk -n3:0:0 $DISK          # root pool</pre>
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.
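For example, a two-disk mirror could be partitioned in one loop (the disk paths are placeholders for your actual <code>/dev/disk/by-id</code> entries):

<pre>for d in /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B; do
    sgdisk --zap-all $d
    sgdisk -n1:0:+512M -t1:EF00 $d
    sgdisk -n2:0:+2G $d        # boot pool
    sgdisk -n3:0:0 $d          # root pool
done</pre>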


== Optional: Swap partition ==
[[Swap]] support on ZFS is also problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power off.)


If you want to use swap, reserve some space at the end of the disk when creating the root pool:

<pre>sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk
sgdisk -n4:0:0 $DISK          # swap partition</pre>

= Create boot and root pool =
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no <code>feature@</code> option is supplied.


Here we explicitly enable those that GRUB can support.

<pre>zpool create \
  -o ashift=12 -d \
  -o feature@async_destroy=enabled \
  -o feature@bookmarks=enabled \
  -o feature@embedded_data=enabled \
  -o feature@empty_bpobj=enabled \
  -o feature@enabled_txg=enabled \
  -o feature@extensible_dataset=enabled \
  -o feature@filesystem_limits=enabled \
  -o feature@hole_birth=enabled \
  -o feature@large_blocks=enabled \
  -o feature@lz4_compress=enabled \
  -o feature@spacemap_histogram=enabled \
  -O acltype=posixacl -O canmount=off -O compression=lz4 \
  -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
  -O mountpoint=/boot -R $MOUNTPOINT \
  bpool_$poolUUID $DISK-part2</pre>
Nothing is stored directly under bpool and rpool, hence <code>canmount=off</code>. The respective <code>mountpoint</code> properties are more symbolic than practical.


For the root pool, all available features are enabled by default.

<pre>echo $ENCRYPTION_PWD | zpool create \
  -o ashift=12 \
  -O encryption=aes-256-gcm \
  -O keylocation=prompt -O keyformat=passphrase \
  -O acltype=posixacl -O canmount=off -O compression=lz4 \
  -O dnodesize=auto -O normalization=formD -O relatime=on \
  -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \
  rpool_$poolUUID $DISK-part3</pre>

== Notes for multi-disk ==
For mirror:

<pre>zpool create \
  ... \
  bpool_$poolUUID mirror \
  /dev/disk/by-id/target_disk1-part2 \
  /dev/disk/by-id/target_disk2-part2
zpool create \
  ... \
  rpool_$poolUUID mirror \
  /dev/disk/by-id/target_disk1-part3 \
  /dev/disk/by-id/target_disk2-part3</pre>
For RAID-Z, replace <code>mirror</code> with <code>raidz</code>, <code>raidz2</code> or <code>raidz3</code>.


= Create system datasets =
This layout is intended to separate the root file system from persistent files. See [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout this layout description].

<pre>zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT
zfs mount rpool_$poolUUID/ROOT/default
zfs mount bpool_$poolUUID/BOOT/default
# ash, default with busybox, does not support arrays
# this relies on word splitting
d='usr var var/lib'
for i in $d; do zfs create -o canmount=off rpool_$poolUUID/ROOT/default/$i; done
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME</pre>
Depending on your application, separate datasets need to be created for folders inside <code>/var/lib</code> (not <code>/var/lib</code> itself!).

Here we create several datasets for persistent (shared) data, as we just did for <code>/home</code>.

<pre>d='libvirt lxc docker'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done</pre>
<code>lxc</code> is for Linux containers, <code>libvirt</code> is for storing virtual machine images, etc.


= Format and mount EFI partition =
Here we use <code>/boot/efi</code> as the mountpoint, which is the default for GRUB.

<pre>mkfs.vfat -n EFI $DISK-part1
mkdir $MOUNTPOINT/boot/efi
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi # need to specify file system</pre>
For a multi-disk setup, a cron job needs to be configured to keep the EFI system partitions in sync. It should be similar to [https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Multi-ESP this article].
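A minimal sketch of such a job, assuming a second ESP mounted at <code>/boot/efi2</code> (the path and schedule are illustrative, and <code>rsync</code> must be installed with <code>apk add rsync</code>):

<pre># /etc/crontabs/root entry: mirror the primary ESP to the second one hourly
0 * * * * rsync -a --delete /boot/efi/ /boot/efi2/</pre>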
= System installation =
== Preparation ==
GRUB will not find the correct path of the root device without <code>ZPOOL_VDEV_NAME_PATH=YES</code>.

<pre>export ZPOOL_VDEV_NAME_PATH=YES</pre>
<code>setup-disk</code> refuses to run on ZFS by default, so we need to add ZFS to its supported filesystem list.

<pre>sed -i 's|supported="ext|supported="zfs ext|g' /sbin/setup-disk</pre>
== Run setup-disk ==
Run <code>setup-disk</code> to install the system to the target disk.

<pre>BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT</pre>
Note that grub-probe will still fail despite the <code>ZPOOL_VDEV_NAME_PATH=YES</code> variable set above. We will deal with this later inside the chroot.

== Chroot ==
<pre>m='dev proc sys'
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh</pre>
=== Finish GRUB installation ===
As the GRUB installation failed half-way in [[#Run setup-disk]], we will finish it here.

Apply the GRUB ZFS fix:

<pre>echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile</pre>
Reload:

<pre>source /etc/profile</pre>
==== GRUB fails to detect the ZFS filesystem of /boot with BusyBox stat ====
See the [https://git.savannah.gnu.org/cgit/grub.git/tree/util/grub-mkconfig.in source file of grub-mkconfig]; the problem is:

<pre>GRUB_DEVICE="`${grub_probe} --target=device /`"
# will fail with `grub-probe: error: unknown filesystem.`
GRUB_FS="`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2> /dev/null || echo unknown`"
# will also fail. The final fall back is
if [ x"$GRUB_FS" = xunknown ]; then
    GRUB_FS="$(stat -f -c %T / || echo unknown)"
fi
# `stat` from coreutils will return `zfs`, the correct answer
# `stat` from BusyBox  will return `UNKNOWN`, causing the `10_linux` script to fail</pre>
Therefore we need to install <code>coreutils</code>:

<pre>apk add coreutils</pre>
==== Missing root pool ====
GRUB will produce an empty result if it does not support the root pool. [https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html This patch] will warn about failed detection and allow a customized detection method.

Before the patch is merged, use the following workaround:

<pre>sed -i "s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E '[[:blank:]]name' \| cut -d\\\' -f 2\`|"  /etc/grub.d/10_linux</pre>
This replaces GRUB's root pool name detection.

==== Generate grub.cfg ====
After applying the fixes, finally run

<pre>grub-mkconfig -o /boot/grub/grub.cfg</pre>
=== Importing pools on boot ===
<code>zpool.cache</code> will be added to the initramfs; the zpool command will import the pools contained in this cache.

The system will fail to boot without this.

<pre>zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID</pre>
=== Initramfs ===
The <code>mkinitfs</code> included in stable Alpine Linux has bugs; until merge requests [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 1] and [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76 2] are merged, we need to patch it manually.

==== Patch ====
Ensure the <code>mkinitfs</code> version is the following:

<pre>foolive:/# apk info mkinitfs
mkinitfs-3.4.5-r3 description:</pre>
Then download [[patch/eudev-zfs-mkinitfs-3.4.5.patch|eudev-zfs-mkinitfs-3.4.5.patch]], install <code>patch</code> and apply it.

<pre>foolive:~# wget https://g.nu8.org/path-to-patch
foolive:~# apk add patch
foolive:~# cd / # must apply patch at root
foolive:/# patch -Np1 -i /root/eudev-zfs-mkinitfs-3.4.5.patch
patching file etc/mkinitfs/features.d/eudev.files
patching file etc/mkinitfs/features.d/zfs.files
patching file usr/share/mkinitfs/initramfs-init</pre>
==== Add eudev hook and rebuild ====
Add <code>eudev</code> to <code>/etc/mkinitfs/mkinitfs.conf</code>.

<pre>echo 'features="ata base eudev ide scsi usb virtio nvme zfs"' > /etc/mkinitfs/mkinitfs.conf
# order of features is important! this order is tested</pre>
Rebuild the initramfs with

<pre>mkinitfs $(ls -1 /lib/modules/)</pre>
=== Mount datasets at boot ===
<pre>rc-update add zfs-mount sysinit
rc-update add zfs-zed sysinit # zfs monitoring</pre>
Mounting the <code>/boot</code> dataset with fstab needs <code>mountpoint=legacy</code>:

<pre>umount /boot/efi
zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default
mount /boot
mount /boot/efi</pre>
=== Add user ===
<pre>adduser -s /bin/sh -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME
echo "$TARGET_USERNAME:$TARGET_USERPWD" | chpasswd</pre>
The root account is accessed via the <code>su</code> command with the root password.

Optionally install <code>sudo</code> to disable the root password and use the user's own password instead.
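A possible <code>sudo</code> setup along those lines (the group and sudoers file names follow common conventions and are not prescribed by this guide):

<pre>apk add sudo
addgroup $TARGET_USERNAME wheel
# let members of wheel run commands with their own password
echo '%wheel ALL=(ALL) ALL' > /etc/sudoers.d/wheel</pre>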
=== Boot environment manager ===
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.

It has been submitted to aports, see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. It should be available in edge/testing soon.


=== Optional: Desktop Environment ===
See [[#Wayland-based_lightweight_desktop]].

=== Optional: Enable encrypted swap partition ===
Install <code>cryptsetup</code>:

<pre>apk add cryptsetup</pre>
Edit the <code>/etc/mkinitfs/mkinitfs.conf</code> file and add the <code>cryptsetup</code> feature in front of <code>zfs</code>:

<pre>features="ata base ide scsi usb virtio ext4 lvm <u>cryptsetup</u> zfs eudev"</pre>
Add the relevant lines in <code>fstab</code> and <code>crypttab</code>, replacing <code>$DISK</code> with the actual disk:

<pre>echo swap   $DISK-part4   /dev/urandom   swap,cipher=aes-cbc-essiv:sha256,size=256 >> /etc/crypttab
echo /dev/mapper/swap   none   swap   defaults   0   0 >> /etc/fstab</pre>
Rebuild the initramfs with <code>mkinitfs</code>.


= Finish installation =
Take a snapshot of the clean installation for future use and export all pools.

<pre>exit
zfs snapshot -r rpool_$poolUUID/ROOT/default@install
zfs snapshot -r bpool_$poolUUID/BOOT/default@install</pre>
Pools must be exported before reboot, or they will fail to import on boot.

<pre>mount | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}' | \
xargs -i{} umount -lf {}
zpool export bpool_$poolUUID
zpool export rpool_$poolUUID</pre>
= Reboot =

<pre>reboot</pre>
= Disk space stat =
== Barebone ==
Without optional swap or cryptsetup:
*bpool used 25.2M
*rpool used 491M
*EFI used 416K

== Wayland-based lightweight desktop ==
This setup is based on the Sway window manager and Qt apps.

Encrypted swap

<pre>apk add cryptsetup</pre>
Sway window manager and basic utilities

<pre>apk add sway swayidle swaylock grim i3status</pre>
Terminal

<pre>apk add alacritty</pre>
Sound

<pre>apk add alsa-utils</pre>
Utilities

<pre>apk add vim mutt isync lynx git p7zip proxychains-ng</pre>
Qt-based desktop environment, with dark theme, fdo keyring, file manager and PDF viewer

<pre>apk add qt5-qtwayland kvantum keepassxc pcmanfm zathura-pdf-poppler</pre>
Play videos with hardware-accelerated decoding

<pre>apk add mpv youtube-dl libva-intel-driver</pre>
Firefox

<pre>apk add firefox-esr</pre>
Add MTP (connect to Android phones) and samba support to the file manager

<pre>apk add gvfs-smb gvfs-mtp</pre>
Add dark GTK theme (Adwaita-dark), HiDPI mouse cursor for Sway, GTK icons

<pre>apk add gnome-themes-extra</pre>
Stat
*rpool used 1.11G
*bpool used 26.6M


= Recovery in Live environment =
Boot the Live environment (extended release) and install the needed packages:

<pre>setup-alpine      # basic settings: keyboard layout, timezone ...
apk add zfs eudev # zfs-utils and persistent device name support
setup-udev        # populate persistent names
modprobe zfs      # load kernel module</pre>
Create a mount point and store the encryption password in a variable:

<pre>MOUNTPOINT=`mktemp -d`
ENCRYPTION_PWD='YOUR DISK ENCRYPTION PASSWORD, 8 MINIMUM'</pre>
Find the unique UUID of your pool with

<pre>zpool import</pre>
Import rpool without mounting datasets: <code>-N</code> for not mounting all datasets; <code>-R</code> for alternate root.

<pre>poolUUID=abc123
zpool import -N -R $MOUNTPOINT rpool_$poolUUID</pre>
Load the encryption key

<pre>echo $ENCRYPTION_PWD | zfs load-key -a</pre>
As <code>canmount=noauto</code> is set for the <code>/</code> dataset, we have to mount it manually. To find the dataset, use

<pre>zfs list rpool_$poolUUID/ROOT</pre>
Mount the <code>/</code> dataset

<pre>zfs mount rpool_$poolUUID/ROOT/$dataset</pre>
Mount other datasets

<pre>zfs mount -a</pre>
Import bpool

<pre>zpool import -N -R $MOUNTPOINT bpool_$poolUUID</pre>
Find and mount the <code>/boot</code> dataset, same as above.

<pre>zfs list bpool_$poolUUID/BOOT
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint</pre>
Chroot

<pre>mount --rbind /dev  $MOUNTPOINT/dev
mount --rbind /proc $MOUNTPOINT/proc
mount --rbind /sys  $MOUNTPOINT/sys
chroot $MOUNTPOINT /bin/sh</pre>
After chroot, mount the EFI partition

<pre>mount /boot/efi</pre>
After fixing the system, don't forget to umount and export the pools:

<pre>mount | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}' | \
xargs -i{} umount -lf {}
zpool export bpool_$poolUUID
zpool export rpool_$poolUUID</pre>

Revision as of 23:59, 6 January 2021

This guide aims to setup encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z supported.

Except EFI system partition and boot pool /boot, everything is encrypted. Root pool is encrypted with ZFS native encryption and swap partition is encrypted with dm-crypt.

To do an unencrypted setup, simply omit -O keylocation -O keyformat when creating root pool.

Notes

UEFI is required. Supports single disk & multi-disk (stripe, mirror, RAID-Z) installation.

Existing data on target disk(s) will be destroyed.

Preparation

Setup live environment

Download the extended release from https://www.alpinelinux.org/downloads/, as only this version is shipped with ZFS kernel module. Alpine Linux can not load kernel module in live.

Run the following command to setup the live environment, use default none option when asked about disks.

setup-alpine

Settings given here will be copied to the target system later by setup-disk.

Install system utilities

Install and setupeudev (a port of systemd udev by gentoo) to get block device names.

apk update
apk add eudev sgdisk grub-efi zfs
modprobe zfs
setup-udev

Variables

In this step, we will set some variables to make our installation process easier.

DISK=/dev/disk/by-id/ata-HXY_120G_YS

Use unique disk path instead of /dev/sda to ensure the correct partition can be found by ZFS.

Other variables

TARGET_USERNAME='your username'
ENCRYPTION_PWD='your root pool encryption password, 8 characters min'
TARGET_USERPWD='user account password'

Create a mountpoint

MOUNTPOINT=`mktemp -d`

Create a unique suffix for the ZFS pools: this will prevent name conflict when importing pools on another Root on ZFS system.

poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null |tr -dc 'a-z0-9' | cut -c-6)

Partitioning

For a single-disk UEFI installation, we need to create at least 3 partitions:

- EFI system partition
- Boot pool partition
- Root pool partition

Since GRUB only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.

Clear the partition table on the target disk and create EFI, boot and root pool partitions:

sgdisk --zap-all $DISK
sgdisk -n1:0:+512M -t1:EF00 $DISK
sgdisk -n2:0:+2G $DISK        # boot pool
sgdisk -n3:0:0 $DISK          # root pool

If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.

Optional: Swap partition

Swap support on ZFS is also problematic, therefore it is recommended to create a separate swap partition if one is needed. This guide covers the creation of a separate swap partition. (It can not be used for hibernation, since the encryption key is discarded at power-off.)

If you want to use swap, reserve some space at the end of the disk when creating the root pool; run these instead of the sgdisk -n3 command above:

sgdisk -n3:0:-8G $DISK        # root pool, reserve 8GB for swap at the end of the disk
sgdisk -n4:0:0 $DISK          # swap partition

Create boot and root pool

As mentioned above, ZFS features need to be selectively enabled for GRUB. When no feature@ option is supplied, all available features are enabled.

Here we explicitly enable only those GRUB can support.

zpool create \
  -o ashift=12 -d \
  -o feature@async_destroy=enabled \
  -o feature@bookmarks=enabled \
  -o feature@embedded_data=enabled \
  -o feature@empty_bpobj=enabled \
  -o feature@enabled_txg=enabled \
  -o feature@extensible_dataset=enabled \
  -o feature@filesystem_limits=enabled \
  -o feature@hole_birth=enabled \
  -o feature@large_blocks=enabled \
  -o feature@lz4_compress=enabled \
  -o feature@spacemap_histogram=enabled \
  -O acltype=posixacl -O canmount=off -O compression=lz4 \
  -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
  -O mountpoint=/boot -R $MOUNTPOINT \
  bpool_$poolUUID $DISK-part2

Nothing is stored directly under bpool and rpool, hence canmount=off. The respective mountpoint properties are more symbolic than practical.

For the root pool, all available features are enabled by default:

echo $ENCRYPTION_PWD | zpool create \
  -o ashift=12 \
  -O encryption=aes-256-gcm \
  -O keylocation=prompt -O keyformat=passphrase \
  -O acltype=posixacl -O canmount=off -O compression=lz4 \
  -O dnodesize=auto -O normalization=formD -O relatime=on \
  -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \
  rpool_$poolUUID $DISK-part3

For multi-disk

For mirror:

zpool create \
  ... \
  bpool_$poolUUID mirror \
  /dev/disk/by-id/target_disk1-part2 \
  /dev/disk/by-id/target_disk2-part2
zpool create \
  ... \
  rpool_$poolUUID mirror \
  /dev/disk/by-id/target_disk1-part3 \
  /dev/disk/by-id/target_disk2-part3

For RAID-Z, replace mirror with raidz, raidz2 or raidz3.

Create system datasets

This layout is intended to separate the root file system from persistent files. See https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout for a description.

zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default
zfs mount rpool_$poolUUID/ROOT/default
zfs mount bpool_$poolUUID/BOOT/default
# ash, default with busybox, does not support array
# this is word splitting
d='usr var var/lib'
for i in $d; do zfs create  -o canmount=off rpool_$poolUUID/ROOT/default/$i; done
d='srv usr/local'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done
d='log spool tmp'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME
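The word-splitting idiom used in the loops above can be demonstrated on its own (a standalone sketch, not part of the installation):

```shell
# POSIX sh has no arrays: an unquoted expansion is split on whitespace ($IFS),
# so a space-separated string works as a simple list
d='usr var var/lib'
out=''
for i in $d; do out="$out[$i]"; done
echo "$out"
# → [usr][var][var/lib]
```

This is why the dataset names must not contain spaces, and why $d is deliberately left unquoted in the for statements.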

Depending on your applications, separate datasets need to be created for folders inside /var/lib (not for /var/lib itself!).

Here we create several folders for persistent (shared) data, like we just did for /home.

d='libvirt lxc docker'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done

lxc is for Linux containers, libvirt is for storing virtual machine images, and so on.

Format and mount EFI partition

Here we use /boot/efi as the mountpoint, which is default for GRUB.

mkfs.vfat -n EFI $DISK-part1
mkdir $MOUNTPOINT/boot/efi
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi # need to specify file system

System installation

Preparation

GRUB will not find the correct path of the root device without ZPOOL_VDEV_NAME_PATH=1.

export ZPOOL_VDEV_NAME_PATH=1

setup-disk refuses to run on ZFS by default; we need to add ZFS to its list of supported filesystems.

sed -i 's|supported="ext|supported="zfs ext|g' /sbin/setup-disk
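The substitution simply prepends zfs to the quoted list of filesystems. On a hypothetical supported= line (the real contents of /sbin/setup-disk may differ) it behaves like this:

```shell
# "ext" anchors the match at the start of the quoted list,
# so "zfs " is inserted right after the opening quote
line='supported="ext4 ext3 xfs vfat"'
echo "$line" | sed 's|supported="ext|supported="zfs ext|g'
# → supported="zfs ext4 ext3 xfs vfat"
```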

setup-disk

Run setup-disk to install system to target disk.

BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT

Note that grub-probe will still fail despite the ZPOOL_VDEV_NAME_PATH variable set above. We will deal with this later inside the chroot.

Chroot

m='dev proc sys'
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh

Finish GRUB installation

As the GRUB installation failed halfway through setup-disk, we will finish it here.

Apply fix:

echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile

Reload

source /etc/profile

GRUB fails to detect the ZFS filesystem of /boot with BusyBox stat, so install coreutils:

apk add coreutils

Missing root pool

Before this patch is merged, use the following workaround:

sed -i "s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E '[[:blank:]]name' \| cut -d\\\' -f 2\`|"  /etc/grub.d/10_linux

This replaces GRUB's root pool name detection with a direct zdb query.
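The command that the sed substitution installs extracts the pool name from zdb label output. On hypothetical zdb -l output (abridged, made up for illustration) the grep/cut pipeline works like this:

```shell
# keep the line whose key is "name" (preceded by blanks),
# then take the second single-quote-delimited field
printf '%s\n' \
  "    version: 5000" \
  "    name: 'rpool_abc123'" \
  "    state: 0" \
  | grep -E '[[:blank:]]name' | cut -d\' -f 2
# → rpool_abc123
```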

Generate grub.cfg

After applying fixes, finally run

grub-mkconfig -o /boot/grub/grub.cfg

Importing pools on boot

zpool.cache will be added to the initramfs, and the zpool command will import the pools recorded in this cache.

System will fail to boot without this.

zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID

Initramfs

mkinitfs included in stable Alpine Linux has bugs; until fixes 1 and 2 are merged, we need to patch it manually.

Patch

Ensure the mkinitfs version is the following:

foolive:/# apk info mkinitfs
mkinitfs-3.4.5-r3 description:

Then download eudev-zfs-mkinitfs-3.4.5.patch, install the patch utility, and apply it:

foolive:~# wget https://g.nu8.org/path-to-patch
foolive:~# apk add patch
foolive:~# cd / # must apply patch at root
foolive:/# patch -Np1 -i /root/eudev-zfs-mkinitfs-3.4.5.patch 
patching file etc/mkinitfs/features.d/eudev.files
patching file etc/mkinitfs/features.d/zfs.files
patching file usr/share/mkinitfs/initramfs-init

Add eudev hook and rebuild

Add eudev to /etc/mkinitfs/mkinitfs.conf.

echo 'features="ata base eudev ide scsi usb virtio nvme zfs"' > /etc/mkinitfs/mkinitfs.conf
# order of features is important! this order is tested

Rebuild initramfs with

mkinitfs $(ls -1 /lib/modules/)  # assumes a single installed kernel

Mount datasets at boot

rc-update add zfs-mount sysinit

Mounting the /boot dataset via fstab requires mountpoint=legacy:

umount /boot/efi
zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default
mount /boot
mount /boot/efi

Add user

adduser -s /bin/sh -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME
echo "$TARGET_USERNAME:$TARGET_USERPWD" | chpasswd

The root account is accessed via the su command with the root password.

Boot environment manager

bieaz is a simple boot environment management shell script with GRUB integration.

It has been submitted to aports; see this merge request. It should be available in edge/testing soon.

Optional: Enable encrypted swap partition

Install cryptsetup

apk add cryptsetup

Edit the /etc/mkinitfs/mkinitfs.conf file and add cryptsetup to the features list, before zfs. Then add the relevant lines to fstab and crypttab, replacing $DISK with the actual disk path (the variable is not set inside the chroot):

echo swap   $DISK-part4 /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256 >> /etc/crypttab
echo /dev/mapper/swap   none     swap    defaults    0   0 >> /etc/fstab

Rebuild initramfs with mkinitfs.

Finish installation

Take a snapshot of the clean installation for future use, then export all pools.

exit
zfs snapshot -r rpool_$poolUUID/ROOT/default@install
zfs snapshot -r bpool_$poolUUID/BOOT/default@install

Pools must be exported before reboot, or they will fail to be imported on boot.

# unmount everything under $MOUNTPOINT in reverse mount order,
# skipping ZFS datasets (zpool export unmounts those)
mount | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}' | \
 xargs -i{} umount -lf {}
zpool export bpool_$poolUUID
zpool export rpool_$poolUUID
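The unmount pipeline can be illustrated with fake mount output (a standalone sketch; the mountpoints are made up):

```shell
# simulate three lines of `mount` output: ZFS lines are dropped, the rest is
# reversed so nested mounts come first, then column 3 (the mountpoint) is
# what gets fed to umount via xargs
MOUNTPOINT=/mnt/inst
printf '%s\n' \
  'proc on /proc type proc (rw)' \
  'rpool_x/ROOT/default on /mnt/inst type zfs (rw)' \
  '/dev/sda1 on /mnt/inst/boot/efi type vfat (rw)' \
  | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}'
# → /mnt/inst/boot/efi
```

Reversing the list with tac ensures /mnt/inst/boot/efi would be unmounted before /mnt/inst itself.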

Reboot

reboot

Recovery in Live environment

Boot the live environment (extended release) and repeat the Preparation steps.

Create a mount point and store encryption password in a variable:

MOUNTPOINT=`mktemp -d`
ENCRYPTION_PWD='YOUR DISK ENCRYPTION PASSWORD, 8 MINIMUM'

Find the unique UUID of your pool with

zpool import

Import rpool without mounting datasets: -N skips mounting, -R sets an alternate root.

poolUUID=abc123
zpool import -N -R $MOUNTPOINT rpool_$poolUUID

Load encryption key

echo $ENCRYPTION_PWD | zfs load-key -a

As canmount=noauto is set for the / dataset, we have to mount it manually. To find the dataset, use

zfs list rpool_$poolUUID/ROOT

Mount / dataset

zfs mount rpool_$poolUUID/ROOT/$dataset

Mount other datasets

zfs mount -a

Import bpool

zpool import -N -R $MOUNTPOINT bpool_$poolUUID

Find and mount the /boot dataset, same as above.

zfs list bpool_$poolUUID/BOOT
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint

Chroot

mount --rbind /dev  $MOUNTPOINT/dev
mount --rbind /proc $MOUNTPOINT/proc
mount --rbind /sys  $MOUNTPOINT/sys
chroot $MOUNTPOINT /bin/sh

After chroot, mount /boot/efi

mount /boot/efi

After fixing the system, don't forget to umount and export the pools:

mount | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}' | \
xargs -i{} umount -lf {}
zpool export bpool_$poolUUID
zpool export rpool_$poolUUID