Root on ZFS with native encryption
This guide aims to set up encrypted Alpine Linux on ZFS with a layout compatible with boot environments. Mirror and RAID-Z setups are supported.

Except for the EFI system partition and the boot pool <code>/boot</code>, everything is encrypted. The root pool is encrypted with ZFS native encryption and the swap partition is encrypted with dm-crypt.

To do an unencrypted setup, simply omit <code>-O keylocation -O keyformat</code> when creating the root pool.
= Notes =
UEFI is required. Single-disk and multi-disk (stripe, mirror, RAID-Z) installations are supported.

'''Existing data on the target disk(s) will be destroyed.'''
= Preparation =
== Setup live environment ==
Download the '''''extended''''' release from https://www.alpinelinux.org/downloads/, as only this release ships with the ZFS kernel module; the live environment cannot load the module otherwise.
Run the following command to set up the live environment; use the default <code>none</code> option when asked about disks.
<pre>setup-alpine</pre>
Settings given here will be copied to the target system later by <code>setup-disk</code>.
== Install system utilities ==
Install and set up <code>eudev</code> (a port of systemd's <code>udev</code> by Gentoo) to get persistent block device names.
<pre>apk update
apk add eudev sgdisk grub-efi zfs
modprobe zfs
setup-udev</pre>
= Variables =
In this step, we will set some variables to make our installation process easier.
<pre>DISK=/dev/disk/by-id/ata-HXY_120G_YS</pre>
Use a unique disk path instead of <code>/dev/sda</code> to ensure the correct partition can be found by ZFS.
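If you are unsure which <code>/dev/disk/by-id/</code> name belongs to your target disk, you can list them first; entries without a <code>-part</code> suffix refer to whole disks (a quick optional check):
<pre># list stable disk names together with the devices they point to
ls -l /dev/disk/by-id/ | grep -v -- '-part'</pre>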
Other variables:
<pre>TARGET_USERNAME='your username'
ENCRYPTION_PWD='your root pool encryption password, 8 characters min'
TARGET_USERPWD='user account password'</pre>
Create a mountpoint
<pre>MOUNTPOINT=`mktemp -d`</pre>
Create a unique suffix for the ZFS pools: this will prevent name conflicts when importing pools on another Root on ZFS system.
<pre>poolUUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)</pre>
= Partitioning =
For a single-disk UEFI installation, we need to create at least 3 partitions:
* EFI system partition
* Boot pool partition
* Root pool partition
Since GRUB only partially supports ZFS, many features need to be disabled on the boot pool. By creating a separate root pool, we can then utilize the full potential of ZFS.
Clear the partition table on the target disk and create the EFI, boot and root pool partitions:
<pre>sgdisk --zap-all $DISK
sgdisk -n1:0:+512M -t1:EF00 $DISK
sgdisk -n2:0:+2G $DISK # boot pool
sgdisk -n3:0:0 $DISK # root pool</pre>
If you want to use a multi-disk setup, such as mirror or RAID-Z, partition every target disk with the same commands above.
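Before continuing, you can verify the resulting layout; <code>sgdisk -p</code> prints the partition table of a disk (a sanity check, not required by the procedure):
<pre>sgdisk -p $DISK</pre>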
== Optional: Swap partition ==
Swap support on ZFS is problematic, so it is recommended to create a separate swap partition if needed. This guide covers the creation of a separate swap partition. (It cannot be used for hibernation, since the encryption key is discarded at power-off.)

If you want to use swap, reserve some space at the end of the disk when creating the root pool:
<pre>sgdisk -n3:0:-8G $DISK # root pool, reserve 8GB for swap at the end of the disk
sgdisk -n4:0:0 $DISK # swap partition</pre>
= Create boot and root pool =
As mentioned above, ZFS features need to be selectively enabled for GRUB. All available features are enabled when no <code>feature@</code> option is supplied.

Here we explicitly enable those that GRUB can support.
<pre>zpool create \
    -o ashift=12 -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@filesystem_limits=enabled \
    -o feature@hole_birth=enabled \
    -o feature@large_blocks=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
    -O mountpoint=/boot -R $MOUNTPOINT \
    bpool_$poolUUID $DISK-part2</pre>
Nothing is stored directly under bpool and rpool, hence <code>canmount=off</code>. The respective <code>mountpoint</code> properties are more symbolic than practical.
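To confirm that only the GRUB-compatible feature set is active on the boot pool, you can inspect its feature flags (verification only):
<pre>zpool get all bpool_$poolUUID | grep 'feature@'</pre>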
For the root pool, all available features are enabled by default:
<pre>echo $ENCRYPTION_PWD | zpool create \
    -o ashift=12 \
    -O encryption=aes-256-gcm \
    -O keylocation=prompt -O keyformat=passphrase \
    -O acltype=posixacl -O canmount=off -O compression=lz4 \
    -O dnodesize=auto -O normalization=formD -O relatime=on \
    -O xattr=sa -O mountpoint=/ -R $MOUNTPOINT \
    rpool_$poolUUID $DISK-part3</pre>
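The encryption properties can be checked right away (verification only):
<pre>zfs get encryption,keyformat,keylocation rpool_$poolUUID</pre>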
== For multi-disk ==
For mirror:
<pre>zpool create \
    ... \
    bpool_$poolUUID mirror \
    /dev/disk/by-id/target_disk1-part2 \
    /dev/disk/by-id/target_disk2-part2
zpool create \
    ... \
    rpool_$poolUUID mirror \
    /dev/disk/by-id/target_disk1-part3 \
    /dev/disk/by-id/target_disk2-part3</pre>
For RAID-Z, replace <code>mirror</code> with <code>raidz</code>, <code>raidz2</code> or <code>raidz3</code>.
= Create system datasets =
This layout is intended to separate the root file system from persistent files. See https://wiki.archlinux.org/index.php/User:M0p/Root_on_ZFS_Native_Encryption/Layout for a description.
<pre>zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/HOME
zfs create -o canmount=off -o mountpoint=none rpool_$poolUUID/ROOT
zfs create -o canmount=off -o mountpoint=none bpool_$poolUUID/BOOT
zfs create -o mountpoint=/ -o canmount=noauto rpool_$poolUUID/ROOT/default
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$poolUUID/BOOT/default
zfs mount rpool_$poolUUID/ROOT/default
zfs mount bpool_$poolUUID/BOOT/default
# ash, default with busybox, does not support array
# this is word splitting
d='usr var var/lib'
for i in $d; do zfs create -o canmount=off rpool_$poolUUID/ROOT/default/$i; done
d='srv usr/local'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/$i; done
d='log spool tmp'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/$i; done
zfs create -o mountpoint=/home rpool_$poolUUID/HOME/default
zfs create -o mountpoint=/root rpool_$poolUUID/HOME/default/root
zfs create rpool_$poolUUID/HOME/default/$TARGET_USERNAME</pre>
Depending on your application, separate datasets need to be created for folders inside <code>/var/lib</code> (not for <code>/var/lib</code> itself!).

Here we create several datasets for persistent (shared) data, like we just did for <code>/home</code>.
<pre>d='libvirt lxc docker'
for i in $d; do zfs create rpool_$poolUUID/ROOT/default/var/lib/$i; done</pre>
<code>lxc</code> is for Linux containers, <code>libvirt</code> is for storing virtual machine images, etc.
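At this point you can review the dataset layout that was just created (verification only):
<pre>zfs list -o name,mountpoint,canmount -r rpool_$poolUUID bpool_$poolUUID</pre>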
= Format and mount EFI partition =
Here we use <code>/boot/efi</code> as the mountpoint, which is the default for GRUB.
<pre>mkfs.vfat -n EFI $DISK-part1
mkdir $MOUNTPOINT/boot/efi
mount -t vfat $DISK-part1 $MOUNTPOINT/boot/efi # need to specify file system</pre>
= System installation =
== Preparation ==
GRUB will not find the correct path of the root device without <code>ZPOOL_VDEV_NAME_PATH=1</code>:
<pre>export ZPOOL_VDEV_NAME_PATH=1</pre>
<code>setup-disk</code> refuses to run on ZFS by default; we need to add ZFS to its array of supported filesystems.
<pre>sed -i 's|supported="ext|supported="zfs ext|g' /sbin/setup-disk</pre>
== setup-disk ==
Run <code>setup-disk</code> to install the system to the target disk.
<pre>BOOTLOADER=grub USE_EFI=y setup-disk -v $MOUNTPOINT</pre>
Note that grub-probe will still fail despite the <code>ZPOOL_VDEV_NAME_PATH</code> variable set above. We will deal with this later inside the chroot.
== Chroot ==
<pre>m='dev proc sys'
for i in $m; do mount --rbind /$i $MOUNTPOINT/$i; done
chroot $MOUNTPOINT /usr/bin/env TARGET_USERPWD=$TARGET_USERPWD TARGET_USERNAME=$TARGET_USERNAME poolUUID=$poolUUID /bin/sh</pre>
=== Finish GRUB installation ===
As the GRUB installation failed half-way in [[#setup-disk]], we will finish it here.

Apply the fix:
<pre>echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile</pre>
Reload it:
<pre>source /etc/profile</pre>
==== GRUB fails to detect the ZFS filesystem of /boot with BusyBox stat ====
Install GNU coreutils so that grub-probe gets a full-featured <code>stat</code>:
<pre>apk add coreutils</pre>
==== Missing root pool ====
Until [https://lists.gnu.org/archive/html/grub-devel/2021-01/msg00003.html this patch] is merged, use the following workaround:
<pre>sed -i "s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E '[[:blank:]]name' \| cut -d\\\' -f 2\`|" /etc/grub.d/10_linux</pre>
This replaces GRUB's rpool name detection.
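To see what the workaround extracts, you can run the same pipeline by hand against the root pool partition. <code>$DISK-part3</code> stands in for <code>GRUB_DEVICE</code> here; since <code>DISK</code> is not set inside the chroot, substitute the actual partition. It should print the pool name, e.g. <code>rpool_$poolUUID</code>:
<pre>zdb -l $DISK-part3 | grep -E '[[:blank:]]name' | cut -d\' -f 2</pre>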
==== Generate grub.cfg ====
After applying the fixes, finally run:
<pre>grub-mkconfig -o /boot/grub/grub.cfg</pre>
=== Importing pools on boot ===
<code>zpool.cache</code> will be added to the initramfs, and the <code>zpool</code> command will import the pools contained in this cache. The system will fail to boot without this.
<pre>zpool set cachefile=/etc/zfs/zpool.cache rpool_$poolUUID
zpool set cachefile=/etc/zfs/zpool.cache bpool_$poolUUID</pre>
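Both pools should now report the cache file, and the file itself should exist (verification only):
<pre>zpool get cachefile rpool_$poolUUID bpool_$poolUUID
ls -l /etc/zfs/zpool.cache</pre>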
=== Initramfs ===
<code>mkinitfs</code> included in stable Alpine Linux has bugs; until [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/77 1] and [https://gitlab.alpinelinux.org/alpine/mkinitfs/-/merge_requests/76 2] are merged, we need to patch it manually.
==== Patch ====
Ensure the <code>mkinitfs</code> version is the following:
<pre>foolive:/# apk info mkinitfs
mkinitfs-3.4.5-r3 description:</pre>
Then download [[patch/eudev-zfs-mkinitfs-3.4.5.patch|eudev-zfs-mkinitfs-3.4.5.patch]], install <code>patch</code> and apply it.
<pre>foolive:~# wget https://g.nu8.org/path-to-patch
foolive:~# apk add patch
foolive:~# cd / # must apply patch at root
foolive:/# patch -Np1 -i /root/eudev-zfs-mkinitfs-3.4.5.patch
patching file etc/mkinitfs/features.d/eudev.files
patching file etc/mkinitfs/features.d/zfs.files
patching file usr/share/mkinitfs/initramfs-init</pre>
==== Add eudev hook and rebuild ====
Add <code>eudev</code> to <code>/etc/mkinitfs/mkinitfs.conf</code>:
<pre>echo 'features="ata base eudev ide scsi usb virtio nvme zfs"' > /etc/mkinitfs/mkinitfs.conf
# order of features is important! this order is tested</pre>
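Each feature named here corresponds to file and module lists under <code>/etc/mkinitfs/features.d/</code>; a quick way to check that all of them exist (optional):
<pre>ls /etc/mkinitfs/features.d/</pre>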
Rebuild the initramfs with:
<pre>mkinitfs $(ls -1 /lib/modules/)</pre>
=== Mount datasets at boot ===
<pre>rc-update add zfs-mount sysinit</pre>
Mounting the <code>/boot</code> dataset via fstab needs <code>mountpoint=legacy</code>:
<pre>umount /boot/efi
zfs set mountpoint=legacy bpool_$poolUUID/BOOT/default
mount /boot
mount /boot/efi</pre>
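You can confirm that the boot dataset is now legacy-mounted and that both mounts came back (verification only):
<pre>zfs get mountpoint bpool_$poolUUID/BOOT/default
mount | grep /boot</pre>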
=== Add user ===
<pre>adduser -s /bin/sh -H -D -h /home/$TARGET_USERNAME $TARGET_USERNAME
chown -R $TARGET_USERNAME /home/$TARGET_USERNAME
echo "$TARGET_USERNAME:$TARGET_USERPWD" | chpasswd</pre>
The root account is accessed via the <code>su</code> command with the root password.
=== Boot environment manager ===
[https://gitlab.com/m_zhou/bieaz bieaz] is a simple boot environment management shell script with GRUB integration.

It has been submitted to aports; see [https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/16406 this merge request]. It should be available in edge/testing soon.
=== Optional: Enable encrypted swap partition ===
Install <code>cryptsetup</code>:
<pre>apk add cryptsetup</pre>
Edit <code>/etc/mkinitfs/mkinitfs.conf</code> and add the <code>cryptsetup</code> feature immediately before <code>zfs</code>. Add the relevant lines to <code>fstab</code> and <code>crypttab</code>, replacing <code>$DISK</code> with the actual disk:
<pre>echo swap $DISK-part4 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,size=256 >> /etc/crypttab
echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab</pre>
Rebuild the initramfs with <code>mkinitfs</code>.
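After the next boot, you can confirm that the randomly-keyed swap came up (verification only):
<pre># the dm-crypt mapping should exist and the swap should be active
ls /dev/mapper/
cat /proc/swaps</pre>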
= Finish installation =
Take a snapshot of the clean installation for future use and export all pools.
<pre>exit
zfs snapshot -r rpool_$poolUUID/ROOT/default@install
zfs snapshot -r bpool_$poolUUID/BOOT/default@install</pre>
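The new snapshots can be listed before exporting (verification only):
<pre>zfs list -t snapshot</pre>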
Pools must be exported before reboot, or they will fail to be imported on boot.
<pre>mount | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}' | \
xargs -i{} umount -lf {}
zpool export bpool_$poolUUID
zpool export rpool_$poolUUID</pre>
= Reboot =
<pre>reboot</pre>
= Recovery in Live environment =
Boot the Live environment (extended release) and repeat [[#Preparation|Preparation]].

Create a mount point and store the encryption password in a variable:
<pre>MOUNTPOINT=`mktemp -d`
ENCRYPTION_PWD='YOUR DISK ENCRYPTION PASSWORD, 8 MINIMUM'</pre>
Find the unique UUID of your pool with:
<pre>zpool import</pre>
Import rpool without mounting datasets: <code>-N</code> for not mounting all datasets; <code>-R</code> for alternate root.
<pre>poolUUID=abc123
zpool import -N -R $MOUNTPOINT rpool_$poolUUID</pre>
Load the encryption key:
<pre>echo $ENCRYPTION_PWD | zfs load-key -a</pre>
As <code>canmount=noauto</code> is set for the <code>/</code> dataset, we have to mount it manually. To find the dataset, use:
<pre>zfs list rpool_$poolUUID/ROOT</pre>
Mount the <code>/</code> dataset:
<pre>zfs mount rpool_$poolUUID/ROOT/$dataset</pre>
Mount other datasets:
<pre>zfs mount -a</pre>
Import bpool:
<pre>zpool import -N -R $MOUNTPOINT bpool_$poolUUID</pre>
Find and mount the <code>/boot</code> dataset, same as above:
<pre>zfs list bpool_$poolUUID/BOOT
mount -t zfs bpool_$poolUUID/BOOT/$dataset $MOUNTPOINT/boot # legacy mountpoint</pre>
Chroot:
<pre>mount --rbind /dev $MOUNTPOINT/dev
mount --rbind /proc $MOUNTPOINT/proc
mount --rbind /sys $MOUNTPOINT/sys
chroot $MOUNTPOINT /bin/sh</pre>
After chroot, mount <code>/boot/efi</code>:
<pre>mount /boot/efi</pre>
After fixing the system, don't forget to umount and export the pools:
<pre>mount | grep -v zfs | tac | grep $MOUNTPOINT | awk '{print $3}' | \
xargs -i{} umount -lf {}
zpool export bpool_$poolUUID
zpool export rpool_$poolUUID</pre>