This is a guide for installing Alpine Linux with its root partition on an encrypted ZFS volume, using ZFS's native encryption. The system is encrypted when powered off and must be unlocked by typing a passphrase at boot. To be able to boot the system, the /boot partition remains unencrypted.
OpenZFS Guide
A guide on the OpenZFS website supports native encryption, UEFI and legacy boot, and multi-disk setups. See the Root on ZFS guide at https://openzfs.github.io/openzfs-docs/Getting%20Started/Alpine%20Linux/Root%20on%20ZFS.html
Downloading Alpine
Download the extended release from https://www.alpinelinux.org/downloads/, as it is the only image that ships the ZFS kernel modules at the time of writing (2022-02-12).
Write it to a USB device and boot from it.
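For example, from another Linux machine the image can be written with dd. The device name /dev/sdX and the image filename below are placeholders; double-check the target device, as it will be overwritten:
dd if=alpine-extended-*.iso of=/dev/sdX bs=4M
sync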
Initial Setup
Run the following to start the installation procedure:
setup-alpine
Answer all the questions, and hit Ctrl+C when prompted for which disk you'd like to use.
Optional: SSH access
This section is optional and assumes internet connectivity. You may enable sshd so that you can ssh into the box and copy and paste the rest of the commands from these instructions into a terminal window.
Edit /etc/ssh/sshd_config and search for PermitRootLogin. Change its value to yes.
Save the file, exit back to the shell, and run service sshd restart.
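If you would rather script this change than edit the file by hand, a rough equivalent is the following one-liner plus restart (assuming the stock sshd_config, where the directive may be present but commented out):
sed -i 's/^#*PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
service sshd restart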
Now you can ssh in as root. Do not forget to go back and undo this change when you are done, since it will be carried over to the installed system. You will be reminded again at the end of this document.
Add required packages
apk add zfs sfdisk e2fsprogs syslinux
Partition setup
We're assuming that /dev/sda is the target storage device here and in the rest of the document, but the name of the storage device you wish to install to may be different. To see a list of storage devices and determine the correct one, type sfdisk -l.
echo -e "/dev/sda1: start=1M,size=100M,bootable\n/dev/sda2: start=101M" | sfdisk --quiet --label dos /dev/sda
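To double-check the result before continuing, you can print the new partition table (again assuming /dev/sda is the target disk):
sfdisk -l /dev/sda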
Create device nodes
mdev -s
Create the /boot filesystem
mkfs.ext4 /dev/sda1
ZFS setup
Create the root zpool
modprobe zfs
zpool create -f -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
-O mountpoint=/ -R /mnt \
rpool /dev/sda2
You will have to enter your passphrase at this point. Choose wisely, as your passphrase is most likely the weakest link in this setup.
A few notes on the options supplied to zpool:
- ashift=12 is recommended here because many drives today have 4KiB (or larger) physical sectors, even though they present 512B logical sectors
- acltype=posixacl enables POSIX ACLs globally
- normalization=formD eliminates some corner cases relating to UTF-8 filename normalization. It also enables utf8only=on, meaning that only files with valid UTF-8 filenames will be accepted.
- xattr=sa vastly improves the performance of extended attributes, but is Linux-only. If you care about using this pool on other OpenZFS implementations, don't specify this option.
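If you are installing to more than one disk, the same pool can be created with redundancy. A minimal sketch for a two-disk mirror, assuming the second disk has been partitioned the same way and its second partition is /dev/sdb2:
zpool create -f -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
-O mountpoint=/ -R /mnt \
rpool mirror /dev/sda2 /dev/sdb2
For a RAID-Z layout over three or more disks, replace mirror with raidz (or raidz2/raidz3).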
After completing this, confirm that the pool has been created:
# zpool status
Should return something like:
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          sda2      ONLINE       0     0     0

errors: No known data errors
Create the required datasets and mount root
zfs create -o mountpoint=none -o canmount=off rpool/ROOT
zfs create -o mountpoint=legacy rpool/ROOT/alpine
mount -t zfs rpool/ROOT/alpine /mnt/
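Optionally, further datasets can be split out so they can be snapshotted and rolled back independently. For example, a separate /home dataset (the name rpool/home is only an illustration); because the pool was created with -R /mnt, it is mounted under /mnt automatically:
zfs create -o mountpoint=/home rpool/home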
Mount the /boot filesystem
mkdir /mnt/boot/
mount -t ext4 /dev/sda1 /mnt/boot/
Enable the ZFS services
rc-update add zfs-import sysinit
rc-update add zfs-mount sysinit
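Optionally, you can also enable the ZFS event daemon for pool event handling and monitoring, if the zfs-zed service script is present on your system (it is shipped with the ZFS packages):
rc-update add zfs-zed sysinit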
Install Alpine Linux
setup-disk /mnt
dd if=/usr/share/syslinux/mbr.bin of=/dev/sda # write mbr so we can boot
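Before rebooting, it can help to unmount everything and export the pool cleanly so that it imports without complaint on first boot. A cautious, optional sequence (adjust it if you created extra datasets or used different mountpoints):
zfs umount -a        # unmounts any non-legacy datasets, e.g. an optional /home
umount /mnt/boot
umount /mnt
zpool export rpool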
Reboot and enjoy!
😉
NOTE: If you enabled root SSH login in the optional step above, be sure to disable it again after you reboot.
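Recovery from the live environment
If you later need to repair the installation from the live ISO (after adding the ZFS packages and loading the module as in the steps above), a minimal sketch for re-importing and unlocking the pool, assuming the layout created by this guide:
zpool import -N -R /mnt rpool
zfs load-key -a
mount -t zfs rpool/ROOT/alpine /mnt
mount -t ext4 /dev/sda1 /mnt/boot
zfs mount -a
When you are done, unmount and export the pool again as described above before rebooting.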