Provisioning
Flash the Alpine Linux extended ISO and make sure that the Secure Boot keys are reset and the TPM is enabled in the BIOS of the host.
After booting the Alpine Linux extended ISO, partition the disks. Internet access is required for this step, since zfs, sgdisk and various other necessary packages are not included on the extended ISO and therefore need to be obtained from the Alpine package repository. To set this up, setup-interfaces and setup-apkrepos will be used.
# setup-interfaces -ar
# setup-apkrepos -c1
To use Wi-Fi, simply run setup-interfaces -r and select wlan0 or similar.
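For a wireless interface the same tool is used interactively, selecting the wlan device and entering the network details when prompted:
# setup-interfaces -r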
A few packages will have to be installed first,
# apk add zfs lsblk sgdisk wipefs dosfstools acpid mdadm tpm2-tools zlevis
and load the ZFS kernel module
# modprobe zfs
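A quick way to confirm that the module is actually loaded before continuing:
# lsmod | grep zfs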
Define the disks you want to use for this install,
# export disks="/dev/disk/by-id/<id-disk-1> ... /dev/disk/by-id/<id-disk-n>"
where <id-disk-n>, for n ∈ ℕ, is the ID of the n-th disk.
According to the OpenZFS FAQ, using /dev/disk/by-id/ is the best practice for small pools. For larger pools using Serial Attached SCSI (SAS) and the like, see vdev_id for proper configuration.
Wipe the existing disk partitions:
# for disk in $disks; do
> zpool labelclear -f $disk
> wipefs -a $disk
> sgdisk --zap-all $disk
> done
Create an EFI system partition (ESP) and a Linux filesystem partition on each disk:
# for disk in $disks; do
> sgdisk -n 1:1m:+512m -t 1:ef00 $disk
> sgdisk -n 2:0:-10m -t 2:8300 $disk
> done
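To confirm the layout before continuing, the partition table of each disk can be printed:
# for disk in $disks; do
> sgdisk -p $disk
> done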
Create device nodes:
# mdev -s
Define the EFI partitions:
# export efiparts=""
# for disk in $disks; do
> efipart=${disk}-part1
> efiparts="$efiparts $efipart"
> done
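A quick sanity check that the variable contains one -part1 entry per disk:
# echo $efiparts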
Create an mdraid array on the EFI partitions, with <n> the number of disks:
# modprobe raid1
# mdadm --create --level 1 --metadata 1.0 --raid-devices <n> /dev/md/esp $efiparts
# mdadm --assemble --scan
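Before formatting, the state of the array can be inspected; all <n> devices should show up as active:
# cat /proc/mdstat
# mdadm --detail /dev/md/esp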
Format the array with a FAT32 filesystem:
# mkfs.fat -F 32 /dev/md/esp
ZFS pool creation
Define the pool partitions:
# export poolparts=""
# for disk in $disks; do
> poolpart=${disk}-part2
> poolparts="$poolparts $poolpart"
> done
The ZFS system pool is going to be encrypted. First generate an encryption key and save it temporarily to the file /tmp/tank.key
with:
# cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1 > /tmp/tank.key && cat /tmp/tank.key
Later on in the guide, zlevis will be used for automatic decryption, so this key only has to be entered a few times. However, if any changes are made to the BIOS or Secure Boot, this key will be needed again, so make sure to save it.
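One way to save it is to copy the key file to an external medium; the device name below is only an example and has to be adapted:
# mount /dev/sdX1 /mnt
# cp /tmp/tank.key /mnt/
# umount /mnt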
Create the system pool:
# zpool create -f \
-o ashift=12 \
-O compression=lz4 \
-O acltype=posix \
-O xattr=sa \
-O dnodesize=auto \
-O encryption=on \
-O keyformat=passphrase \
-O keylocation=prompt \
-m none \
tank raidz1 $poolparts
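After creation, the pool layout and the encryption settings can be verified with:
# zpool status tank
# zfs get encryption,keyformat,keylocation tank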
Additionally, the spare option can be used to indicate spare disks. If more redundancy is preferred, then raidz2 and raidz3 are possible alternatives to raidz1. If a single disk is used, the raidz option can be left out. For further information see zpool-create.
Then create the system datasets:
# zfs create -o mountpoint=none tank/root
# zfs create -o mountpoint=legacy -o quota=24g tank/root/alpine
# zfs create -o mountpoint=/home -o atime=off -o setuid=off -o devices=off -o quota=<home-quota> tank/home
# zfs create -o mountpoint=/var -o atime=off -o exec=off -o setuid=off -o devices=off -o quota=16g tank/var
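The resulting datasets, their quotas and mountpoints can be reviewed with:
# zfs list -r -o name,quota,mountpoint tank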
The value of <home-quota> depends on the total size of the pool; generally, try to reserve some empty space in the pool.
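To see how much space is available when choosing <home-quota>, the size and free space of the pool can be checked:
# zpool list -o name,size,free tank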
Write the encryption key to the TPM and store the JWE in the tpm:jwe property:
# zfs set tpm:jwe=$(zlevis-encrypt '{}' < /tmp/tank.key) tank
To check if it worked, run zfs list -Ho tpm:jwe tank | zlevis-decrypt and compare the output with the contents of /tmp/tank.key.
Finally, export the zpool:
# zpool export tank
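If everything worked, the pool should now show up as importable (without actually being imported):
# zpool import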