docs/alpine-server-setup/provisioning.md: set mountpoint to legacy, implement zlevis and update format

This commit is contained in:
Luc Bijl 2024-12-24 12:38:29 +01:00
parent de46df1384
commit 35ee8a3320


@@ -1,6 +1,8 @@
# Provisioning
After flashing the Alpine Linux extended ISO, partition the disks. For this action internet is required since `zfs` and `sgdisk` are not included on the extended ISO, therefore it needs to be obtained from the repository.
Flash the Alpine Linux extended ISO and make sure the Secure Boot keys are reset and the TPM is enabled in the BIOS of the host.
After booting the Alpine Linux extended ISO, partition the disks. Internet access is required for this step, since `zfs`, `sgdisk` and various other necessary packages are not included on the extended ISO; they need to be obtained from the Alpine package repository.
To set it up `setup-interfaces` and `setup-apkrepos` will be used.
@@ -9,10 +11,12 @@ To set it up `setup-interfaces` and `setup-apkrepos` will be used.
# setup-apkrepos -c1
```
A few packages will have to be installed first:
> To use Wi-Fi, simply run `setup-interfaces -r` and select `wlan0` or similar.
A few packages will have to be installed first,
```
# apk add zfs lsblk sgdisk wipefs dosfstools acpid mdadm
# apk add zfs lsblk sgdisk wipefs dosfstools acpid mdadm tpm2-tools zlevis
```
and load the ZFS kernel module
@@ -21,7 +25,7 @@ and load the ZFS kernel module
# modprobe zfs
```
Define the disks you want to use for this install
Define the disks you want to use for this install,
```
# export disks="/dev/disk/by-id/<id-disk-1> ... /dev/disk/by-id/<id-disk-n>"
@@ -31,7 +35,7 @@ with `<id-disk-n>` for $n \in \mathbb{N}$ the `id` of the disk.
> According to the [openzfs-FAQ](https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html), using `/dev/disk/by-id/` is the best practice for small pools. For larger pools using Serial Attached SCSI (SAS) and the like, see [vdev_id](https://openzfs.github.io/openzfs-docs/man/master/5/vdev_id.conf.5.html) for proper configuration.
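The `$disks` variable and the per-disk loop pattern used in the following steps can be dry-run with plain `echo` before issuing any destructive command; a minimal sketch with hypothetical ids (substitute the host's real ones):

```shell
# Hypothetical disk ids for illustration; the real ones can be found
# with: ls -l /dev/disk/by-id/
export disks="/dev/disk/by-id/ata-EXAMPLE-1 /dev/disk/by-id/ata-EXAMPLE-2"

# Dry-run the loop before any wipe/partition command touches the disks.
for disk in $disks; do
    echo "would operate on: $disk"
done
```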
Wipe the existing disk partitions
Wipe the existing disk partitions:
```
# for disk in $disks; do
@@ -41,7 +45,7 @@ Wipe the existing disk partitions
> done
```
Create on each disk an `EFI system` partition (ESP) and a `Linux filesystem` partition
Create on each disk an `EFI system` partition (ESP) and a `Linux filesystem` partition:
```
# for disk in $disks; do
@@ -50,13 +54,13 @@ Create on each disk an `EFI system` partition (ESP) and a `Linux filesystem` par
> done
```
Create device nodes
Create device nodes:
```
# mdev -s
```
Define the EFI partitions
Define the EFI partitions:
```
# export efiparts=""
@@ -66,7 +70,7 @@ Define the EFI partitions
> done
```
Create a `mdraid` array on the EFI partitions
Create a `mdraid` array on the EFI partitions:
```
# modprobe raid1
@@ -74,7 +78,7 @@ Create a `mdraid` array on the EFI partitions
# mdadm --assemble --scan
```
Format the array with a FAT32 filesystem
Format the array with a FAT32 filesystem:
```
# mkfs.fat -F 32 /dev/md/esp
@@ -92,15 +96,15 @@ Define the pool partitions
> done
```
The ZFS system pool is going to be encrypted. First generate an encryption key and save it temporarily to the file `/tmp/crypt-key.txt` with:
The ZFS system pool is going to be encrypted. First generate an encryption key and save it temporarily to the file `/tmp/tank.key` with:
```
# cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1 > /tmp/tank.key && cat /tmp/tank.key
```
> Later on in the guide `clevis` will be used for automatic decryption, so this key only has to be entered a few times. However, if any changes are made to the bios or secureboot then this key will be needed again, so make sure to write it down.
> Later on in the guide `zlevis` will be used for automatic decryption, so this key only has to be entered a few times. However, if any changes are made to the BIOS or Secure Boot state, this key will be needed again, so make sure to save it.
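The key pipeline can be sanity-checked before use: it should leave exactly 20 alphanumeric characters in `/tmp/tank.key`. A sketch equivalent to the command above, without the redundant `cat`:

```shell
# Generate a 20-character alphanumeric key and store it temporarily.
tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 20 | head -n 1 > /tmp/tank.key

# Show the key so it can be saved somewhere safe.
cat /tmp/tank.key
```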
Create the system pool
Create the system pool:
```
# zpool create -f \
@@ -111,25 +115,33 @@ Create the system pool
-O dnodesize=auto \
-O encryption=on \
-O keyformat=passphrase \
-O keylocation=file:///tmp/tank.key \
-O keylocation=prompt \
-m none \
tank raidz1 $poolparts
```
> Additionally, the `spare` option can be used to indicate spare disks. If more redundancy is preferred, then `raidz2` and `raidz3` are possible [alternatives](https://openzfs.github.io/openzfs-docs/man/master/7/zpoolconcepts.7.html) to `raidz1`. If a single disk is used, the `raidz` option can be omitted. For further information see [zpool-create](https://openzfs.github.io/openzfs-docs/man/master/8/zpool-create.8.html).
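As a rough aid when choosing a raidz level, the usable capacity of a vdev of equal-size disks is approximately (disks − parity) × disk size, ignoring metadata and padding overhead. The numbers below are purely illustrative:

```shell
n=4           # disks in the vdev (illustrative)
parity=1      # 1, 2 or 3 for raidz1, raidz2, raidz3
size_gib=1000 # size of each disk in GiB (illustrative)
echo "approximate usable capacity: $(( (n - parity) * size_gib )) GiB"
```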
Then create the system datasets
Then create the system datasets:
```
# zfs create -o mountpoint=none tank/root
# zfs create -o canmount=noauto -o mountpoint=/ -o atime=off -o quota=24g tank/root/alpine
# zfs create -o mountpoint=legacy -o quota=24g tank/root/alpine
# zfs create -o mountpoint=/home -o atime=off -o setuid=off -o devices=off -o quota=<home-quota> tank/home
# zfs create -o mountpoint=/var -o exec=off -o setuid=off -o devices=off -o quota=16g tank/var
# zfs create -o mountpoint=/var -o atime=off -o exec=off -o setuid=off -o devices=off -o quota=16g tank/var
```
> Setting the `<home-quota>` depends on the total size of the pool, generally try to reserve some empty space in the pool.
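One way to follow the note above when picking `<home-quota>`: subtract the fixed quotas and a headroom reserve (say 20%) from the pool's usable size. All figures here are assumptions for illustration, not values from this guide:

```shell
pool_gib=1000                # usable pool size in GiB (illustrative)
reserve=$(( pool_gib / 5 ))  # keep ~20% of the pool unallocated
fixed=$(( 24 + 16 ))         # tank/root/alpine and tank/var quotas from above
echo "suggested <home-quota>: $(( pool_gib - reserve - fixed ))g"
```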
Finally, export the zpool
Write the encryption key to the TPM and store the resulting JWE in the `tpm:jwe` pool property:
```
# zfs set tpm:jwe=$(zlevis-encrypt '{}' < /tmp/tank.key) tank
```
> To check that it worked, run `zfs list -Ho tpm:jwe tank | zlevis-decrypt`.
Finally, export the zpool:
```
# zpool export tank