# Provisioning
Flash the Alpine Linux extended ISO and make sure that the Secure Boot keys are reset and the TPM is enabled in the BIOS of the host.
After booting the Alpine Linux extended ISO, partition the disks. Internet access is required for this step, since `zfs`, `sgdisk` and various other necessary packages are not included on the extended ISO and have to be obtained from the Alpine package repository.
Networking and the package repositories are set up with `setup-interfaces` and `setup-apkrepos`:
```
# setup-interfaces -ar
# setup-apkrepos -c1
```
> To use Wi-Fi, simply run `setup-interfaces -r` and select `wlan0` or similar.
A few packages have to be installed first:
```
# apk add zfs lsblk sgdisk wipefs dosfstools acpid mdadm zlevis
```
> At the time of writing, the `zlevis` package is not yet in the Alpine package repository. In that case, drop it from the `apk add` command above, obtain the binary through other means, place it in `/usr/bin`, and install its dependencies `tpm2-tools` and `jose`.
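For example, assuming a prebuilt `zlevis` binary is already available locally (the source path below is only a placeholder), the dependencies and the binary can be put in place with:

```
# apk add tpm2-tools jose
# install -m 755 /path/to/zlevis /usr/bin/zlevis
```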
Then load the ZFS kernel module:
```
# modprobe zfs
```
Define the disks you want to use for this install:
```
# export disks="/dev/disk/by-id/<id-disk-1> ... /dev/disk/by-id/<id-disk-n>"
```
where `<id-disk-n>`, for $n \in \mathbb{N}$, is the `id` of the $n$-th disk.
> According to the [openzfs-FAQ](https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html), using `/dev/disk/by-id/` is the best practice for small pools. For larger pools, using Serial Attached SCSI (SAS) and the like, see [vdev_id](https://openzfs.github.io/openzfs-docs/man/master/5/vdev_id.conf.5.html) for proper configuration.
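For example, the available `id`s can be listed and the variable filled in accordingly (the disk names below are purely illustrative):

```
# ls /dev/disk/by-id/ | grep -v part
# export disks="/dev/disk/by-id/ata-EXAMPLE_DISK_A /dev/disk/by-id/ata-EXAMPLE_DISK_B"
```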
Wipe the existing disk partitions:
```
# for disk in $disks; do
> zpool labelclear -f $disk
> wipefs -a $disk
> sgdisk --zap-all $disk
> done
```
On each disk, create an `EFI system` partition (ESP) and a `Linux filesystem` partition:
```
# for disk in $disks; do
> sgdisk -n 1:1m:+512m -t 1:ef00 $disk
> sgdisk -n 2:0:-10m -t 2:8300 $disk
> done
```
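As a quick sanity check, the resulting partition tables can be printed:

```
# for disk in $disks; do
> sgdisk -p $disk
> done
```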
Create device nodes:
```
# mdev -s
```
Define the EFI partitions:
```
# export efiparts=""
# for disk in $disks; do
> efipart=${disk}-part1
> efiparts="$efiparts $efipart"
> done
```
Create an `mdraid` array on the EFI partitions:
```
# modprobe raid1
# mdadm --create --level 1 --metadata 1.0 --raid-devices <n> /dev/md/esp $efiparts
# mdadm --assemble --scan
```

> Here `<n>` is the number of EFI partitions in the array.
Format the array with a FAT32 filesystem:
```
# mkfs.fat -F 32 /dev/md/esp
```
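To confirm the array was created and formatted as intended, it can be inspected with, for example:

```
# mdadm --detail /dev/md/esp
# lsblk -f /dev/md/esp
```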
## ZFS pool creation
Define the pool partitions:
```
# export poolparts=""
# for disk in $disks; do
> poolpart=${disk}-part2
> poolparts="$poolparts $poolpart"
> done
```
The ZFS system pool is going to be encrypted. First generate an encryption key and save it temporarily to the file `/tmp/rpool.key` with:
```
# cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1 > /tmp/rpool.key && cat /tmp/rpool.key
```
> While `zlevis` will be used for automatic decryption, this key will be needed if any changes are made to the BIOS or Secure Boot, so make sure to save it.
Create the system pool:
```
# zpool create -f \
    -o ashift=12 \
    -O compression=lz4 \
    -O acltype=posix \
    -O xattr=sa \
    -O dnodesize=auto \
    -O encryption=on \
    -O keyformat=passphrase \
    -O keylocation=prompt \
    -m none \
    rpool raidz1 $poolparts
```
> Additionally, the `spare` option can be used to indicate spare disks. If more redundancy is preferred, then `raidz2` and `raidz3` are possible [alternatives](https://openzfs.github.io/openzfs-docs/man/master/7/zpoolconcepts.7.html) to `raidz1`. If only a single disk is used, the `raidz` option can be left out. For further information see [zpool-create](https://openzfs.github.io/openzfs-docs/man/master/8/zpool-create.8.html).
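For instance, to use `raidz2` together with a hot spare, the last line of the `zpool create` command above could be changed to something like the following (the spare partition is a placeholder):

```
    rpool raidz2 $poolparts spare /dev/disk/by-id/<id-spare-disk>-part2
```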
Then create the system datasets:
```
# zfs create -o mountpoint=none rpool/root
# zfs create -o mountpoint=legacy -o quota=24g rpool/root/alpine
# zfs create -o mountpoint=/home -o atime=off -o setuid=off -o devices=off -o quota=<home-quota> rpool/home
# zfs create -o mountpoint=/var -o atime=off -o exec=off -o setuid=off -o devices=off -o quota=16g rpool/var
```
> The appropriate `<home-quota>` depends on the total size of the pool; generally, try to reserve some empty space in the pool.
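If needed, the pool capacity can be checked and the quota adjusted after the fact, for example:

```
# zpool list rpool
# zfs set quota=<home-quota> rpool/home
```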
Write the encryption key to the TPM with `zlevis`:
```
# zlevis encrypt rpool '{}' < /tmp/rpool.key
```
> The default configuration settings for `zlevis encrypt` are used here, but a different configuration is possible by adjusting `'{}'` accordingly.
> To check if it worked, run `zlevis decrypt rpool`.
Finally, export the zpool:
```
# zpool export rpool
```