# Provisioning
After flashing the Alpine Linux extended ISO, partition the disks. An internet connection is required for this step, since `zfs` and `sgdisk` are not included on the extended ISO and therefore have to be obtained from the repositories.
To set this up, `setup-interfaces` and `setup-apkrepos` will be used.
```
# setup-interfaces -ar
# setup-apkrepos -c1
```
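Optionally, confirm that the repositories are reachable by refreshing the package index:
```
# apk update
```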
A few packages will have to be installed first:
```
# apk add zfs lsblk sgdisk wipefs dosfstools acpid mdadm
```
Then load the ZFS kernel module
```
# modprobe zfs
```
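To verify that the module loaded successfully, check for it in the list of loaded modules:
```
# lsmod | grep zfs
```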
Define the disks you want to use for this install
```
# export disks="/dev/disk/by-id/<id-disk-1> ... /dev/disk/by-id/<id-disk-n>"
```
where `<id-disk-n>`, for $n \in \mathbb{N}$, is the `id` of the $n$-th disk.
> According to the [openzfs-FAQ](https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html), using `/dev/disk/by-id/` is the best practice for small pools. For larger pools using Serial Attached SCSI (SAS) and the like, see [vdev_id](https://openzfs.github.io/openzfs-docs/man/master/5/vdev_id.conf.5.html) for proper configuration.
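The available ids can be listed with:
```
# ls -l /dev/disk/by-id/
```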
Wipe the existing disk partitions
```
# for disk in $disks; do
> zpool labelclear -f $disk
> wipefs -a $disk
> sgdisk --zap-all $disk
> done
```
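Afterwards the disks should show no remaining partitions, which can be confirmed with:
```
# lsblk -o NAME,SIZE,TYPE
```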
Create an `EFI system` partition (ESP) and a `Linux filesystem` partition on each disk
```
# for disk in $disks; do
> sgdisk -n 1:1m:+512m -t 1:ef00 $disk
> sgdisk -n 2:0:-10m -t 2:8300 $disk
> done
```
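The resulting partition tables can be inspected per disk:
```
# for disk in $disks; do
> sgdisk -p $disk
> done
```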
Create device nodes
```
# mdev -s
```
Define the EFI partitions
```
# export efiparts=""
# for disk in $disks; do
> efipart=${disk}-part1
> efiparts="$efiparts $efipart"
> done
```
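A quick sanity check that the variable holds the expected partition paths:
```
# echo $efiparts
```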
Create an `mdraid` array on the EFI partitions
```
# modprobe raid1
# mdadm --create --level 1 --metadata 1.0 --raid-devices <n> /dev/md/esp $efiparts
# mdadm --assemble --scan
```
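The state of the array can be checked with:
```
# cat /proc/mdstat
```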
Format the array with a FAT32 filesystem
```
# mkfs.fat -F 32 /dev/md/esp
```
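Optionally, confirm the filesystem type on the array:
```
# blkid /dev/md/esp
```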
## ZFS pool creation
Define the pool partitions
```
# export poolparts=""
# for disk in $disks; do
> poolpart=${disk}-part2
> poolparts="$poolparts $poolpart"
> done
```
The ZFS system pool is going to be encrypted. First generate an encryption key and save it temporarily to the file `/tmp/tank.key` with:
```
# tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 20 | head -n 1 > /tmp/tank.key && cat /tmp/tank.key
```
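Since the key is stored in plain text, restrict access to it:
```
# chmod 600 /tmp/tank.key
```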
> Later on in the guide `clevis` will be used for automatic decryption, so this key only has to be entered a few times. However, if any changes are made to the BIOS or Secure Boot configuration, this key will be needed again, so make sure to write it down.
Create the system pool
```
# zpool create -f \
-o ashift=12 \
-O compression=lz4 \
-O acltype=posix \
-O xattr=sa \
-O dnodesize=auto \
-O encryption=on \
-O keyformat=passphrase \
-O keylocation=file:///tmp/tank.key \
-m none \
tank raidz1 $poolparts
```
> Additionally, the `spare` option can be used to indicate spare disks. If more redundancy is preferred, `raidz2` and `raidz3` are possible [alternatives](https://openzfs.github.io/openzfs-docs/man/master/7/zpoolconcepts.7.html) to `raidz1`. If a single disk is used, the `raidz` option can be omitted. For further information see [zpool-create](https://openzfs.github.io/openzfs-docs/man/master/8/zpool-create.8.html).
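The pool layout and health can be verified with:
```
# zpool status tank
```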
Then create the system datasets
```
# zfs create -o mountpoint=none tank/root
# zfs create -o canmount=noauto -o mountpoint=/ -o atime=off -o quota=24g tank/root/alpine
# zfs create -o mountpoint=/home -o atime=off -o setuid=off -o devices=off -o quota=<home-quota> tank/home
# zfs create -o mountpoint=/var -o exec=off -o setuid=off -o devices=off -o quota=16g tank/var
```
> The value of `<home-quota>` depends on the total size of the pool; generally, try to reserve some empty space in the pool.
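The datasets, their mountpoints and quotas can be reviewed with:
```
# zfs list -r -o name,mountpoint,quota tank
```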
Finally, export the zpool
```
# zpool export tank
```
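Running `zpool import` without arguments afterwards lists pools that are available for import; `tank` should appear there, confirming a clean export:
```
# zpool import
```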