ZFS
Installation:
Antergos has the ZFS package in its repository, so it can be installed with pacman:
sudo pacman -S zfs
Updates: (read this!)
Antergos uses dkms to manage the ZFS kernel module and the SPL kernel module it needs.
At the moment there is no way to tell pacman which module to build first.
Most of the time pacman tries to build the ZFS module before the SPL module, so the build fails.
If you run critical system components on ZFS, you need to recompile the ZFS module before you reboot!
Rebuilding the module is as easy as reinstalling ZFS (literally!):
sudo pacman -S zfs
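Before you reboot, it may be worth verifying that both modules actually got built for your installed kernel. A quick check (the exact output format varies between dkms versions):
dkms status
modinfo zfs | grep -i version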
Extra notes:
Building the ZFS modules takes time. Be patient and grab a cup of coffee while it builds.
Sources:
Thanks to @karasu for providing the info here.
System rescue (in case you did not read Updates):
WARNING: not fully tested!
- Start the system from a live system (the install ISO)
- Open a terminal and type:
sudo modprobe zfs
sudo zpool import -f -a -R /mnt
sudo mount /dev/sda1 /mnt/boot
sudo arch-chroot /mnt
With the chroot from above in place, the fix may be as simple as reinstalling zfs:
pacman -S zfs
It still needs testing whether this builds spl in the right order…
You may need to reinstall spl first and then zfs.
This will also work (manually building the modules):
dkms install -m spl/0.7.2 -k 4.13.7-1-ARCH
dkms install -m zfs/0.7.2 -k 4.13.7-1-ARCH
But investigate the exact installed versions of SPL and ZFS first!
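If you are unsure which versions are present in the chroot, the following should reveal them (note: inside the chroot, uname -r reports the live ISO's kernel, so list the module directories instead):
ls /usr/src
ls /usr/lib/modules
dkms status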
After waiting until everything is ready (it will take a while, as we said before), type
exit
- Unmount the devices:
sudo umount /dev/sda1
sudo zfs unmount /mnt
Reboot the system and enjoy Antergos again!
Troubleshooting:
Using the ZFS file system¹ ² for your antergos install may result in one or more of the following error messages during boot time. This article should help you understand and correctly treat these errors.
Most of the files that need to be edited and the commands that need to be run in this article require you to be root. You can edit and run commands with sudo³.
ERROR: resume: no device specified for hibernation
This is caused by hibernation support that is (still) missing in ZFS. Although antergos sets up a ZVOL⁴ ⁵ as a virtual swap partition⁶ by default, it cannot be used for hibernation/resume⁷ ⁸. You can get rid of the message, though, by telling your bootloader that the ZFS root pool should be the swap volume. To do so, pass the UUID of your root partition as a kernel parameter in your bootloader's config. On a default antergos install the root partition will most probably be /dev/sda3; if it is not, you probably know which one it is. To get the UUIDs of your device partitions you can issue blkid.
After getting the UUID of your ZFS-on-root partition, you can then add the following kernel parameter:
resume=UUID=Uuid-Of-Your-Root-Partition
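For example, with the default layout mentioned above you would check just that partition and copy the value of its UUID= field:
sudo blkid /dev/sda3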
For GRUB (the default antergos bootloader) edit:
/etc/default/grub
and add the parameter to GRUB_CMDLINE_LINUX_DEFAULT="quiet …"
in between the quotation marks. Run grub-mkconfig -o /boot/grub/grub.cfg
after editing the grub config file.
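Put together, the edited line and the rebuild step would look roughly like this (keep whatever parameters are already in the line; the UUID is still your placeholder to fill in):
GRUB_CMDLINE_LINUX_DEFAULT="quiet resume=UUID=Uuid-Of-Your-Root-Partition"
sudo grub-mkconfig -o /boot/grub/grub.cfg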
For systemd-boot edit:
/boot/loader/entries/your.conf
and add the parameter to the options line.
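A sketch of such an entry; the title, kernel and initrd paths, and the existing options are assumptions from a typical Arch-style setup, only the resume=… part is the addition:
title Antergos
linux /vmlinuz-linux
initrd /initramfs-linux.img
options zfs=yourRootPoolName rw resume=UUID=Uuid-Of-Your-Root-Partition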
Note! This will not enable hibernation, it will just get rid of the error message. If you search dmesg
you will notice that it now finds a hibernation partition, but it will claim that "PM: Hibernation image not present or could not be loaded". There is no workaround for this if you don't have a real separate swap partition (as you probably won't).
unknown operand
cannot open 'yourRootPoolName': no such pool
This is not your fault either; it happens because of formatting in the ZFS on Linux (ZOL) source code, which has already been ironed out on the master branch a while ago. If you (still) see this, you have two options:
- Wait for antergos to get rid of it in a future update
- Patch the ZFS hook used by mkinitcpio yourself
For the latter you need to edit /usr/lib/initcpio/hooks/zfs
with your favourite editor. The following changes will have to be made⁹:
# Inside of zfs_mount_handler ()
- if ! "/usr/bin/zpool" list -H $pool 2>&1 > /dev/null ; then
+ if ! "/usr/bin/zpool" list -H $pool 2>1 > /dev/null ; then

# The following all inside run_hook()
- [[ $zfs_force == 1 ]] && ZPOOL_FORCE='-f'
- [[ "$zfs_import_dir" != "" ]] ...
+ [[ "${zfs_force}" = 1 ]] && ZPOOL_FORCE='-f'
+ [[ "${zfs_import_dir}" != "" ]] ...
# Double quotes and curly brackets !

- if [ "$root" = 'zfs' ]; then
+ if [ "${root}" = 'zfs' ]; then

- ZFS_DATASET=$zfs
+ ZFS_DATASET=${zfs}
You will need to rebuild your images with mkinitcpio -p linux
after editing this file.
ZFS: No hostid found on kernel command line or /etc/hostid. ZFS pools may not import correctly
ZFS does not recognize your hostid by default. Again you have two options here:
- Pass your hostid as a kernel parameter
- Correctly generate your hostid file
In both cases you will want to issue hostid
and copy/write down/memorize the output. Then for the first option simply pass spl.spl_hostid=YourHostid
as a kernel parameter. See above for instructions on how to add a kernel parameter to your bootloader.
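For example, if hostid printed 007f0100 (a common default; yours will differ), the parameter is usually written with a 0x prefix:
spl.spl_hostid=0x007f0100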
For the second option you will need to use a little C script you can quickly write yourself. Please refer to the excellent Arch Wiki for detailed instructions¹⁰.
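As an alternative to the C script: if your ZFS version ships the zgenhostid helper (ZFS on Linux 0.7 and later should), it writes /etc/hostid in the correct binary format for you:
sudo zgenhostid $(hostid)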
[FAILED] Failed to start ZFS file system shares
See ‘systemctl status zfs-share.service’ for details
This should be rare to see if you used the antergos installer for your ZFS-on-root installation. It happens because the mountpoints of the failing datasets are not empty: at some point the system, or you, put files there while the dataset was not yet mounted by ZFS. An example could be a dataset pool/home which mounts to /home, and several datasets with a structure of pool/userdata/documents, pool/userdata/downloads etc. The latter all mount to /home/username/documents, /home/username/downloads and so on, respectively.
In this example the system wrote all the user files and directories like .bash_profile, .cache etc. to the user directory inside /home (naturally). At shutdown, though, when pool/home got exported, all of these files remained. The example situation was remedied by exporting the mentioned pool, copying everything that remained to a safe location with cp -arv, making sure everything got backed up, and deleting the affected directory. Afterwards the dataset pool/userdata was set to mountpoint /home/username and its children just mount below that.
This is very individual in every case and can happen every time you have files and directories written to directories that don't export with the respective dataset. In that case, review your dataset structure; export, import, and review repeatedly until you see where you have to move or remove things so your datasets mount cleanly. You can refer to this comment on GitHub to get an idea and start reviewing your datasets in rescue mode.
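A rough sketch of that review loop, using the example names from above (adapt pool and dataset names to your system):
sudo zpool export pool
sudo zpool import pool
zfs list -o name,mountpoint,mounted # which datasets did not mount?
ls -A /home/username # anything left behind in the mountpoint?
systemctl status zfs-share.service # re-check after each change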
Have fun using ZFS on root!