ZFS: Ubuntu 16.04: Pool does not/cannot auto-import/mount after reboot

Created on 28 Apr 2017 · 9 comments · Source: openzfs/zfs

System information

Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial
Linux Kernel 4.4.0-62-generic
Architecture x86
ZFS Version 0.6.5.6-0ubuntu15
SPL Version 0.6.5.6-0ubuntu15

Description of problem:

Pool does not auto import/mount consistently after reboot.

Reproducing the problem:

Rebooting system

Warning/errors/backtraces from the system logs

systemctl | grep -i zfs
zed.service loaded active running ZFS Event Daemon (zed)
zfs-import-cache.service loaded failed failed Import ZFS pools by cache file
zfs-mount.service loaded active exited Mount ZFS filesystems
zfs-share.service loaded active exited LSB: Network share OpenZFS datasets.
zfs.target loaded active active ZFS startup target

systemctl status zfs-import-cache.service
zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; static; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2017-04-27 17:13:13 PDT; 8min ago
Process: 2070 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=1/FAILURE)
Process: 1938 ExecStartPre=/sbin/modprobe zfs (code=exited, status=0/SUCCESS)
Main PID: 2070 (code=exited, status=1/FAILURE)

Apr 27 17:13:12 coventry systemd[1]: Starting Import ZFS pools by cache file...
Apr 27 17:13:13 coventry zpool[2070]: cannot import 'zpool1': one or more devices is currently unavailable
Apr 27 17:13:13 coventry systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Apr 27 17:13:13 coventry systemd[1]: Failed to start Import ZFS pools by cache file.
Apr 27 17:13:13 coventry systemd[1]: zfs-import-cache.service: Unit entered failed state.
Apr 27 17:13:13 coventry systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.

sudo /sbin/zpool import -c /etc/zfs/zpool.cache -aN
no pools available to import
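
A useful cross-check at this point is a scan-based import that bypasses the cache file entirely; if the pool shows up here but the cache import fails, the cache file is likely recording stale device paths. A diagnostic sketch, assuming the pool's disks appear under /dev/disk/by-id:

# List importable pools by scanning devices instead of reading the cache
sudo zpool import -d /dev/disk/by-id

# Dump the cached pool config and compare its recorded device paths
sudo zdb -C -U /etc/zfs/zpool.cache | grep -i path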

$ sudo lsmod | grep -i zfs
zfs 2813952 8
zunicode 331776 1 zfs
zcommon 57344 1 zfs
znvpair 90112 2 zfs,zcommon
spl 102400 3 zfs,zcommon,znvpair
zavl 16384 1 zfs
...
cat /var/log/syslog | grep -i kernel
Apr 27 20:56:17 coventry kernel: [ 56.311632] SPL: Loaded module v0.6.5.6-0ubuntu4
Apr 27 20:56:17 coventry kernel: [ 56.391603] ZFS: Loaded module v0.6.5.6-0ubuntu15, ZFS pool version 5000, ZFS filesystem version 5
Apr 27 20:56:17 coventry kernel: [ 56.449230] igb 0000:02:00.1 enp2s0f1: igb: enp2s0f1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Apr 27 20:56:17 coventry kernel: [ 56.449426] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1: link becomes ready
Apr 27 20:56:17 coventry kernel: [ 56.755899] SPL: The /etc/hostid file is not found.
Apr 27 20:56:17 coventry kernel: [ 56.755903] SPL: using hostid 0x00000000
...
cat /var/log/syslog | grep -i systemd
Apr 27 20:56:17 coventry systemd[1]: Starting Import ZFS pools by cache file...
Apr 27 20:56:17 coventry systemd[1]: Reached target Swap.
Apr 27 20:56:17 coventry systemd[1]: Started Import ZFS pools by cache file.
Apr 27 20:56:17 coventry systemd[1]: Starting Mount ZFS filesystems...
Apr 27 20:56:17 coventry systemd[1]: Started Mount ZFS filesystems.
Apr 27 20:56:17 coventry systemd[1]: Reached target Local File Systems.

I can successfully import the pool manually using the 'zpool import zpool1' command.
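
Given that the manual import succeeds, one plausible follow-up (an untested sketch, not something the reporter ran) is to rewrite the cache file after importing, so that zfs-import-cache sees fresh device paths on the next boot:

# While the pool is imported, regenerate its entry in the cache file
sudo zpool set cachefile=/etc/zfs/zpool.cache zpool1

# Verify the pool is now recorded in the cache
sudo zdb -C -U /etc/zfs/zpool.cache | grep -i zpool1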

All 9 comments

I also had this issue.

Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial
Linux Kernel 4.4.0-87-generic
ZFS 0.6.5.6-0ubuntu17

By running systemctl enable zfs-import-cache I was able to fix it. The ZFS pools are available and online after a reboot now.
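
Before rebooting to test this, the enable state of the boot-time ZFS units can be checked; a quick sketch (note that on some packagings zfs-import-cache.service is static, as in the status output above, and is pulled in via zfs.target rather than enabled directly):

# Report enabled / static / disabled for each unit
systemctl is-enabled zfs-import-cache.service zfs-mount.service zfs.target

# Review why the cache import failed on the current boot
journalctl -b -u zfs-import-cache.service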

I have the same issue as @minorsatellite =(

I too have the same problem

Distributor ID : Ubuntu
Description : Ubuntu 16.04.3 LTS
Release : 16.04
Codename : xenial
Linux Kernel : 4.4.0-92-generic
Architecture : x86_64
ZFS Version: v0.6.5.6-0ubuntu16

Running systemctl enable zfs-import-cache doesn't make any difference; I am still having the problem on reboot.

I figured out a workaround for this.

When I created the pool, I referenced the disks by device name (/dev/sdX). I read elsewhere that this is a bad idea, since those names can change between boots. So I exported the pool and imported it back, referencing the disks by ID. The two commands below did the job for me.

sudo zpool export data
sudo zpool import -d /dev/disk/by-id data

Now my pools are auto-mounting on reboots.
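
A way to confirm the workaround stuck is to check which vdev names the imported pool records; a sketch, reusing the commenter's pool name 'data':

# After the by-id import, the vdev names should be by-id names, not sdX
zpool status data

# On ZFS 0.7 and later, -P prints the full /dev/disk/by-id/... paths
zpool status -P data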

This tracker is for bugs only, please use our mailing lists for support.

Looks like the original reporter had a problem with device paths; see @pavank's answer.

Closed.

It happened again on my Debian 9.
systemctl enable zfs-import-cache doesn't work.
My zfsutils-linux version is 0.6.5.9-5; upgrading it to 0.7.6-1~bpo9+1 still leaves the same problem.

And these commands don't solve the problem either:
sudo zpool export data
sudo zpool import -d /dev/disk/by-id data

How can we get this working with multipath? ZFS runs before the disks have been mapped.
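
One approach worth trying for the multipath case, as a sketch only (unit names assumed, not verified against this poster's setup): a systemd drop-in that orders the pool import after multipathd has created its device maps.

# /etc/systemd/system/zfs-import-cache.service.d/multipath.conf
[Unit]
After=multipathd.service
Wants=multipathd.service

After creating the drop-in, run sudo systemctl daemon-reload and reboot. If multipathd finishes its mapping asynchronously, adding ExecStartPre=/sbin/udevadm settle under a [Service] section in the same drop-in is another common tweak.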

This did the trick for me (Debian buster, kernel 5.6.13, ZFS 0.8.4):

systemctl list-unit-files | grep zfs | awk '{print $1}' | xargs -n 1 systemctl enable

zpool import -f poolname

reboot
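
For reference, the first command enumerates every installed ZFS unit file and enables each one; units that are static simply report that they cannot be enabled, which is harmless here. The result can be verified before rebooting:

# Each unit should now show enabled or static
systemctl list-unit-files | grep zfs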

The above trick also worked for me on a fresh install of Debian Buster with kernel 4.19.0-12-amd64 and ZFS 0.8.4-2~bpo10+1.
