I have a fresh CentOS 7 installation on which ZoL 0.6.5.8 is installed; however, none of the ZFS/SPL kernel modules are loading automatically during boot.
Each time the system boots, if I try to run 'zfs', the following error occurs:
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.
If I run 'modprobe zfs', everything loads correctly and works fine until the next reboot, at which point nothing is loaded and I have to run modprobe manually again before ZFS will work.
On a fresh boot (when zfs hasn't loaded), there are no dmesg entries related to zfs/spl.
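(As a generic systemd workaround, not the ZFS-recommended path, the modules can be forced to load at every boot with a modules-load.d fragment; this is a sketch, and the service files below are supposed to make it unnecessary:)

```
# /etc/modules-load.d/zfs.conf  -- each line names a module that
# systemd-modules-load will modprobe at boot; loading zfs pulls in
# its dependencies (spl, zcommon, znvpair, ...) automatically
zfs
```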
Looking at the state of systemd scripts:
[root@backup1 ~]# systemctl list-unit-files | grep zfs
zfs-import-cache.service enabled
zfs-import-scan.service disabled
zfs-mount.service enabled
zfs-share.service enabled
zfs-zed.service enabled
zfs.target enabled
I also tried the suggestion in the release notes to no avail: systemctl preset zfs-import-cache zfs-import-scan zfs-mount zfs-share zfs-zed zfs.target
This is a fresh CentOS 7 install (in this order):
You should try moving from dkms to kmod, even though it's probably not the root cause.
Follow the instructions on the install wiki for switching (be sure to delete the .ko files).
Thanks for the suggestion.
I did a clean install of CentOS7, and used zfs-kmod instead as described here: https://github.com/zfsonlinux/zfs/wiki/RHEL-%26-CentOS
[root@backup1 ~]# yum list zfs*
.....
Installed Packages
zfs.x86_64 0.6.5.8-1.el7.centos @zfs-kmod
zfs-release.noarch 1-3.el7.centos @/zfs-release.el7.noarch
It didn't work at first.
[root@backup1 ~]# zfs list
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.
However, I think I found the solution: it seems like none of the zfs-* services are getting enabled during the package install.
Even though this is a fresh install, I ran: systemctl preset zfs-import-cache zfs-import-scan zfs-mount zfs-share zfs-zed zfs.target
Then I manually forced zfs-import-scan to enabled status with: systemctl enable zfs-import-scan
Now finally, it appears that the zfs modules all load correctly at boot:
[root@backup1 ~]# lsmod | grep zfs
zfs 2713912 3
zunicode 331170 1 zfs
zavl 15236 1 zfs
zcommon 55411 1 zfs
znvpair 93227 2 zfs,zcommon
spl 92223 3 zfs,zcommon,znvpair
Actually, the zfs services don't load the modules, but only work if the modules are already loaded.
There must have been something else.
If you are willing to find out, you may reinstall a fresh VM with dkms this time and see what happens.
I tried again with DKMS and reached the same result as with the kmod test.
As a recap, here are the steps:
yum install yum-utils
yum update
package-cleanup --oldkernels --count=1
yum install kernel-devel epel-release
yum install http://download.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
yum install zfs
systemctl preset zfs-import-cache zfs-import-scan zfs-mount zfs-share zfs-zed zfs.target
systemctl enable zfs-import-scan
At this point, I have a viable work-around to the problem.
But those are the reproduction steps, just in case there's any need for follow-up. If you need me to try anything else, I'd be happy to do so.
Thanks for the work. At this point, a dev should have a look here.
@behlendorf Anyone who could assign this, perhaps?
@Samuraid My guess is that the modules aren't loading because there is no /etc/zfs/zpool.cache file on the system, and you only have the zfs-import-cache service enabled. If you were to create a pool, that would result in a cache file being created, and then the modules should get loaded on boot. This is controlled by the ConditionPathExists line of the service file below. You could comment it out if you always want the modules to load, even if there isn't a zfs pool on the system.
[Unit]
Description=Import ZFS pools by cache file
DefaultDependencies=no
Requires=systemd-udev-settle.service
After=systemd-udev-settle.service
After=cryptsetup.target
After=systemd-remount-fs.service
Before=dracut-mount.service
ConditionPathExists=/usr/local/etc/zfs/zpool.cache
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/sbin/modprobe zfs
ExecStart=/usr/local/sbin/zpool import -c /usr/local/etc/zfs/zpool.cache -aN
[Install]
WantedBy=zfs-mount.service
WantedBy=zfs.target
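(Rather than commenting out the line in the vendor unit, the usual systemd pattern is a drop-in override, which survives package upgrades; a sketch, assuming a systemd version with `systemctl edit`, which CentOS 7's systemd 219 has:)

```ini
# Created via: systemctl edit zfs-import-cache.service
# which writes /etc/systemd/system/zfs-import-cache.service.d/override.conf
[Unit]
# An empty assignment resets the vendor unit's condition list,
# so the service runs even without a zpool.cache file
ConditionPathExists=
```

Run `systemctl daemon-reload` afterwards so the override takes effect.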
I'm seeing the same behavior. I found that /etc/zfs/zpool.cache does exist, and creating a zpool before rebooting has no effect; the zpool still doesn't mount after a reboot.
For me the fix was to enable the zfs-import-cache.service and after a reboot everything worked.
This is how the zfs-import-cache.service is coming up by default:
systemctl status -l zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; disabled; vendor preset: enabled)
Active: inactive (dead)
@Samuraid I'll close this issue as stale; reopen it if you still have the problem.
Same behaviour here, CentOS Linux release 7.3.1611 (Core).
I thought I fixed this with systemctl enable zfs-import-cache.service
However, that is giving me weird behaviour: zfs list shows the pool as mounted, but the pool is empty. If I export and re-import the pool, the pool shows files.
Reverting the change above; more research needed. Weird.
ugh. Fixed with
echo "/usr/sbin/zpool import pool-name" >> /etc/rc.local && chmod +x /etc/rc.local
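(A slightly more systemd-native alternative to the rc.local hack would be a small oneshot unit; this is only a sketch, with a hypothetical unit name and the same placeholder pool name `pool-name` as above:)

```ini
# /etc/systemd/system/zpool-import-workaround.service (hypothetical name)
[Unit]
Description=Work around ZFS pool not importing at boot
After=systemd-udev-settle.service

[Service]
Type=oneshot
# Make sure the modules are loaded before importing
ExecStartPre=/sbin/modprobe zfs
ExecStart=/usr/sbin/zpool import pool-name
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable zpool-import-workaround.service`.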
It does not work for me; I made a new ticket: https://github.com/zfsonlinux/zfs/issues/5955