ZFS: Cannot create zpool of zvols - "/dev/zvol/tank/zvol1p1 is missing"

Created on 25 Apr 2017 · 8 comments · Source: openzfs/zfs

System information


Type | Version/Name
--- | ---
Distribution Name | Ubuntu
Distribution Version | 17.04
Linux Kernel | 4.10.0-19-generic
Architecture | x86_64
ZFS Version | 0.6.5.9-2
SPL Version | 0.6.5.9-1

Describe the problem you're observing

I'm attempting to create a zpool on top of a zvol (just to learn), via `zpool create -f voltank /dev/zvol/tank/zvol1`. This fails with the error: `missing link: zd0 was partitioned but /dev/zvol/tank/zvol1p1 is missing`.

Describe how to reproduce the problem

```
zfs create -V 100Mb tank/zvol1
zpool create -f voltank /dev/zvol/tank/zvol1
missing link: zd0 was partitioned but /dev/zvol/tank/zvol1p1 is missing
```

That link is wrong - it should be `/dev/zvol/tank/zvol1-part1`, not `/dev/zvol/tank/zvol1p1`:

```
root@localhost:~# ls -l /dev/zd*
brw-rw---- 1 root disk 230, 0 Apr 25 01:35 /dev/zd0
brw-rw---- 1 root disk 230, 1 Apr 25 01:35 /dev/zd0p1
brw-rw---- 1 root disk 230, 9 Apr 25 01:35 /dev/zd0p9
root@localhost:~# ls -l /dev/zvol/tank/
total 0
lrwxrwxrwx 1 root root  9 Apr 25 01:35 zvol1 -> ../../zd0
lrwxrwxrwx 1 root root 11 Apr 25 01:35 zvol1-part1 -> ../../zd0p1
lrwxrwxrwx 1 root root 11 Apr 25 01:35 zvol1-part9 -> ../../zd0p9
```

The problem can be avoided by using the /dev/zd* devices directly:

```
zpool create -f voltank /dev/zd0
```
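A related sketch (mine, not from the original report): since `/dev/zvol/tank/zvol1` is just a symlink to `/dev/zd0`, `readlink -f` can resolve it, so the pool is created against the node that actually exists:

```
# Hedged workaround sketch: resolve the /dev/zvol symlink to its backing
# zd node, then hand the resolved path to zpool.
zpool create -f voltank "$(readlink -f /dev/zvol/tank/zvol1)"
```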

Label: ZVOL


All 8 comments

Just for the record, this configuration is not supported and absolutely not guaranteed to be deadlock-safe. Run the second pool inside a VM for better reliability.
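For illustration only (not from this thread): one way to follow that advice is to expose the zvol to a guest as a raw virtio disk and create the inner pool inside the VM. The QEMU invocation below is a minimal sketch; the guest image name and memory size are placeholders.

```
# Illustrative sketch: hand the zvol to a KVM guest as a raw virtio disk;
# the second pool is then created inside the guest, not on the host.
# guest-root.qcow2 is a placeholder for an existing guest image.
qemu-system-x86_64 -enable-kvm -m 2048 \
  -drive file=guest-root.qcow2,format=qcow2,if=virtio \
  -drive file=/dev/zvol/tank/zvol1,format=raw,if=virtio
```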

This is fixed by https://github.com/zfsonlinux/zfs/commit/6bb24f4dc7b7267699e3c3a4ca1ca062fe564b9e, which is already in the 0.7.0 tagged (pre-)releases.

EDIT: 0.7.0 has now been released, closing.
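To check whether a given system already has the fix, the loaded module version can be queried (the same command appears in the reproduction transcript below):

```
# Print the version of the loaded zfs kernel module; 0.7.0 or later
# includes the fix referenced above.
modinfo zfs -F version
```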

Note - I'm still seeing this on 0.7.1

@stewartadam I'm not able to reproduce this on 0.7.x: can you provide the exact sequence of commands (and output) used to reproduce this on 0.7.1, along with kernel version and distribution information?

```
root@debian-9:~# modinfo zfs -F version
0.7.0-33_g08de8c16f
root@debian-9:~#
root@debian-9:~# function is_linux() {
>    if [[ "$(uname)" == "Linux" ]]; then
>       return 0
>    else
>       return 1
>    fi
> }
root@debian-9:~# #
root@debian-9:~# # setup
root@debian-9:~# POOLNAME='testpool'
root@debian-9:~# if is_linux; then
>    TMPDIR='/var/tmp'
>    mountpoint -q $TMPDIR || mount -t tmpfs tmpfs $TMPDIR
>    zpool destroy $POOLNAME
>    fallocate -l 256m $TMPDIR/zpool_$POOLNAME.dat
>    zpool create $POOLNAME $TMPDIR/zpool_$POOLNAME.dat
> else
>    TMPDIR='/tmp'
>    zpool destroy $POOLNAME
>    mkfile 1g $TMPDIR/zpool.dat
>    zpool create $POOLNAME $TMPDIR/zpool_$POOLNAME.dat
> fi
cannot open 'testpool': no such pool
root@debian-9:~# #
root@debian-9:~# zfs create -V 128M -s $POOLNAME/zvol
root@debian-9:~# udevadm trigger
root@debian-9:~# udevadm settle
root@debian-9:~# zpool create -f voltank /dev/zvol/$POOLNAME/zvol
[   72.921339]  zd0: p1 p9
root@debian-9:~# zpool list
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
testpool   240M   246K   240M         -     1%     0%  1.00x  ONLINE  -
voltank    112M   792K   111M         -     0%     0%  1.00x  ONLINE  -
root@debian-9:~#
```

So I am currently on commit b52563034230b35f0562b6f40ad1a00f02bd9a05 (encryption support) of master. I presumed it would include the changes from 0.7.1, since that was released many days prior, but after examining the release branch it looks like that's not the case.

When do the release branches get merged back into master?

@stewartadam we cherry-pick bugfixes from master into the zfs-0.7-release branch and tag a 0.7.x release every couple months or so. Since encryption is such a big commit, we'll probably let it sit in master until an 0.8.0 release (and not cherry-pick it for a 0.7.x release).
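For readers unfamiliar with that flow, a hypothetical illustration (the commit hash is the fix cited above; the tag name is a placeholder):

```
# Illustrative only: a fix lands on master first, then is cherry-picked
# onto the release branch before the next point release is tagged.
git checkout zfs-0.7-release
git cherry-pick 6bb24f4dc7b7267699e3c3a4ca1ca062fe564b9e
git tag zfs-0.7.x   # placeholder tag name
```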

The problem still exists on CentOS 7 / zfs 0.7.12

OS Info

| Type | Info | Notes |
| ---- | ---- | ----- |
| OS | CentOS 7.6.1810 | |
| Kernel | 3.10.0-957.1.3.el7.x86_64 | |
| ZFS/SPL Version | 0.7.12 | Installed from zfs-release/dkms |

Reproduction

```
# Create raidz2 (RAID-6-like) pools; we can't afford to find the backup
# unavailable when it's needed.
zpool create pool1 raidz2 scsi{0..5}
zpool create pool2 raidz2 scsi{6..11}
zpool create pool3 raidz2 scsi{12..17}
# ...

zfs create -ps -V ${SIZE} pool1/vol1
# ...and likewise for pool2, pool3, etc.

# Now we want a striped (RAID-0) pool on top of the raidz2 pools;
# yes, we are after both reliability and speed.
zpool create pool pool1/vol1 pool2/vol2 ....   # all created vols
# missing link: zd0 was partitioned but /dev/nstest1/vol1p1 is missing

lsblk | grep zd
# zd0  ...
#  +-zd0p1
#  +-zd0p9
# zd16
# zd32
# ...
ls /dev/pool1
# vol1 vol1-part1 vol1-part9

# The problem can be avoided by using the device nodes directly,
# whether as /dev/zd0, /dev/pool1/vol1, or /dev/zvol/pool1/vol1.
zpool create pool /dev/zd{0,16,32,...}
# (success)
```
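An alternative sketch of the same workaround (mine, not from the report; volume names reused from above): resolve each zvol symlink to its backing node, so the zd numbers don't have to be typed by hand:

```
# Hedged sketch: build the vdev list by resolving each zvol symlink
# (e.g. /dev/zvol/pool1/vol1 -> /dev/zd0) before creating the pool.
devs=()
for v in pool1/vol1 pool2/vol2 pool3/vol3; do
    devs+=("$(readlink -f "/dev/zvol/$v")")
done
zpool create pool "${devs[@]}"
```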

Other problem

Obviously we just want a striped pool on top of raidz2, so I read almost everything I could find and discovered that there is a pool layout of this form:

```
<pool>
    raidz2-0
        <devices...>
    raidz2-1
        <devices...>
    ....
    cache
        <devices>
    spare
        <devices>
```

Does this do the same thing as creating three raidz2 pools and then creating a striped pool on top of them?

Sorry, but I'm pretty new to ZoL.

Don't do that. It is much simpler to create a single pool with more than one raidz2 vdev: the vdevs are striped (RAID-0) by default.
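For example (device names reused from the reproduction above), a single `zpool create` with three raidz2 vdevs yields the striped-over-raidz2 layout directly:

```
# One pool, three top-level raidz2 vdevs; ZFS stripes writes across them.
zpool create pool \
    raidz2 scsi{0..5} \
    raidz2 scsi{6..11} \
    raidz2 scsi{12..17}
```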

Also, this is a closed issue. You'll have better luck getting questions answered on the mailing list.
