ZFS: Miscalculated dedup vdev

Created on 27 May 2019 · 17 comments · Source: openzfs/zfs

0.8

I've miscalculated the space required for a dedup vdev.

What is the best (recommended) method to add more dedup space?
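For anyone sizing a dedup vdev up front, a rough back-of-envelope estimate can help avoid this situation. The numbers below are assumptions, not official figures: roughly 320 bytes of DDT footprint per unique block and a 128K average block size are rules of thumb, and `zdb -DD <pool>` reports the actual per-entry sizes for a given pool.

```shell
# Illustrative DDT sizing sketch. The ~320 bytes per unique block and the
# 128K average block size are assumptions -- check `zdb -DD <pool>` for
# the real numbers on your pool before relying on this.
data_bytes=$((30 * 1024 * 1024 * 1024 * 1024))   # ~30T of unique data
blocks=$((data_bytes / (128 * 1024)))            # estimated unique block count
ddt_bytes=$((blocks * 320))                      # estimated total DDT size
echo "$((ddt_bytes / 1024 / 1024 / 1024)) GiB"   # prints: 75 GiB
```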

thanks,

-rick

Question

All 17 comments

zpool add tank dedup mirror sdo sdp

So I can't replace it with something bigger; is mirroring the only way to
"add"?

thanks,

-rick


--

Rick Wesson
CEO, Support Intelligence
blog: https://cyberwarhead.com
Project Infected: hxxp://icewater.io/

If sdo already exists as the dedup device and sdp is 8x larger, will the
extra space on sdp be wasted, or just unavailable?

thanks,

-rick


have you tried zpool replace?

-- richard
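Richard's suggestion, sketched with the device names used earlier in the thread (an untested sketch rather than a verified procedure; confirm the device roles with `zpool status` first):

```
$ sudo zpool replace tank sdo sdp    # resilver the dedup data from sdo onto sdp
$ sudo zpool online -e tank sdp      # expand the vdev to use sdp's full capacity
```

If the pool has `autoexpand=on` set, the explicit `online -e` step may be unnecessary.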


I am very interested in understanding the repercussions of doing something
before it is done. It seems the code is the only documentation, so I thought
I would ask about the implications: where does the data go when one removes
a vdev of class dedup?

-rick


Using your suggestion, with sdo currently part of the active pool's dedup class, I get the following error; even the -f option will not add sdp as a mirror of sdo:

use '-f' to override the following errors:
/dev/sdo is part of active pool 'pool2'

zpool add -f tank dedup mirror /dev/sdo /dev/sdp
invalid vdev specification
the following errors must be manually repaired:
/dev/sdo is part of active pool 'pool2'

My current pool is below. nvme0n1p2 needs to be replaced with a larger device; the above advice did not work even when using the -f option.

zpool list -v pool2
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool2          130T  29.5T   101T        -         -     9%    22%  1.86x  ONLINE  -
  raidz1       130T  29.4T   101T        -         -     9%  22.6%      -  ONLINE
    a00           -      -      -        -         -      -      -      -  ONLINE
    a01           -      -      -        -         -      -      -      -  ONLINE
    a02           -      -      -        -         -      -      -      -  ONLINE
    a03           -      -      -        -         -      -      -      -  ONLINE
    a04           -      -      -        -         -      -      -      -  ONLINE
    a05           -      -      -        -         -      -      -      -  ONLINE
    a06           -      -      -        -         -      -      -      -  ONLINE
    a07           -      -      -        -         -      -      -      -  ONLINE
    a08           -      -      -        -         -      -      -      -  ONLINE
    a09           -      -      -        -         -      -      -      -  ONLINE
    a10           -      -      -        -         -      -      -      -  ONLINE
    a11           -      -      -        -         -      -      -      -  ONLINE
dedup             -      -      -        -         -      -      -      -  -
  nvme0n1p2     93G  48.7G  44.3G        -         -    94%  52.3%      -  ONLINE
special           -      -      -        -         -      -      -      -  -
  nvme0n1p3     93G  10.2G  82.8G        -         -    37%  10.9%      -  ONLINE
logs              -      -      -        -         -      -      -      -  -
  nvme0n1p1   8.50G  40.3M  8.46G        -         -     0%  0.46%      -  ONLINE

zpool add tank dedup mirror sdo sdp

this advice did not work

Below is the relevant history for the pool in question. I would now like to
remove nvme0n1p2, but:

zpool detach pool2 /dev/nvme0n1p2
cannot detach /dev/nvme0n1p2: only applicable to mirror and replacing vdevs

zpool remove pool2 /dev/nvme0n1p2
cannot remove /dev/nvme0n1p2: invalid config; all top-level vdevs must have
the same sector size and not be raidz.

It also appears that I can't turn the dedup vdev into a mirror. Is it possible
to remove nvme0n1p2, and if so, how?

2019-05-25.11:13:00 zpool set feature@allocation_classes=enabled pool2
2019-05-25.13:22:18 zpool add -f pool2 dedup /dev/nvme0n1p2
2019-05-25.13:44:38 zpool add -f pool2 special /dev/nvme0n1p3
2019-05-27.20:30:02 zpool import pool2
2019-05-29.09:22:23 zpool add pool2 dedup /dev/nvme1n1p1
zpool status pool2
pool: pool2
state: ONLINE
scan: none requested
config:

    NAME         STATE     READ WRITE CKSUM
    pool2        ONLINE       0     0     0
      raidz1-0   ONLINE       0     0     0
        a00      ONLINE       0     0     0
        a01      ONLINE       0     0     0
        a02      ONLINE       0     0     0
        a03      ONLINE       0     0     0
        a04      ONLINE       0     0     0
        a05      ONLINE       0     0     0
        a06      ONLINE       0     0     0
        a07      ONLINE       0     0     0
        a08      ONLINE       0     0     0
        a09      ONLINE       0     0     0
        a10      ONLINE       0     0     0
        a11      ONLINE       0     0     0
    dedup
      nvme0n1p2  ONLINE       0     0     0
      nvme1n1p1  ONLINE       0     0     0
    special
      nvme0n1p3  ONLINE       0     0     0
    logs
      nvme0n1p1  ONLINE       0     0     0

errors: No known data errors

On Wed, May 29, 2019 at 9:38 AM kpande notifications@github.com wrote:

you must use zpool attach if you are trying to turn a singleton vdev into
a mirror..


I would now like to remove nvme0n1p2

@wessorh I'm sorry you ran into this and ended up in this situation, but given your pool configuration you will not be able to remove the nvme0n1p2 vdev. This is because one of the restrictions of zpool remove is that the primary pool storage may not contain a raidz vdev.

While it looks like vdev removal should be possible, the existing code is not able to recognize that a dedup vdev is being evacuated and that there is sufficient space for it on an alternate dedup vdev; in other words, that it should not need to evacuate any data to the raidz device.

But to go back to your original question: the recommended procedure would be to use zpool replace to swap the existing dedup device with one of larger capacity, then to use zpool online -e to expand the replaced vdev to use all of the available capacity.

$ truncate -s 2G /var/tmp/vdev1 /var/tmp/vdev2 /var/tmp/vdev3
$ sudo zpool create tank /var/tmp/vdev1 /var/tmp/vdev2 dedup /var/tmp/vdev3
$ zpool list -v
NAME               SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank              5.62G   813K  5.62G        -         -     0%     0%  1.00x    ONLINE  -
  /var/tmp/vdev1  1.88G   275K  1.87G        -         -     0%  0.01%      -  ONLINE  
  /var/tmp/vdev2  1.88G   538K  1.87G        -         -     0%  0.02%      -  ONLINE  
dedup                 -      -      -        -         -      -      -      -  -
  /var/tmp/vdev3  1.88G      0  1.88G        -         -     0%  0.00%      -  ONLINE 

$ truncate -s 4G /var/tmp/vdev4
$ sudo zpool replace tank /var/tmp/vdev3 /var/tmp/vdev4
$ zpool list -v
NAME               SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank              5.62G  1.79M  5.62G        -         -     0%     0%  1.00x    ONLINE  -
  /var/tmp/vdev1  1.88G  1.03M  1.87G        -         -     0%  0.05%      -  ONLINE  
  /var/tmp/vdev2  1.88G   778K  1.87G        -         -     0%  0.03%      -  ONLINE  
dedup                 -      -      -        -         -      -      -      -  -
  /var/tmp/vdev4  1.88G      0  1.88G        -        2G     0%  0.00%      -  ONLINE  
                                                ^^^^^^^^

$ sudo zpool online -e tank /var/tmp/vdev4
$ zpool list -v
NAME               SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank              7.62G   946K  7.62G        -         -     0%     0%  1.00x    ONLINE  -
  /var/tmp/vdev1  1.88G   571K  1.87G        -         -     0%  0.02%      -  ONLINE  
  /var/tmp/vdev2  1.88G   376K  1.87G        -         -     0%  0.01%      -  ONLINE  
dedup                 -      -      -        -         -      -      -      -  -
  /var/tmp/vdev4  3.88G      0  3.88G        -         -     0%  0.00%      -  ONLINE  
                  ^^^^^

I also wanted to mention that dedicated dedup and special vdevs should be configured with the required level of redundancy. In your configuration, if one of them were to fail, you would no longer be able to import the pool. For this reason it's suggested that you configure them as mirrored pairs, e.g.

  dedup
    mirror-1
      nvme0n1p2 ONLINE 0 0 0
      nvme1n1p1 ONLINE 0 0 0
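Had the devices been available up front, a redundant dedup class like that could have been created in a single step when the vdevs were first added (a sketch reusing this pool's device names; mirror members should be the same size):

```
$ sudo zpool add pool2 dedup mirror nvme0n1p2 nvme1n1p1
```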

The reason I ended up here is because there is no documentation on
appropriate procedures. What is the recommendation, rebuild the pool? Oh,
I'll just throw 10K at the problem. thanks everyone for writing some neat
cool code that fucked my storage.

thanks!

-rick


@wessorh while the dedup device cannot be removed, there are solid alternatives to rebuilding the pool.

One option would be to replace nvme0n1p2 and nvme1n1p1 with different devices. Any SSD should be sufficient to see a significant performance improvement when using dedup. This can be accomplished with zpool replace pool2 nvme0n1p2 <new device>.

Additionally, the dedup devices can be converted to mirrors using zpool attach pool2 nvme0n1p2 <new device> to restore the redundancy.

  pool: dozer
 state: ONLINE
  scan: resilvered 0B in 0 days 00:00:01 with 0 errors on Wed May 29 13:59:44 2019
config:

    NAME                STATE     READ WRITE CKSUM
    dozer               ONLINE       0     0     0
      raidz1-0          ONLINE       0     0     0
        /var/tmp/vdev1  ONLINE       0     0     0
        /var/tmp/vdev2  ONLINE       0     0     0
        /var/tmp/vdev3  ONLINE       0     0     0
    dedup   
      /var/tmp/vdev4    ONLINE       0     0     0
      /var/tmp/vdev5    ONLINE       0     0     0
$ truncate -s 2G /var/tmp/vdev4a /var/tmp/vdev4b /var/tmp/vdev5a /var/tmp/vdev5b
$ sudo zpool attach dozer /var/tmp/vdev4 /var/tmp/vdev4b
$ sudo zpool attach dozer /var/tmp/vdev5 /var/tmp/vdev5b
$ sudo zpool replace dozer /var/tmp/vdev4 /var/tmp/vdev4a
$ sudo zpool replace dozer /var/tmp/vdev5 /var/tmp/vdev5a

$ sudo zpool status -v dozer
  pool: dozer
 state: ONLINE
  scan: resilvered 0B in 0 days 00:00:01 with 0 errors on Wed May 29 14:02:01 2019
config:

    NAME                 STATE     READ WRITE CKSUM
    dozer                ONLINE       0     0     0
      raidz1-0           ONLINE       0     0     0
        /var/tmp/vdev1   ONLINE       0     0     0
        /var/tmp/vdev2   ONLINE       0     0     0
        /var/tmp/vdev3   ONLINE       0     0     0
    dedup   
      mirror-1           ONLINE       0     0     0
        /var/tmp/vdev4a  ONLINE       0     0     0
        /var/tmp/vdev4b  ONLINE       0     0     0
      mirror-2           ONLINE       0     0     0
        /var/tmp/vdev5a  ONLINE       0     0     0
        /var/tmp/vdev5b  ONLINE       0     0     0

We'd welcome any suggestions or help with further improving the available documentation.

solid?

Wait, you mean you are blaming freshly released code for your mucking about and messing up a production pool? The reason you ended up here is because you made irreversible changes to a production pool without testing, staging, or figuring out what the docs say and what people have been doing for the past 6 months. I'm sure it's annoying, but what is done is done. If there wasn't enough documentation for you, why would you plow ahead anyway?

I am dyslexic, I have not ever read the documentation.


I'm taking all the sights off my weapons. I'm just not going to use optics.

On Fri, May 31, 2019 at 8:15 PM kpande notifications@github.com wrote:

you should probably delegate the task to someone who has the capacity to
read documentation.


Agreed, when you have code like this, who needs optics?

pool: pool8
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Tue Jan 1 13:40:39 2019
627G scanned at 58.2M/s, 297G issued at 149K/s, 53.4T total
29.6G resilvered, 0.54% done, no estimated completion time
config:

    NAME                        STATE     READ WRITE CKSUM
    pool8                       DEGRADED     0     0     0
      raidz1-0                  DEGRADED     0     0     0
        sdbh                    ONLINE       0     0     0
        sdbd                    ONLINE       0     0     0
        sdbm                    ONLINE       0     0     0
        sdbk                    ONLINE       0     0     0
        sdbp                    ONLINE       0     0     0
        sdbl                    ONLINE       0     0     0
        replacing-6             DEGRADED     0     0     0
          10001100110868462260  UNAVAIL      0     0     0  was /dev/sdq1/old
          sdcc                  ONLINE       0     0     0
        sdbq                    ONLINE       0     0     0
        sdbo                    ONLINE       0     0     0
        sdbr                    ONLINE       0     0     0

On Sun, Jun 2, 2019 at 12:39 PM kpande notifications@github.com wrote:

given today's climate perhaps it's not the best idea to use weapons as a
metaphor (@behlendorf https://github.com/behlendorf @ahrens
https://github.com/ahrens )


This looks like a support question, an answer was given, and we've already gone off topic. @wessorh please use the mailing lists for other support questions https://github.com/zfsonlinux/zfs/wiki/Mailing-Lists ; the issue tracker is for bugs and features.

If you have other problems, feel free to open new issues, or search for existing ones on the relevant topics.
