I've created this to track progress on
http://blog.delphix.com/alex/2015/01/15/openzfs-device-removal/
If you know more, please add links or comments.
@ahrens said that outside help is needed: https://twitter.com/mahrens1/status/665006514411806720
@behlendorf Is there any chance this feature can be lobbied for in the developers' circles? It would make ZFS even more awesome! It's a bit frustrating to have to destroy and recreate pools just to remove a device or vdev from a pool. Even an offline version would be sufficient; there is no need for live removal.
According to @ahrens, in addition to the original commit mentioned above:
Excuse me barging in (it's just that I could have done with this feature after accidentally adding a stripe instead of a mirror during a multi-TB data migration exercise! It has set me back several days now)...
Do I have it right that this is now implemented upstream?
https://rudd-o.com/linux-and-free-software/delphix-zfs-based-on-open-zfs-now-supports-device-removal
The post you referenced is correct in that it's in DelphixOS, but that's not really "upstream" for ZoL. Upstream is OpenZFS/illumos, where this feature is not yet integrated, but there is a pull request out for it: https://github.com/openzfs/openzfs/pull/251 (Note that the PR supersedes the list of Delphix commits earlier in this thread.)
We would appreciate any help with porting it to ZoL!
Hrm, I should really go and learn C so I can be more useful with these things. I might set that as a goal for this year so I can be more useful in the future.
Anyway, in my case the send/destroy/recreate/receive approach is going faster than I thought it would.
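For reference, a rough sketch of that send/destroy/recreate/receive workflow, assuming a pool named tank, disks sdb/sdc, and a backup path of my choosing (all names here are illustrative, not from the thread):

```shell
# Snapshot everything recursively so the replication stream is consistent.
zfs snapshot -r tank@migrate

# Send a full replication stream somewhere safe (a file here; piping to
# another pool over ssh works the same way).
zfs send -R tank@migrate > /backup/tank.zstream

# Destroy the mis-built pool and recreate it with the intended layout.
zpool destroy tank
zpool create tank mirror sdb sdc

# Restore the datasets into the new pool.
zfs receive -F tank < /backup/tank.zstream
```

Naturally this needs enough scratch space to hold the stream, and the pool is offline for the duration, which is exactly why device/vdev removal is such a wanted feature.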
As an alternative mitigation for the problem I hit (which I understand to be one of the biggest ZFS gotchas), and as a layer of protection against accidentally and irreversibly adding single-disc/non-redundant stripes to a pool, what about having the zpool command refuse to stripe by default? Say there were a module option safe_add: if it were set to 1, zpool add and zpool create would insist on a new vdev group keyword "stripe" whenever no other vdev group keyword is given before the individual vdevs to add.
(I hope I have my terminology right there, correct me if wrong).
i.e., instead of zpool add tank sdx, I'd have to specify zpool add tank stripe sdx if I really, really wanted to add a non-redundant disc to my array.
Has something like this been discussed before, or do you think it's worth discussing the possibility of such a change?
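To make the proposal concrete, here is how the hypothetical guard might look on the command line (neither the safe_add tunable nor the "stripe" keyword exists in ZFS; both names come from the suggestion above and are purely illustrative):

```shell
# Hypothetical: enable the proposed guard via a module parameter.
echo 1 > /sys/module/zfs/parameters/safe_add

# Hypothetical: with the guard on, a bare disk is rejected because no
# vdev group keyword was given...
zpool add tank sdx
# ...and adding a non-redundant vdev requires spelling out "stripe":
zpool add tank stripe sdx
```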
(edited to fix typos and clarify)
@awnz ZFS already has this type of protection built in. zpool add will by default not let you add non-redundant (striped) vdevs to a redundant (e.g. mirror / RAIDZ) pool.
If you make a mistake with zpool create, hopefully you can realize it before putting too much data on it, and then you can start over with zpool create mirror ....
```
# zpool list -v test
NAME        SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
test       7.94G   108K  7.94G         -    0%   0%  1.00x  ONLINE  -
  mirror   7.94G   108K  7.94G         -    0%   0%
    c1t1d0     -      -      -         -     -    -
    c1t2d0     -      -      -         -     -    -
# zpool add test c1t3d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses mirror and new vdev is disk
```
Oh crap, did I use -f? (history | grep "zpool add") CRAP, I did.
Oh, now I feel stupid. I must have misread the error message. I'm going to quietly sneak out of the room now. Lesson learned: read and understand those error messages; don't be so quick to use -f.
This was on home data. Lesson learned for when I implement this at work...
Closing. The device removal feature has been merged to OpenZFS and is being ported to ZoL in PR #6900.