This is a feature request for an implementation of the BTRFS_IOC_CLONE in zfsonlinux, or something similar, so that it's possible to COW-copy a single file in zero space, zero RAM and zero time, without having to enable super-expensive things like filesystem-wide deduplication (which btw is not zero RAM or zero time).
If it can be done at the directory level, so as to clone entire directory trees with one call, even better.
On the mailing list, doubts were raised regarding the semantics:
Firstly, I don't expect this to work across datasets. Secondly, I'd suggest that the same semantics of deduplication are used. It should just be a shortcut of 1) enabling deduplication + 2) copying the file by reading it byte-by-byte and writing it byte-by-byte elsewhere.
If you can implement exactly the BTRFS_IOC_CLONE, the same cp with --reflink that works for btrfs could be used here. If the ioctl is different, we will also need patches to the cp program, or another cp program.
Link to zfs-discuss thread for this feature request:
http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/browse_thread/thread/9af181ed0129b77c#
I have my doubts regarding whether this is needed -- because the functionality is already available at the dataset(filesystem) level, and ZFS is intended to be implemented with fine-grained datasets(filesystems).
COW is already implemented across specific datasets -- e.g. clones or datasets with snapshots (promoted or otherwise). Therefore, I propose a more generally useful version of this request: implement or allow COW for all copy operations in the same pool, based on a setting implemented at both the dataset and pool level.
Just to reiterate here what has been discussed in https://groups.google.com/a/zfsonlinux.org/forum/?fromgroups=#!msg/zfs-discuss/mvGB7QEpt3w/r4xZ3eD7nn0J -- snapshots, dedup and so on already provide _most_ of the benefits of COW hardlink breaking, but not all.
Specifically, binaries and libs with the same inode number get mapped to the same memory locations for execution, which results in memory savings. This matters a lot with container based virtualisation where you may have dozens or hundreds of identical copies of libc6 or sshd or apache in memory. KSM helps there, but it needs madvise() and is not free (it costs CPU at runtime).
COW hardlinks are essentially free.
linux-vserver (http://linux-vserver.org/) already patches ext[234] and I think jfs to enable this sort of COW hardlink breaking functionality by abusing the immutable attribute and an otherwise unused attribute linux-vserver repurposes as "break-hardlink-and-remove-immutable-bit-when-opened-for-writing".
It would be great to have the same feature in zfsonlinux; its lack is one of the only reasons I can't put my vservers volume on zfs (the lack of posix or nfsv4 acl support is the other reason).
So, to clarify, I'm requesting the following semantics:
This wouldn't break existing applications because the feature would not be enabled by default (you'd have to set the special xattr on a file to use it).
As was brought up in the thread, we are currently using http://www.xmailserver.org/flcow.html on ext4 for file/dir level COW. This works, but if we were using ZFS we would much prefer to have the filesystem take care of the COW goodness. (For our narrow use case we can probably do everything we need with a filesystem per directory, but having our code just work with `cp` would be nice to have.)
I would like to bump this feature. When I submitted it 2 years ago there were like 30 issues before it; now there are like 500. It's moving farther and farther away, with issues being created ahead of it, like the expansion of the universe. I do understand this is a feature request and not a bugfix, but it would make ZFS a helluva lot more appealing for people.
Snapshotting the whole ZFS filesystem to achieve the clone of one file is definitely overkill and cannot substitute for this.
One of the problems with implementing this is that the directory entries are implemented as name-value pairs, which, at a glance, provides no obvious way of doing this. I just noticed today that the value is divided into 3 sections. The top 4 bits indicate the file type, the bottom 48 bits are the actual object number and the middle 12 bits are unused. One of those unused bits could be repurposed to implement reflinks.
Implementing reflinks would require much more than marking a bit, but the fact that we have spare bits available should be useful information to anyone who decides to work on this.
Hi @ryao, thanks for noticing this :-) If deduplication does not need such bits or a different directory structure, then reflink should not need them either. I see the reflink as a way to do copy+deduplicate on a specific file without the costs associated with copy and with deduplication, but with the same final result... Is it not so?
@torn5 I believe you're correct, this could be done relatively easily by leveraging dedup. Basically, the reflink ioctl() would provide a user interface for per-file deduplication. As long as we're talking about a relatively small number of active files, the entire deduplication table should be easily cacheable and performance will be good. If implemented this way we'd inherit the existing dedup behavior for quotas and such. This makes the most sense for individual files in larger filesystems; for directories, creating a new dataset would still be best.
Here is a scenario that I think this feature would be very helpful for:
I take regular snapshots of my video collection. Because of COW, these snapshots do not take any space. However, a (young relative|virus|hostile alien) comes for a visit and deletes some videos from my collection, and I would like to recover them from my handy snapshots. If I use cp normally, each recovered video is duplicated in snapshots and in the active space. With cp --reflink, the file system would be signaled to COW the file to a new name, without taking any additional space, along with making recovery instantaneous.
As an aside, is there a way to scan a ZFS volume and run an offline deduplication? If I had copied the data, is there a way to recover the space other than deleting all snapshots that contained the data?
On Thu, Jan 16, 2014 at 05:16:38PM -0800, hufman wrote:

> I take regular snapshots of my video collection. Because of COW, these
> snapshots do not take any space. However, a (young relative|virus|hostile
> alien) comes for a visit and deletes some videos from my collection, and I
> would like to recover them from my handy snapshots. If I use cp normally,
> each recovered video is duplicated in snapshots and in the active space.
> With cp --reflink, the file system would be signaled to COW the file to a
> new name, without taking any additional space, along with making recovery
> instantaneous.

I'm not sure I see how that would work; it would need cross-filesystem reflink support (since you'd be copying out of a snapshot and into a real fs).

Normally, to recover from such situations, you'd just roll back to the latest snapshot that still has the missing data. Of course, if you'd like to keep some of the changes made since then, this is less than ideal.

If this is a frequent occurrence, maybe you'd like to turn on deduplication. In that case, copying the files out of the snapshot will not use extra space.

> As an aside, is there a way to scan a ZFS volume and run an offline
> deduplication?

None that I know of. What you could do is: enable dedup, then copy each file to a new name, remove the original file and rename the new file to the old name. This obviously does nothing to deduplicate existing snapshots.

> If I had copied the data, is there a way to recover the
> space other than deleting all snapshots that contained the data?

Other than rolling back to a snapshot, no, I don't think so.

Andras
Thank you for your response!
My use-case for reflink is that we are building a record and replay debugging tool (https://github.com/mozilla/rr) and every time a debuggee process mmaps a file, we need to make a copy of that file so we can map it again later unchanged. Reflink makes that free. Filesystem snapshots won't work for us; they're heavier-weight, clumsy for this use-case, and far more difficult to implement in our tool.
I also have a use for this, I use different zfs filesystems to control differing IO requirements. Currently for my app to move items between these filesystems it must do a full copy, which works, but makes things fairly unresponsive fairly regularly, as it copies dozens of gigabytes of data. I would consider using deduplication, but I'm fairly resource constrained as it is.
I would like to do this as my GSoC 2014 project, but I don't know whether ZFSOnLinux participates in GSoC.
ZFSOnLinux is not participating, but Gentoo is. Put together a proposal and I and others will review it. If it looks good, I am willing to mentor this.
As much as I want this feature, I worry that this might constitute GSoC abuse in some way. Please be sure to read the rules carefully.
@ryao Hmm, the illumos ideas page says I could also propose ideas for OpenZFS. Isn't ZoL part of OpenZFS?
@yshui OpenZFS is an umbrella project for the various ZFS platforms. It is not directly participating in the GSoC, but some of its member platforms are. Illumos and FreeBSD are participating in the GSoC. ZoL is not, but can indirectly through Gentoo, which is also participating in the GSoC.
@ryao I see. Sorry I didn't read your comment before replying to your email.
If someone does decide to work on this I'm happy to help point them in the right direction.
I had jotted down a possible way of doing this in #2554 in response to an inquiry about btrfs-style deduplication via reflinks. I am duplicating it here for future reference:
Directories in ZFS are name-value pairs. Adding reflinks to that is non-trivial. One idea that might work would be to replace the block pointers in the indirection tree with object identifiers and utilize ZAP to store mappings between the object id and a (reference count,block pointer) tuple. That would involve a disk format change and would only apply to newly created files. Each block access would suffer an additional indirection. Making reflinks under this scheme would require duplicating the entire indirection tree and updating the corresponding ZAP entries in a single transaction group.
Thinking about it, the indirection tree itself is always significantly smaller than the data itself, so the act of writing would be bounded to some percentage of the data. We already suffer this sort of penalty from the existing deduplication implementation, but at a much higher cost, as we must perform a random seek on each data access. Letting users copy only metadata through reflinks seems preferable to direct data copies that shuffle data through userland. This could avoid the penalties in the presence of dedup=on because all of the data has been preprocessed by our deduplication algorithm.
That being said, the DDT has its own reference counts for each block, so we would either need to implement this in a way compatible with that or change it. We also need to consider the interaction with snapshots.
Here is a technique that might be possible to apply:
https://www.usenix.org/legacy/event/lsf07/tech/rodeh.pdf
There are a few caveats:
I'm late to this party but I want to give a definitive and real use-case for this that is not satisfied by clones.
We have a process sandbox which imports immutable files. Ideally, each imported file may be modified by the process but those modifications shouldn't change the immutable originals. Okay, that could be solved with a cloned file system and COW. However, from this sandbox we also want to extract (possibly very large) output files. This throws a wrench in our plans. We can't trivially import data from a clone back into the parent file system without doing byte-by-byte copy.
I think this is a problem worth solving. Clones (at least, as they currently exist) are not the right answer.
Sorry if I lost something in this long range discussion, but it seems to me that everybody is thinking in snapshots, clones, dedup, etc.
I am, personally, a big fan of dedup. But I know it has many memory drawbacks, because it is done online. In BTRFS and WAFL, dedup is done offline. Thus, all this memory is only used during the dedup process.
I think that the original intent of this request is to add "clone/dedup" functionality to cp. But not by enabling online dedup on the filesystem, nor by first copying and then deduping the file. Let the filesystem just create another "file" instance whose data is a set of CoW sectors from another file.
Depending on how ZFS adjusts data internally, I can imagine this even being used to "move" files between filesystems on the same pool. No payload disk block need to be touched, only metadata.
Ok, there are cases in which blocksize, compression, etc. are set up differently. But IIRC, these only apply to newer files; old files keep what is already on disk. So it appears to me that this is not a problem. Maybe crypto, as @darthur already mentioned back in 2011, which is NOT even a real feature yet...
There's already such a feature in WAFL. Please, take a look at this: http://community.netapp.com/t5/Microsoft-Cloud-and-Virtualization-Discussions/What-is-SIS-Clone/td-p/6462
Let's start cloning single files!!!
@dioni21 perhaps you should consider a zvol formatted with btrfs.
I'm sorry to be the annoying guy who brings up old threads, but I found this issue and it seems to be really useful.
So I just wanted to ask if this is scheduled for any release? It may not be part of the 0.7 but I'd love to see it after that.
So is there anyone else working on this feature? And are there even plans to implement it? Now that encryption will make it into 0.7 (I think), this is another factor to think of.
This feature would be perfect for our needs.
A user prepares large files for a project.
The user submits those files.
Those files are currently copied to a directory
for QA and other processing.
Most of the time, the files are OK and do not have to be re-submitted.
However, because of the copy we have massive duplication.
Using zfs snapshots does not really work for us.
We would have to make a huge number of them
and keep them around forever.
But a reflink copy would be perfect.
@galt: While I agree that the COW feature outlined here would be great to have, it seems for your use case turning on de-duplication on the zfs dataset containing the originals & copies would already effectively save you the space currently occupied by 'un-needed' identical copies. Or did I miss anything?
Isn’t deduplication extremely RAM intensive?
Yes.
Instead of just copying the file, I would call it with
cp --reflink=always source target
which would mean that it operates at the whole file level.
This is quite different from turning on the zfs dedup feature and spending 1GB/TB of RAM on the dedup hash.
Also, it is a super fast operation since only some metadata need be copied
to make the reflink.
The feature has already been implemented on XFS (at least for testing) since January 2017.
http://strugglers.net/~andy/blog/2017/01/10/xfs-reflinks-and-deduplication/
APFS (Apple File System) was just released on 25 Sept 2017 with macOS High Sierra and iOS.
It has many great features of zfs and btrfs and ocfs2 like copy-on-write, reflinks,
snapshots and clones.
Quote:
[...] one I know that works also in XFS is duperemove.
https://github.com/markfasheh/duperemove
You will need to use a git checkout of duperemove for this to work.
Shouldn’t this feature technically be implemented upstream, via openzfs project? This doesn’t seem like a ZFS on Linux issue...
Second question: is it even practical to implement given ZFS design? Based on what I've read, reflinks were never really part of the ZFS design to begin with and may therefore not be practical to implement...
First question: good point. openzfs it should be.
Second question: most of the copy-on-write machinery needed is already in place.
It is fairly simple to make a reflink copy, compared to making snapshots and clones.
It basically just has to duplicate the inode and its associated metadata (but not the actual data blocks of the file). Maybe it needs to update some flags or counts on the inodes and blocks.
Can somebody point me to the openzfs repository?
I do not see one.
This page implies there isn't one:
http://open-zfs.org/wiki/FAQ#Do_you_plan_to_release_OpenZFS_under_a_license_other_than_the_CDDL.3F
QUOTE:
Why are there four different repositories?
Each repository supports a different operating system. Even though the core of OpenZFS is platform-independent, there are a significant number of platform-specific changes need to be maintained for the parts of ZFS which interact with the rest of the operating system (VFS, memory management, disk i/o, etc.).
Are new features and improvements shared between the different repositories?
Yes. Each implementation regularly ports platform-independent changes from the other implementations. One of the goals of OpenZFS is to simplify this porting process.
ryao wanted to add reflink to zfs in 2014.
https://lists.gt.net/gentoo/dev/285286?do=post_view_threaded
@galt Looks like there is no "main" OpenZFS repository. I suspect the closest thing you'd find to that would be the original OpenSolaris ZFS build, which would be the Illumos and/or OpenIndiana flavours. Having said that, it looks like there is an effort (or interest thereof) to unify the code (core ZFS code and various platform-specific porting layers): http://open-zfs.org/wiki/Reduce_code_differences
That's bad... imagine if the ZoL team figures out reflinks, I would assume it would be non-trivial to port that feature to Illumos or FreeBSD, for example. I hope the above unification initiative takes off, otherwise the ZFS codebase will become too fragmented and difficult to support in the long run... :disappointed:
@galt @rouben there is no problem at all with different repos etc, it's just non-trivial to add this functionality in ZFS.
The upstream OpenZFS repository is located at https://github.com/openzfs/openzfs. Features developed on Linux, FreeBSD, and Illumos are fed back upstream to this repository as appropriate. Each platform then pulls back down the changes they need.
Regarding reflink support, this is something which has been discussed at previous OpenZFS developer summits, and several possible designs have been proposed. It's definitely doable, but we want to do it in an efficient, portable way which ideally all the platforms can take advantage of.
I should have added that anyone who is interested is more than welcome to join us at the upcoming 2017 OpenZFS Developer Summit where we'll be discussing all things OpenZFS!
fwiw oracle has done this internally, see https://www.youtube.com/watch?v=c1ek1tFjhH8#t=18m55s for a slide referring to this (there is also a q&a later in the talk confirming this is reflink)
looks like the question about reflink is around 43m45s.
Oddly, he says it is reflink in Linux, and seems to be saying that it is there in Linux. But does that mean it is available in OpenZFS? Or will be soon?
I guess it means that it IS only done internally, and not in OpenZFS yet.
Too bad.
Bumping again.
With the semi-recent push by Ubuntu of Kubernetes based on LXD, which recommends a ZFS pool for performance, ZFS in general is getting a lot more attention. While snapshots/dedup are useful when I spin up a copy of a whole container file system, copying large files from inside the container to the outside, or between containers, is still as slow as on any other file system. If the --reflink=always behavior were available for the LXD tools to build on, that would be a significant, end-user-visible improvement.
In my individual use case, we have automated build servers for CI that need to build many different variants of a very large code base. We clone once, and do basic setup, then we make many copies of the directory to run different variant builds on. We don't touch any of the existing files during the builds, we only add to them, but since disk IO is a major factor we can't use overlay-type mounting options. We also won't tie ourselves to only functioning on one file system, as would be necessary for implementing this as snapshots for each variant. The implementation that would be useful would be one that would be transparent through something like a default --reflink=always behavior on the cp command.
I think it's pretty clear that a --reflink=always from the cp command would be the most common use of COW for most users of ZFS if it were available, as this issue, the number of issues that have been combined into this one, and a simple web search for questions related to COW in file systems indicates.
Bumping again. Any news on the matter? Due of the CoW nature of ZFS, reflink is an obvious (but not necessarily trivial to implement) feature...
FYI, recent versions of ccache can use reflinks to cow-copy object files into and out of the object cache. Which is a pretty neat use case.
Now the syncing tool _syncthing_ is using cow-copy (https://docs.syncthing.net/advanced/folder-copyrangemethod.html).
That covers xfs and other filesystems, but not zfs.
I have RedHat 8 at home, and the default install used xfs, so I could test this, and it works just fine:
`cp --reflink=always testReflink1 testReflink2`
It copies quickly without duplicating any data blocks (COW), which is cool!
But it is not zfs.
I use reflinks pretty heavily on btrfs on my workstation machines to create lightweight temporary backups at the directory level and (for lack of a better word/concept here) to mimic overlay functionality without all of the overhead of setting up an actual overlay.
There are times this would be extremely handy on my ZFS datasets and I'd love to make use of it.
9 years, nothing done.
Here we go...
Please, can someone explain why btrfs implemented this feature, but zfs didn't?
What's so hard about it? I don't know much about ZFS' internals.
See Pawel Dawidek's presentation this week at the OpenZFS Summit:
https://openzfs.org/wiki/OpenZFS_Developer_Summit_2020_talks#File_Cloning_with_Block_Reference_Table_.28Pawel_Dawidek.29
@Logarithmus are you paying someone to develop or support OpenZFS for you? If not, I'm not sure why you think it's okay to complain about a missing feature in a project you get to use for free. If yes, then you should talk to your vendor, not upstream.
I'm sorry if my previous message looks offensive to you. I complain about it because AFAIK for 9 years nobody even tried to implement this feature. This looks kinda surprising to me, as reflink support is a useful feature in certain scenarios.
Several months ago I finally got spare time to migrate my laptop from EXT4 to a more modern filesystem, with snapshots, encryption and compression. It was hard to decide between BTRFS and ZFS. Eventually I've chosen ZFS, because they say it's more stable than BTRFS, and I've read those horror stories about BTRFS losing your data and failing to mount. I knew that both ZFS and BTRFS are CoW filesystems, and thus reflinks should work on both.
But yesterday I discovered the bitter truth after running cp --reflink=always. I needed to copy some directory with a huge subdirectory tree, and then overwrite the majority of files in it, so I didn't want to waste time on the initial copying. Sadly, it failed. Then I started searching and found this issue. I was immediately amazed that the issue was opened 9 years ago.
After that I found out that BTRFS has no problems with reflinks. Moreover, it already has ZSTD compression support; ZFS has such support only in a prerelease. On the other hand, BTRFS doesn't have native encryption. But as I understand (tell me if I'm wrong), one can run BTRFS with compression inside LUKS, because in this case encryption happens AFTER compression. It's a known fact that encryption BEFORE compression makes compression irrelevant because of the random-like nature of encrypted data.
In conclusion, I'm now rethinking my choice of ZFS. I think that with regular backups the chance of losing data with BTRFS is close to zero.
P.S. Can someone recommend a comprehensive and up-to-date comparison of ZFS vs BTRFS?
@Logarithmus

> but yesterday I discovered the bitter truth after running cp --reflink=always.
Yeah, I understand: reflink is somewhat taken for granted from a modern filesystem. Sadly, as you discovered, it is not implemented in ZFS at the moment (and thanks to @adamdmoss for sharing the link above!).
> I needed to copy some directory with huge subdirectory tree, and then overwrite majority of files in it, so I didn't want to waste time for initial copying.
Can I suggest using hardlinks (cp -al) to pre-seed the destination directory, then deleting the to-be-overwritten files? If using rsync (without --inplace) you can (in most cases) even avoid the delete pass. Sure, this is not a true substitute for reflinks (when partially overwriting a hardlinked file, all "copies" are overwritten), but it can be useful in a number of cases. _Full disclaimer: working with hardlinks can be confusing. If you don't know them well, just stop here and do a simple plain copy._
> After that I found out that BTRFS has no problems with reflinks. Moreover, it already has ZSTD compression support. ZFS now has such support only in the prerelease.
Be aware that BTRFS compression often leads to much lower savings than ZFS: as compression is block-based and BTRFS uses 4K blocks, the compression ratio is significantly lower than with ZFS's larger block size (128K by default). I would suggest you do some direct comparisons.
> Can someone recommend me some comprehensive and up-to-date comparison of ZFS vs BTRFS?
I am not sure this is the right place to ask. However, simplifying as much as possible: BTRFS works ok for storing many rarely-changing files (ie: fileserver duties, root and home partitions), but falls flat for any overwriting workload (ie: databases, virtual machines, etc.), where its performance really tanks.
The video of Pawel Dawidek's OpenZFS Summit presentation on file cloning is now available: https://youtu.be/WqJoytkNrt8?t=3043