I've noticed the new io_uring buzz going on since it was released in Linux 5.1. They claim "up to two times performance gain on Ceph", it should improve multiqueue scalability for NVMe drives, and the new API is supposed to be easier to use than the currently used AIO API. So naturally, as an enthusiastic ZFS user, I can't help but ask: what are the consequences for ZFS on Linux?
Is it likely that we will get some performance boost from this?
Is this hard to implement on the ZoL side?
This is definitely an interface we'll want to investigate supporting.
Stumbled upon this today, has anyone given this a poke?
A few interesting thoughts - now that 5.4 long term is out and cooking, more people may feel comfortable getting onto a long-term version of Linux that has io_uring. Secondly, Ubuntu 20.04 said they're standardizing around 5.4.
The "buzz" @Harvie mentioned seems to still be buzzing
I think I still don't fully understand how io_uring works... Initially I thought io_uring would sit between ZFS and the hard drive, but now somebody told me it sits between ZFS and the userspace application, so I'm a bit confused. Do you know more about how this actually works?
Is this actually relevant for ZFS? AFAIU io_uring is an interface between userspace applications and the kernel.
ZFS data paths between userspace and the kernel that I'm aware of are:
Ceph, on the other hand, got huge performance gains because Ceph's "backend" is a userspace application which issues a lot of I/Os (see https://github.com/ceph/ceph/pull/27392). Hypothetically, if ZoL were not a native kernel module but still zfs-fuse, it would see similar gains.