xr.open_mfdataset('path/to/many/files*.nc', method='parallel')
We have many issues describing the less than stellar performance of open_mfdataset (e.g. #511, #893, #1385, #1788, #1823). The problem can be broken into three pieces: 1) open each file, 2) decode/preprocess each dataset, and 3) merge/combine/concat the collection of datasets. We can perform (1) and (2) in parallel (performance improvements to (3) would be a separate task). Lately, I'm finding that for large numbers of files, it can take many seconds to many minutes just to open all the files in one of my multi-file datasets.
I'm proposing that we use something like dask.bag to parallelize steps (1) and (2). I've played around with this a bit and it "works" almost right out of the box, provided you are using the "autoclose=True" option. A concrete example:
We could change the line:
datasets = [open_dataset(p, **open_kwargs) for p in paths]
to
import dask.bag as db
paths_bag = db.from_sequence(paths)
datasets = paths_bag.map(open_dataset, **open_kwargs).compute()
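Sketched out slightly more fully, the change could sit inside open_mfdataset roughly like this (a rough sketch only, not the actual implementation; the function name is made up, and xr.auto_combine stands in here for the existing merge/combine/concat step (3), which stays serial):

import dask.bag as db
import xarray as xr

def open_mfdataset_parallel(paths, preprocess=None, **open_kwargs):
    # Illustrative sketch only -- not the actual open_mfdataset internals.
    bag = db.from_sequence(paths)
    bag = bag.map(xr.open_dataset, **open_kwargs)  # step (1): open each file
    if preprocess is not None:
        bag = bag.map(preprocess)                  # step (2): decode/preprocess
    datasets = bag.compute()                       # runs on dask.bag's scheduler
    return xr.auto_combine(datasets)               # step (3): still serial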
I'm curious what others think of this idea and what the potential downfalls may be.
I think this is definitely worth exploring and could potentially be a large win.
One potential challenge is global locking with HDF5. If opening many datasets is slow because much data needs to get read with HDF5, then multiple threads will not help -- you'll need to use multiple processes, e.g., with dask-distributed.
@shoyer - we can sidestep the global HDF5 lock if we use multiprocessing (or the distributed scheduler, as you mentioned) together with the autoclose option. This is the approach I took in my initial tests. It would be great if we could use the threading library too, but that seems less feasible given the current state of the HDF5 library.
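A minimal sketch of that combination (assuming dask's multiprocessing scheduler selected via compute(scheduler='processes'), an illustrative glob pattern, and the autoclose option mentioned above):

import glob
import dask.bag as db
import xarray as xr

paths = sorted(glob.glob('path/to/many/files*.nc'))  # illustrative pattern

bag = db.from_sequence(paths).map(xr.open_dataset, autoclose=True)
# Processes (rather than threads) sidestep the global HDF5 lock, at the cost
# of pickling the resulting Dataset objects back to the parent process.
datasets = bag.compute(scheduler='processes')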
For what it's worth, this is exactly the workflow I use (https://github.com/OceansAus/cosima-cookbook) when opening a large number of netCDF files:
bag = dask.bag.from_sequence(ncfiles)
load_variable = lambda ncfile: xr.open_dataset(ncfile,
                                               chunks=chunks,
                                               decode_times=False)[variables]
bag = bag.map(load_variable)
dataarrays = bag.compute()
and then
dataarray = xr.concat(dataarrays,
                      dim='time', coords='all')
and it appears to work well.
Code snippets from cosima-cookbook/cosima_cookbook/netcdf_index.py
@jmunroe - this is good to know. Have you been using the default scheduler (multiprocessing for dask.bag) or the distributed scheduler?
distributed
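A minimal sketch of that setup, assuming a local dask.distributed cluster (the glob pattern is illustrative):

import glob
import dask.bag as db
import xarray as xr
from dask.distributed import Client

client = Client()  # local cluster; once created it becomes the default scheduler
ncfiles = sorted(glob.glob('path/to/many/files*.nc'))  # illustrative pattern
bag = db.from_sequence(ncfiles).map(xr.open_dataset)
datasets = bag.compute()  # executes on the distributed workers via the Client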