Xarray: Consistent naming for xarray's methods that apply functions

Created on 5 Feb 2017 · 12 comments · Source: pydata/xarray

We currently have two types of methods that take a function to apply to xarray objects:

  • pipe (on DataArray and Dataset): apply a function to this entire object (array.pipe(func) -> func(array))
  • apply (on Dataset and GroupBy): apply a function to each labeled object in this object (e.g., ds.apply(func) -> Dataset({k: func(v) for k, v in ds.data_vars.items()})).
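The two call patterns can be sketched in plain Python, with no xarray required; here a dict of lists stands in for a Dataset's mapping of variable names to arrays, and `pipe`/`apply` are simplified stand-ins for the real methods:

```python
# Plain-Python sketch of the two call patterns (no xarray required);
# `data_vars` is a stand-in for a Dataset's mapping of names to arrays.

def pipe(obj, func):
    # pipe: hand the entire object to func
    return func(obj)

def apply(data_vars, func):
    # apply: call func on each labeled variable, preserving the labels
    return {k: func(v) for k, v in data_vars.items()}

ds = {"temp": [1, 2, 3], "precip": [4, 5, 6]}
n_vars = pipe(ds, len)                              # func sees the whole mapping
doubled = apply(ds, lambda v: [x * 2 for x in v])   # func sees each variable
```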

And one more method that we want to add but isn't finalized yet -- currently named apply_ufunc:

  • Apply a function that acts on unlabeled (i.e., numpy) arrays to each array in the object
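The key property of this third method is that the wrapped function never sees labels at all. A minimal sketch (plain lists standing in for numpy arrays, so no xarray or numpy needed):

```python
# Sketch of the apply_ufunc idea: the wrapped function sees only unlabeled
# values (plain lists stand in for numpy arrays), never dims or coords.

def demean(values):
    mean = sum(values) / len(values)
    return [v - mean for v in values]

result = demean([1.0, 2.0, 3.0])  # the caller, not demean, tracks any labels
```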

I'd like to have three distinct names that make it clear what these methods do and how they are different. This has come up a few times recently, e.g., https://github.com/pydata/xarray/issues/1130

One proposal: rename apply to map, and then use apply only for methods that act on unlabeled arrays. This would require a deprecation cycle, but eventually it would let us add .apply methods for handling raw arrays to both Dataset and DataArray. (We could use a separate apply method from apply_ufunc to convert dim arguments to axis and not do automatic broadcasting.)


All 12 comments

Sounds good. It breaks consistency with pandas' apply, but map is much more logical.

Another option is to keep apply as-is for Dataset and GroupBy objects, but add a separate apply_raw method for applying functions that act on "raw" arrays. This would be a little more similar to pandas' apply with raw=True.

We could even do the raw=True keyword argument like pandas, but this is a little awkward because there are some additional arguments on apply_raw that don't make sense on apply (e.g., arguments that specify that some dimensions should be dropped or added).

I would be +1 for apply and apply_raw.

One proposal: rename apply to map

Would we accept this? I'd be up for doing the PR to deprecate apply and introduce map. It makes a Dataset more consistent with a standard mapping interface. But it would be inconsistent with pandas and a rename of a fairly widely used method.

One proposal: rename apply to map

-1 for pandas incompatibility.

I would like to rename rolling.reduce to rolling.apply to be consistent with pandas & groupby

I would like to rename rolling.reduce to rolling.apply to be consistent with pandas & groupby

+0.5 if map fails

I don't think we should consider ourselves beholden to pandas's bad names, but we should definitely try to preserve backwards compatibility and interpretability for users.

Going back to Python itself:

  • apply(func, args, kwargs) (from Python 2.x) is equivalent to func(*args, **kwargs)
  • map() maps a function over each element of an iterable
  • functools.reduce() applies a binary function repeatedly to convert an iterable into a single element
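The three Python precedents side by side:

```python
from functools import reduce

doubled = list(map(lambda x: x * 2, [1, 2, 3]))       # map: one call per element
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4])  # reduce: fold to one value

# Python 2's apply(func, args) was simply func(*args):
def add(a, b):
    return a + b

applied = add(*(3, 4))
```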

For xarray, we need:

  1. a method for wrapping functions that work on unlabeled arrays
  2. a method for mapping functions over each element of a Dataset or grouped object.
  3. (possibly) a method for wrapping aggregation functions that act on unlabeled arrays

Currently, we call both (1) and (2) apply(), which is pretty confusing, and use reduce() for (3) even though it could potentially be a special case of (1) with a bit of extra magic and is quite unlike functools.reduce. In contrast, pandas calls both (1) and (2) apply() (using raw=True/raw=False to distinguish), and calls (3) aggregate or agg.

So long term, it could make sense to rename the current Dataset.apply()/GroupBy.apply() (case 2) to .map, and also rename .reduce() to the more generic .aggregate().

That said, I'm trying to imagine what the transition process for switching to new behavior for Dataset.apply looks like. We already will re-add dimensions to the output from calling functions in apply(), but at some point we have to do a hard cut-off from passing DataArray objects to the function in apply to passing in a raw array.

I suppose we could do this by adding a raw keyword-only argument to .apply():

  • If raw=False (current default), we would raise a warning about changing behavior and would pass DataArray objects to the applied function. Users would be encouraged to use .map() instead.
  • If raw=True (future default behavior), we would pass raw numpy/dask arrays to the applied function.
  • The dim argument might only be supported with raw=True.

We would end up with an extraneous raw argument, which we could deprecate and remove at our leisure.
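The proposed transition shim might look something like the following hedged sketch; `Var` stands in for DataArray, and none of these names are real xarray internals:

```python
import warnings
from collections import namedtuple

# `Var` stands in for DataArray; this is a hypothetical sketch of the
# proposed raw= transition, not actual xarray code.
Var = namedtuple("Var", ["name", "data"])

def apply(variables, func, *, raw=False):
    if not raw:
        # current default: pass labeled objects, but warn about the change
        warnings.warn("apply() will pass raw arrays in a future version; "
                      "use .map() to keep this behavior", FutureWarning)
        return {k: func(v) for k, v in variables.items()}
    # future default: pass only the underlying raw values
    return {k: func(v.data) for k, v in variables.items()}

variables = {"t": Var("t", [1, 2, 3])}
labeled = apply(variables, lambda v: Var(v.name, [x + 1 for x in v.data]))
bare = apply(variables, lambda data: [x + 1 for x in data], raw=True)
```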

I put the change for Dataset.apply -> Dataset.map in. Should we do the same for GroupBy?

I think those are probably the two easiest decisions to make (and hopefully will kick off moving this issue forwards)

Edit: the reason I hesitated for GroupBy is that it's not exactly the same: the thing being mapped over differs (Dataset.map applies the function to each data variable, while GroupBy.map applies it to each group).
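The distinction between the two kinds of mapping can be sketched in plain Python, with dicts and lists standing in for xarray objects:

```python
# Plain-Python sketch (dicts/lists stand in for xarray objects):
ds = {"a": [1, 2, 3, 4], "b": [10, 20, 30, 40]}
labels = ["x", "x", "y", "y"]

# Dataset.map-style: one function call per data variable
per_variable = {name: sum(values) for name, values in ds.items()}

# GroupBy.map-style: one function call per group (grouping "a" by labels)
groups = {}
for label, value in zip(labels, ds["a"]):
    groups.setdefault(label, []).append(value)
per_group = {label: sum(values) for label, values in groups.items()}
```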

+@dcherian

@max-sixty thanks for pushing this along!

I think I'm coming to appreciate backwards compatibility as an important consideration more and more these days. It's just really painful to reuse methods for something entirely different.

This makes me lean towards adding separate apply_raw() methods. The name is definitely less memorable than apply, but on the other hand it is also definitely easier to guess the difference between apply/apply_raw than apply/map.

OK. Does that inform your view on map vs apply?

I more strongly think that apply is a confusing and non-standard term for "run this function on _each item_ in this container", even if it's pandas' name.

I'm keener to offer map as an option than necessarily reusing apply.

What are your thoughts re:

  • Adding map as the documented approach for run-on-each on Dataset & GroupBy
  • Adding apply_raw (or similar) as a new function that runs functions on the 'raw' arrays
  • Keeping apply around for backward-compat, similar to the drop case

I more strongly think that apply is a confusing and non-standard term for "run this function on _each item_ in this container", even if it's pandas' name.

This is a fair point. map is definitely the standard name in the context of "map reduce" type operations.

What are your thoughts re:

  • Adding map as the documented approach for run-on-each on Dataset & GroupBy
  • Adding apply_raw (or similar) as a new function that runs functions on the 'raw' arrays
  • Keeping apply around for backward-compat, similar to the drop case

I would support this.

@max-sixty I like your proposal!
