Here's our roadmap document: https://docs.google.com/document/d/151ct8jcZWwh7XStptjbLsda6h2b3C0IuiH_hfZnUA58/edit#
Just because it is a nice round number :-)
Or maybe we can use it to discuss how we imagine a possible pandas 1.0 ..
Some clarification (from @shoyer): This is not the place to make new feature requests -- please continue to make separate GitHub issues for those. Almost every new feature can be added without a 1.0 release. If there is a change you think would be _necessary_ to do in pandas 1.0, feel free to reference issues where it is described in more detail.
My wish list for pandas 1.0:
- `[]` / `__getitem__` (#9595)

I also have a fantasy world where the pandas Index becomes entirely optional, but that might be too big of a break even for pandas 1.0.
I want to add: the `Index` vs `MultiIndex` API (#3268)

What if every panel, DataFrame, and Series had a mode that changed the slicing/getitem behavior? One could set the default in the options and change it on a per-object basis when necessary. It could allow a smoother transition from the old behavior to the new, plus leave room to get more creative where desired.
@jnmclarty A better option would be some sort of flag that could be set per module, similar to a future statement -- changing the way in which a specific DataFrame is queried is just begging for someone to pass it off to an incompatible function. In fact, I just asked if this is possible on StackOverflow: http://stackoverflow.com/questions/29905278/using-future-style-imports-for-module-specific-features-in-python/
It would be nice if there was an option to have boxplot X axis labels match line plot's X axis labels.
@djchou is there an existing issue for that? If not, please make one :).
Congrats on the great package :D
My wish is:
dplyr like macros: https://github.com/dalejung/naginpy
A guy can wish...
I've been working on problems recently where having groupbys run in parallel would have been great (I think). Also `map`s / `apply`s.

ref #1907
These may be too small, but since this is a wishlist I would like to see some improvements in the consistency of the API. Some examples:

- `index`/`indexes`, `column`/`columns`, and `level`/`levels`. This includes both the names and whether they accept single values, multiple values, or both.
- The `axis` argument should be available wherever operations are applied along an axis.
- In `DataFrame`, `cumsum` has a `skip_na` argument, while `diff` doesn't. `fill_value` should be `fillna`.
- In `DataFrame` we have `sort_index` and `sortlevel`, and `is_copy`, `isin`, and `isnull`.

For the record, I'm strongly -1 on @toddrjen's suggestion to rename methods to make the use of underscores more consistent. Even Python 3 didn't clean things up like that.
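To illustrate the kind of keyword inconsistency meant here, a small sketch (note the flag is actually spelled `skipna` in pandas, and `DataFrame.diff` exposes no equivalent):

```python
import inspect

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0]})

# cumsum exposes a skipna flag: with skipna=False, the NaN poisons
# every subsequent cumulative sum.
print(df.cumsum(skipna=False))

# diff has no comparable keyword; its NaN handling is fixed.
print(df.diff())
print("skipna" in inspect.signature(pd.DataFrame.diff).parameters)  # False
```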
Integer columns with missing data support :)
xref #8643
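This wish later landed (in pandas 0.24+) as the nullable integer extension dtype; a minimal sketch of what it enables:

```python
import pandas as pd

# The nullable integer dtype (capital-I "Int64") keeps integer values
# while representing missing entries as a missing-value sentinel.
s = pd.Series([1, 2, None], dtype="Int64")
print(s.dtype)          # Int64
print(s.isna().sum())   # 1

# Classic behavior for comparison: missing data upcasts to float64.
s_classic = pd.Series([1.0, 2.0, None])
print(s_classic.dtype)  # float64
```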
Allow "statistics" functions like count, sum, mean, quantile etc. to handle weighted data
@bwillers I added an xref to an existing issue where that was discussed
@benjello is there already a github issue for adding weights? If not, please make one :).
@bwillers @benjello The good news is that I don't think either of your suggestions require pandas 1.0. Both could be done incrementally.
@shoyer #2501 and #10030 are somewhat about weights: should I open a new one?
@benjello I think we can discuss this further at #10030. That issue is now only about the mean, but would be good to discuss there to which methods we would want to add this functionality.
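While no built-in `weights=` keyword exists, the idea can be sketched with `numpy.average` (the helper name here is hypothetical, not a pandas API):

```python
import numpy as np
import pandas as pd

def weighted_mean(values, weights):
    """Hypothetical helper: weighted mean that skips rows where
    either the value or the weight is missing."""
    mask = values.notna() & weights.notna()
    return np.average(values[mask], weights=weights[mask])

df = pd.DataFrame({"x": [1.0, 2.0, 4.0], "w": [1.0, 1.0, 2.0]})
print(weighted_mean(df["x"], df["w"]))  # (1 + 2 + 8) / 4 = 2.75
```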
I wasn't entirely sure where to put this, but I've written up a short gist as an IPython notebook on the current state of `MultiIndex`ing with `DataFrame.loc`:

https://nbviewer.jupyter.org/gist/tgarc/6c40a65f648302b6b9d7#
What is particularly relevant to this discussion is in the last section. Specifically, pandas allows

`df.loc[('foo','bar'), ('one','two'), ('three','four')]`  (1)

to be taken to mean

`df.loc[(('foo','bar'), ('one','two'), ('three','four')), :]`

But this type of indexing is ambiguous when the number of indexing tuples is 2, since

`df.loc[('foo','bar'), ('one','two')]`

could mean incomplete indexing, as in

`df.loc[(('foo','bar'), ('one','two')), :]`

or row, column indexing. Currently, pandas just interprets this as row, column indexing when there are 2 indexing tuples.

My feeling is that incomplete indexing as in (1) shouldn't be allowed for MultiIndex DataFrames because of the aforementioned ambiguity. I'm not sure whether changing this would break other code, and hence whether it should be held off until v1.0.
This comment and gist is also a summary of some of the discussion that I had with @shoyer and @jonathanrocher at the SciPy sprints.
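A minimal sketch of the two readings, using a single-level column axis so that only the row key is a tuple (exact behavior may vary across pandas versions):

```python
import pandas as pd

idx = pd.MultiIndex.from_product([["foo", "bar"], ["one", "two"]])
df = pd.DataFrame({"a": range(4)}, index=idx)

# A single tuple is read as one per-level row key...
print(df.loc[("foo", "one")])

# ...but with a second argument, pandas reads it as (row, column),
# not as two level-wise selections of a partial MultiIndex key.
print(df.loc[("foo", "one"), "a"])
```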
This may or may not be a good idea, but it may at least be worth thinking about. Considering that `PanelND` has always been marked as "experimental" and not all features support it, and considering the work that has been going on in xray, is `PanelND` something that could be deprecated or dropped for 1.0?
@tgarc Nice overview notebook! (by the way, if you would like to submit parts of that to improve the docs, very welcome!)
Part of what you describe is also discussed here (collapsing index levels or not): #10552
For the allowing of 'incomplete' indexing on frames, there is already a warning in docs for this: http://pandas.pydata.org/pandas-docs/stable/advanced.html#using-slicers (the red warning box). So it is explicitly "allowed, although warned for because of possible ambiguities" (so not a bug in that sense).
But the question is indeed whether this is a good idea. It is somewhat convenient that it works in the non-ambiguous cases, but it may be better to not allow this. If we want to discuss this more in detail, it's probably better to open a separate issue.
@toddrjen @shoyer and I have had some discussion about this. The proposal is to rename Xray -> `pandas-nd`. We can discuss further consolidation at some later point. I think we would then deprecate `PanelND` (e.g. 4D and higher) and point to `pandas-nd`. There are a couple of API issues if we also did this for `Panel`.

Mainly I think we would need some conversions, e.g. `to_nd` as a mainline function.
@tgarc this was added quite a long time ago as a convenience / magic feature. It is specifically warned about and is a limitation of the python syntax.
There are times when it can be detected and other times it is ambiguous. I am not sure that we can do anything about it. If people don't read the docs what can you do.
@jorisvandenbossche Thanks, I'll look to see if there's an appropriate place to add documentation. Thanks for pointing me to that warning - I admit I didn't know it was there.
@jreback I realize that this is an established feature and that there is a warning about it in the docs, but as we were discussing pulling back on the complexity of indexing in the future of pandas, modifying this particular feature seemed like a good opportunity to simplify existing code and restrict the number of ways users can do multi-indexing. I'll give this some more thought and potentially open a new issue.
EDIT: opened as issue #10574
This is pretty fundamental - probably too much so: changing `.values` to `.data`, so a `DataFrame` can be more pythonic in its dict-like interface.
ref https://github.com/pydata/pandas/issues/12056
Pinging here on github as well, as I am not sure everybody is aware of the pandas-dev mailing list. But there is currently a thread started by Wes on a pandas 1.0 / future roadmap, and you are certainly welcome to also provide feedback or share ideas.
https://mail.python.org/pipermail/pandas-dev/2016-July/000512.html
cc @chris-b1 @gfyoung @MaximilianR @kawochen @janschulz
One other major breaking change to consider:
We should consider making arithmetic between a Series and a DataFrame broadcast across the _columns_ of the dataframe, i.e., aligning `series.index` with `df.index`, rather than the current behavior of aligning `series.index` with `df.columns`.
I think this would be far more useful than the current behavior, because it's much more common to want to do arithmetic between a series and all columns of a DataFrame. This would make broadcasting in pandas inconsistent with NumPy, but I think that's OK for a library that focuses on 1D/2D rather than N-dimensional data structures.
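Today the proposed alignment has to be requested explicitly via the `axis` argument of the binary-operator methods; a sketch of both behaviors:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]}, index=["x", "y"])
s = pd.Series([10, 20], index=["x", "y"])

# Current default: `df - s` aligns s.index with df.columns, which
# here produces an all-NaN frame ("x"/"y" are not column labels).
print(df - s)

# The proposal above (align with df.index, broadcast across columns)
# is currently spelled explicitly with axis=0:
print(df.sub(s, axis=0))
```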
Some questions for the next couple releases...

Is the idea for 1.0 to stabilize the 0.x API, or to drop a handful of larger API-breaking changes? Or are we pushing the API-breaking changes (e.g. fixing `__getitem__`) till 2.0?

Actually, that's really my only question. I guess the only followup would be "what falls into that bucket of large API-breaking changes that are actually feasible?"
I think now that 1.0 is upon us, we should refocus this issue from "wishlist" to "stuff that's actually going to happen for 1.0". As we go through issues prepping for 0.19, what's our policy on pushing issues' milestones? Do we push to "1.0" or "Someday"? I'd lean towards "Someday", and only use 1.0 for stuff that's blockers.
> Is the idea for 1.0 to stabilize the 0.x API, or to drop a handful of larger API-breaking changes? Or are we pushing the API-breaking changes (e.g. fixing getitem) till 2.0?

As it is now discussed on the pandas-dev mailing list, I think the conclusion is indeed how you state it here: 1.0 as a stabilization of the current 0.x API, and 2.0 with an internal refactor / larger API changes (e.g. getitem).

> we should refocus this issue from "wishlist" to "stuff that's actually going to happen for 1.0"

I think what is discussed in this issue is actually what we are now discussing as changes for 2.0, so I would rather change the milestone, and open a new issue for things we want to do before 1.0.

> As we go through issues prepping for 0.19, what's our policy on pushing issues' milestones? Do we push to "1.0" or "Someday"? I'd lean towards "Someday", and only use 1.0 for stuff that's blockers.

+1, there is also 'next major release', which has often been used in the past to push issues to that are no longer included in the current release. But indeed, I would not automatically rename all issues of 'next major release' to '1.0', but keep the '1.0' milestone to selectively add to issues that we regard as blockers for 1.0.
here's why I have the tags set this way. We have approx 1000 issues under `next major release`. This is really just a placeholder for things to do that otherwise are not categorized as pie-in-the-sky `Someday`.

The way things have been working is to pull issues off of this to a numbered release. IOW, when someone submits a pull request I mark the issue. Then when the PR is actually merged it gets set with the version number. Otherwise you get a bunch of stale PRs that have version numbers, and you have to then go back and manually unassign them.

Same thing with issues. Before I switched to this way (IIRC 0.15 or 0.16), I would have to manually go thru each one and reassign it (well, often did it in bulk, but the idea was to review open issues). The 'issue' is that we have a LOT of open issues. They are only semi-prioritized. Prioritizing is quite difficult as resources are not generally available (IOW, there aren't people to 'assign' issues to; rather it's the reverse, people 'assign' them to themselves).

So generally I would assign newish issues to the current version number; as the release approaches, I would push newer issues to `next major release`. Then I would still review open issues that have a version number and push / request help.

This activity gets quickie bugs fixed, while allowing some semblance of attention to 'newish' issues (IOW those that happened recently).

Of course, if anyone has better suggestions on how to manage issues, speak up!
pandas has basically been operating in Kanban style since its beginning. Issues are marked as "on deck" (here: "next major release" -- perhaps we could give this a better name like "approved", "on deck", "fair game" -- some issues may be either pie-in-the-sky or have not yet reached consensus about the path forward) with potentially an additional level of prioritization (e.g. blocker)
It may be a good idea to start thinning down the 1.0 TODO list to things that absolutely must get done. We also need to figure out a procedure for maintaining both a 1.x maintenance branch as well as an unstable 2.0 development branch. I believe that the 2.0 branch can be made to cleanly rebase until the first cut of the internals (libpandas + wrapper classes) stabilizes (which will likely take on the order of months) and can begin to be integrated into `pandas/core`. At some point a more serious divergence will have to take place, at which point "forward-porting" bug fixes may become complicated.
Proper units support would be a good thing for 1.0: #10349. I think @jreback's idea of using the `dtype` is very organic and awesome.
IMHO, it is OK to break considerable backwards compatibility with a huge release, which in this case would be a culmination of lessons learned, feature additions, etc. There was no way all the current capabilities, and the pending feature requests, bug fixes and enhancements, could have been planned for at the time of creation of pandas. Since so much has been bolted on with occasional API changes, as required, there are quite a few inconsistencies in implementation. 1.0 can be a way to organically build up all features from a single trunk. If you need my opinion, I am in favor of `libpandas`, because I see it as a door to independent development in Python and other languages. You all are better at figuring this out though. Users can always freeze/force older versions in environments to avoid code breakage.
Now that there is an actual plan for a `1.0` release (i.e. `v0.24` -> `v0.25` -> `v1.0`), some of this might be too ambitious, but essentially, these are all about consistency (or lack thereof) that I'd like to see in pandas 1.0:

- `groupby.apply`: #22545, #20420, #22541, #22542, #22546
- `unique`: consistent (i.e. pandas can deal with its own types, both as class methods and as `pd.unique`, and maintains the type of the caller), with possibility to return inverse, #22824, #21357, #4087

I know that most people can't wait to finally have pandas 1.0, but IMO there are some very fundamental parts of the API that should still stabilize some more:

- As of `0.24`, the whole interaction with numpy is starting to change - i.e. using `.array` instead of `.values` and explicitly calling `.to_numpy()` to get an `ndarray`. This will very likely need some further maturation.

These three points concern some of the most fundamental aspects of the API surface, and leaving them muddy means it will be much harder to fix after 1.0, because many people will be shouting "SemVer!", whether that's the policy or not.
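A quick sketch of the `.array` / `.to_numpy()` split mentioned above:

```python
import numpy as np
import pandas as pd

s = pd.Series([1, 2, 3], dtype="Int64")

# .array returns the pandas-level container, preserving extension
# dtypes such as the nullable integers.
print(type(s.array).__name__)       # IntegerArray

# .to_numpy() is the explicit, potentially lossy conversion to ndarray.
arr = s.to_numpy()
print(isinstance(arr, np.ndarray))  # True
```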
Going over the thread, there's also some very good points brought up that have not been addressed yet.
To be sure, there's been a lot of progress (EAs will have a huge impact for good), but even though I'm raining on the parade, I think it's a necessary discussion. At the very least, there needs to be clear communication about what the policy for breaking changes & versioning is going to be post-1.0 - for example, `numpy`-style rolling deprecations, similar to the current MO?
I believe that SemVer would either lead to massive ossification, or alternatively, that the current minor releases (like `0.23` -> `0.24`) would always have to be major version bumps every ~6 months (which would be a valid choice too), at least for the foreseeable future.
> would always have to be major version bumps every ~6 months (which would be a valid choice too)

if that was the expectation, then would `<year>.<month>.<patch>` versioning with a 6-month release cycle be more appropriate than semver?
Towards Pandas 20.1 FTW!
Is the Google Doc linked in the description currently the best publicly available Pandas roadmap? Or https://pandas-dev.github.io/pandas2/goals.html#id1? Or is it all so out of date that it's better to state that there currently isn't any roadmap?
https://github.com/pandas-dev/pandas/wiki/Pandas-Sprint-(July,-2018)#towards-pandas-10 is probably the most up to date, though there are already some inaccurate items.
0.24.0 was just released in January, so 0.25.0 will be a few months from now, and 1.0 sometime in the middle of the year (perhaps at SciPy?)
Thanks @TomAugspurger!
Probably worth referencing this PR adding a roadmap here: #27478
@jorisvandenbossche is there anything concrete in this issue that isn't recorded elsewhere? We'll need to re-title it soon :)
Is there anything here that's a blocker / nice-to-have for 1.0?
Shouldn't this issue already be resolved/obsolete by the recent release of pandas 1.0.0?
Since pandas 1.0 has already been released, I think we are safe to close this issue. We may want to continue discussion on a new "Towards pandas 2.0" issue. Closing for now.