xref SO issue here
I'm looking to compute a rolling rank on a DataFrame. Having posted, discussed and analysed the code, it looks like the suggested way is to use the pandas `Series.rank` function as an argument to `rolling_apply`. However, on large datasets the performance is particularly poor. I have tried different implementations; using bottleneck's `rankdata` method is orders of magnitude faster, but it only offers the average option for ties, and it is still some way off the performance of `rolling_mean`. I have previously implemented a rolling rank function which monitors changes over a moving window (in a similar way to `algos.roll_mean`, I believe) rather than recalculating the rank from scratch in each window. Below is an example to highlight the performance; it should be possible to implement a rolling rank with performance comparable to `rolling_mean`.
python: 2.7.3
pandas: 0.15.2
scipy: 0.10.1
bottleneck: 0.7.0
```python
import numpy as np
import pandas as pd
import scipy.stats as sc
import bottleneck as bd

def rollingRankOnSeries(array):
    s = pd.Series(array)
    return s.rank(method='min', ascending=False)[len(s) - 1]

def rollingRankSciPy(array):
    return array.size + 1 - sc.rankdata(array)[-1]

def rollingRankBottleneck(array):
    return array.size + 1 - bd.rankdata(array)[-1]

def rollingRankArgSort(array):
    return array.size - array.argsort().argsort()[-1]

rollWindow = 240
df = pd.DataFrame(np.random.randn(100000, 4), columns=list('ABCD'),
                  index=pd.date_range('1/1/2000', periods=100000, freq='1H'))
df.iloc[-3:-1, 0] = 7.5  # column 'A'
df.iloc[-1, 0] = 5.5

df["SER_RK"] = pd.rolling_apply(df["A"], rollWindow, rollingRankOnSeries)
# 28.9 secs (allows competition/min ranking for ties)
df["SCIPY_RK"] = pd.rolling_apply(df["A"], rollWindow, rollingRankSciPy)
# 70.89 secs (allows competition/min ranking for ties)
df["BNECK_RK"] = pd.rolling_apply(df["A"], rollWindow, rollingRankBottleneck)
# 3.64 secs (only provides average ranking for ties)
df["ASRT_RK"] = pd.rolling_apply(df["A"], rollWindow, rollingRankArgSort)
# 3.56 secs (competition/min ranking only, and not necessarily correct for ties)
df["MEAN"] = pd.rolling_mean(df['A'], window=rollWindow)
# 0.008 secs
```
I think this is likely to be a common request for users looking to use pandas for analysis on large datasets, so it would be a useful addition to the pandas moving statistics/moments suite.
I have added it to the master issue. It's actually pretty straightforward to do this. Note that all that is necessary is a new method, rolling_idxmax, which is essentially rolling_max but tracks the index as well; so all that is needed is a modification to algos.roll_max2 to also record the index.
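For illustration only, the idea of a rolling max that also records the argmax index can be sketched in pure Python with a monotonic deque (the real pandas implementation lives in Cython in `algos.roll_max2`; the function name `rolling_idxmax` here is just the proposed one, and `None` padding stands in for NaN handling):

```python
from collections import deque

def rolling_idxmax(values, window):
    """Rolling max plus the index of that max, via a monotonic deque.

    The deque holds indices whose values are in decreasing order, so the
    front is always the argmax of the current window; overall cost is O(n).
    Incomplete leading windows are padded with None.
    """
    maxes, idxs = [], []
    dq = deque()
    for i, v in enumerate(values):
        # drop indices whose values are strictly smaller than the new value
        while dq and values[dq[-1]] < v:
            dq.pop()
        dq.append(i)
        # drop the front if it has slid out of the window
        if dq[0] <= i - window:
            dq.popleft()
        if i >= window - 1:
            idxs.append(dq[0])
            maxes.append(values[dq[0]])
        else:
            idxs.append(None)
            maxes.append(None)
    return maxes, idxs
```

For example, `rolling_idxmax([1, 3, 2, 5, 4], 3)` gives maxes `[None, None, 3, 5, 5]` and indices `[None, None, 1, 3, 3]`.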
want to do a pull-request here?
Yes, that sounds like what I was thinking: pandas.stats.moments.rolling_rank with the same arguments as rank, allowing the user to select a tie method (min, avg, ...) and whether ranks should be ascending. I'm not sure I'm familiar enough with the pandas code yet to make the modifications myself and implement the best solution, but I'm happy to test.
I got rolling_idxmax and rolling_idxmin working just as @jreback said, recording the index. No extra memory allocation or computation on top of the normal algorithm, just an additional return array from roll_max2/roll_min2 holding the indices. At the PyCon sprint @jreback suggested I return an int64 from the Cython code to store the indices, but I'm pretty sure it has to be a float64 due to possible NaNs, right?
Also, @PH82 requested rolling_idxaverage but is this really necessary?
@cing you can use -1 to mark missing values as long as you only use positive indices otherwise (that's pretty standard in pandas)
@cing, the average I mention determines how ties are ranked. Common tie methods are:
| Values | Rank (Tie Method=Min) | Rank (Tie Method=Max) | Rank (Tie Method=Avg) |
| --- | --- | --- | --- |
| 3 | 4 | 4 | 4 |
| 1 | 1 | 1 | 1 |
| 2 | _2_ | _3_ | _2.5_ |
| 4 | _5_ | _5_ | _5_ |
| 2 | _2_ | _3_ | _2.5_ |
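The table above can be reproduced with pandas' existing non-rolling rank, which already supports these tie methods:

```python
import pandas as pd

s = pd.Series([3, 1, 2, 4, 2])
print(s.rank(method='min').tolist())      # [4.0, 1.0, 2.0, 5.0, 2.0]
print(s.rank(method='max').tolist())      # [4.0, 1.0, 3.0, 5.0, 3.0]
print(s.rank(method='average').tolist())  # [4.0, 1.0, 2.5, 5.0, 2.5]
```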
I'll have to take a look at this again; some of the test cases didn't pass for rolling_idxmax and rolling_idxmin. I didn't implement a rolling_idxavg though. Is that truly useful?
I would say so. For my immediate purposes, no, but it is a valid tie method and quite likely to be used in the future; in fact, I can think of scenarios where I would use it. I think rolling_rank should implement as much of the non-rolling method as possible.
I am glad to see there is already an issue ticket for this rolling rank. May I know the status of this issue? How could I help with it?
@Dmoonleo the labels indicate the status, meaning that it's open and unclaimed. You are welcome to submit a pull request.
Is this going to be implemented?
@mcherkassky you're welcome to take a crack at it. Let us know if you need help getting started.
I still have the code I wrote back from the good ol' days, but it's worth taking a fresh look because window.pyx has been touched a few times. As a novice, I'd just comment that the windowed rolling algorithm is a bit more complicated than the "novice" tag might suggest! Nothing teamwork can't crack though.
Does anyone have an algorithm reference for implementing the rolling rank? It doesn't seem trivial.
I'm happy to give it a stab, but I don't really follow the discussion above about rolling_idxmax/rolling_idxmin and how that helps get the rolling rank.
Is there anyone still working on the efficient implementation of rolling rank?
No, but we would love to have this!
I propose an algorithm to calculate rolling_rank efficiently.
Suppose the window size is fixed, and rank is defined with the window values sorted in increasing order.
We can use a balanced tree to store the window data, as insert, delete and find operations each take only O(log M), where M is the window size.
During insert and delete operations, we maintain a size field on each node, holding the count of nodes in its subtree, which is then used to calculate the rank. A balanced tree that maintains the size field natively (such as a Size-Balanced Tree or Weight-Balanced Tree) is even better.
Because the numbers are kept in sorted order in a balanced tree, to get the rank of a number in the window we find the corresponding node starting from the root and sum the sizes of all left subtrees along the search path. That sum is the rank. This operation is also O(log M).
From the above, calculating a rolling rank over a length-N sequence with window size M has total time complexity O(N log M). That is much better than the naive algorithm (sort and rank each window from scratch), which is O(N M log M); this algorithm can be about M times faster.
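As a rough illustration of the sliding-window bookkeeping described above (not the balanced tree itself), here is a sketch using Python's `bisect` module on a sorted list. List insertion and deletion are O(M) rather than the O(log M) a real balanced tree gives, but the insert/delete/rank-query structure per step is the same:

```python
from bisect import bisect_left, insort

def rolling_rank_min(values, window):
    """Rolling competition (min) rank of the newest value in each window.

    A sorted list stands in for the balanced tree: insort for insert,
    del for delete, and bisect_left for the rank query. Incomplete
    leading windows are padded with None.
    """
    out = [None] * len(values)
    sorted_win = []
    for i, v in enumerate(values):
        insort(sorted_win, v)
        if i >= window:
            # remove the value that just slid out of the window
            old = values[i - window]
            del sorted_win[bisect_left(sorted_win, old)]
        if i >= window - 1:
            # min rank: 1 + number of strictly smaller values in the window
            out[i] = bisect_left(sorted_win, v) + 1
    return out
```

For example, `rolling_rank_min([3, 1, 2, 4, 2], 3)` returns `[None, None, 2, 3, 1]`.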
If there is still interest, my workaround for this is:
```python
import bottleneck as bk

norm_rank = bk.move_rank(x.values, n, axis=0)   # normalized rank in [-1, 1]
denorm = (((norm_rank + 1) / 2) * (n - 1)) + 1  # back to an actual rank in [1, n]
descend = (n - denorm) + 1                      # flip to descending order
```
The `bk.move_rank` function returns a normalized rank between -1 and 1, so this takes the normalized rank and reverse-engineers it back to the actual rank, then flips it so it is effectively descending. The only potential downside is that it only provides average ranking for ties.
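As a sanity check of the denormalization on a single window, assuming `move_rank` normalizes the last element's rank as (#smaller - #larger) / (n - 1) (an assumption about bottleneck's internals; verify against your installed version), the formula above recovers pandas' average-tie rank:

```python
import numpy as np
import pandas as pd

# One window of values; the last element (2.0) ties with an earlier one.
window = np.array([3.0, 1.0, 2.0, 4.0, 2.0])
n = len(window)
v = window[-1]

# Assumed move_rank-style normalization for the last element:
norm_rank = (np.sum(window[:-1] < v) - np.sum(window[:-1] > v)) / (n - 1)
# Denormalize back to an actual rank in [1, n]:
denorm = ((norm_rank + 1) / 2) * (n - 1) + 1

avg_rank = pd.Series(window).rank(method='average').iloc[-1]
print(denorm, avg_rank)  # both 2.5
```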
Running it on my small laptop:
```python
window = 240
x = pd.DataFrame(np.random.randn(100000, 4), columns=list('ABCD'),
                 index=pd.date_range('1/1/2000', periods=100000, freq='1H'))

# Original rollingRankBottleneck above:
# 6.04 s ± 302 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# This version:
# 411 ms ± 24.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
In case it's useful I'm using it like this:
```python
import numpy as np

def rank_last_value(x, shift=2, pct=False):
    """
    One step of the procedure to get the rolling rank of the last value
    according to the previous window of values.
    Use the bottleneck routine below instead. It is much faster. Keep this here for testing.
    Use with .rolling(window=100).apply(rank_last_value, raw=True) for example.
    """
    args = np.argsort(x)
    rank = np.argwhere(args == (x.shape[0] - 1))[0][0]
    if pct:
        return (rank + 1) / (x.shape[0] + shift)
    return rank

def rolling_rank(x, window, pct=False, min_prob=None):
    """
    Get the rolling rank of the last value according to the previous window of values.
    """
    # https://github.com/pandas-dev/pandas/issues/9481
    import bottleneck as bk
    norm_rank = bk.move_rank(x, window, axis=0)  # [-1, 1]
    u = (norm_rank + 1) / 2  # [0, 1]
    if pct:
        if min_prob is None:
            min_prob = 1 / (window + 1)
        return u * (1 - 2 * min_prob) + min_prob  # [min_prob, 1 - min_prob]
    rank = u * (window - 1)
    return np.round(rank)

def _test_rolling_rank_against_rank_last_value():
    import pandas as pd
    x = np.random.randn(1000)
    window = 30
    aa = pd.Series(x).rolling(window).apply(rank_last_value, raw=True).values
    bb = rolling_rank(x, window=window)
    assert np.allclose(aa, bb, equal_nan=True)
```
I implemented it.
Computational complexity (n: input length, w: rolling window size)
@contribu thanks for the implementation. Ideally this would port almost directly to Cython and be embedded in the current infrastructure. We don't have very much C++ code in pandas and mostly use Cython; if you could do this it would be fantastic.