xref #8309
For example:
pd.qcut([1,1,1,1,1,1,1,1,1,1,1,1,1,5,5,5], [0.00001, 0.5])
raises the exception "ValueError: Bin edges must be unique: array([ 1., 1.])".
Fix suggestion - add one new line (marked below):

def qcut(x, q, labels=None, retbins=False, precision=3):
    if com.is_integer(q):
        quantiles = np.linspace(0, 1, q + 1)
    else:
        quantiles = q
    bins = algos.quantile(x, quantiles)
    bins = np.unique(bins)  # <--- the new line
    return _bins_to_cuts(x, bins, labels=labels, retbins=retbins,
                         precision=precision, include_lowest=True)
Why would you not just catch the ValueError? Better to be informed of the issue, right?
I think this should not throw an exception, as this is legitimate use.
Maybe a better example:
pd.qcut([1,2,3,4,4], [0, .25, .5, .75, 1.])
also throws: ValueError: Bin edges must be unique: array([1, 2, 3, 4, 4], dtype=int64)
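For reference, a minimal sketch of what the suggested np.unique line does to this example's edges (the edge values are taken from the error message above):

import numpy as np

bins = np.array([1, 2, 3, 4, 4])  # quantile edges computed for [1,2,3,4,4]
print(np.unique(bins))            # [1 2 3 4] -> 3 bins instead of the 4 requested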
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.5.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.14.0
nose: 1.3.0
Cython: 0.19.1
numpy: 1.8.1
scipy: 0.14.0
statsmodels: 0.5.0
IPython: 1.0.0
sphinx: 1.1.3
patsy: 0.2.1
scikits.timeseries: None
dateutil: 2.2
pytz: 2014.4
bottleneck: 0.8.0
tables: 3.1.1
numexpr: 2.3.1
matplotlib: 1.3.1
openpyxl: 1.8.5
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: 0.5.5
lxml: 3.3.5
bs4: 4.3.1
html5lib: None
bq: None
apiclient: None
rpy2: None
sqlalchemy: 0.9.6
pymysql: None
psycopg2: None
Maybe you misunderstand me. This is a user error (in that you generally pass unique bins, and you CAN make them unique before passing them in). Yes, it could be done in the function, but wouldn't you as a user want to know that you are passing non-unique bins?
The issue is regarding the pd.qcut() function (not pd.cut()). It takes an array of samples and an array of quantiles/ranks.
In the previous example, I passed the array of samples [1,2,3,4,4] and would like qcut to calculate, for each sample in that array, which of the 4 quantiles it belongs to (quantile 1, 2, 3 or 4).
It's the same issue: if you uniquify in one, you would in the other. That's not the question, though. _Why_ should this happen silently (automatically)? Maybe I am missing something.
@jseabold ?
I would expect this to raise an informative error to the user. "The quantiles you selected do not result in unique bins. Try selecting fewer quantiles." I.e., if your 25th quantile is the same as your median, then you should be told this and you shouldn't ask for both. Trying to get fancy under the hood is asking for confusion IMO.
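A minimal sketch of the kind of check this suggests, reusing the message wording above (a hypothetical helper, not the actual pandas code):

import numpy as np

def _check_unique_edges(bins):
    # Hypothetical helper: fail loudly, and informatively, on duplicate edges.
    if len(np.unique(bins)) < len(bins):
        raise ValueError("The quantiles you selected do not result in "
                         "unique bins. Try selecting fewer quantiles.")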
@jseabold ok, I can buy that this could produce a better message.
@yonil7 want to take this on with a pull-request?
In addition to just raising a more informative error, I think that there should be an option to automatically handle duplicates in the following way:
Suppose we have a list with too many duplicates, say we want to split [1,2,3,3,3,3,3,3,4,5,6,7] into quartiles. Right now qcut fails, because the second-lowest quartile consists entirely of '3's, duplicating the bin edges. But there is a natural way to assign the quartiles if we allow duplicate values to be split among different bins: [1,2,3], [3,3,3], [3,3,4], [5,6,7].
Now this is not completely ideal -- the assignment is arbitrary, and there is no way of knowing what to do by looking at the bin edges alone. But in the real world it can be convenient to have an option that 'just works', and I think this warrants a 'split_dup=True' option that is false by default.
What do people think? I'll add this if people support it.
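A minimal sketch of one way to get this split today, assuming ties may be broken by order of appearance: rank the values first so duplicates get distinct positions, then qcut the ranks instead of the raw values.

import pandas as pd

x = pd.Series([1, 2, 3, 3, 3, 3, 3, 3, 4, 5, 6, 7])
ranks = x.rank(method='first')              # ties broken by order of appearance
quartiles = pd.qcut(ranks, 4, labels=False)
print(list(quartiles))                      # [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]

This reproduces exactly the [1,2,3], [3,3,3], [3,3,4], [5,6,7] assignment described above.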
@edjoesu start with the error message. If you want to propose a new issue for discussion, that is ok. I think if you show several test cases, some of which you CAN distinguish and some of which have to raise, then it's probably ok, but it needs discussion.
Is there any work around for this until it's changed?
This is not a bug; the change needed here is just a more informative error message.
Got it, I thought it was being changed to a warning instead of an error. I personally would like the option to allow non-unique bins, but I also understand the case against it.
I just commented out those lines in tile.py and it seems to have allowed me to use duplicates.
Does anyone own this change? Or has it already been done?
See http://stackoverflow.com/questions/20158597/how-to-qcut-with-non-unique-bin-edges for a discussion about this.
@edjoesu I agree with your proposal, but I foresee that the implementation won't be trivial: given the way the current code assigns values to bins, I think some bins would become empty if duplicates are allowed.
I think we could add a "duplicate_edges" parameter. One of its options would merge bins with duplicated edges, simply adding the
bins = np.unique(bins)
line as in https://github.com/pydata/pandas/issues/7751#issue-37814702. This would result in fewer bins than specified, and some larger (with more elements) than others. Does it look reasonable? What do you think?
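A minimal sketch of the "merge" behaviour described here, written as a user-side wrapper (qcut_merge is a hypothetical name, not a pandas function):

import numpy as np
import pandas as pd

def qcut_merge(x, q):
    # Compute the quantile edges, then collapse duplicates before cutting.
    quantiles = np.linspace(0, 1, q + 1) if isinstance(q, int) else q
    bins = pd.Series(x).quantile(quantiles).values
    bins = np.unique(bins)  # fewer, wider bins wherever edges tie
    return pd.cut(x, bins, include_lowest=True)

print(qcut_merge([1, 2, 3, 4, 4], 4))  # 3 bins instead of the 4 requested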
dukebody's suggestion looks good to me.
For my use case, I would prefer for qcut to just return the (non-unique) bin list, and let me handle it.
Hi guys, let me jump in and push for that to happen as well. Solutions already exist in many other languages; it's just a matter of choosing one of them. See also https://stackoverflow.com/questions/38594277/how-to-compute-quantile-bins-in-pandas-when-there-are-ties-in-the-data/38596578?noredirect=1#comment64582414_38596578
See, for instance, a possible solution adopted in SAS:
https://support.sas.com/documentation/cdl/en/proc/61895/HTML/default/viewer.htm#a000146840.htm
@randomgambit it's not about a solution; this is quite straightforward. It's about someone actually implementing it. There are many, many issues. If someone wants to submit a PR, that moves things WAY faster.
I get it Jeff, no worries. Unfortunately I don't have the Python skills to help you get the job done; I can only give hints for the good Samaritan who wants to improve the current work in this direction.