```python
import pandas as pd
import random

str_values = ['a', 'b']
data = [[random.choice(str_values), random.choice(str_values),
         random.randint(0, 10), random.randint(1, 20), random.random()]
        for _ in range(100)]
df = pd.DataFrame(data, columns=['s0', 's1', 'int0', 'int1', 'float0'])

group_index_true = df.groupby(['s0', 's1'], as_index=True)
print(group_index_true.mean())
print(group_index_true.std())

group_index_false = df.groupby(['s0', 's1'], as_index=False)
print(group_index_false.mean())
print(group_index_false.std())
```
The last line gives an `AttributeError`:
```
Traceback (most recent call last):
  File "pandasbug.py", line 17, in <module>
    print(group_index_false.std())
  File "/home/user/pyenv/lib/python3.5/site-packages/pandas/core/groupby.py", line 1083, in std
    return np.sqrt(self.var(ddof=ddof, **kwargs))
AttributeError: 'str' object has no attribute 'sqrt'
```
If `as_index` is set to `False` in the `groupby` operation, `np.sqrt` is applied to all the columns, including the columns that were used to group, which raises an error when at least one grouping column is not numeric.
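The root cause can be reproduced directly: applying the NumPy ufunc to a frame that still contains the string key columns fails. A minimal sketch (the exact exception type varies by NumPy/pandas version, so both are caught):

```python
import numpy as np
import pandas as pd

# A frame that still contains a string grouping column, as produced
# internally when as_index=False keeps the keys as regular columns.
frame = pd.DataFrame({'s0': ['a', 'b'], 'var0': [4.0, 9.0]})

try:
    np.sqrt(frame)  # the ufunc hits every column, including 's0'
    raised = False
except (AttributeError, TypeError):
    # older stacks raise AttributeError: 'str' object has no attribute 'sqrt';
    # newer NumPy wraps it in a TypeError
    raised = True
print(raised)
```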
This error is related to issues: #11300, #14547, #11507, #10355
```
          int0       int1    float0
s0 s1
a  a   4.440000  11.080000  0.498588
   b   4.352941  11.941176  0.557430
b  a   5.619048  10.190476  0.442739
   b   4.864865  11.648649  0.522814
          int0      int1    float0
s0 s1
a  a   3.176476  4.864155  0.274068
   b   3.936070  4.955686  0.260301
b  a   3.138092  6.384505  0.341268
   b   3.172470  6.051704  0.260665
  s0 s1      int0       int1    float0
0  a  a  4.440000  11.080000  0.498588
1  a  b  4.352941  11.941176  0.557430
2  b  a  5.619048  10.190476  0.442739
3  b  b  4.864865  11.648649  0.522814
  s0 s1      int0      int1    float0
0  a  a  3.176476  4.864155  0.274068
1  a  b  3.936070  4.955686  0.260301
2  b  a  3.138092  6.384505  0.341268
3  b  b  3.172470  6.051704  0.260665
```
Output of `pd.show_versions()`:

```
commit: None
python: 3.5.2.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.0-81-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: es_ES.UTF-8
LOCALE: es_ES.UTF-8
pandas: 0.20.2
pytest: None
pip: 9.0.1
setuptools: 36.0.1
Cython: 0.25.1
numpy: 1.13.0
scipy: 0.18.1
xarray: None
IPython: 5.1.0
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: 1.2.0
tables: None
numexpr: None
feather: None
matplotlib: 1.5.3
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: 0.7.3
lxml: None
bs4: 4.4.1
html5lib: 0.999
sqlalchemy: 1.1.5
pymysql: None
psycopg2: 2.6.2 (dt dec pq3 ext lo64)
jinja2: 2.8
s3fs: None
pandas_gbq: None
pandas_datareader: None
```
@indonoso As you already linked to those issues, I think this is a duplicate of https://github.com/pandas-dev/pandas/issues/10355 (the only difference here is that modifying the key columns results in an error because they are strings, but the cause is the same).
Therefore I am going to close this issue (and link to it in #10355). But if you are interested, you are very welcome to do a PR to fix this! There were already a few PRs that attempted it (but the submitters didn't respond anymore), so you can probably start from that work.
You probably know, but for reference, an easy work-around for now is not using `as_index=False` and calling `reset_index` afterwards:
```
In [184]: group_index_true.std().reset_index()
Out[184]:
  s0 s1      int0      int1    float0
0  a  a  2.961578  6.205124  0.278510
1  a  b  3.544489  5.848170  0.265557
2  b  a  3.020564  4.514527  0.212534
3  b  b  2.873397  6.408684  0.286971
```
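Another way to sidestep the bug, as a sketch using the column names from the report above: select only the numeric columns after grouping, so the ufunc never touches the string key columns, then `reset_index` to get the flat layout `as_index=False` would have given:

```python
import pandas as pd
import random

random.seed(0)  # seeded only to make this sketch reproducible
str_values = ['a', 'b']
data = [[random.choice(str_values), random.choice(str_values),
         random.randint(0, 10), random.randint(1, 20), random.random()]
        for _ in range(100)]
df = pd.DataFrame(data, columns=['s0', 's1', 'int0', 'int1', 'float0'])

# Restrict the aggregation to the numeric columns before calling std(),
# then flatten the MultiIndex back into regular columns.
out = (df.groupby(['s0', 's1'])[['int0', 'int1', 'float0']]
         .std()
         .reset_index())
print(out.columns.tolist())  # ['s0', 's1', 'int0', 'int1', 'float0']
```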
Thanks @jorisvandenbossche, your suggestion worked for one of my friends who was having this particular issue.
Hi,
I know this case is closed, but while looking for a solution to the same issue I found another workaround that works better for me:
I apply the groupby with `as_index=True`, then create a DataFrame with two columns: one for the groupby index and one for the groupby values. I then merge this DataFrame into my main DataFrame on the column I used as the index for the groupby.
I hope it's useful :)
Here is the code I used:
```python
x = df_temp[['key2', 'mean']].groupby(['key2'], as_index=True)['mean']
df_temp1 = pd.DataFrame({'key2': x.std(ddof=1).index,
                         'Precision': x.std(ddof=1).values})
df_temp = pd.merge(df_temp, df_temp1, on='key2')
```
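The build-a-frame-and-merge round trip above can be collapsed into a single `transform` call, which broadcasts the group-wise std back onto the original rows. A sketch with toy data; the column names `key2`, `mean`, and `Precision` are taken from the comment above, the values are invented for illustration:

```python
import pandas as pd

# Toy stand-in for df_temp; the real data is not shown in the thread.
df_temp = pd.DataFrame({'key2': ['a', 'a', 'b', 'b'],
                        'mean': [1.0, 3.0, 2.0, 6.0]})

# transform('std') computes std per group (ddof=1 by default) and
# aligns the result with the original index, so no merge is needed.
df_temp['Precision'] = df_temp.groupby('key2')['mean'].transform('std')
print(df_temp['Precision'].tolist())  # ≈ [1.414, 1.414, 2.828, 2.828]
```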