Pandas: to_csv() got an unexpected keyword argument

Created on 28 Apr 2017 · 5 comments · Source: pandas-dev/pandas

df.to_csv('transactions.x', header=False, doublequote=False)

While I am trying to use some of the optional parameters of the DataFrame to_csv function, it throws a TypeError, such as: TypeError: to_csv() got an unexpected keyword argument 'doublequote'

My pandas version is 0.19.2 (checked with print(pd.__version__)), and I am using Python 3.5.

The official documentation below is for 0.19.2 and states that these parameters are optional, yet I still get the TypeError. http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html

Do you guys have any idea about it?

Thank you.

Output of pd.show_versions()


pandas: 0.19.2
nose: None
pip: 9.0.1
setuptools: 35.0.1
Cython: None
numpy: 1.12.1
scipy: 0.19.0
statsmodels: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: 1.2.0
tables: 3.4.2
numexpr: 2.6.2
matplotlib: None
openpyxl: None
xlrd: 1.0.0
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.1.9
pymysql: None
psycopg2: None
jinja2: None
boto: None
pandas_datareader: None

Usage Question

All 5 comments

Would df happen to be a Series instead of a DataFrame? Their writers aren't 100% equivalent.
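A minimal sketch of what that difference looks like on 0.19.x (the file and column names here are placeholders, not from the thread):

    import pandas as pd

    df = pd.DataFrame({'Items': ['a', 'b', 'c']})

    # DataFrame.to_csv accepts the full set of CSV formatting options.
    df.to_csv('out.csv', header=False, doublequote=False)

    # Selecting a single column returns a Series; in 0.19.x its to_csv did not
    # accept doublequote, which is what raises the TypeError reported above.
    s = df['Items']
    s.to_csv('out.csv', header=False, doublequote=False)  # TypeError on 0.19.2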

Actually,
after I execute the following command, df = pd.read_sql(......, connection),
I can call df.to_csv('transactions.x', header=False, doublequote=False) successfully, because the result is a DataFrame object,
but when I continue to do something with the df object, such as
df = df.groupby(['Transactions'])['Items'].apply(','.join), the command df.to_csv('transactions.x', header=False, doublequote=False) no longer works. So, how do I cast the result of the apply back to a DataFrame in order to use these optional parameters?
Thank you.

Either make your slice ['Items'] return a DataFrame by using [['Items']], or convert the Series to a DataFrame with ['Items'].to_frame().
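A quick sketch of the two options (column names are illustrative):

    import pandas as pd

    df = pd.DataFrame({'Transactions': [1, 1, 2], 'Items': ['a', 'b', 'c']})

    type(df['Items'])             # Series: single-bracket selection
    type(df[['Items']])           # DataFrame: double-bracket selection keeps a frame
    type(df['Items'].to_frame())  # DataFrame: Series converted back with to_frame()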

Typing the command df = df.groupby(['Transactions'])[['Items']].apply(','.join) throws the same error. On the other hand, if I use df = df.groupby(['Transactions'])[['Items']].apply(','.join).to_frame(), it does not throw an error, but the output shows "Items" instead of the real items such as a, b, c, ....
If I use the last option, df = df.groupby(['Transactions'])['Items'].apply(','.join).to_frame(), it throws the following error: _csv.Error: need to escape, but no escapechar set

After using the command df = df.groupby(['Transactions'])['Items'].apply(','.join), the DataFrame becomes a Series.

In order to cast the Series back to a DataFrame, the command df = df.groupby(['Transactions'])['Items'].apply(','.join).to_frame() should be used instead.

Finally, to export it as a CSV without quoting while avoiding the escape-character error, you need to end up with the following command: df.to_csv('transactions.x', header=False, quoting=csv.QUOTE_NONE, escapechar=' ') # or whatever escapechar you prefer.

Hopefully this helps everyone. Thanks.
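For reference, a complete sketch of the workaround described above (the data below is made up to stand in for the pd.read_sql result; the column names follow the thread):

    import csv
    import pandas as pd

    # Toy stand-in for the real query result.
    df = pd.DataFrame({'Transactions': [1, 1, 2, 2, 2],
                       'Items': ['a', 'b', 'c', 'd', 'e']})

    # groupby + apply(','.join) collapses each transaction's items into one
    # string, but the result is a Series ...
    grouped = df.groupby(['Transactions'])['Items'].apply(','.join)

    # ... so convert it back to a DataFrame before using the extra to_csv options.
    grouped = grouped.to_frame()

    # Unquoted output; an escapechar is required because the joined values
    # contain the delimiter.
    grouped.to_csv('transactions.x', header=False,
                   quoting=csv.QUOTE_NONE, escapechar=' ')

Newer pandas releases align Series.to_csv with DataFrame.to_csv, so on those versions the to_frame() step is no longer strictly required.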
