Pandas: Cannot tell usecols to ignore missing columns

Created on 4 Apr 2019 · 6 comments · Source: pandas-dev/pandas

Code Sample

import pandas as pd
# Where example.csv is:
# column1,column2
# 1, 2

pd.read_csv('example.csv', usecols=['column1', 'column2', 'column3'])

Problem description

When specifying usecols to reduce the amount of data loaded, read_csv raises an error if any of the requested columns are missing from the file. This is not always desired, especially when reading a large number of files whose columns vary.

There should be an option to suppress this error so that usecols selects whichever of the listed columns are present, without requiring all of them to exist.

Current Output

ValueError: Usecols do not match columns in file, columns expected but not found: ['column3']

Expected Output

No error thrown where only some of the usecols exist.
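
In the meantime, one way to approximate this behavior with the current API is to read only the header row first and intersect it with the wanted columns. A minimal sketch; read_csv_lenient is a hypothetical helper, not part of pandas:

import pandas as pd

def read_csv_lenient(path, wanted):
    # Hypothetical helper, not part of pandas: read only the header row,
    # then request whichever of the wanted columns actually exist.
    header = pd.read_csv(path, nrows=0).columns
    present = [c for c in wanted if c in header]
    return pd.read_csv(path, usecols=present)

# column3 is missing from example.csv, so it is simply skipped
df = read_csv_lenient('example.csv', ['column1', 'column2', 'column3'])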

Output of pd.show_versions()

INSTALLED VERSIONS

commit: None
python: 3.7.1.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 142 Stepping 10, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None

pandas: 0.23.4
pytest: 4.0.2
pip: 18.1
setuptools: 40.6.3
Cython: 0.29.2
numpy: 1.15.4
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: 7.2.0
sphinx: 1.8.2
patsy: 0.5.1
dateutil: 2.7.5
pytz: 2018.7
blosc: None
bottleneck: 1.2.1
tables: 3.4.4
numexpr: 2.6.8
feather: None
matplotlib: 3.0.2
openpyxl: 2.5.12
xlrd: 1.2.0
xlwt: 1.3.0
xlsxwriter: 1.1.2
lxml: 4.2.5
bs4: 4.6.3
html5lib: 1.0.1
sqlalchemy: 1.2.15
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None

Labels: IO CSV, Usage Question

All 6 comments

update the issue with a show_versions() and an actual example

> update the issue with a show_versions() and an actual example

Done

see the docs: http://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#csv-text-files

you can pass a callable to usecols, IIRC @gfyoung we have an example of this somewhere?

See the example here
http://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#column-and-index-locations-and-names

I do think a callable to usecols is the right way to handle this - read_csv already has a ton of params and this is relatively simple to customize exactly how you want.

In [2]: from io import StringIO

In [3]: pd.read_csv(StringIO("column1,column2\n1,2"),
                    usecols=lambda c: c in {'column1', 'column2', 'column3'})
Out[3]:
   column1  column2
0        1        2
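
For the multi-file scenario in the original report, the same callable can be reused as-is, since columns a file does not have are simply never offered to it. The file names below are placeholders:

import pandas as pd

wanted = {'column1', 'column2', 'column3'}
# Each file keeps only the columns it actually contains that are also in `wanted`
frames = [pd.read_csv(path, usecols=lambda c: c in wanted)
          for path in ['file_a.csv', 'file_b.csv']]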

A little late to the party here, but I agree with @chris-b1 on this. We have an example here:

https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#filtering-columns-usecols

Ah, validate_usecols is only applied when usecols_dtype == 'string'. Now that I think about it, that makes sense. Thank you.

It would have been helpful to read this somewhere - hopefully this now provides a reference.
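
To make the distinction concrete, a minimal sketch of the two code paths, as described in the thread above:

from io import StringIO
import pandas as pd

data = "column1,column2\n1,2"

# A list of names is validated against the header, so a missing name raises:
# ValueError: Usecols do not match columns in file, ...
# pd.read_csv(StringIO(data), usecols=['column1', 'column3'])

# A callable is evaluated once per column actually found in the file,
# so names that never appear are simply never checked.
pd.read_csv(StringIO(data), usecols=lambda c: c in {'column1', 'column3'})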
