Pandas: read_csv: Rows with extra columns set data in index

Created on 24 Jan 2017 · 10 comments · Source: pandas-dev/pandas

Code Sample

from io import StringIO

import pandas as pd

# Notice that the data row has an extra trailing comma, i.e. a fourth (empty) field
data = """UNITID,XTUIT1,TUITION1 
100636,"R",0,
"""
f = StringIO(data)

df = pd.read_csv(f)
print(df)

Actual Output

       UNITID XTUIT1  TUITION1 
100636      R      0       NaN

Problem description

This behavior shifts the data relative to the column names, so the first column of data no longer matches the first column name, and it pushes data into the index, leading to very unexpected results.

Expected Output

   UNITID XTUIT1  TUITION1  Unnamed: 3
0  100636      R         0         NaN
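For what it's worth, something close to this expected output can be obtained today by supplying explicit column names, so the extra field gets its own column. A sketch (the "EXTRA" label is an assumption, not part of the original data):

from io import StringIO

import pandas as pd

data = """UNITID,XTUIT1,TUITION1
100636,"R",0,
"""

# Passing one more name than the file's header has fields keeps the data
# aligned with the columns; the empty trailing field becomes NaN in "EXTRA".
df = pd.read_csv(StringIO(data), header=0,
                 names=["UNITID", "XTUIT1", "TUITION1", "EXTRA"])
print(df)
#    UNITID XTUIT1  TUITION1  EXTRA
# 0  100636      R         0    NaN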

Output of pd.show_versions()

INSTALLED VERSIONS

commit: None
python: 3.6.0.final.0
python-bits: 64
OS: Darwin
OS-release: 16.3.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.19.2
nose: None
pip: 9.0.1
setuptools: 34.0.1
Cython: None
numpy: 1.12.0
scipy: None
statsmodels: None
xarray: None
IPython: 5.1.0
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: 2.0.0
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.9999999
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.9.4
boto: None
pandas_datareader: None

Labels: Docs, IO CSV, Usage Question

Most helpful comment

Hey @TomAugspurger, that works for a malformed file with a single trailing delimiter on each line (like the copy/paste example from the docs). But if a row has multiple extra columns (two, for example), read_csv will return a MultiIndex. (Reproduced in full, with traceback, in the comments below.)

All 10 comments

http://pandas-docs.github.io/pandas-docs-travis/io.html#index-columns-and-trailing-delimiters

So you want index_col=False

In [5]: data = """UNITID,XTUIT1,TUITION1 
   ...: 100636,"R",0,
   ...: """
   ...: f = StringIO(data)
   ...: 
   ...: df = pd.read_csv(f, index_col=False)

In [6]: df
Out[6]: 
   UNITID XTUIT1  TUITION1 
0  100636      R          0

IIRC this is to make it easier to read data from systems that don't label the "row names" column (like R).
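For context, here is a minimal sketch of the layout that default is accommodating: an R-style file whose header does not label the row-names column.

from io import StringIO

import pandas as pd

# R's write.csv() labels the rows but not the row-name column, so the
# header has one fewer field than each data row; pandas detects this and
# uses the first column as the index.
data = """a,b,c
row1,1,2,3
row2,4,5,6
"""
print(pd.read_csv(StringIO(data)))
#       a  b  c
# row1  1  2  3
# row2  4  5  6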

Any suggestions on how to document this better?

Hey @TomAugspurger, that works for a malformed file with a single trailing delimiter on each line (like the copy/paste example from the docs). But if a row has multiple extra columns (two, for example), read_csv will return a MultiIndex:

# Note the two extra columns
data = """UNITID,XTUIT1,TUITION1 
100636,"R",0,,
"""
f = StringIO(data)

df = pd.read_csv(f)
print(df)
          UNITID  XTUIT1  TUITION1 
100636 R       0     NaN       NaN



And with index_col=False, it raises an IndexError instead:

IndexError                                Traceback (most recent call last)
<ipython-input> in <module>()
9 f = StringIO(data)
10
---> 11 df = pd.read_csv(f, index_col=False)
12 print(df.to_clipboard())

/path/to/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
644 skip_blank_lines=skip_blank_lines)
645
--> 646 return _read(filepath_or_buffer, kwds)
647
648 parser_f.__name__ = name

/path/to/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
399 return parser
400
--> 401 data = parser.read()
402 parser.close()
403 return data

/path/to/pandas/io/parsers.py in read(self, nrows)
937 raise ValueError('skipfooter not supported for iteration')
938
--> 939 ret = self._engine.read(nrows)
940
941 if self.options.get('as_recarray'):

/path/to/pandas/io/parsers.py in read(self, nrows)
1506 def read(self, nrows=None):
1507 try:
-> 1508 data = self._reader.read(nrows)
1509 except StopIteration:
1510 if self._first_chunk:

pandas/parser.pyx in pandas.parser.TextReader.read (pandas/parser.c:10415)()

pandas/parser.pyx in pandas.parser.TextReader._read_low_memory (pandas/parser.c:10691)()

pandas/parser.pyx in pandas.parser.TextReader._read_rows (pandas/parser.c:11728)()

pandas/parser.pyx in pandas.parser.TextReader._convert_column_data (pandas/parser.c:12805)()

pandas/parser.pyx in pandas.parser.TextReader._get_column_name (pandas/parser.c:17512)()

IndexError: list index out of range

A workaround is to use usecols if you know how many columns there are (which you could also infer from the first row):

In [24]: pd.read_csv(StringIO(data), usecols=range(3))
Out[24]: 
   UNITID XTUIT1  TUITION1 
0  100636      R          0

Not sure if pd.read_csv(f, index_col=False) should work with multiple trailing commas. But the error message is indeed not very helpful.
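If you don't know the column count up front, it can be inferred from the header line before parsing. A rough sketch (using the csv module so a quoted header with embedded commas still splits correctly):

import csv
from io import StringIO

import pandas as pd

data = """UNITID,XTUIT1,TUITION1
100636,"R",0,,
"""

f = StringIO(data)
n_cols = len(next(csv.reader(f)))  # count the fields in the header row
f.seek(0)  # rewind so read_csv sees the whole file again
df = pd.read_csv(f, usecols=range(n_cols))
print(df)
#    UNITID XTUIT1  TUITION1
# 0  100636      R         0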

@jorisvandenbossche Thanks for the advice. Just using index_col=False solved my problem, though there's still a usability question: should read_csv() push data into the index when there are extra columns, or should it do something else?

I don't find the current behavior to be very user-friendly, but there may be a reason why it is/has to be this way that I don't know.

Here are some alternatives:

1) We error. While this keeps unexpected data out of the output, it would make the pandas API stricter about the data it accepts. I think we try to be as accommodating as possible and attempt to handle as much "malformed-ness" as we can.

2) We put the data in the columns. This could be troublesome, as we might introduce new data columns that the user would not expect. In the index, the extra data is easier to "tuck away," especially when writing back to a file, since we can specify index=False.

Now, does this mean the current behavior is correct? Not sure, but on the other hand, we do make this behavior very clear and even provide ways to circumvent it. However, if anyone has strong opinions about breaking the status quo, then by all means speak up! :smile:
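To illustrate the "tuck away" point above, the stray value that lands in the index can at least be dropped again on the way out (a small sketch):

from io import StringIO

import pandas as pd

df = pd.read_csv(StringIO("a,b\n1,2,3\n"))  # the extra field becomes the index
print(df.to_csv(index=False))  # index=False leaves the stray value behind
# a,b
# 2,3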

Personally (as a pandas user), I find this to be pretty unintuitive.
The documentation (http://pandas-docs.github.io/pandas-docs-travis/io.html#io-read-csv-table) also doesn't help make it very clear:

  1. First mention in the docs says "Error handling ... Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned."
  2. The bad lines section linked from the above just mentions usecols
  3. Discovering the "Index columns and trailing delimiters" section when thinking about how to read bad files is not straightforward (there are 5600 words in the documentation on read_csv!)

OK, it's easy to criticise :-/ Here are my suggestions:

  1. By default, reading CSV with unmatched column lengths would always raise an error, not just for lines after the first line, and that error would be consistent (e.g. ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4)
  2. Current behaviour can be invoked with a parameter (e.g. fill_headers=True) to make it easy to use with R data
  3. Completely rework the documentation. It's really hard to navigate even the tree view (compared to the other file reading sections). This would be a big task, but I think it's really worth considering (or maybe this is already considered in some other issue?). I'd have a go myself if there were any support for it.

This is an important area for improvement: read_csv is a key piece of functionality for many users, because being able to drop existing data sources straight into pandas is so powerful.

1) All for fixing documentation if you find it confusing. Clarity about how pandas works is always appreciated.

Unfortunately, I will have to disagree with the rest of your proposal on the following grounds:

1) Data in the real world is never nice by default, so making pandas strict by default doesn't seem to make much sense for accessibility purposes.

2) Our API for read_csv is pretty heavy at this point, and I don't think we would be in favor of adding yet another keyword argument to the signature.

3) Correct me if I'm wrong, but your proposal seems to be more of a band-aid than an actual fix because you are sweeping away the current behavior as "non-default" - the best way to address this issue IMO is to address these cases gradually and come to a consensus on the behavior that we want to have when addressing malformed-rows.

Mind you, any such change will most likely be a major API change, so we would need to tread lightly as we address these cases.

Yes, it has a lot of keywords, and another one doesn't help. But the current use of index_col=False to mean something completely different from index_col=None makes the API less accessible IMO, not more.

From the documentation I thought that the current behaviour was to make read_csv strict by default (error_bad_lines defaults to True, for example). It's really weird to me that too many fields in one situation causes an error, yet in another is handled silently. But I suppose that's the price other users pay so that R users can have it work out of the box :-( I still think that the error raised when index_col=False is specified should match the error_bad_lines one.
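The asymmetry is easy to reproduce side by side. A sketch (the exception is pd.errors.ParserError in current pandas; the 0.19-era parser raised CParserError):

from io import StringIO

import pandas as pd

# Extra field on the FIRST data row: silently swallowed into the index.
print(pd.read_csv(StringIO("a,b\n1,2,3\n")))
#    a  b
# 1  2  3

# Extra field on a LATER row: raises by default (error_bad_lines=True).
try:
    pd.read_csv(StringIO("a,b\n1,2\n3,4,5\n"))
except pd.errors.ParserError as err:
    print(err)
# Error tokenizing data. C error: Expected 2 fields in line 3, saw 3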

True, the documentation needs work, and some overhaul of the API will be needed eventually. Not sure that can be done in the near future, since the code is so ingrained at this point and would require careful deconstruction and reconstruction to re-establish functionality, but it's good to note for future development.

The reason for the discrepancy with regard to error_bad_lines is that we give more weight to the first line or two read from the CSV to help determine the column count. Thus, we are more lenient towards those lines (hence no error in the examples provided above).


The usecols workaround above actually works for me, thank you very much!

