I am trying to read an Excel file that has a column (called "raster") of numbers with a leading apostrophe, so that Excel interprets them as text; this is one common way to maintain leading zeros. The numbers always need to be 6 digits long. Additionally, some of the values in this column are missing.
The file I am using for this example can be found here.
import pandas as pd

df = pd.read_excel("test.xlsx",
                   names=["raster", "benennung"],
                   sheet_name="Tabelle1",
                   )
print(df)
print(df.dtypes)
This returns:
raster benennung
0 20099.0 Test
1 20099.0 Test 2
2 NaN Test 3
raster float64
benennung object
dtype: object
When I read it without any explicit datatype declaration, the column is read with dtype float64, as can be seen above, and as a result the leading zeros disappear. Next, when I use fillna to replace the NaN values with a string, the column becomes object dtype to accommodate this (as far as I understand).
df.raster = df.raster.fillna("999999")
print(df.raster)
This returns:
0 20099
1 20099
2 999999
Name: raster, dtype: object
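The state after fillna can be reproduced without the Excel file; here is a minimal sketch with a hand-built Series standing in for the raster column, showing that the dtype becomes object but only the missing slot actually holds a string:

```python
import numpy as np
import pandas as pd

# A float column with a missing value, standing in for the "raster" column
s = pd.Series([20099.0, 20099.0, np.nan], name="raster")

filled = s.fillna("999999")
print(filled.dtype)                        # object
print([type(v).__name__ for v in filled])  # ['float', 'float', 'str']
```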
Assuming that the column is now of type object (i.e. string), I go on to pad the values back to 6 digits:
print(df.raster.str.pad(6, side="left", fillchar="0"))
This returns:
0 NaN
1 NaN
2 999999
Name: raster, dtype: object
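This matches how the .str accessor behaves on object columns: elements that are not actual Python strings come back as NaN. A minimal sketch:

```python
import numpy as np
import pandas as pd

# After fillna, the column holds real floats plus one string
s = pd.Series([20099.0, np.nan]).fillna("999999")

# .str methods only operate on str elements; the floats come back as NaN
print(s.str.pad(6, side="left", fillchar="0"))
```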
This is an unexpected result for me.
I have intentionally not made the changes permanent (hence the print on the same line as pad).
This makes me realize that the numbers had not really been converted to strings when I replaced the NaNs with "999999", because when I try this:
print(df.raster.astype(str))
This returns a different representation of the column when it is explicitly converted to string (and I have tested that this works reliably as string later on too, e.g. with padding):
0 20099.0
1 20099.0
2 999999
Name: raster, dtype: object
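For completeness, one way to end up with clean six-digit strings from such a column — a sketch, not necessarily the best fix, and reusing the "999999" placeholder for missing values from above — is to format the non-missing floats as integers before padding, which also avoids the trailing ".0" that astype(str) produces:

```python
import numpy as np
import pandas as pd

s = pd.Series([20099.0, np.nan], name="raster")

# Format non-missing floats as zero-padded integers; substitute the
# "999999" placeholder (an assumption carried over from above) for NaN.
out = s.map(lambda v: "999999" if pd.isna(v) else f"{int(v):06d}")
print(out)
```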
Bottom line: I know I could have avoided this trouble by explicitly defining datatypes at the start, but since I forgot to do that and then ran into this strange behavior, I thought it was worth mentioning here. Whatever makes pandas better makes me happy, since I personally like working with pandas a lot.
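The explicit datatype declaration mentioned above would look like dtype={"raster": str}. Sketched here with read_csv and an in-memory buffer so the snippet runs without the attached file; read_excel accepts the same dtype keyword:

```python
import io
import pandas as pd

# Stand-in for the spreadsheet; read_excel("test.xlsx", dtype={"raster": str})
# would take the same keyword.
csv = "raster,benennung\n020099,Test\n020099,Test 2\n,Test 3\n"
df = pd.read_csv(io.StringIO(csv), dtype={"raster": str})
print(df.raster)  # leading zeros survive; the missing value stays NaN
```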
No comment on why the initial read_excel read in floats instead of strings; I don't know if that's behaving correctly or not (maybe @chris-b1 knows).
As for the rest, there are three problems, any of which would fix things:
df.fillna('999999') fills the missing values, but we use object dtype, which can contain a mix of strings and floats:

In [15]: df.fillna("999999")
Out[15]:
A
0 20099
1 999999
In [16]: df.values
Out[16]:
array([[ 20099.],
[ nan]])
We have issues for the first two. Not sure about the third. I'm not sure if Out[15] should print that like an integer or not.
Same issue as #11331, but that got closed so we can use this one.
When parsing CSV, unless told otherwise, we always try to parse as numeric - that behavior carried over to the Excel parser. I agree that by default it would make sense to respect the Excel metadata and interpret, e.g., '020099 as text.
@chris-b1 I think that would be a very helpful and sensible addition when reading Excel files.
I'd love to contribute (if I can manage to do that)...