Pandas: astype(str) / astype_unicode: np.nan converted to "nan" (checknull, skipna)

Created on 17 Feb 2019 · 6 Comments · Source: pandas-dev/pandas

Code Sample

>>> import pandas as pd
>>> import numpy as np
>>> pd.Series(["foo",np.nan]).astype(str)

Output

0    foo
1    nan    # string "nan"
dtype: object

Expected output

0    foo
1    NaN    # np.nan
dtype: object

Problem description

Upon converting this Series I would expect np.nan to remain np.nan, but instead it is cast to the string "nan". Maybe I'm alone in this and that is actually the expected behaviour (I can't see a realistic use case for these "nan" strings, but well...).
I could figure out that, when running the code sample, the Series' values are processed through astype_unicode in pandas._libs.lib.

There is a skipna argument in astype_unicode, and I thought it would get passed along when using pd.Series.astype(str, skipna=True), but it does not. The docstring of pd.Series.astype does not mention skipna explicitly but does mention kwargs, so I tried the following while printing skipna inside astype_unicode:

pd.Series(["foo",np.nan]).astype(str, skipna = True)

skipna stayed at its default value of False inside astype_unicode, so it does not get passed along.

However, even when using astype_unicode directly, setting skipna to True will not change the output of the code sample anyway, because checknull does not seem to work properly.
You can test that by printing the result of checknull in lib.pyx, as I did here:
https://github.com/ThibTrip/pandas/commit/4a5c8397304e3026456d864fd5aeb7b8b9adca5f

Input

>>> from pandas._libs.lib import astype_unicode
>>> import numpy as np
>>> astype_unicode(np.array(["foo",np.nan]))

Output

foo is null? False
nan is null? False
array(['foo', 'nan'], dtype=object)

Expected output

foo is null? False
nan is null? True
array(['foo', NaN], dtype=object)
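
Side note on the test input above (possibly relevant to what checknull receives): without dtype=object, NumPy coerces the mixed list to a unicode dtype, so the second element is already the string "nan" by the time astype_unicode iterates over the array. Constructing the array with dtype=object keeps the real np.nan:

>>> np.array(["foo", np.nan])
array(['foo', 'nan'], dtype='<U3')
>>> np.array(["foo", np.nan], dtype=object)  # object dtype preserves np.nan
array(['foo', nan], dtype=object)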

I tried patching it (so not a proper fix) using the code below, but it didn't work.
It would also be nice to be able to do pd.Series.astype(str, skipna=True). Whether skipna should then default to True or False is another matter.

if not (skipna and checknull(arr_i)):
    if arr_i is not np.nan:
        arr_i = unicode(arr_i)
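
In the meantime, a user-level workaround sketch (not a fix for the underlying behaviour, just masking the originally missing positions back to NaN after the cast):

>>> ser = pd.Series(["foo", np.nan])
>>> ser.astype(str).mask(ser.isna())  # stringify, then restore NaN where the input was missing
0    foo
1    NaN
dtype: object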

All of this was done in a development version I installed today (see details below). The only alteration is the code in my commit linked above.
Sorry if this has been reported before; I searched in various ways and could not find anything except a similar issue with pd.read_excel and dtype=str:

https://github.com/nikoskaragiannakis/pandas/commit/694849da2654d832d5717adabf3fe4e1d5489d43

Also, I'm very sorry for the mess with the commits; I got a bit confused during my investigation (and I did not get enough sleep). Is it possible to delete all but my last commit? The other ones are irrelevant.

Cheers, Thibault

INSTALLED VERSIONS

commit: 4a5c8397304e3026456d864fd5aeb7b8b9adca5f
python: 3.7.2.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 69 Stepping 1, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None

pandas: 0.25.0.dev0+132.g4a5c83973.dirty
pytest: 4.2.0
pip: 19.0.1
setuptools: 40.7.3
Cython: 0.29.5
numpy: 1.15.4
scipy: 1.2.0
pyarrow: 0.11.1
xarray: 0.11.0
IPython: 7.2.0
sphinx: 1.8.4
patsy: 0.5.1
dateutil: 2.7.5
pytz: 2018.9
blosc: None
bottleneck: 1.2.1
tables: 3.4.4
numexpr: 2.6.9
feather: None
matplotlib: 3.0.2
openpyxl: 2.6.0
xlrd: 1.2.0
xlwt: 1.3.0
xlsxwriter: 1.1.2
lxml.etree: 4.3.1
bs4: 4.7.1
html5lib: 1.0.1
sqlalchemy: 1.2.17
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: 0.2.0
fastparquet: 0.2.1
pandas_gbq: None
pandas_datareader: None
gcsfs: None

Labels: Bug, Missing-data, Strings

All 6 comments

cc @jreback: Besides the Excel issue listed in the description, this seems to ring a bell...

I ran the code recently. Passing skipna=True as a kwarg works. Should we add skipna=True to the intermediate function call?

In [9]: pd.__version__
Out[9]: '0.25.0+179.gc3d5f227f'

In [10]: ser = pd.Series(['foo', np.nan])

In [11]: ser
Out[11]: 
0    foo
1    NaN
dtype: object

In [12]: ser.astype(str, skipna=True)
Out[12]: 
0    foo
1    NaN
dtype: object

In [14]: np.isnan(ser.astype(str, skipna=True)[1])
Out[14]: True

This treatment of None/np.nan is quite unexpected.

I have found that this issue propagates into the output of DataFrames to CSV / Excel / clipboard, and into boolean evaluation.

The logical behaviour for astype(str) here would be to skip all NA types (None, np.nan, etc.) by default.

As an example:

pd.__version__
'0.24.2'
  • input as np.nan:
srs = pd.Series([np.nan,np.nan,5])
srs
srs.any()
pd.isna(srs)
srs.to_clipboard()
  • output as nan:
0    NaN
1    NaN
2    5.0
dtype: float64
True
0     True
1     True
2    False
dtype: bool
  • clipboard output as nan:
    | Index | Series_Out |
    |------- |------------ |
    | 0 | |
    | 1 | |
    | 2 | 5 |

  • input after astype(str) cast:

srs = srs.astype(str,skipna=True)
srs
srs.any()
pd.isna(srs)
srs.to_clipboard()
  • output after astype(str) cast:
0    nan
1    nan
2    5.0
dtype: object
'nan'
0    False
1    False
2    False
dtype: bool
  • clipboard output after astype(str) cast:
    | Index | Series_Out |
    |------- |------------ |
    | 0 | nan |
    | 1 | nan |
    | 2 | 5 |

> The logical behaviour for astype(str) here would be to skip all NA types (None, np.nan, etc.) by default.

pandas matches the behaviour of numpy here:

>>> import numpy as np
>>>
>>> np.__version__
'1.18.1'
>>>
>>> import pandas as pd
>>>
>>> pd.__version__
'1.1.0.dev0+1008.g60b0e9fbc'
>>>
>>> arr = np.array([None, np.nan])
>>> arr
array([None, nan], dtype=object)
>>>
>>> arr.astype(str)
array(['None', 'nan'], dtype='<U4')
>>>
>>> pd.Series(arr).astype(str).apply(type)
0    <class 'str'>
1    <class 'str'>
dtype: object
>>>
>>> arr = np.array([1, 2, np.nan], dtype="float")
>>> arr
array([ 1.,  2., nan])
>>>
>>> arr.astype(str)
array(['1.0', '2.0', 'nan'], dtype='<U32')
>>>
>>> pd.Series(arr).astype(str).apply(type)
0    <class 'str'>
1    <class 'str'>
2    <class 'str'>
dtype: object
>>>
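
For what it's worth, the nullable string dtype (available since pandas 1.0) does preserve missing values instead of stringifying them; a small sketch:

>>> pd.Series([None, np.nan]).astype("string")
0    <NA>
1    <NA>
dtype: string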

So, is this considered a bug or not? I now have to work around it in my current code. It is annoying because, when concatenating many columns together into one string, e.g.

(ipv4_banners_pd["ssh_banner"].astype(str)
     .str.cat(ipv4_banners_pd["telnet_banner"], sep='_', na_rep="")
     .astype(str)
     .str.cat(ipv4_banners_pd["snmp_banner"], sep='_', na_rep=""))

the NaNs in the latter two columns are converted to "" by the na_rep parameter, but they can't be converted in the first column, which seems unintuitive.
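
A possible workaround sketch for this case (assuming the banner columns are already object/string dtype, so the leading astype(str) can be dropped): pass the other columns as a list, so that na_rep covers missing values in every column, including the calling Series:

combined = ipv4_banners_pd["ssh_banner"].str.cat(
    [ipv4_banners_pd["telnet_banner"], ipv4_banners_pd["snmp_banner"]],
    sep="_",
    na_rep="",  # replaces NaN in all three columns before concatenation
)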

Yes, this is an open bug.
There were several attempts to patch it; see the PR refs.
