Describe the bug
When converting a cudf DataFrame to a pandas DataFrame (using the to_pandas method), integer columns are converted to float when there is a null value.
Steps/Code to reproduce bug
import cudf
df = cudf.DataFrame({"a": [1, None, 3, 4]})
print("CuDF dtypes:")
print(df.dtypes)
print("\nPandas dtypes:")
print(df.to_pandas().dtypes)
Output
CuDF dtypes:
a int64
dtype: object
Pandas dtypes:
a float64
dtype: object
Expected behavior
I would expect the integer column dtype to be preserved.
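For reference, pandas' own nullable integer extension dtype (Int64) is one representation that can hold nulls without falling back to float; a minimal pandas-only sketch of what preserved behavior would look like:

```python
import pandas as pd

# pandas' nullable "Int64" extension dtype can hold missing values
# without being cast to float64:
pdf = pd.DataFrame({"a": pd.array([1, None, 3, 4], dtype="Int64")})
print(pdf["a"].dtype)         # Int64
print(pdf["a"].isna().sum())  # 1
```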
Environment details
**git***
Not inside a git repository
***OS Information***
DGX_NAME="DGX Server"
DGX_PRETTY_NAME="NVIDIA DGX Server"
DGX_SWBUILD_DATE="2020-03-04"
DGX_SWBUILD_VERSION="4.4.0"
DGX_COMMIT_ID="ee09ebc"
DGX_PLATFORM="DGX Server for DGX-1"
DGX_SERIAL_NUMBER="QTFCOU65200BF-R1"
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.4 LTS"
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
Linux dgx02 4.15.0-76-generic #86-Ubuntu SMP Fri Jan 17 17:24:28 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
***GPU Information***
Wed Oct 14 16:20:32 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64.00 Driver Version: 440.64.00 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:06:00.0 Off | 0 |
| N/A 32C P0 58W / 300W | 699MiB / 32510MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000000:07:00.0 Off | 0 |
| N/A 31C P0 42W / 300W | 12MiB / 32510MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla V100-SXM2... On | 00000000:0A:00.0 Off | 0 |
| N/A 29C P0 41W / 300W | 12MiB / 32510MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 Tesla V100-SXM2... On | 00000000:0B:00.0 Off | 0 |
| N/A 27C P0 42W / 300W | 12MiB / 32510MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 4 Tesla V100-SXM2... On | 00000000:85:00.0 Off | 0 |
| N/A 28C P0 41W / 300W | 12MiB / 32510MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 5 Tesla V100-SXM2... On | 00000000:86:00.0 Off | 0 |
| N/A 30C P0 41W / 300W | 12MiB / 32510MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 6 Tesla V100-SXM2... On | 00000000:89:00.0 Off | 0 |
| N/A 32C P0 40W / 300W | 12MiB / 32510MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 7 Tesla V100-SXM2... On | 00000000:8A:00.0 Off | 0 |
| N/A 28C P0 41W / 300W | 12MiB / 32510MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 76818 C ...mora/miniconda3/envs/cudf_16/bin/python 687MiB |
+-----------------------------------------------------------------------------+
***CPU***
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 2866.538
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 4390.01
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 51200K
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
***CMake***
/datasets/rzamora/miniconda3/envs/cudf_16/bin/cmake
cmake version 3.18.2
CMake suite maintained and supported by Kitware (kitware.com/cmake).
***g++***
/usr/bin/g++
g++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
***nvcc***
/usr/local/cuda/bin/nvcc
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_19:24:38_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89
***Python***
/datasets/rzamora/miniconda3/envs/cudf_16/bin/python
Python 3.7.8
***Environment Variables***
PATH : /home/nfs/rzamora/.vscode-server/bin/93c2f0fbf16c5a4b10e4d5f89737d9c2c25488a3/bin:/home/nfs/rzamora/bin:/home/nfs/rzamora/.local/bin:/datasets/rzamora/miniconda3/envs/cudf_16/bin:/datasets/rzamora/miniconda3/condabin:/usr/local/cuda/bin:/opt/bin:/home/nfs/rzamora/.vscode-server/bin/93c2f0fbf16c5a4b10e4d5f89737d9c2c25488a3/bin:/home/nfs/rzamora/bin:/home/nfs/rzamora/.local/bin:/usr/local/cuda/bin:/opt/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
LD_LIBRARY_PATH :
NUMBAPRO_NVVM :
NUMBAPRO_LIBDEVICE :
CONDA_PREFIX : /datasets/rzamora/miniconda3/envs/cudf_16
PYTHON_PATH :
***conda packages***
/datasets/rzamora/miniconda3/condabin/conda
# packages in environment at /datasets/rzamora/miniconda3/envs/cudf_16:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 1_gnu conda-forge
abseil-cpp 20200225.2 he1b5a44_2 conda-forge
alabaster 0.7.12 py_0 conda-forge
appdirs 1.4.3 py_1 conda-forge
argon2-cffi 20.1.0 py37h8f50634_1 conda-forge
arrow-cpp 1.0.1 py37h1234567_1_cuda conda-forge
arrow-cpp-proc 1.0.1 cuda conda-forge
attrs 20.2.0 pyh9f0ad1d_0 conda-forge
autoconf 2.69 pl526h14c3975_9 conda-forge
automake 1.16.2 pl526_1 conda-forge
aws-sdk-cpp 1.7.164 hba45d7a_2 conda-forge
babel 2.8.0 py_0 conda-forge
backcall 0.2.0 pyh9f0ad1d_0 conda-forge
backports 1.0 py_2 conda-forge
backports.functools_lru_cache 1.6.1 py_0 conda-forge
black 20.8b1 py_1 conda-forge
bleach 3.1.5 pyh9f0ad1d_0 conda-forge
bokeh 2.2.1 py37hc8dfbb8_0 conda-forge
boost-cpp 1.72.0 h7b93d67_2 conda-forge
brotli 1.0.9 he1b5a44_0 conda-forge
brotlipy 0.7.0 py37h8f50634_1000 conda-forge
bzip2 1.0.8 h516909a_3 conda-forge
c-ares 1.16.1 h516909a_3 conda-forge
ca-certificates 2020.6.20 hecda079_0 conda-forge
certifi 2020.6.20 py37he5f6b98_2 conda-forge
cffi 1.14.1 py37h2b28604_0 conda-forge
cfgv 3.2.0 py_0 conda-forge
chardet 3.0.4 py37hc8dfbb8_1006 conda-forge
clang 8.0.1 hc9558a2_2 conda-forge
clang-tools 8.0.1 hc9558a2_2 conda-forge
clangxx 8.0.1 2 conda-forge
click 7.1.2 pyh9f0ad1d_0 conda-forge
cloudpickle 1.6.0 py_0 conda-forge
cmake 3.18.2 h5c55442_0 conda-forge
cmake_setuptools 0.1.3 py_0 rapidsai
commonmark 0.9.1 py_0 conda-forge
cryptography 3.1 py37hb09aad4_0 conda-forge
cudatoolkit 10.2.89 h6bb024c_0 nvidia
cudf 0.16.0a0+1884.g7ee4dcca53 pypi_0 pypi
cudnn 7.6.5 cuda10.2_0
cupy 7.8.0 py37h940342b_0 conda-forge
curl 7.71.1 he644dc0_5 conda-forge
cython 0.29.21 py37h3340039_0 conda-forge
cytoolz 0.10.1 py37h516909a_0 conda-forge
dask 2.28.0+67.g836525a8 dev_0 <develop>
dask-cuda 0.16.0a0+93.gb6cfacc dev_0 <develop>
dask-cudf 0.16.0a0+1865.g798267bd5d pypi_0 pypi
dataclasses 0.7 py37_0 conda-forge
decorator 4.4.2 py_0 conda-forge
defusedxml 0.6.0 py_0 conda-forge
distlib 0.3.1 pyh9f0ad1d_0 conda-forge
distributed 2.29.0 dev_0 <develop>
dlpack 0.3 he1b5a44_1 conda-forge
docutils 0.16 py37hc8dfbb8_1 conda-forge
double-conversion 3.1.5 he1b5a44_2 conda-forge
editdistance 0.5.3 py37h3340039_1 conda-forge
entrypoints 0.3 py37hc8dfbb8_1001 conda-forge
expat 2.2.9 he1b5a44_2 conda-forge
fastavro 1.0.0.post1 py37h8f50634_1 conda-forge
fastparquet 0.4.1 py37h03ebfcd_0 conda-forge
fastrlock 0.5 py37h3340039_0 conda-forge
filelock 3.0.12 pyh9f0ad1d_0 conda-forge
flake8 3.8.3 py_1 conda-forge
flatbuffers 1.12.0 he1b5a44_0 conda-forge
freetype 2.10.2 he06d7ca_0 conda-forge
fsspec 0.8.1 py_0 conda-forge
future 0.18.2 py37hc8dfbb8_1 conda-forge
gflags 2.2.2 he1b5a44_1004 conda-forge
glog 0.4.0 h49b9bf7_3 conda-forge
gmp 6.2.0 he1b5a44_2 conda-forge
grpc-cpp 1.30.2 heedbac9_0 conda-forge
heapdict 1.0.1 py_0 conda-forge
hypothesis 5.28.0 py_0 conda-forge
icu 67.1 he1b5a44_0 conda-forge
identify 1.5.0 pyh9f0ad1d_0 conda-forge
idna 2.10 pyh9f0ad1d_0 conda-forge
imagesize 1.2.0 py_0 conda-forge
importlib-metadata 1.7.0 py37hc8dfbb8_0 conda-forge
importlib_metadata 1.7.0 0 conda-forge
iniconfig 1.0.1 pyh9f0ad1d_0 conda-forge
ipykernel 5.3.4 py37h43977f1_0 conda-forge
ipython 7.18.1 py37hc6149b9_0 conda-forge
ipython_genutils 0.2.0 py_1 conda-forge
isort 5.0.7 py37hc8dfbb8_0 conda-forge
jedi 0.17.2 py37hc8dfbb8_0 conda-forge
jinja2 2.11.2 pyh9f0ad1d_0 conda-forge
jpeg 9d h516909a_0 conda-forge
json5 0.9.4 pyh9f0ad1d_0 conda-forge
jsonschema 3.2.0 py37hc8dfbb8_1 conda-forge
jupyter_client 6.1.7 py_0 conda-forge
jupyter_core 4.6.3 py37hc8dfbb8_1 conda-forge
jupyterlab 2.2.7 py_0 conda-forge
jupyterlab_server 1.2.0 py_0 conda-forge
krb5 1.17.1 hfafb76e_3 conda-forge
lcms2 2.11 hbd6801e_0 conda-forge
ld_impl_linux-64 2.34 hc38a660_9 conda-forge
libblas 3.8.0 17_openblas conda-forge
libcblas 3.8.0 17_openblas conda-forge
libcurl 7.71.1 hcdd3856_5 conda-forge
libedit 3.1.20191231 he28a2e2_2 conda-forge
libev 4.33 h516909a_1 conda-forge
libevent 2.1.10 hcdb4288_2 conda-forge
libffi 3.2.1 he1b5a44_1007 conda-forge
libgcc-ng 9.3.0 h24d8f2e_16 conda-forge
libgfortran-ng 7.5.0 hdf63c60_16 conda-forge
libgomp 9.3.0 h24d8f2e_16 conda-forge
libhwloc 2.3.0 h3c4fd83_0 conda-forge
libiconv 1.16 h516909a_0 conda-forge
liblapack 3.8.0 17_openblas conda-forge
libllvm10 10.0.1 he513fc3_3 conda-forge
libllvm8 8.0.1 hc9558a2_0 conda-forge
libnghttp2 1.41.0 h8cfc5f6_2 conda-forge
libopenblas 0.3.10 pthreads_hb3c22a3_4 conda-forge
libpng 1.6.37 hed695b0_2 conda-forge
libprotobuf 3.12.4 h8b12597_0 conda-forge
librmm 0.16.0a200909 cuda10.2_g9d02c5b_365 rapidsai-nightly
libsodium 1.0.18 h516909a_0 conda-forge
libssh2 1.9.0 hab1572f_5 conda-forge
libstdcxx-ng 9.3.0 hdf63c60_16 conda-forge
libtiff 4.1.0 hc7e4089_6 conda-forge
libtool 2.4.6 h516909a_1005 conda-forge
libutf8proc 2.5.0 h516909a_2 conda-forge
libuv 1.39.0 h516909a_0 conda-forge
libwebp-base 1.1.0 h516909a_3 conda-forge
libxml2 2.9.10 h68273f3_2 conda-forge
llvmlite 0.34.0 py37h5202443_1 conda-forge
locket 0.2.0 py_2 conda-forge
lz4-c 1.9.2 he1b5a44_3 conda-forge
m4 1.4.18 h516909a_1001 conda-forge
make 4.3 h516909a_0 conda-forge
markdown 3.2.2 py_0 conda-forge
markupsafe 1.1.1 py37h8f50634_1 conda-forge
mccabe 0.6.1 py_1 conda-forge
mimesis 4.0.0 pyh9f0ad1d_0 conda-forge
mistune 0.8.4 py37h8f50634_1001 conda-forge
more-itertools 8.5.0 py_0 conda-forge
msgpack-python 1.0.0 py37h99015e2_1 conda-forge
mypy_extensions 0.4.3 py37hc8dfbb8_1 conda-forge
nbconvert 5.6.1 py37hc8dfbb8_1 conda-forge
nbformat 5.0.7 py_0 conda-forge
nbsphinx 0.7.1 pyh9f0ad1d_0 conda-forge
nccl 2.7.8.1 hc6a2c23_0 conda-forge
ncurses 6.2 he1b5a44_1 conda-forge
nodeenv 1.5.0 pyh9f0ad1d_0 conda-forge
notebook 6.1.3 py37hc8dfbb8_0 conda-forge
numba 0.51.2 py37h9fdb41a_0 conda-forge
numpy 1.19.1 py37h7ea13bd_2 conda-forge
numpydoc 1.1.0 pyh9f0ad1d_0 conda-forge
nvtabular 0.2.0 dev_0 <develop>
nvtx 0.2.1 py37h8f50634_2 conda-forge
olefile 0.46 py_0 conda-forge
openssl 1.1.1h h516909a_0 conda-forge
packaging 20.4 pyh9f0ad1d_0 conda-forge
pandas 1.1.2 py37h3340039_0 conda-forge
pandoc 1.19.2 0 conda-forge
pandocfilters 1.4.2 py_1 conda-forge
parquet-cpp 1.5.1 2 conda-forge
parso 0.7.1 pyh9f0ad1d_0 conda-forge
partd 1.1.0 py_0 conda-forge
pathspec 0.8.0 pyh9f0ad1d_0 conda-forge
perl 5.26.2 h516909a_1006 conda-forge
pexpect 4.8.0 py37hc8dfbb8_1 conda-forge
pickleshare 0.7.5 py37hc8dfbb8_1001 conda-forge
pillow 7.2.0 py37h718be6c_1 conda-forge
pip 20.2.3 py_0 conda-forge
pkg-config 0.29.2 h516909a_1006 conda-forge
pluggy 0.13.1 py37hc8dfbb8_2 conda-forge
pre-commit 2.7.1 py37hc8dfbb8_0 conda-forge
pre_commit 2.7.1 0 conda-forge
prometheus_client 0.8.0 pyh9f0ad1d_0 conda-forge
prompt-toolkit 3.0.7 py_0 conda-forge
psutil 5.7.2 py37h8f50634_0 conda-forge
ptyprocess 0.6.0 py_1001 conda-forge
py 1.9.0 pyh9f0ad1d_0 conda-forge
py-spy 0.3.3 pypi_0 pypi
pyarrow 1.0.1 py37h1234567_1_cuda conda-forge
pycodestyle 2.6.0 pyh9f0ad1d_0 conda-forge
pycparser 2.20 pyh9f0ad1d_2 conda-forge
pyflakes 2.2.0 pyh9f0ad1d_0 conda-forge
pygments 2.6.1 py_0 conda-forge
pynvml 8.0.4 pypi_0 pypi
pyopenssl 19.1.0 py_1 conda-forge
pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge
pyrsistent 0.17.0 py37h8f50634_0 conda-forge
pysocks 1.7.1 py37hc8dfbb8_1 conda-forge
pytest 6.1.0 py37hc8dfbb8_0 conda-forge
python 3.7.8 h6f2ec95_1_cpython conda-forge
python-dateutil 2.8.1 py_0 conda-forge
python-snappy 0.5.4 py37h7cfaab3_1 conda-forge
python_abi 3.7 1_cp37m conda-forge
pytz 2020.1 pyh9f0ad1d_0 conda-forge
pyyaml 5.3.1 py37h8f50634_0 conda-forge
pyzmq 19.0.2 py37hac76be4_0 conda-forge
rapidjson 1.1.0 he1b5a44_1002 conda-forge
re2 2020.08.01 he1b5a44_0 conda-forge
readline 8.0 he28a2e2_2 conda-forge
recommonmark 0.6.0 py_0 conda-forge
regex 2020.7.14 py37h8f50634_0 conda-forge
requests 2.24.0 pyh9f0ad1d_0 conda-forge
rhash 1.3.6 h14c3975_1001 conda-forge
rmm 0.16.0a200909 cuda_10.2_py37_g9d02c5b_365 rapidsai-nightly
send2trash 1.5.0 py_0 conda-forge
setuptools 49.6.0 py37hc8dfbb8_0 conda-forge
six 1.15.0 pyh9f0ad1d_0 conda-forge
snappy 1.1.8 he1b5a44_3 conda-forge
snowballstemmer 2.0.0 py_0 conda-forge
sortedcontainers 2.2.2 pyh9f0ad1d_0 conda-forge
spdlog 1.7.0 hc9558a2_2 conda-forge
sphinx 3.2.1 py_0 conda-forge
sphinx-copybutton 0.3.0 pyh9f0ad1d_0 conda-forge
sphinx-markdown-tables 0.0.14 pyh9f0ad1d_1 conda-forge
sphinx_rtd_theme 0.5.0 pyh9f0ad1d_0 conda-forge
sphinxcontrib-applehelp 1.0.2 py_0 conda-forge
sphinxcontrib-devhelp 1.0.2 py_0 conda-forge
sphinxcontrib-htmlhelp 1.0.3 py_0 conda-forge
sphinxcontrib-jsmath 1.0.1 py_0 conda-forge
sphinxcontrib-qthelp 1.0.3 py_0 conda-forge
sphinxcontrib-serializinghtml 1.1.4 py_0 conda-forge
sphinxcontrib-websupport 1.2.4 pyh9f0ad1d_0 conda-forge
sqlite 3.33.0 h4cf870e_0 conda-forge
streamz 0.5.5 pypi_0 pypi
tblib 1.6.0 py_0 conda-forge
terminado 0.8.3 py37hc8dfbb8_1 conda-forge
testpath 0.4.4 py_0 conda-forge
thrift 0.11.0 py37he1b5a44_1001 conda-forge
thrift-cpp 0.13.0 h62aa4f2_3 conda-forge
tk 8.6.10 hed695b0_0 conda-forge
toml 0.10.1 pyh9f0ad1d_0 conda-forge
toolz 0.10.0 py_0 conda-forge
tornado 6.0.4 py37h8f50634_1 conda-forge
traitlets 5.0.4 py_0 conda-forge
typed-ast 1.4.1 py37h516909a_0 conda-forge
typing_extensions 3.7.4.2 py_0 conda-forge
ucx-py 0.16.0a0+185.gc3dd8f9 dev_0 <develop>
urllib3 1.25.10 py_0 conda-forge
virtualenv 20.0.20 py37hc8dfbb8_1 conda-forge
wcwidth 0.2.5 pyh9f0ad1d_1 conda-forge
webencodings 0.5.1 py_1 conda-forge
wheel 0.35.1 pyh9f0ad1d_0 conda-forge
xz 5.2.5 h516909a_1 conda-forge
yaml 0.2.5 h516909a_0 conda-forge
zeromq 4.3.2 he1b5a44_3 conda-forge
zict 2.0.0 py_0 conda-forge
zipp 3.1.0 py_0 conda-forge
zlib 1.2.11 h516909a_1009 conda-forge
zstd 1.4.5 h6597ccf_2 conda-forge
Additional context
This bug is leading to incorrect categorical encodings in NVTabular in some cases - cc @rnyak @benfred
(problem is only with searchsorted-based encoding)
@rjzamora Will adding nullable parameter support to to_pandas help?
df.to_pandas(nullable=True)   # This will return pandas nullable dtypes
df.to_pandas(nullable=False)  # This will return pandas non-nullable dtypes (current behavior)
@rjzamora Will adding nullable parameter support to to_pandas help?
That seems reasonable to me - Could nullable=True be the default? Or is there a motivation to avoid this?
@rjzamora last release we played around with returning nullable pandas dtypes by default. We actually merged that change and let it propagate to the nightlies for a few days IIRC. Where it fell apart was when users tried to use cuDF along with other external python libraries (sklearn, xgboost etc) where those libraries expected numpy dtypes and didn't have the machinery in place to handle nullable pandas dtypes. When those workflows broke, we got some pushback from those users. In the end, the pandas team told us it was a little early to enforce nullable types by default, even on their end, although I think that's where they say they're going long term.
Just a little context around the float casting. Back in 0.14/0.15ish, we actually did return an int type even if the column had nulls. To make it work we filled the nulls with sys.min_int before dumping to pandas. This was in my opinion a rather dangerous approach since the user could end up basically with very quiet data corruption.
Since we really didn't want to be responsible for deciding how to fill nulls for the user, in the end, we decided to pass the responsibility for handling that situation off to arrow, and now cudf_obj.to_pandas() is basically implemented as cudf_obj.to_arrow().to_pandas(). The consequence of this of course is that our answer to this situation is their answer, which for now is float.
As an aside, not knowing much about NVTabular: is it set up to handle nullable pandas dtypes? They unfortunately don't work with the NumPy API, so if you did want to use them you might find yourself in a situation where your type-handling code needs separate branches for pd.Int64Dtype(), etc. This is actually the main issue with them everywhere right now. I might suggest hacking in an upcast to the nullable type before the NVTabular code in question runs and seeing if it works. If there's much type checking or many numpy API calls, it might actually not work.
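To illustrate the friction mentioned above (a pandas-only sketch, not NVTabular code):

```python
import pandas as pd

s = pd.Series([1, None, 3], dtype="Int64")  # nullable extension dtype
print(s.dtype)             # Int64
# Converting to numpy does not give an int64 array; pd.NA forces an
# object array, which breaks most numpy-dtype-based code paths:
print(s.to_numpy().dtype)  # object
```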
Oops - sorry didn't mean to close - reopened
Thanks for the context @galipremsagar and @brandon-b-miller - That is very useful information. Overall, it seems reasonable to avoid making nullable=True the default.
Some background: NVTabular is only using to_pandas/from_pandas to perform "preemptive" device-host spilling between certain groupby-aggregation tasks (related to categorical encoding). By doing this, we can (1) control spilling with or without a distributed cluster, and (2) concatenate dataframes with pandas before moving data back to the device. Therefore, concat is the only operation we ever perform with pandas before converting back to cudf. So, please do advise if it may be dangerous to rely on nullable types during this round-trip process.
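For what it's worth, plain pd.concat does preserve nullable extension dtypes across frames; a sketch of the host-side concat step, with hypothetical frames standing in for spilled partitions:

```python
import pandas as pd

# Two host-side frames standing in for spilled partitions (hypothetical data):
a = pd.DataFrame({"x": pd.array([1, None], dtype="Int64")})
b = pd.DataFrame({"x": pd.array([3, 4], dtype="Int64")})

out = pd.concat([a, b], ignore_index=True)
print(out["x"].dtype)  # Int64 (the nullable dtype survives the concat)
```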
cuDF should support construction from nullable pandas dtype objects cleanly so if all you need to do is roundtrip then it shouldn't affect anything. Only code like this is dangerous:
pandas_obj = cudf_obj.to_pandas(nullable=True)
code_with_numpy_dtype_api_calls(pandas_obj)
Some background: NVTabular is only using to_pandas/from_pandas to perform "preemptive" device-host spilling between certain groupby-aggregation tasks (related to categorical encoding). By doing this, we can (1) control spilling with or without a distributed cluster, and (2) concatenate dataframes with pandas before moving data back to the device. Therefore, concat is the only operation we ever perform with pandas before converting back to cudf. So, please do advise if it may be dangerous to rely on nullable types during this round-trip process.
Even with nullable type support this is very expensive as the nullmap data representation is different between Pandas and cuDF. cuDF uses a bitmask (bit per value indicating validity) whereas Pandas uses a boolmask (byte per value indicating validity).
I would suggest using pyarrow and pyarrow.concat_tables for this instead which mirrors our data layout and will likely be faster than Pandas as well. This can then go through our from_arrow libcudf machinery which gives us more opportunities for optimization.
I would suggest using pyarrow and pyarrow.concat_tables for this instead which mirrors our data layout and will likely be faster than Pandas as well. This can then go through our from_arrow libcudf machinery which gives us more opportunities for optimization.
Right - I was thinking the same thing. We were using pandas to leverage a dask dispatch for concat, but we can certainly be a bit more explicit and use arrow.
I might be missing something super obvious here, but it doesn't look to me like the DataFrame.to_pandas/Series.to_pandas methods take a nullable argument.
I don't see any reference to a nullable argument in DataFrame.to_pandas anyways:
https://github.com/rapidsai/cudf/blob/c6dfa6e3ed8cf898faed244146c4da7ffd5d7cc7/python/cudf/cudf/core/dataframe.py#L4834-L4873
(Also, I don't think **kwargs is used at all in that method, so it should probably be removed. **kwargs usage should be limited, imo; it makes figuring out what parameters a function takes harder than it should be.)
We had it in and removed it, but have discussed putting it back in as False by default. At the time it was tough to get it to work cleanly and I think we were holding out for a more comprehensive ecosystem wide solution to come into focus, in lieu of anyone really needing it. But if there's a use case we can theoretically support it again.
We had it in and removed it, but have discussed putting it back in as False by default. At the time it was tough to get it to work cleanly and I think we were holding out for a more comprehensive ecosystem wide solution to come into focus, in lieu of anyone really needing it. But if there's a use case we can theoretically support it again.
Ahh cool - glad to know I wasn't missing something obvious =).
For NVTabular, it sounds like we should move to spilling to pyarrow instead of using pandas, so I don't think we count as a use case here. I wouldn't re-add this param just for us.
For NVTabular, it sounds like we should move to spilling to pyarrow instead of using pandas, so I don't think we count as a use case here. I wouldn't re-add this param just for us.
I think I agree with you @benfred - I'll close this issue for now.
This issue will be addressed as part of #6614 where we are introducing a nullable parameter in to_pandas with default value as False.