I pulled the latest version of the Docker container, and when I run it I get the error:
```
GDFError: CUDA ERROR. cudaErrorInsufficientDriver: CUDA driver version is insufficient for CUDA runtime version
```
I am using driver version 410, but I tried downgrading to 396 and still got the same error.
The old version of the container still works fine in the same configuration.
I do not have CUDA installed natively on the PCs, I use the one in the appropriate container.
Here is the relevant output from within the container. I am running it on a Dell Alienware laptop with a GTX 1070 GPU, where it was previously working fine. I have also tried other computers with different GPUs and get the same error.
I am running Ubuntu 18.04 Bionic
The command I used was:
```
docker run --runtime=nvidia --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 -v /home/john/Data/:/data/ -v /home/john/Source/Notebooks:/rapids/work rapidsai/rapidsai:latest
```
The CUDA version (within the container) is:
```
!nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.2, V9.2.148
```
The NVIDIA driver version (within the container) is:
```
!nvidia-smi
Sat Nov 3 22:08:44 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.73 Driver Version: 410.73 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1070 Off | 00000000:01:00.0 On | N/A |
| N/A 50C P8 10W / N/A | 321MiB / 8111MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
A few people have run into a similar issue. To determine whether there is a conflict when loading and using `libcuda.so` with cudf, could you please run

```
echo $LD_LIBRARY_PATH
```

If you see a path to /path/to/nvidia/library/stubs or similar, could you remove that entry and rerun the notebook?

For example, if you see

```
root@6067006e61c5:/# echo $LD_LIBRARY_PATH
/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64:/usr/local/cuda/lib64/stubs:
```

run

```
export LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64:
```
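Rather than retyping the path by hand, the stub entry can also be stripped programmatically. This is a minimal sketch, not part of the container; the starting `LD_LIBRARY_PATH` value below is illustrative, taken from the example output above:

```shell
# Illustrative starting value (matches the example output above);
# in a real container you would use the existing LD_LIBRARY_PATH.
LD_LIBRARY_PATH="/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64:/usr/local/cuda/lib64/stubs:"

# Split on ':', drop any entry containing "stubs", and rejoin.
# The stub libcuda.so exists for link-time use only and cannot talk
# to the driver, which is what surfaces as cudaErrorInsufficientDriver.
cleaned=$(printf '%s' "$LD_LIBRARY_PATH" | tr ':' '\n' | grep -v 'stubs' | paste -sd: -)
export LD_LIBRARY_PATH="$cleaned"
echo "$LD_LIBRARY_PATH"
# → /usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64
```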
Assuming this fixed your problem. Closing.
Apologies, my GitHub account was locked due to a hacking attempt. Thank you @mt-jones, that cured it.