Nvidia-docker: nvidia-smi test fails after fresh install

Created on 9 Apr 2016 · 3 comments · Source: NVIDIA/nvidia-docker

I have CUDA version 6.5 installed on Ubuntu 14.04:

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2014 NVIDIA Corporation
Built on Thu_Jul_17_21:41:27_CDT_2014
Cuda compilation tools, release 6.5, V6.5.12

However, when I run the nvidia-smi test, I get this confusing result:

$ nvidia-docker run --rm nvidia/cuda nvidia-smi 
nvidia-docker | 2016/04/09 00:07:59 Error: unsupported CUDA version: 7.5 < 6.5
Label: works as intended

All 3 comments

You are trying to run the CUDA 7.5 toolkit (the default CUDA image) on a driver that only supports up to CUDA 6.5.
The error tells you that your driver simply doesn't support it (see here for the minimum driver version required).

You can either upgrade your host NVIDIA drivers or use a 6.5 toolkit image:

nvidia-docker run --rm nvidia/cuda:6.5 nvidia-smi

Note that you don't need to upgrade your CUDA toolkit on the host! You can just update the driver but keep CUDA 6.5. Afterwards you will be able to run CUDA 6.5, 7.0 and 7.5 in containers and 6.5 on the host.
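The check behind that error is just a dotted-version comparison: the toolkit version of the image must not exceed the maximum CUDA version the driver supports. Here is a minimal sketch of that logic; the function names `version_le` and `check_image` are invented for illustration, and this is not nvidia-docker's actual code (which also reads the real driver version from the host).

```shell
#!/bin/sh
# Sketch of the toolkit-vs-driver version check (illustrative, not
# nvidia-docker's real implementation). Requires GNU sort for -V.

# version_le A B: succeeds if dotted version A <= B.
version_le() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# check_image TOOLKIT DRIVER: TOOLKIT is the image's CUDA version,
# DRIVER is the maximum CUDA version the host driver supports.
check_image() {
    if version_le "$1" "$2"; then
        echo "OK: toolkit $1 runs on driver supporting CUDA $2"
    else
        # Same operand order as the message in this issue, which is
        # what confused the reporter: required version on the left.
        echo "Error: unsupported CUDA version: $1 < $2"
    fi
}

check_image 7.5 6.5   # the reporter's situation: fails
check_image 6.5 6.5   # the 6.5 toolkit image: succeeds
```

So the `<` in the message is comparing the image's required CUDA version against what the driver supports, not the other way around.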

Oh ok, I see. I was just confused by the direction of the '<' in the error message.

Great, thanks!
