I have installed CUDA 9.0:
user:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176
Yet when I run the Docker container, I get an error:
user:~$ sudo docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
container_linux.go:265: starting container process caused "process_linux.go:368: container init caused \"process_linux.go:351: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=9.0 --pid=4781 /var/lib/docker/overlay2/7c70a2548b24e6186519dded1f09ac9b7cb30115195d7673cd081df76f8ab327/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: cuda >= 9.0\\\\n\\\"\""
docker: Error response from daemon: oci runtime error: container_linux.go:265: starting container process caused "process_linux.go:368: container init caused \"process_linux.go:351: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=9.0 --pid=4781 /var/lib/docker/overlay2/7c70a2548b24e6186519dded1f09ac9b7cb30115195d7673cd081df76f8ab327/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: cuda >= 9.0\\\\n\\\"\"".
See the documentation FAQs
Wouldn't improving the error message be an option here? It seems I'm not the first with this problem.
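For anyone hitting this on an older driver: a workaround that is sometimes suggested is to pin an image tag whose CUDA version your driver actually supports, rather than the bare nvidia/cuda tag. A minimal sketch, assuming a host driver that only supports CUDA 8.0 (the 8.0-runtime tag below is just an example):
# The bare "nvidia/cuda" tag tracks the newest CUDA release and injects a
# --require=cuda>=9.0 constraint; an explicitly pinned older tag avoids the
# "unsatisfied condition" check on hosts with older drivers.
sudo docker run --runtime=nvidia --rm nvidia/cuda:8.0-runtime nvidia-smi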
@adler-j did you solve this problem?
@simfeng update your driver.
@flx42 Yes! It works for me.
@flx42 My GPU drivers? Or something else? I can't resolve the problem.
Thanks
Yeah, it's the GPU drivers, but I'd still like to stress that a better error message would help a lot here.
Stuck with this issue too. Please reopen this and improve the error message.
Solved by updating CUDA and the drivers.
nvidia-docker can use multiple versions of CUDA, but I don't think it can use multiple versions of the NVIDIA driver. The CUDA version available inside nvidia-docker depends on the NVIDIA driver version installed on the host machine. Here is the dependency.
If you want to use CUDA 9.0, the NVIDIA driver version on your host must be at least 384.81.
You can check your driver version with the following command:
nvidia-smi
The version appears in the first line of the output.
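If you only want the version number itself, nvidia-smi can print it directly (these query flags are part of the standard nvidia-smi CLI):
# Prints just the driver version, e.g. 384.81
nvidia-smi --query-gpu=driver_version --format=csv,noheader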
You should either update the driver or choose a CUDA version that your driver supports.
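For the update path, a common approach on Ubuntu looks roughly like this (the PPA and package name are examples of one setup, not the only way; pick a driver package that is at least 384.81 for CUDA 9.0):
# Example only: install a newer driver from the graphics-drivers PPA
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-384
sudo reboot   # the new kernel module loads after a reboot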
If you use TensorFlow, the compatibility matrix between TensorFlow and CUDA versions is here.
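As an example of going the other direction, if the host is fixed at CUDA 9.0 you would pin a TensorFlow release built against it; to my knowledge the 1.5-1.12 releases targeted CUDA 9.0, e.g.:
# Example: pin a TensorFlow build that matches CUDA 9.0 on the host
pip install tensorflow-gpu==1.12.0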
I'm hitting the same error with NVIDIA driver 390.59 and CUDA 9.0 on the host machine, which seems to satisfy the requirement. Any suggestions?
What is a validated combination that is known to run successfully?
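One way to debug a case like this is to ask the container runtime itself what it detects; nvidia-container-cli ships an info subcommand for that:
# Shows the driver and CUDA versions the NVIDIA container runtime sees;
# if the CUDA version reported here is below 9.0, the prestart hook will
# reject images that require cuda>=9.0
nvidia-container-cli info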