Nvidia-docker: no such volume: nvidia_driver_352.79

Created on 20 Apr 2016 · 10 comments · Source: NVIDIA/nvidia-docker

After upgrading Docker to 1.11, a previously working nvidia-docker went into the "unsupported version" state. I followed the recommendation, downloaded the source zip from here, built the deb, and installed it. After that I get this:

nvidia-docker run --rm nvidia/cuda nvidia-smi
docker: Error response from daemon: no such volume: nvidia_driver_352.79.

Reinstalling with package purging didn't help.

upstream issue
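
For anyone triaging this, a quick way to see what Docker itself has recorded for the volume (these are the standard docker volume subcommands, available since Docker 1.9):

# is the driver volume listed at all?
docker volume ls | grep nvidia

# if it is listed, which driver and mountpoint does Docker report?
docker volume inspect nvidia_driver_352.79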


All 10 comments

I kept getting that error, too, but I assumed it was because I had re-installed my driver.
Some combination of these commands got me around the issue eventually.

sudo nvidia-docker volume setup
sudo reboot
nvidia-docker run --rm nvidia/cuda nvidia-smi

Yes, if you upgraded your driver you need to either (depending on your installation):

  • restart the plugin: sudo restart nvidia-docker
  • redo the volume setup: sudo nvidia-docker volume setup (a combined sketch of both follows below)

Otherwise, can you show me the output of:
cat /var/log/upstart/nvidia-docker.log
nvidia-docker volume ls | grep nvidia
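
For reference, a combined sketch of the two recovery options. This assumes an Upstart-managed plugin, as the restart syntax above suggests; on systemd hosts the equivalent would be sudo systemctl restart nvidia-docker:

# restart the plugin so it rediscovers the installed driver
sudo restart nvidia-docker

# recreate the driver volume for the new driver version
sudo nvidia-docker volume setup

# verify that a container can see the GPU again
nvidia-docker run --rm nvidia/cuda nvidia-smi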

service restart nvidia-docker
nvidia-docker volume setup
didn't help.
My logs are:
cat /var/log/upstart/nvidia-docker.log
/usr/bin/nvidia-docker-plugin | 2016/04/20 17:40:44 Received mount request for volume 'nvidia_driver_352.79'
/usr/bin/nvidia-docker-plugin | 2016/04/20 17:42:17 Successfully terminated
/usr/bin/nvidia-docker-plugin | 2016/04/20 17:42:18 Loading NVIDIA unified memory
/usr/bin/nvidia-docker-plugin | 2016/04/20 17:42:18 Loading NVIDIA management library
/usr/bin/nvidia-docker-plugin | 2016/04/20 17:42:28 Discovering GPU devices
/usr/bin/nvidia-docker-plugin | 2016/04/20 17:42:30 Provisioning volumes at /var/lib/nvidia-docker/volumes
/usr/bin/nvidia-docker-plugin | 2016/04/20 17:42:30 Serving plugin API at /var/lib/nvidia-docker
/usr/bin/nvidia-docker-plugin | 2016/04/20 17:42:30 Serving remote API at localhost:3476
/usr/bin/nvidia-docker-plugin | 2016/04/20 17:42:31 Received mount request for volume 'nvidia_driver_352.79'

Nothing shows up from:
nvidia-docker volume ls | grep nvidia

Weird, Docker should have sent a create request before the mount request.
Try restarting docker: sudo restart docker
Also, can you show me the output of ls -lR /var/lib/nvidia-docker/volumes

ls -lR /var/lib/nvidia-docker/volumes
ls: cannot access /var/lib/nvidia-docker/volumes: No such file or directory

Should I really restart Docker now? It's running a lot of regular containers, not nvidia-docker ones.

I've seen something similar in the past. Docker somehow thinks that the volume has been created and keeps this information in cache.
I don't know how to force it to query the plugin properly without restarting it.
This might be related to https://github.com/docker/docker/issues/20608

If you really want to work around it without restarting the daemon, this might do it:

# trick the plugin into thinking that the volume exists
sudo mkdir -p /var/lib/nvidia-docker/volumes/nvidia_driver/352.79
sudo chown -R nvidia-docker: /var/lib/nvidia-docker

# dry run
nvidia-docker run --rm nvidia/cuda

# now the nvidia volume should be listed
nvidia-docker volume ls

# if it works, delete the (fake) volume
nvidia-docker volume rm nvidia_driver_352.79

# test if it's fixed
nvidia-docker run --rm nvidia/cuda nvidia-smi
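
If restarting the daemon is acceptable, that is the simpler fix, since it drops Docker's cached volume state and forces it to query the plugin again. A minimal sketch, assuming an Upstart-managed Docker as above:

# restart the Docker daemon (note: on Docker 1.11 this stops running containers)
sudo restart docker

# check that the driver volume resolves again
nvidia-docker run --rm nvidia/cuda nvidia-smi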

Yep, it seems to be fixed indeed. Please add this solution to an FAQ of some kind.

Yes, we will probably document it somewhere; hopefully Docker will address this issue.

I have encountered a similar problem. When I run nvidia-docker run, I get:

Error response from daemon: create nvidia_driver_410.93: Post http://%2Fvar%2Flib%2Fnvidia-docker%2Fnvidia-docker.sock/VolumeDriver.Create: dial unix /var/lib/nvidia-docker/nvidia-docker.sock: connect: no such file or directory

But when I start a container that I created last year, I get:

Error response from daemon: OCI runtime create failed: flag provided but not defined: -console-socket: unknown
Error: failed to start containers: web

This is driving me crazy; I hope to get your help. Thanks. Below is my environment:
Docker version 18.09.3, build 774a1f4

Client:
Version: 18.09.3
API version: 1.39
Go version: go1.10.8
Git commit: 774a1f4
Built: Thu Feb 28 06:40:58 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 18.09.3
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: 774a1f4
Built: Thu Feb 28 05:59:55 2019
OS/Arch: linux/amd64
Experimental: false

runc version 1.0.0-rc6+dev
commit: 6635b4f0c6af3810594d2770f662f34ddc15b40d
spec: 1.0.1-dev
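
A hypothetical first check for the socket error above: that message means Docker is trying to reach the nvidia-docker 1.x plugin socket and nothing is listening there, so the plugin daemon is most likely not running. A minimal sketch, assuming a systemd-managed service named nvidia-docker:

# does the plugin socket exist?
ls -l /var/lib/nvidia-docker/nvidia-docker.sock

# is the plugin service running? restart it if not
sudo systemctl status nvidia-docker
sudo systemctl restart nvidia-docker

The second error (flag provided but not defined: -console-socket) is a different problem: it typically means the container was created under an older runc than the runtime now installed, and recreating the container is the usual way around it.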

