After a reboot, this error appears when trying to restart a container:
Error response from daemon: linux runtime spec devices: error gathering device information while adding custom device "/dev/nvidia-uvm-tools": lstat /dev/nvidia-uvm-tools: no such file or directory
Error: failed to start containers:
Ubuntu: trusty
Driver Version: 352.93
CUDA: 7.5 (from apt)
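(Not from the original report, just a suggested first check: since the daemon is complaining that /dev/nvidia-uvm-tools is missing, it's worth verifying whether the NVIDIA device nodes were recreated after the reboot. Assuming the standard NVIDIA tooling is installed, something like:)

```
# List the NVIDIA device nodes; /dev/nvidia-uvm and /dev/nvidia-uvm-tools
# are created on demand, not automatically at boot
ls -l /dev/nvidia*

# Load the UVM kernel module and recreate the device nodes
# (-u creates the nvidia-uvm node, -c=0 creates /dev/nvidia0)
sudo nvidia-modprobe -u -c=0
```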
Did you change your driver version recently?
Yes, that was it. I'll close this.
I have the same issue.
How do I solve it?
Problem:
My situation has been that when I use AWS GPU instances with Docker, the driver breaks when I restart the machine. When this occurs I simply run the command to install the driver: sudo apt-get install nvidia-
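(The command above is truncated in the original comment, so the package name below is a placeholder. A sketch of that reinstall, assuming the package tracking the 352.93 driver on trusty is nvidia-352:)

```
# Reinstall the driver package (nvidia-352 is an assumed name based on
# the driver version 352.93 reported above; substitute your package)
sudo apt-get install nvidia-352

# If the package is still marked installed but the modules are gone,
# force a reinstall instead
sudo apt-get install --reinstall nvidia-352
```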
Solution (more of a recovery strategy):
-Create a new Docker container. If that doesn't work, follow the advice here: https://github.com/NVIDIA/nvidia-docker/issues/288
-From the host machine, issue the following command: sudo find / -name 'filesyouneed' 2>/dev/null
(you could probably replace / with /var/lib/docker/aufs/mnt/)
-Then copy every file you need from the host into the container: sudo docker cp /src [container name]:/dest (see the sketch below)
That should be it. It's hacky, but it works and you only lose about 15 minutes.
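(A consolidated sketch of the find-and-copy steps above; the 'libcuda*' pattern, paths, and the container name 'mycontainer' are placeholders, not values from this thread:)

```
# Locate the missing files on the host (pattern is a placeholder)
sudo find / -name 'libcuda*' 2>/dev/null

# Or narrow the search to the container filesystems on aufs
sudo find /var/lib/docker/aufs/mnt/ -name 'libcuda*' 2>/dev/null

# Copy each hit from the host into the running container
sudo docker cp /usr/lib/x86_64-linux-gnu/libcuda.so.352.93 mycontainer:/usr/lib/x86_64-linux-gnu/
```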