$ docker info | grep Runtime
Runtimes: nvidia runc
Default Runtime: runc
$ nvidia-docker run --rm nvidia/cuda nvidia-smi
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:424: container init caused \"process_linux.go:407: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=all --compute --utility --require=cuda>=10.1 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=396,driver<397 brand=tesla,driver>=410,driver<411 --pid=13282 /var/lib/docker/aufs/mnt/3e046b735f37611942042c9afb359014b00b8cb804e833d7034bb5add05f406e]\\\\nnvidia-container-cli: initialization error: cuda error: unknown error\\\\n\\\"\"": unknown.
$ optirun nvidia-container-cli -k -d /dev/tty info
-- WARNING, the following logs are for debugging purposes only --
I0714 08:03:10.566960 13347 nvc.c:281] initializing library context (version=1.0.2, build=0000000000000000000000000000000000000000)
I0714 08:03:10.567165 13347 nvc.c:255] using root /
I0714 08:03:10.567196 13347 nvc.c:256] using ldcache /etc/ld.so.cache
I0714 08:03:10.567260 13347 nvc.c:257] using unprivileged user 1000:1000
W0714 08:03:10.569639 13348 nvc.c:186] failed to set inheritable capabilities
W0714 08:03:10.569790 13348 nvc.c:187] skipping kernel modules load due to failure
I0714 08:03:10.570677 13349 driver.c:133] starting driver service
I0714 08:03:10.625727 13347 driver.c:231] driver service terminated with signal 15
nvidia-container-cli: initialization error: cuda error: unknown error
$ uname -a
Linux xps 5.1.16-1-MANJARO #1 SMP PREEMPT Thu Jul 4 20:32:22 UTC 2019 x86_64 GNU/Linux
$ dmesg
[12579.818768] bbswitch: enabling discrete graphics
[12580.781447] nvidia-nvlink: Nvlink Core is being initialized, major device number 237
[12580.782551] nvidia 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=none,decodes=none:owns=none
[12580.782996] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 390.116 Sun Jan 27 07:21:36 PST 2019 (using threaded interrupts)
[12581.691364] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 390.116 Sun Jan 27 06:30:32 PST 2019
[12582.938261] nvidia-modeset: Unloading
[12582.962664] nvidia-nvlink: Unregistered the Nvlink Core, major device number 237
[12583.012895] bbswitch: disabling discrete graphics
[12583.029982] pci 0000:01:00.0: Refused to change power state, currently in D0
[12614.660571] docker0: port 1(veth089825f) entered blocking state
[12614.660580] docker0: port 1(veth089825f) entered disabled state
[12614.660796] device veth089825f entered promiscuous mode
[12614.663447] audit: type=1700 audit(1563091159.746:138): dev=veth089825f prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295
[12615.904779] eth0: renamed from vethce8aa49
[12615.925737] IPv6: ADDRCONF(NETDEV_CHANGE): veth089825f: link becomes ready
[12615.925831] docker0: port 1(veth089825f) entered blocking state
[12615.925839] docker0: port 1(veth089825f) entered forwarding state
[12616.516353] docker0: port 1(veth089825f) entered disabled state
[12616.516631] vethce8aa49: renamed from eth0
[12616.683160] docker0: port 1(veth089825f) entered disabled state
[12616.692399] device veth089825f left promiscuous mode
[12616.692443] docker0: port 1(veth089825f) entered disabled state
[12616.692487] audit: type=1700 audit(1563091161.766:139): dev=veth089825f prom=0 old_prom=256 auid=4294967295 uid=0 gid=0 ses=4294967295
[12704.324225] audit: type=1130 audit(1563091249.410:140): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[12714.337825] audit: type=1131 audit(1563091259.423:141): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[12768.536106] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[12768.545813] ata2.00: configured for UDMA/100
[12782.295549] docker0: port 1(vethc6b451c) entered blocking state
[12782.295586] docker0: port 1(vethc6b451c) entered disabled state
[12782.295734] device vethc6b451c entered promiscuous mode
[12782.295831] audit: type=1700 audit(1563091327.380:142): dev=vethc6b451c prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295
[12783.570531] eth0: renamed from veth503f302
[12783.590650] IPv6: ADDRCONF(NETDEV_CHANGE): vethc6b451c: link becomes ready
[12783.590725] docker0: port 1(vethc6b451c) entered blocking state
[12783.590730] docker0: port 1(vethc6b451c) entered forwarding state
[12784.181853] docker0: port 1(vethc6b451c) entered disabled state
[12784.181982] veth503f302: renamed from eth0
[12784.381671] docker0: port 1(vethc6b451c) entered disabled state
[12784.390339] device vethc6b451c left promiscuous mode
[12784.390379] docker0: port 1(vethc6b451c) entered disabled state
[12784.390480] audit: type=1700 audit(1563091329.467:143): dev=vethc6b451c prom=0 old_prom=256 auid=4294967295 uid=0 gid=0 ses=4294967295
[12842.678281] bbswitch: enabling discrete graphics
[12843.646277] nvidia-nvlink: Nvlink Core is being initialized, major device number 237
[12843.647035] nvidia 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=none,decodes=none:owns=none
[12843.647361] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 390.116 Sun Jan 27 07:21:36 PST 2019 (using threaded interrupts)
[12844.553340] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 390.116 Sun Jan 27 06:30:32 PST 2019
[12845.716392] nvidia-modeset: Unloading
[12845.742693] nvidia-nvlink: Unregistered the Nvlink Core, major device number 237
[12845.821806] bbswitch: disabling discrete graphics
[12845.838644] pci 0000:01:00.0: Refused to change power state, currently in D0
$ optirun nvidia-smi -a
==============NVSMI LOG==============
Timestamp : Sun Jul 14 13:36:13 2019
Driver Version : 390.116
Attached GPUs : 1
GPU 00000000:01:00.0
Product Name : GeForce GT 540M
Product Brand : GeForce
Display Mode : N/A
Display Active : N/A
Persistence Mode : Disabled
Accounting Mode : N/A
Accounting Mode Buffer Size : N/A
Driver Model
Current : N/A
Pending : N/A
Serial Number : N/A
GPU UUID : GPU-af2f39e5-2912-bf8d-f068-0ed77b486849
Minor Number : 0
VBIOS Version : 70.08.44.00.11
MultiGPU Board : N/A
Board ID : N/A
GPU Part Number : N/A
Inforom Version
Image Version : N/A
OEM Object : N/A
ECC Object : N/A
Power Management Object : N/A
GPU Operation Mode
Current : N/A
Pending : N/A
GPU Virtualization Mode
Virtualization mode : N/A
PCI
Bus : 0x01
Device : 0x00
Domain : 0x0000
Device Id : 0x0DF410DE
Bus Id : 00000000:01:00.0
Sub System Id : 0x050E1028
GPU Link Info
PCIe Generation
Max : N/A
Current : N/A
Link Width
Max : N/A
Current : N/A
Bridge Chip
Type : N/A
Firmware : N/A
Replays since reset : N/A
Tx Throughput : N/A
Rx Throughput : N/A
Fan Speed : N/A
Performance State : P0
Clocks Throttle Reasons : N/A
FB Memory Usage
Total : 1985 MiB
Used : 7 MiB
Free : 1978 MiB
BAR1 Memory Usage
Total : N/A
Used : N/A
Free : N/A
Compute Mode : Default
Utilization
Gpu : N/A
Memory : N/A
Encoder : N/A
Decoder : N/A
Encoder Stats
Active Sessions : N/A
Average FPS : N/A
Average Latency : N/A
Ecc Mode
Current : N/A
Pending : N/A
ECC Errors
Volatile
Single Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
CBU : N/A
Total : N/A
Double Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
CBU : N/A
Total : N/A
Aggregate
Single Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
CBU : N/A
Total : N/A
Double Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
CBU : N/A
Total : N/A
Retired Pages
Single Bit ECC : N/A
Double Bit ECC : N/A
Pending : N/A
Temperature
GPU Current Temp : 51 C
GPU Shutdown Temp : N/A
GPU Slowdown Temp : N/A
GPU Max Operating Temp : N/A
Memory Current Temp : N/A
Memory Max Operating Temp : N/A
Power Readings
Power Management : N/A
Power Draw : N/A
Power Limit : N/A
Default Power Limit : N/A
Enforced Power Limit : N/A
Min Power Limit : N/A
Max Power Limit : N/A
Clocks
Graphics : N/A
SM : N/A
Memory : N/A
Video : N/A
Applications Clocks
Graphics : N/A
Memory : N/A
Default Applications Clocks
Graphics : N/A
Memory : N/A
Max Clocks
Graphics : N/A
SM : N/A
Memory : N/A
Video : N/A
Max Customer Boost Clocks
Graphics : N/A
Clock Policy
Auto Boost : N/A
Auto Boost Default : N/A
Processes : N/A
$ docker version
Client:
Version: 18.09.7-ce
API version: 1.39
Go version: go1.12.6
Git commit: 2d0083d657
Built: Tue Jul 2 01:00:04 2019
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.09.7-ce
API version: 1.39 (minimum version 1.12)
Go version: go1.12.6
Git commit: 2d0083d657
Built: Tue Jul 2 00:59:35 2019
OS/Arch: linux/amd64
Experimental: false
$ pacman -Qs nvidia
local/bumblebee 3.2.1-22
NVIDIA Optimus support for Linux through Primus/VirtualGL
local/lib32-nvidia-390xx-utils 390.116-1
NVIDIA drivers utilities (32-bit)
local/libnvidia-container 1.0.2-1
NVIDIA container runtime library
local/libnvidia-container-tools 1.0.2-1
NVIDIA container runtime library
local/libvdpau 1.2-1
Nvidia VDPAU library
local/linux414-nvidia-390xx 390.116-24 (linux414-extramodules)
NVIDIA drivers for linux.
local/linux51-nvidia-390xx 390.116-12 (linux51-extramodules)
NVIDIA drivers for linux.
local/mhwd-nvidia 1:430.26-1
MHWD module-ids for nvidia 430.26
local/mhwd-nvidia-340xx 340.107-1
MHWD module-ids for nvidia 340.107
local/mhwd-nvidia-390xx 390.116-1
MHWD module-ids for nvidia 390.116
local/nvidia-390xx-utils 390.116-1
NVIDIA drivers utilities
local/nvidia-cg-toolkit 3.1-5
NVIDIA Cg libraries
local/nvidia-container-runtime 2.0.0+3.docker18.09.6-1
NVIDIA opencontainer runtime fork to expose GPU devices to containers.
local/nvidia-container-runtime-hook 1.4.0-1
NVIDIA container runtime hook
local/nvidia-docker 2.0.3-4
Build and run Docker containers leveraging NVIDIA GPUs
$ nvidia-container-cli --version
version: 1.0.2
build date: 2019-07-07T10:15+00:00
build revision: 0000000000000000000000000000000000000000
build compiler: gcc 9.1.0
build platform: x86_64
build flags: -D_GNU_SOURCE -D_FORTIFY_SOURCE=2 -D_FORTIFY_SOURCE=2 -DNDEBUG -std=gnu11 -O2 -g -fdata-sections -ffunction-sections -fstack-protector -fno-strict-aliasing -fvisibility=hidden -Wall -Wextra -Wcast-align -Wpointer-arith -Wmissing-prototypes -Wnonnull -Wwrite-strings -Wlogical-op -Wformat=2 -Wmissing-format-attribute -Winit-self -Wshadow -Wstrict-prototypes -Wunreachable-code -Wconversion -Wsign-conversion -Wno-unknown-warning-option -Wno-format-extra-args -Wno-gnu-alignof-expression -I/usr/include/tirpc -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong -fno-plt -Wl,-zrelro -Wl,-znow -Wl,-zdefs -Wl,--gc-sections -Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now
Can you run the nvidia-container-cli command with sudo?
sudo nvidia-container-cli -k -d /dev/tty info
[anks@xps ~]$ sudo nvidia-container-cli -k -d /dev/tty info
[sudo] password for anks:
-- WARNING, the following logs are for debugging purposes only --
I0721 10:47:16.896725 1603 nvc.c:281] initializing library context (version=1.0.2, build=0000000000000000000000000000000000000000)
I0721 10:47:16.897078 1603 nvc.c:255] using root /
I0721 10:47:16.897129 1603 nvc.c:256] using ldcache /etc/ld.so.cache
I0721 10:47:16.897184 1603 nvc.c:257] using unprivileged user 65534:65534
W0721 10:47:16.899422 1603 nvc.c:171] failed to detect NVIDIA devices
I0721 10:47:16.900071 1604 nvc.c:191] loading kernel module nvidia
E0721 10:47:16.902898 1604 nvc.c:193] could not load kernel module nvidia
I0721 10:47:16.902978 1604 nvc.c:203] loading kernel module nvidia_uvm
E0721 10:47:16.905133 1604 nvc.c:205] could not load kernel module nvidia_uvm
I0721 10:47:16.905180 1604 nvc.c:211] loading kernel module nvidia_modeset
E0721 10:47:16.907334 1604 nvc.c:213] could not load kernel module nvidia_modeset
I0721 10:47:16.908398 1605 driver.c:133] starting driver service
I0721 10:47:27.077992 1603 driver.c:231] driver service terminated with signal 15
nvidia-container-cli: initialization error: driver error: timed out
[anks@xps ~]$ sudo optirun nvidia-container-cli -k -d /dev/tty info
-- WARNING, the following logs are for debugging purposes only --
I0721 10:47:56.015547 1627 nvc.c:281] initializing library context (version=1.0.2, build=0000000000000000000000000000000000000000)
I0721 10:47:56.015873 1627 nvc.c:255] using root /
I0721 10:47:56.015938 1627 nvc.c:256] using ldcache /etc/ld.so.cache
I0721 10:47:56.015977 1627 nvc.c:257] using unprivileged user 65534:65534
I0721 10:47:56.018671 1628 nvc.c:191] loading kernel module nvidia
I0721 10:47:56.019271 1628 nvc.c:203] loading kernel module nvidia_uvm
I0721 10:47:56.257544 1628 nvc.c:211] loading kernel module nvidia_modeset
I0721 10:47:56.259026 1633 driver.c:133] starting driver service
I0721 10:47:56.970591 1627 nvc_info.c:434] requesting driver information with ''
I0721 10:47:57.011734 1627 nvc_info.c:148] selecting /usr/lib/tls/libnvidia-tls.so.390.116
I0721 10:47:57.064775 1627 nvc_info.c:150] skipping /usr/lib/libnvidia-tls.so.390.116
I0721 10:47:57.065220 1627 nvc_info.c:150] skipping /usr/lib/libnvidia-tls.so.390.116
I0721 10:47:57.211320 1627 nvc_info.c:148] selecting /usr/lib/libnvidia-ptxjitcompiler.so.390.116
I0721 10:47:57.334696 1627 nvc_info.c:150] skipping /usr/lib/libnvidia-opencl.so.430.26
I0721 10:47:57.335057 1627 nvc_info.c:150] skipping /usr/lib/libnvidia-opencl.so.430.26
I0721 10:47:57.335224 1627 nvc_info.c:148] selecting /usr/lib/libnvidia-ml.so.390.116
I0721 10:47:57.368886 1627 nvc_info.c:148] selecting /usr/lib/libnvidia-ifr.so.390.116
I0721 10:47:57.429287 1627 nvc_info.c:148] selecting /usr/lib/libnvidia-glsi.so.390.116
I0721 10:47:57.429689 1627 nvc_info.c:148] selecting /usr/lib/libnvidia-glcore.so.390.116
I0721 10:47:57.468298 1627 nvc_info.c:148] selecting /usr/lib/libnvidia-fbc.so.390.116
I0721 10:47:57.468697 1627 nvc_info.c:148] selecting /usr/lib/libnvidia-fatbinaryloader.so.390.116
I0721 10:47:57.520336 1627 nvc_info.c:148] selecting /usr/lib/libnvidia-encode.so.390.116
I0721 10:47:57.585143 1627 nvc_info.c:148] selecting /usr/lib/libnvidia-eglcore.so.390.116
I0721 10:47:57.682741 1627 nvc_info.c:150] skipping /usr/lib/libnvidia-compiler.so.430.26
I0721 10:47:57.683275 1627 nvc_info.c:150] skipping /usr/lib/libnvidia-compiler.so.430.26
I0721 10:47:57.751319 1627 nvc_info.c:148] selecting /usr/lib/libnvidia-cfg.so.390.116
I0721 10:47:57.884131 1627 nvc_info.c:148] selecting /usr/lib/libnvcuvid.so.390.116
I0721 10:47:57.887059 1627 nvc_info.c:148] selecting /usr/lib/libcuda.so.390.116
I0721 10:47:57.996433 1627 nvc_info.c:148] selecting /usr/lib/libGLX_nvidia.so.390.116
I0721 10:47:58.166121 1627 nvc_info.c:148] selecting /usr/lib/libGLESv2_nvidia.so.390.116
I0721 10:47:58.293038 1627 nvc_info.c:148] selecting /usr/lib/libGLESv1_CM_nvidia.so.390.116
I0721 10:47:58.394221 1627 nvc_info.c:148] selecting /usr/lib/libEGL_nvidia.so.390.116
I0721 10:47:59.050895 1627 nvc_info.c:148] selecting /usr/lib32/tls/libnvidia-tls.so.390.116
I0721 10:47:59.132344 1627 nvc_info.c:150] skipping /usr/lib32/libnvidia-tls.so.390.116
I0721 10:47:59.132867 1627 nvc_info.c:150] skipping /usr/lib32/libnvidia-tls.so.390.116
I0721 10:47:59.250487 1627 nvc_info.c:148] selecting /usr/lib32/libnvidia-ptxjitcompiler.so.390.116
I0721 10:47:59.298425 1627 nvc_info.c:148] selecting /usr/lib32/libnvidia-ml.so.390.116
I0721 10:47:59.331069 1627 nvc_info.c:148] selecting /usr/lib32/libnvidia-ifr.so.390.116
I0721 10:47:59.361289 1627 nvc_info.c:148] selecting /usr/lib32/libnvidia-glsi.so.390.116
I0721 10:47:59.383475 1627 nvc_info.c:148] selecting /usr/lib32/libnvidia-glcore.so.390.116
I0721 10:47:59.421817 1627 nvc_info.c:148] selecting /usr/lib32/libnvidia-fbc.so.390.116
I0721 10:47:59.461640 1627 nvc_info.c:148] selecting /usr/lib32/libnvidia-fatbinaryloader.so.390.116
I0721 10:47:59.513661 1627 nvc_info.c:148] selecting /usr/lib32/libnvidia-encode.so.390.116
I0721 10:47:59.545046 1627 nvc_info.c:148] selecting /usr/lib32/libnvidia-eglcore.so.390.116
I0721 10:47:59.567418 1627 nvc_info.c:148] selecting /usr/lib32/libnvidia-cfg.so.390.116
I0721 10:47:59.618789 1627 nvc_info.c:148] selecting /usr/lib32/libnvcuvid.so.390.116
I0721 10:47:59.791144 1627 nvc_info.c:148] selecting /usr/lib32/libcuda.so.390.116
I0721 10:47:59.952261 1627 nvc_info.c:148] selecting /usr/lib32/libGLX_nvidia.so.390.116
I0721 10:47:59.991204 1627 nvc_info.c:148] selecting /usr/lib32/libGLESv2_nvidia.so.390.116
I0721 10:48:00.065775 1627 nvc_info.c:148] selecting /usr/lib32/libGLESv1_CM_nvidia.so.390.116
I0721 10:48:00.100054 1627 nvc_info.c:148] selecting /usr/lib32/libEGL_nvidia.so.390.116
W0721 10:48:00.100494 1627 nvc_info.c:299] missing library libnvidia-opencl.so
W0721 10:48:00.100534 1627 nvc_info.c:299] missing library libnvidia-compiler.so
W0721 10:48:00.100554 1627 nvc_info.c:299] missing library libvdpau_nvidia.so
W0721 10:48:00.100571 1627 nvc_info.c:299] missing library libnvidia-opticalflow.so
W0721 10:48:00.100595 1627 nvc_info.c:303] missing compat32 library libnvidia-opencl.so
W0721 10:48:00.100651 1627 nvc_info.c:303] missing compat32 library libnvidia-compiler.so
W0721 10:48:00.100699 1627 nvc_info.c:303] missing compat32 library libvdpau_nvidia.so
W0721 10:48:00.100748 1627 nvc_info.c:303] missing compat32 library libnvidia-opticalflow.so
I0721 10:48:00.130390 1627 nvc_info.c:229] selecting /usr/bin/nvidia-smi
I0721 10:48:00.130561 1627 nvc_info.c:229] selecting /usr/bin/nvidia-debugdump
I0721 10:48:00.130664 1627 nvc_info.c:229] selecting /usr/bin/nvidia-persistenced
I0721 10:48:00.130763 1627 nvc_info.c:229] selecting /usr/bin/nvidia-cuda-mps-control
I0721 10:48:00.161181 1627 nvc_info.c:229] selecting /usr/bin/nvidia-cuda-mps-server
I0721 10:48:00.161363 1627 nvc_info.c:366] listing device /dev/nvidiactl
I0721 10:48:00.161410 1627 nvc_info.c:366] listing device /dev/nvidia-uvm
I0721 10:48:00.161439 1627 nvc_info.c:366] listing device /dev/nvidia-uvm-tools
I0721 10:48:00.161493 1627 nvc_info.c:366] listing device /dev/nvidia-modeset
W0721 10:48:00.161664 1627 nvc_info.c:274] missing ipc /var/run/nvidia-persistenced/socket
W0721 10:48:00.161754 1627 nvc_info.c:274] missing ipc /tmp/nvidia-mps
I0721 10:48:00.161801 1627 nvc_info.c:490] requesting device information with ''
I0721 10:48:00.309508 1627 nvc_info.c:520] listing device /dev/nvidia0 (GPU-af2f39e5-2912-bf8d-f068-0ed77b486849 at 00000000:01:00.0)
NVRM version: 390.116
CUDA version: 9.1
Device Index: 0
Device Minor: 0
Model: GeForce GT 540M
Brand: GeForce
GPU UUID: GPU-af2f39e5-2912-bf8d-f068-0ed77b486849
Bus Location: 00000000:01:00.0
Architecture: 2.1
I0721 10:48:00.309773 1627 nvc.c:318] shutting down library context
I0721 10:48:00.310996 1633 driver.c:192] terminating driver service
I0721 10:48:00.313694 1627 driver.c:231] driver service terminated successfully
Hi, did you solve this issue? I'm hitting the same error; could you tell me how you solved it?
It seems the NVIDIA container runtime is not able to find the GPU driver because, on this Optimus setup, the kernel modules and device nodes only exist while optirun has the discrete GPU powered on (compare the sudo runs above with and without optirun). We are looking at ways to circumvent optirun or to invoke it from the container runtime. Stay tuned, or post here if anyone figures out a workaround.
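A quick way to confirm this (a diagnostic sketch, assuming the Bumblebee/bbswitch setup shown in this thread):
$ ls /dev/nvidia*          # fails while bbswitch keeps the discrete GPU powered off
$ optirun ls /dev/nvidia*  # should list /dev/nvidia0, /dev/nvidiactl, /dev/nvidia-uvm, ...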
Hi, did you solve this issue? I'm hitting the same error; could you tell me how you solved it?
I ran into the same error, but after running these two commands, sudo apt-get update and sudo apt-get upgrade, the error disappeared.
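(Those are apt-get commands, so a Debian/Ubuntu host; the host in this thread is Manjaro, where the rough equivalent, sketched and untested here, would be:
$ sudo apt-get update && sudo apt-get upgrade   # Debian/Ubuntu
$ sudo pacman -Syu                              # Arch/Manjaro
Upgrading only helps if the error comes from an outdated driver or CUDA userland.)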
Thanks, but it doesn't seem to work for me.
Look at the requirements in the reported error: "... --require=cuda>=10.1 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=396,driver<397 brand=tesla,driver>=410,driver<411 ...". I think the CUDA/driver version on your host doesn't meet the requirements of the Docker image. "nvidia-docker run --rm nvidia/cuda nvidia-smi" pulls the latest nvidia/cuda image, which requires a correspondingly recent CUDA/driver on the host. So either upgrade your host's CUDA or pull an image version that matches it, such as "nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04" in place of "nvidia/cuda". My host has CUDA 9.0 with cuDNN 7; I hit the same problem, ran "nvidia-docker run --rm nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04 nvidia-smi" instead, and it worked. Refer to #685, and thanks.
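For reference, a sketch of that approach (the tag below is an example; the host in this thread runs driver 390.116, which reports CUDA 9.1 in the nvidia-container-cli output above, so a cuda:9.0 image fits):
$ optirun nvidia-smi --query-gpu=driver_version --format=csv,noheader
$ nvidia-docker run --rm nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04 nvidia-smi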
Use the following workaround, which wraps the runtime and hook so they always run under optirun:
runtime=$(which nvidia-container-runtime)
hook=$(which nvidia-container-runtime-hook)
hook=${hook:-$(which nvidia-container-toolkit)}
sudo mv ${runtime} ${runtime}.real
sudo mv ${hook} ${hook}.real
# use tee: a plain `sudo echo ... > file` fails, because the redirection
# is performed by the unprivileged shell, not by sudo
echo -e '#! /bin/sh\noptirun '${runtime}'.real $@' | sudo tee ${runtime} > /dev/null
echo -e '#! /bin/sh\noptirun '${hook}'.real $@' | sudo tee ${hook} > /dev/null
sudo chmod +x ${runtime} ${hook}
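With the wrappers in place, Docker launches the runtime and hook through optirun, so the kernel modules are loaded on demand. A verification sketch (image tag per the version note above):
$ nvidia-docker run --rm nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04 nvidia-smi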
@RenaudWasTaken Interesting solution. I cannot check it myself, but please quote $@ to preserve arguments containing whitespace:
echo -e '#! /bin/sh\noptirun '${runtime}'.real "$@"' | sudo tee ${runtime} > /dev/null
echo -e '#! /bin/sh\noptirun '${hook}'.real "$@"' | sudo tee ${hook} > /dev/null
@RenaudWasTaken I ran your commands, and afterwards commands that previously ran correctly started failing:
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
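That error suggests Docker can no longer locate the NVIDIA runtime or hook. If the wrappers caused it, they can be reverted to restore the original binaries (a sketch, assuming the moves from the workaround above):
runtime=$(which nvidia-container-runtime)
hook=$(which nvidia-container-runtime-hook)
sudo mv ${runtime}.real ${runtime}
sudo mv ${hook}.real ${hook}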