I have an NVIDIA 1080 Ti card and am running Ubuntu 17.10 with CUDA 8 and cuDNN 6. After compiling darknet with GPU support enabled and running
./darknet detect cfg/yolo.cfg yolo.weights data/dog.jpg
I always get this:
layer filters size input output
0 CUDA Error: out of memory
darknet: ./src/cuda.c:36: check_error: Assertion `0' failed.
Aborted (core dumped)
Any ideas how to fix this?
I have faced this 'out of memory' error with Ubuntu 16.04 and a GTX 1060 6GB quite often and used the following workaround.
Thanks! Turns out a reboot of my computer solved it.
I have an Nvidia NVS 5200M, Windows 10 Pro 64-bit, and CUDA 8.0. When I run darknet\x64\darknet_web_cam_voc I get this error:
CUDA Error: invalid device function
CUDA Error: invalid device function: No error
What is the issue? Please reply.
I am also having this error.
I am running Darknet on Ubuntu 16.04 LTS with 8 GB of RAM and an i7 CPU.
My graphics card is not powerful:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111 Driver Version: 384.111 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro K600 Off | 00000000:01:00.0 On | N/A |
| 26% 55C P8 N/A / N/A | 194MiB / 973MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1194 G /usr/lib/xorg/Xorg 141MiB |
| 0 1711 G compiz 50MiB |
+-----------------------------------------------------------------------------+
I'm trying to train a dataset of 50,000 images, 256×256 in size, using Darknet19 448×448, adapting the cfg file to read smaller images and produce a 38-class output.
Is this error normal? Is it due to the lack of power in the GPU?
Could anyone give a clue?
Thank you all in advance.
I resized the images and set the batch and subdivision parameters to 64 & 64. Works fine; hope the output is good.
In *.cfg file reduce the batch value to 64 or 32 and subdivision to 2.
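For reference, both values live in the [net] section at the top of the cfg file. A sketch of the relevant lines (the width/height/channels values shown are illustrative, not a recommendation):

```ini
[net]
# batch = images per training iteration;
# subdivisions = how many chunks the batch is split into,
# so batch/subdivisions images sit on the GPU at once.
batch=32
subdivisions=2
width=416
height=416
channels=3
```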
@rzil Thanks man :+1:
@affian That worked for me, thank you, but the FPS decreased. Is there any documentation on how to configure the cfg file?
Sorry, I am a noob.
@shaikhibrahim951 that was useful. thanks!
On Ubuntu I found that the proprietary drivers work better than the open-source drivers regarding memory usage. When I use nvidia-driver-390 (proprietary) it works; when I use nvidia-driver-415 (open source) I get out-of-memory errors.
There is also a binary driver, but I haven't tried it yet.
In *.cfg file reduce the batch value to 64 or 32 and subdivision to 2.
That worked, thanks.
In *.cfg file reduce the batch value to 64 or 32 and subdivision to 2.
I only reduced the batch value to 32 and that was it. Anyway, thanks.
Same for me: I left the subdivisions alone and changed the batch to 32. This is on a 6 GB 980 Ti.
Loose testing shows the following:
yolov3.cfg, batch@64 (default), subdiv@16 (default)...out of memory at layer 90
yolov3.cfg, batch@32, subdiv@? ...out of memory at layer 3
yolov3.cfg, batch@32, subdiv@? ...out of memory at layer 16
yolov3.cfg, batch@32, subdiv@? ...out of memory at layer 90
yolov3.cfg, batch@32, subdiv@16 (default)...success, 0.53 s
yolov3.cfg, batch@64 (default), subdiv@? ...success, 0.23 s <--- seems optimal: some memory headroom and good speed/detection accuracy
yolov3.cfg, batch@64 (default), subdiv@? ...success, 0.24 s
(The subdiv values marked "?" were mangled by email obfuscation in the original comment.)
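The pattern in those tests follows from how Darknet uses these two values: the GPU holds batch/subdivisions images per forward pass, so raising subdivisions (or lowering batch) shrinks the memory footprint. A small sketch of the arithmetic, in Python just for illustration:

```python
# Effective per-pass GPU load in Darknet is batch/subdivisions images.
def images_on_gpu(batch, subdivisions):
    return batch // subdivisions

# Default yolov3.cfg: 64/16 = 4 images per pass.
print(images_on_gpu(64, 16))  # 4
# Halving batch halves the per-pass load: 32/16 = 2.
print(images_on_gpu(32, 16))  # 2
```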
I solved this issue by changing subdivisions from 16 to 64 in the yolo.cfg file.
@snowuyl It helped. Thanks
Turns out for me I just changed my subdivisions and batch to 64.
I resized the images and set batch and subdivision parameters to 64 & 64. Works fine, hope the output is good
Hey, could you let me know how we can resize the images?
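One way to batch-resize a folder of images is with Pillow. This is a generic sketch; the paths, target size, and function name are my own assumptions, not part of darknet:

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

def resize_images(src_dir, dst_dir, size=(416, 416)):
    """Resize every .jpg in src_dir and write it to dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.jpg"):
        with Image.open(path) as im:
            im.resize(size, Image.BILINEAR).save(dst / path.name)

# Hypothetical paths; point these at your own dataset.
resize_images("data/train", "data/train_416")
```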
Fixed the issue by changing the number of GPUs from 1 to 2 in the Makefile.