Incubator-mxnet: Memory builds up when creating size-zero NDArray in a loop

Created on 7 Mar 2019 · 6 comments · Source: apache/incubator-mxnet


Description

Memory builds up when creating a size-zero NDArray in a loop.

Environment info (Required)

----------Python Info----------
('Version      :', '2.7.15')
('Compiler     :', 'GCC 7.2.0')
('Build        :', ('default', 'May  1 2018 23:32:55'))
('Arch         :', ('64bit', ''))
------------Pip Info-----------
('Version      :', '10.0.1')
('Directory    :', '/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/pip')
----------MXNet Info-----------
('Version      :', '1.4.0')
('Directory    :', '/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/mxnet')
('Commit Hash   :', 'a03d59ed867ba334d78d61246a1090cd1868f5da')
----------System Info----------
('Platform     :', 'Linux-4.4.0-1075-aws-x86_64-with-debian-stretch-sid')
('system       :', 'Linux')
('node         :', 'ip-172-31-4-52')
('release      :', '4.4.0-1075-aws')
('version      :', '#85-Ubuntu SMP Thu Jan 17 17:15:12 UTC 2019')
----------Hardware Info----------
('machine      :', 'x86_64')
('processor    :', 'x86_64')
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    16
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping:              1
CPU MHz:               2699.984
CPU max MHz:           3000.0000
CPU min MHz:           1200.0000
BogoMIPS:              4600.09
Hypervisor vendor:     Xen
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              46080K
NUMA node0 CPU(s):     0-31
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single kaiser fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0021 sec, LOAD: 0.6245 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0071 sec, LOAD: 0.3581 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0265 sec, LOAD: 0.0987 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0080 sec, LOAD: 0.0543 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1285 sec, LOAD: 0.1622 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.3537 sec, LOAD: 0.3427 sec.

Package used (Python/R/Scala/Julia): Python

Error Message:

If you run watch -n5 nvidia-smi, you can observe memory usage growing by about 2 MB every few seconds.
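
For reference, the growth can also be logged programmatically rather than watched in a terminal. The sketch below is not part of the original report; it polls nvidia-smi once per second and assumes nvidia-smi is on the PATH, using its standard CSV query flags.

import subprocess
import time

while True:
    # Query used GPU memory in MiB; one line is printed per GPU.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"])
    print("GPU memory used (MiB): " + out.decode().strip())
    time.sleep(1)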

Minimum reproducible example

import mxnet as mx
import time

count = 0
while True:
    # Allocate a size-zero NDArray on the GPU on every iteration.
    a = mx.nd.array([], ctx=mx.gpu(0))
    # asnumpy() blocks until the asynchronous engine has finished,
    # so each allocation is fully materialized before the next one.
    a.asnumpy()
    time.sleep(0.01)
    count += 1
    if count % 1000 == 0:
        print(count)

Steps to reproduce


  1. Save the above code in a file called leak.py
  2. Run python leak.py
  3. In another terminal, run watch -n5 nvidia-smi and observe the memory usage grow.

What have you tried to solve it?

  1. Creating a non-size-zero NDArray (e.g. mx.nd.array([1], ctx=mx.gpu(1))) in the loop shows no memory build-up, but the issue remains with a size-zero NDArray; see the sketch below.
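
One user-side mitigation, sketched below as an illustration rather than a maintainer-provided fix, is to avoid allocating size-zero NDArrays on the device at all by short-circuiting on empty input (the to_ndarray helper and its None return value are arbitrary choices for this sketch):

import mxnet as mx

def to_ndarray(data, ctx=mx.gpu(0)):
    # Illustrative guard: skip device allocation for empty input,
    # since size-zero NDArrays trigger the build-up described above.
    # Returning None for the empty case is an arbitrary choice here.
    if len(data) == 0:
        return None
    return mx.nd.array(data, ctx=ctx)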

Related to #13951

Labels: Backend, Bug, Memory, NDArray

All 6 comments

Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labels so that the appropriate MXNet community members can help resolve it.
Here are my recommended labels: Bug

@mxnet-label-bot update [Bug, NDArray, CUDA]

We still observe the same issue after changing context from mx.gpu(0) to mx.cpu(0).
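
To confirm the CPU-side build-up without nvidia-smi, the process's resident set size can be tracked instead. A minimal sketch, assuming the third-party psutil package is installed:

import os
import time

import mxnet as mx
import psutil

proc = psutil.Process(os.getpid())
count = 0
while True:
    a = mx.nd.array([], ctx=mx.cpu(0))
    a.asnumpy()
    time.sleep(0.01)
    count += 1
    if count % 1000 == 0:
        # memory_info().rss is the resident set size in bytes.
        print("RSS (MiB): %.1f" % (proc.memory_info().rss / (1024.0 * 1024.0)))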

@mxnet-label-bot update [Bug, NDArray]

@mxnet-label-bot add [Backend, Memory]

Nice catch!

@anirudh2290 Could you please reopen this? The original fix has been reverted due to test flakiness. I am working on an alternative fix.
