Here's the code with a memory leak (run with `node --expose-gc` so that `global.gc()` is available):

```js
var stream = require('stream');
var zlib = require('zlib');

var b = new Buffer(10000000);
for (var i = 0; i < b.length; i++) {
  b[i] = Math.random();
}

for (var i = 0; i < 1000; i++) {
  console.log('#' + i, mem());
  var out = new stream.PassThrough();
  var gzip = zlib.createGzip();
  gzip.pipe(out);
  gzip.write(b);
  gzip.end();
}

setInterval(function() { console.log(mem()); }, 1000);

function mem() {
  global.gc();
  return Math.round(process.memoryUsage().rss / 1024 / 1024) + 'M';
}
```
The code looks okay and the gzip streams are closed. The same happens with file streams, not only PassThrough. After compression finishes, memory usage is about 100MB and it never decreases.
_Note: RSS not decreasing is not an issue on its own._
The memory is consumed in `node::ZCtx::Init(v8::FunctionCallbackInfo<v8::Value> const&)`.
As far as I can tell there is no leak. The heap does grow a bit, but after the zlib objects get GC'ed it goes back down (~27MB peak to ~8MB after). Valgrind also shows no real leaks.
@mscdex, could you share your output?
@ChALkeR what output are you referring to?
@mscdex I meant the memory measurements, because I thought that you were seeing something different from what I do.
The heap is not the problem here; there appear to be no uncollected objects on the JS side related to this issue.
Something seems to be wrong on the C++ side, though.
@ChALkeR Well as far as the C++ side goes, the ZCtx instances are being destroyed (destructors are called) and the destructor calls deflateEnd() (via Close()) to free zlib's own dynamic memory allocations. AFAIK valgrind should have shown a large number of lost or reachable bytes, but it doesn't. The small amounts of lost/reachable memory are all non-zlib-related.
@mscdex I don't yet know what goes wrong, but valgrind --tool=massif suggests that something _does_ go wrong:

_Notice: the above graph was taken for 100 iterations, while the original testcase has 1000._
Memory usage never stops growing linearly, btw. E.g.: `#9999 2690M`.
@ChALkeR when it prints `#XXXX YYYM`, it's expected that memory is still consumed, because gzip is async and the 'finish' event has not fired yet. I've tested it once again on another machine, and after all iterations finish, memory (heapTotal and heapUsed) goes back to normal, except for the rss value. So I think the issue may be closed.
The heap goes back to normal, and rss isn't expected to go down in general, but the graph above still doesn't look good to me…
Btw, your `Math.random()` code in fact generates a zero-filled Buffer: assigning a fractional value to a Buffer index truncates it to an integer byte, and `Math.random()` is always less than 1.
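A quick check (a minimal sketch, not from the original testcase) confirms the truncation:

```js
// Buffer element assignment coerces the value to an integer byte,
// so any Math.random() result (always in [0, 1)) is stored as 0.
var b = Buffer.alloc(4);
b[0] = Math.random();
console.log(b[0]); // 0
```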
Nah, that memory is reused later.
If you limit the number of concurrent zlib.createGzip() streams, memory usage remains constant no matter how many you create in total.
I don't see any issue here, @mscdex was right.
Closing this since everyone agrees that there is no problem here =).