I'm not too familiar with node internals, so please correct any details that I may have gotten wrong.
HTTPParser objects appear to be created on a per-socket basis and retain an `incoming` reference to an IncomingMessage. The `incoming` reference is kept around even after the message has been completely parsed and handled, making it un-gc-able for as long as the underlying socket remains open.
At Twitter, we've found that this can contribute to high memory usage when serving long-lived keep-alive connections that receive requests infrequently. In this environment, already-handled messages are unable to be garbage collected until either a new request comes in on the existing socket or the connection is closed.
I put together a small demo of this issue at jbellenger/node-message-retention
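Independent of that repo, a minimal sketch of the scenario (the handler, port, and keep-alive settings here are my own illustrative choices, not necessarily what the demo does):

```js
// Minimal illustrative sketch of the retention scenario (not the linked repo):
// handle one request on a keep-alive connection; the parsed IncomingMessage
// then remains reachable via the socket's parser while the socket stays idle.
const http = require('http');

const server = http.createServer((req, res) => {
  // The message is fully parsed and handled here...
  res.end('ok');
  // ...yet `req` stays referenced by the socket's HTTPParser until either
  // another request arrives on this socket or the connection closes.
});

server.keepAliveTimeout = 0; // keep idle sockets open indefinitely (demo only)
server.listen(3000);
```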
@nodejs/http
It seems that the original comment and repro code were describing a memory leak on the _server_ side, not on the client side, whereas #9440 fixes a leak in the HTTP _client_'s implementation.
I would think that this issue is therefore _not_ a duplicate of #9628 and is _not_ fixed by #9440, so I'm reopening it; my apologies if I'm missing something.
I don't know if it's related, but I'm posting here.
We have tested several HTTP libraries (request, superagent, got, node-fetch) pointing to a specific URL. After running about ~20k requests over a few minutes, our heapdump comparison showed `HTTPParser` allocating more and more memory that is not released after we stop sending traffic. Ten minutes after the traffic stopped, we were still using about ~250mb per cluster, while we generally use about ~80mb without any request library.
We reloaded our app a few times to avoid hitting the peak.
While we keep sending traffic through a request library, the memory grows and grows until the app crashes. Since the URL we're hitting uses a keep-alive connection, this may be related.
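For anyone trying to reproduce this, a rough sketch using plain `http` with a keep-alive agent instead of the libraries above; the host, port, request count, and logging interval are all placeholders:

```js
// Rough reproduction sketch (placeholders throughout): fire ~20k requests
// through a keep-alive agent, then log heap usage after the load stops.
const http = require('http');

const agent = new http.Agent({ keepAlive: true });
let done = 0;

function fire() {
  http.get({ host: 'localhost', port: 3000, agent }, (res) => {
    res.resume(); // drain the body so the keep-alive socket can be reused
    if (++done < 20000) {
      setImmediate(fire);
    } else {
      // Load has stopped; watch whether heap usage ever comes back down.
      setInterval(() => console.log(process.memoryUsage().heapUsed), 10000);
    }
  });
}
fire();
```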
Should this remain open?
I think this is still relevant. I'll take a look.
I can confirm that it's still an issue but I suspect it cannot be fixed.
The request object is stored in the `state.incoming` array and bound to the `resOnFinish()` callback in order to clean up outstanding request body data when the response is finalized or aborted.
What is wryly amusing is that it's a mitigation for resource leaks when people hold on to the request object, but it evidently replaces one kind of leak with another.
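In other words, the retention works roughly like the following self-contained sketch, where the emitter and `req` object are stand-ins for the real socket and IncomingMessage:

```js
// Stand-in sketch of the retention pattern described above: binding `req`
// into a cleanup callback keeps it reachable for as long as the callback's
// owner (the long-lived socket, here an EventEmitter) stays alive.
const EventEmitter = require('events');

function resOnFinish(req /*, res, socket, state, server */) {
  // outstanding request body data would be cleaned up here
  console.log('cleaning up', req.url);
}

const socket = new EventEmitter();  // stand-in for the long-lived socket
const req = { url: '/some/path' };  // stand-in for the IncomingMessage

// The bound callback closes over `req`; until 'finish' fires and the
// listener goes away, `req` cannot be garbage collected.
socket.once('finish', resOnFinish.bind(undefined, req));
```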
Node v8.2.1
We stream files using multer by sending HTTP multipart requests.
We see the same memory leak on the server: HTTPParser objects are created per request and are not released after the upload has ended. Each one of them retains memory, so memory consumption keeps increasing and the GC is unable to release the objects.
Node v6.10.3
@iagomelanias node-fetch does not support keep-alive: https://github.com/bitinn/node-fetch#class-request.
Wondering if this happens even for non-keep-alive requests?
I see that HTTPParser memory is still allocated even after the load stops and after manually triggering GC on a SIGINT.
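(For reference, triggering GC manually like this requires the `--expose-gc` flag; a minimal sketch of such a SIGINT handler, with the logging being illustrative:)

```js
// Run with: node --expose-gc app.js
// On SIGINT, force a GC pass and print memory stats before exiting; any
// HTTPParser memory that survives this pass is what shows up in heapdumps.
process.on('SIGINT', () => {
  if (global.gc) global.gc();
  console.log(process.memoryUsage());
  process.exit(0);
});
```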
I also used the Chrome debugger to monitor the process's memory consumption. HTTPParser is a memory leak for sure, although a minor one; it will only have an effect after a long time.
I reproduced the issue with and without keep-alive connections.
Michael.
Is there any solution or idea for how to solve this issue yet?
Also it looks like this issue is present in:
6.13.0
8.2.1
8.9.4
9.5.0
Any updates?
@3axap4eHko 8.1x.x should give no issues. Try it.
Still present in node 8.11.4
This is also still a problem in 10.15.3.
Does anyone have any direction for where to look to start fixing the issue?
@bnoordhuis Since you looked into this … is there an obvious reason why this doesn’t work (it passes the test suite + resolves OP’s reproduction, at the very least)?
```diff
--- a/lib/_http_server.js
+++ b/lib/_http_server.js
@@ -601,6 +601,7 @@ function resOnFinish(req, res, socket, state, server) {
   assert(state.incoming.length === 0 || state.incoming[0] === req);
 
   state.incoming.shift();
+  if (socket.parser) socket.parser.incoming = null;
 
   // If the user never called req.read(), and didn't pipe() or
   // .resume() or .on('data'), then we call req._dump() so that the
```
I'm able to reproduce the HTTPParser leak using a simple express server and a client that terminates the socket with RST. I think it only happens under load: on the client side I used 20 threads with no delay between them, so we get 20 simultaneous connections every second, all closing with RST.
I couldn't find a way to terminate with RST from node after receiving the response (https://github.com/nodejs/node/issues/27428), so the client is python (adapted from the example in that issue).
Reproduced for me on Windows and Linux.
Now reproduced also with node only.
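A hypothetical node-only client sketch: on newer Node (v16.17.0+ / v18.3.0+), `net.Socket#resetAndDestroy()` can send the RST directly, mimicking the 20-thread Python client; the port and burst size below are assumptions:

```js
// Hypothetical node-only RST client (requires net.Socket#resetAndDestroy(),
// Node >= v16.17.0 / v18.3.0). Port and burst size are placeholders.
const net = require('net');

function rstRequest(port) {
  const sock = net.connect(port, '127.0.0.1', () => {
    sock.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n');
  });
  // As soon as the response starts arriving, abort the connection with RST.
  sock.once('data', () => sock.resetAndDestroy());
  sock.on('error', () => {}); // ignore ECONNRESET noise
}

// Fire bursts of simultaneous connections, like the Python client's 20 threads.
setInterval(() => {
  for (let i = 0; i < 20; i++) rstRequest(3000);
}, 1000);
```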
Reopening as the fix was reverted.
https://github.com/nodejs/node/pull/29297 should fix this and does not break the test added in #29263.
Not sure if this fix has been backported to Node 12.x too
@paolomainardi It has been released in v12.9.1.
Thanks @addaleax