Please see this code: https://gist.github.com/thorn0/0812cdb7eb12b2348337
That's what it outputs for me (Node 4.2.1 and 5.0.0, Windows 7 x64):
1.108s: Complete
121.193s: uncaughtException
Error: read ECONNRESET
at exports._errnoException (util.js:874:11)
at TCP.onread (net.js:544:26)
It reproduces with most servers that run Microsoft IIS.
Reproducing your code here for the sake of posterity:
var http = require("http");
var domain = require("domain");

function log() {
  var args = [].slice.call(arguments);
  args.unshift(process.uptime() + "s:");
  console.log.apply(console, args);
}

// Pool sockets and keep them alive for reuse after the request completes.
var agent = new http.Agent({
  keepAlive: true,
  keepAliveMsecs: 10000
});

var req = http.request({
  hostname: "www.asp.net",
  port: 80,
  path: "/",
  method: "GET",
  agent: agent
}, function(res) {
  res.on("data", function() {});
  res.on("end", function() {
    log("Complete");
  });
});

req.on("socket", function(sock) {
  sock.on("close", function() {
    log("socket closed");
  });
});

req.on("error", function(err) {
  log("an error occurred", err);
});

// The ECONNRESET from the idle pooled socket lands here, not on req.
process.on("uncaughtException", function(err) {
  log("uncaughtException");
  console.error(err.stack);
  process.exit();
});

req.end();

// Keep the process alive for five minutes so the idle socket has time
// to be reset by the server.
setTimeout(function() {
  log("done");
}, 300000);
I'm not sure what you think keepAlive and keepAliveMsecs do, but they enable TCP (not HTTP) keep-alive on your side. It's completely transparent to the remote end (the IIS server). Most HTTP servers will disconnect after a period of inactivity, hence the ECONNRESET.
I don't see a bug here. I'll close the issue.
I'm really confused. https://nodejs.org/api/http.html states the opposite:
If you opt into using HTTP KeepAlive, you can create an Agent object with that flag set to true.
Maybe I should have said: keepAlive in conjunction with keepAliveMsecs. The latter sets the TCP keep-alive interval; the former tells Node to reuse the socket when it can. The point, though, is that regardless of whether you use TCP or HTTP keep-alive on your end, the remote end can still elect to close the connection.
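To make the distinction concrete, here's a minimal sketch (the host and timings are arbitrary): socket.setKeepAlive() is the TCP-level mechanism that keepAliveMsecs configures under the hood, while the agent's keepAlive flag only governs whether sockets are returned to the pool.

var net = require("net");

// TCP keep-alive: ask the kernel to send periodic probes on an idle
// connection. This is what keepAliveMsecs configures.
var sock = net.connect(80, "example.com", function() {
  sock.setKeepAlive(true, 10000); // start probing after 10s of inactivity
});

// HTTP keep-alive (socket reuse) is a separate concern: an agent created
// with { keepAlive: true } returns finished sockets to its pool instead of
// destroying them. Neither mechanism prevents the remote end from closing
// the connection whenever it pleases.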
If you think the documentation is unclear, can you file an issue for that and suggest improvements or send a pull request?
I still don't get it. http.Agent encapsulates all the other parts of the socket pool management logic. Why doesn't it catch this ECONNRESET as well?
Because ECONNRESET is an exceptional error: it means the remote end forcibly closed the connection. If the agent swallowed the error, you couldn't distinguish an empty reply from unclean termination, for example.
An empty reply to what? The exception under discussion happens while the socket is sitting idle in the pool, not while it's serving a request.
With all due respect, that's not very clear from your original message. I'll reopen the issue, pull requests welcome.
For whoever is picking this up, it's probably worth mentioning that the naive solution of attaching an error listener in the agent's 'free' event listener introduces a race window in lib/_http_client.js between the removal of the socket's 'error' event listener in responseOnEnd() and returning it to the pool in emitFreeNT().
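In userland terms, that naive workaround looks roughly like the following sketch, reusing the agent from the repro above (illustrative only, and subject to the same race window just described):

agent.on("free", function(socket) {
  // Attach an 'error' listener so an ECONNRESET on an idle pooled socket
  // doesn't bubble up as an uncaughtException. NOTE: as explained above,
  // this still leaves a race window in lib/_http_client.js.
  if (socket.listeners("error").length === 0) {
    socket.once("error", function() {
      // The idle socket was reset by the remote end; the agent's own
      // 'close' handling will evict it from the pool.
    });
  }
});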
Hello,
I am facing the same issue in SignalR, which uses long polling, and my program stops after 2 minutes. This was working fine before the Node.js upgrade from 0.12.6 to 4.1.2.
I think it would be nice if the http.Agent had a socket event for every socket created. If I understand the issue correctly, the server can close the connection at any time, whether the socket is free or not.
No, this issue is only about free (idle) sockets.
Is this something that can be patched for the LTS?
@daniel-white it should be, my bad. Tagged it with lts-watch-v4.x now /cc @nodejs/lts
@indutny :+1: thanks!
Any word on when this will be landing in v4.x?
@mikemorris I don't want to speak for the LTS team, but it has been in a stable release, and tagged with the lts watch label, so it should be in the next release (within about a week from now).
Yeah, once we get past next week's security release it should be good to go. /cc @TheAlphaNerd
It looks like I'm seeing the same problem on 0.12.10; does that make sense? I didn't see this on 0.10.41, but it showed up when I upgraded to 0.12.
From reading this thread, my understanding is that the underlying issue is that HTTP keep-alive connections are being closed by the remote host while they sit idle in the pool, and since no 'error' listener is attached to idle sockets, the ECONNRESET surfaces as an uncaughtException. Is that right?
@mikemorris looks like this fix shipped with 4.4.0 (and 5.4.0).
Hi,
we're also getting an ECONNRESET error. I'm not sure whether this is the same bug.
Version: 4.2.2
@aymeba No. This issue is only about idle sockets in the pool. It's not about sockets assigned to requests (your case).
@thorn0 thanks. Is there a known issue about requests?
@aymeba Why do you think it's a bug in Node?
@thorn0 I'm not sure whether it is a bug; that's why I'm asking whether there is a known issue about such cases. We've set everything up in our infrastructure (ulimit, etc.), but it happens periodically, every 15 seconds, on different API calls.
@aymeba I don't know of any such known issues. ECONNRESET is a normal error if you have network trouble.
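For what it's worth, a caller can treat ECONNRESET on an active request as retryable. A minimal sketch, where the getWithRetry helper and the retry policy are made up for illustration:

var http = require("http");

function getWithRetry(options, retries, callback) {
  var req = http.get(options, function(res) {
    callback(null, res);
  });
  req.on("error", function(err) {
    if (err.code === "ECONNRESET" && retries > 0) {
      getWithRetry(options, retries - 1, callback); // try again
    } else {
      callback(err); // out of retries, or a different error
    }
  });
}

getWithRetry({ hostname: "www.asp.net", path: "/" }, 3, function(err, res) {
  if (err) throw err;
  console.log("status:", res.statusCode);
});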
Thank you @TheAlphaNerd and the whole Node team for releasing this fix in the 4.x branch.
Is this going to be ported to the 0.12.x line by any chance? I think we're being bitten by this with npm. See the following issue: https://github.com/npm/registry/issues/10.
@nodesocket v0.12 is in maintenance mode now, meaning that it gets security fixes but not much more. Though if you or someone else is willing to work on the backport, we'll certainly review it.
I'm also getting an ECONNRESET error, but it's really weird.
Everything was working well. Then, after a week, one of my clients asked for an update, and suddenly it stopped working. I didn't change the project.
It calls the Google Places API using axios; I tried node-fetch as well.
Both of them give the above error.
What's the issue?
Is this because of Node?