Node: Agent Keep-Alive behavior

Created on 28 Feb 2019 · 17 Comments · Source: nodejs/node

  • Version: v10.15.1
  • Platform: Darwin Skadi.local 18.2.0 Darwin Kernel Version 18.2.0: Mon Nov 12 20:24:46 PST 2018; root:xnu-4903.231.4~2/RELEASE_X86_64 x86_64
  • Subsystem: http

I've encountered some strange behavior while using Agent keep-alive, and I'm not sure it is correct.
I've created a code example to show what I've discovered; pay attention to the configuration of each agent.

'use strict'

const http = require('http')

const agentFalse = new http.Agent({
  keepAlive: false,
  keepAliveMsecs: 1000,
  maxSockets: 256,
  maxFreeSockets: 256
})

const agentFalseInf = new http.Agent({
  keepAlive: false,
  keepAliveMsecs: 1000,
  maxSockets: Infinity,
  maxFreeSockets: 256
})

const agentTrue = new http.Agent({
  keepAlive: true,
  keepAliveMsecs: 1000,
  maxSockets: 256,
  maxFreeSockets: 256
})

const server = http.createServer((req, res) => {
  console.log(req.url, req.headers)
  res.end('ok')
})

server.listen(3000, () => {
  http
    .request({ port: 3000, path: '/false', agent: agentFalse })
    .on('error', console.log)
    .end()

  http
    .request({ port: 3000, path: '/falseInf', agent: agentFalseInf })
    .on('error', console.log)
    .end()

  http
    .request({ port: 3000, path: '/true', agent: agentTrue })
    .on('error', console.log)
    .end()
})

And this is the log:

/false { host: 'localhost:3000', connection: 'keep-alive' }
/falseInf { host: 'localhost:3000', connection: 'close' }
/true { host: 'localhost:3000', connection: 'keep-alive' }

If this is the expected behavior, can you explain why it works like so?


All 17 comments

/cc @mcollina

I think this is buggy: when passing keepAlive: false, maxSockets: 256, we should not be opening a keep-alive connection.

cc @nodejs/http

Update: I've noted that if keepAlive is false and maxSockets is 0, the connection header has the expected value (close).

const agentFalse = new http.Agent({
  keepAlive: false,
  keepAliveMsecs: 1000,
  maxSockets: 0,
  maxFreeSockets: 256
})

I can reproduce this behavior in all the latest releases of 6, 8, 10, and 11.

It seems this is caused by this condition https://github.com/nodejs/node/blob/8347dbefc37b8202251287d96a753606f4c397f9/lib/_http_client.js#L192

which dates back to https://github.com/nodejs/node/commit/15a5a4a9453a056b783c7ff0966e31c3c6b7d7e1.

Setting maxSockets to 0 is equivalent to using Infinity (default value).

I think this could be a nice good-first-issue. Maybe @delvedor would like to send a PR.

Despite the confusing options, I'm not sure it is a bug; I think it is working as intended. Consider the following example:

const assert = require('assert');
const { Agent, get } = require('http');

const agent = new Agent({ keepAlive: false, maxSockets: 1 });

let socket;

function onResponse(response) {
  response.resume();
}

function onSocket(sock) {
  if (socket === undefined) {
    socket = sock;
  } else {
    assert(sock === socket);
  }
}

get({ host: 'example.com', agent }, onResponse).on('socket', onSocket);
get({ host: 'example.com', agent }, onResponse).on('socket', onSocket);

If Connection: Keep-Alive is not sent, the socket is closed, and a new one must be created for the second request. If it is sent, the same socket is reused, and when there are no more pending requests the socket is destroyed because the keepAlive option is false. It doesn't end up in the free socket list as it would have if the keepAlive option were true.

@lpinca it seems you are implying that setting keepAlive: false has nothing to do with sending the Keep-Alive header, but rather reflects the internal state of the queue.

I think this is very confusing, and at least it should be documented if not changed.

Yes, exactly. I think the keepAlive option does not mean "send the Connection: Keep-Alive header". The documentation is accurate:

Keep sockets around even when there are no outstanding requests, so they can be used for future requests without having to reestablish a TCP connection.

Maybe we can improve it somehow.

@lpinca we are in disagreement. If the keepAlive option controlled whether we send the Connection: Keep-Alive header, then it should be sent _independently_ of the number of requests that are queued up. Our current behavior instead ties this to the internal queue state, e.g. "no outstanding requests" as the docs state.

In other terms, if we pass keepAlive: false, the Connection: Keep-Alive header will be sent if there is another request waiting to be dispatched. However, this behavior is triggered only if maxSockets is set to a finite value and the maximum number of available sockets is filled. I think this behavior is correct.


In the example @delvedor has posted, there are more complicated combinations.

For example:

{
  keepAlive: false,
  keepAliveMsecs: 1000,
  maxSockets: 256,
  maxFreeSockets: 256
}

Is going to generate a Connection: Keep-Alive header even if there is no socket waiting, I _assume_ because there is a maxFreeSockets option. This essentially overrides the keepAlive flag. From my point of view, setting both maxFreeSockets and keepAlive shows a conflicting intent: the way we keep sockets free is by sending Connection: Keep-Alive, which the user asks us _not_ to do, but specifying maxFreeSockets tells us that they want to do it.

I'm a bit puzzled by this issue.

Sorry, I forgot to add "not" in my previous comment; please re-read, I've edited it. I don't think we are in disagreement.

The example with { keepAlive: false, maxSockets: 256 } is the same example I posted above, but allowing up to 256 sockets per host.

Based on our discussion, should

const agentFalseInf = new http.Agent({
  keepAlive: false,
  maxSockets: Infinity
})

send the Connection: Keep-Alive header? Note that this is the default configuration, and it currently sends connection: 'close'.

Based on our interpretation, should we ever send connection: 'close'?

Basically Connection: Keep-Alive is always sent, with a single exception being { keepAlive: false, maxSockets: Infinity } which makes sense as in this case the socket is never reused, so it's ok if it is closed by the server.

The keepAlive option controls whether or not to put the socket in the "freeSockets" list when there are no outstanding requests (or at least this is my understanding).

Basically Connection: Keep-Alive is always sent, with a single exception being { keepAlive: false, maxSockets: Infinity } which makes sense as in this case the socket is never reused, so it's ok if it is closed by the server.

This matches our current behavior, but is it coherent with the meaning that we have given to the keepAlive option?

Keep sockets around even when there are no outstanding requests, so they can be used for future requests without having to reestablish a TCP connection.

This states that maxSockets should not impact at all whether we send Connection: Keep-Alive. Setting it to Infinity would not change the fact that there can be outstanding requests when the current one finishes.

If maxSockets is set to Infinity a new socket is created per request so there can't be outstanding requests.

Ok, so the naming generates this misunderstanding.
At this point, how can I disable the Connection: keep-alive header?

@delvedor if you don't need connection pooling, don't use an agent and use the createConnection option instead. In this way Connection: close is used.

If maxSockets is set to Infinity a new socket is created per request so there can't be outstanding requests.

That's not what our docs say:

maxSockets Maximum number of sockets to allow per host. Default: Infinity.

Maybe we should update this, considering that keepAlive: false, maxSockets: Infinity is the default?


Possibly we should include a table in the docs to explain the different options and behaviors, and how they affect Connection: keep-alive?

