I want to create an HTTP server in Node that returns a cached, gzipped and chunked response. The code is as follows:
const http = require('http');
// Cached body, already framed in chunked encoding:
//   "a\r\n" + 10 bytes + "\r\n" + "f\r\n" + 15 bytes + "\r\n" + "0\r\n\r\n"
// where the 25 payload bytes are gzipped "hello".
const data = Buffer.from([
  0x61, 0x0d, 0x0a, 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00,
  0x00, 0x13, 0x0d, 0x0a, 0x66, 0x0d, 0x0a, 0xcb, 0x48, 0xcd, 0xc9,
  0xc9, 0x07, 0x00, 0x86, 0xa6, 0x10, 0x36, 0x05, 0x00, 0x00, 0x00,
  0x0d, 0x0a, 0x30, 0x0d, 0x0a, 0x0d, 0x0a
])
const server = http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/plain',
    'Content-Encoding': 'gzip',
    'Transfer-Encoding': 'chunked',
    'Content-Length': data.length
  });
  res.write(data, 'binary');
  res.end(null, 'binary');
});
server.listen(4000)
data is inlined here for reproduction, but normally it would come from some cache. It contains a gzipped and chunked "hello" response. Unfortunately, the server double-chunks the response, as you can see here:
curl localhost:4000 --raw --silent | xxd -p -l 50 | fold -w2 | while read b; do echo 0x$b,; done | tr "\n" " "
0x32, 0x38, 0x0d, 0x0a, 0x61, 0x0d, 0x0a, 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x13, 0x0d, 0x0a, 0x66, 0x0d, 0x0a, 0xcb, 0x48, 0xcd, 0xc9, 0xc9, 0x07, 0x00, 0x86, 0xa6, 0x10, 0x36, 0x05, 0x00, 0x00, 0x00, 0x0d, 0x0a, 0x30, 0x0d, 0x0a, 0x0d, 0x0a, 0x0d, 0x0a, 0x30, 0x0d, 0x0a, 0x0d
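For reference, a buffer like this can be built along the following lines (a minimal sketch; chunkFrame is a hypothetical helper, and the exact gzip bytes may differ slightly from the inlined ones depending on the zlib version):

const zlib = require('zlib');

// Hypothetical helper: wrap a buffer in one chunked-encoding frame,
// i.e. <size in hex>\r\n<payload>\r\n
function chunkFrame(buf) {
  return Buffer.concat([
    Buffer.from(buf.length.toString(16) + '\r\n'),
    buf,
    Buffer.from('\r\n')
  ]);
}

const gz = zlib.gzipSync('hello');  // 25 bytes of gzipped payload
const data = Buffer.concat([
  chunkFrame(gz.slice(0, 10)),      // "a\r\n" + first 10 bytes + "\r\n"
  chunkFrame(gz.slice(10)),         // "f\r\n" + remaining 15 bytes + "\r\n"
  Buffer.from('0\r\n\r\n')          // terminating zero-length chunk
]);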
The issue does not happen when the cached response is not chunked:

const http = require('http');
const data = Buffer.from([0x68, 0x65, 0x6c, 0x6c, 0x6f]) // "hello"
const server = http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/plain',
    'Transfer-Encoding': '',
    'Content-Length': data.length
  });
  res.write(data, 'binary');
  res.end(null, 'binary');
});
server.listen(4000)
curl localhost:4000 --raw --silent | xxd -p -l 50 | fold -w2 | while read b; do echo 0x$b,; done | tr "\n" " "
0x68, 0x65, 0x6c, 0x6c, 0x6f
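As far as I can tell, simply omitting the Transfer-Encoding header has the same effect once Content-Length is set, so the empty value may not be needed at all. A sketch under that assumption:

res.writeHead(200, {
  'Content-Type': 'text/plain',
  'Content-Length': data.length // a known length disables chunked encoding
});
res.end(data);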
I'm not sure if there's a way to override this behaviour; if you want to go down to this kind of low-level-ness, I think you might be better off parsing the request using http but writing responses directly using the raw network socket?
/cc @nodejs/http
Aside: the 'binary' encoding is a legacy alias and does not do what you think it does; you can safely omit it. It will be ignored anyway, because you're already passing in a Buffer that is written to the network socket as-is.
The issue is that it isn't written as-is :( If Transfer-Encoding is not set to an empty value (is this even documented?), Node chunks the response. I feel it's not a good design decision that what res.write does depends on which headers are set. Even so, the current algorithm is:

Transfer-Encoding set to an empty value? -> no chunking

But I feel it should be:

Transfer-Encoding set to chunked? -> chunk!

But that's a different issue. The real issue is that I must both set Transfer-Encoding: chunked and write the Buffer to the network socket as-is. Such behavior is not possible if chunking depends on the value of the Transfer-Encoding header...
I think Anna is suggesting something like this:
res.writeHead(200, {
  'Content-Type': 'text/plain',
  'Content-Encoding': 'gzip',
  'Transfer-Encoding': 'chunked',
  'Content-Length': data.length
});
res.flushHeaders();
res.socket.write(data);
where data is your gzipped, pre-chunked buffer.
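A complete version of that sketch might look like this (my assumptions, not part of the original suggestion: Content-Length is dropped because the body is already delimited by its chunk framing, and the socket is ended after the raw write since keep-alive is out of scope here):

const http = require('http');

// `data` is the cached, pre-gzipped, pre-chunked buffer from above.
const server = http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/plain',
    'Content-Encoding': 'gzip',
    'Transfer-Encoding': 'chunked'
  });
  res.flushHeaders();     // send the status line and headers now
  res.socket.write(data); // raw write: bypasses Node's own chunking
  res.socket.end();       // close the connection; no keep-alive
});
server.listen(4000);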
If I remember correctly, Node.js behaviour is:

- Chunk the response if res.write() is used.
- Don't chunk if only res.end() is used.
- Honour the Transfer-Encoding header if it is specified.

Which kind of makes sense.
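For example, if my reading of that behaviour is right, a response whose whole body goes through a single end() call gets a computed Content-Length and no chunking:

// No explicit headers; the entire body is handed to end(),
// so Node can compute Content-Length and skip chunked encoding.
const http = require('http');
http.createServer((req, res) => {
  res.end('hello');
}).listen(4001);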
I think this has been answered so I'll go ahead and close it out.
Got to this issue after several days of tracing a bug. This behaviour should be documented better somewhere.