We observed in our application that even a simple API that takes less than a millisecond to process takes around 40 ms when called from a Node script. Initially, we thought it was a bug in our code, but when we hit the same API from Postman, curl, or any other command-line tool (anything other than a Node script), it takes 4-5 ms to respond.
To demonstrate this, I created a very simple script that initializes an HTTP server and, once the server is ready, calls the heartbeat API using the http module. To my surprise, it takes 30+ ms every time.
Can you please tell me where these extra 30 ms are going?
The only external module I have used is Express. If you think it's an Express issue rather than a Node issue, please let me know and I will file the bug against Express instead.
Please find the gist for the same.
https://gist.github.com/piyushbeli/1564796ef18e13f9624b450d94f50438
Do you observe the same issue when you use the built-in http server instead of Express? If so, please post that test case. If not, you should report it to Express.
EDIT: Also, is v5.2.0 a typo? You should upgrade to the latest v6 or v7.
I tried with NodeJS's built-in http module also; there is a slight improvement, but not a huge one. It still takes around 25 ms to respond.
Please find the gist.
https://gist.github.com/piyushbeli/244709be488e2ac487f735df861e27cf
Is there a reason you use res.write() instead of res.end()? The former doesn't end the response.
No, there is no specific reason, but res.end() gives the same result.
I had a closer look at your test. Things it could or should do:
- Call res.end().
- Call res.socket.setNoDelay() on the server and the client.
Since you're on Linux, it's quite possible you're getting bitten by delayed TCP ACKs.
You can try increasing the range of ephemeral ports and reducing the timeout for sockets in the TIME_WAIT state:
sysctl -w net.ipv4.ip_local_port_range="1024 64000"
sysctl -w net.ipv4.tcp_fin_timeout=45
And, as a last resort:
sysctl -w net.ipv4.tcp_tw_reuse=1
Seeing how @piyushbeli didn't follow up, I'll go ahead and assume this is working now.
If you still wish to pursue this, please post to https://github.com/nodejs/help/issues. So far there seems to be no reason to assume it's an issue with node.js itself.
Sorry for not responding for several days. I was occupied with other work and could not find the time to try the things you suggested.
@bnoordhuis, I tried the suggestion you provided, but unfortunately there is no improvement in the response time. I have updated my gist with your suggestions.
https://gist.github.com/piyushbeli/244709be488e2ac487f735df861e27cf
@sathvikl, I did not try the options you provided. Can you please describe the advantages of changing the above configuration? If we reduce tcp_fin_timeout, how will it improve the response time, and what will be the effect of increasing the number of ports?
@piyushbeli Did you ever figure out an answer to your question?
@neil-119 No, I never found the answer to this question.
@bnoordhuis, do you have more information about this issue, or should we assume it is a problem with NodeJS itself?
In our project, we have accepted this as a limitation of the Node framework and are no longer working on it.
EDIT: Never mind. I just figured out that the lag was caused by my proxy. NodeJS seems to pick up the system's proxy configuration by default on Windows.
Original issue:
I observed the same problem, and I think this issue also affects NodeJS-based tools like Postman. Sending the same POST request to an HTTPS API endpoint from Postman takes 513 ms, while curl -X POST ... takes only 183 ms. I always thought it was a Postman issue.
Today I am writing a TypeScript client (with the popular "request" module) to connect to the same API, and found the latency is much worse than in our Java implementation (NodeJS at >500 ms, Java at <200 ms). This issue only happens when I connect to the HTTPS endpoint; when I connect to the HTTP endpoint, the performance is the same.
To experiment further, I wrote a simple NodeJS script based on the "https" module to call the HTTPS endpoint, and the performance was fine. But when I copied the test script into my existing project, it became very slow.
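One quick way to check whether a proxy is in play: many HTTP client libraries (the "request" module among them, though not the core http/https modules) honor the conventional proxy environment variables, so printing them can reveal an extra hop that the bare test script does not take. A small sketch:

```javascript
// Print the conventional proxy environment variables. If any are set, a
// client library that honors them may route every request through the
// proxy, adding significant per-request latency.
const names = ['HTTP_PROXY', 'HTTPS_PROXY', 'NO_PROXY',
               'http_proxy', 'https_proxy', 'no_proxy'];
for (const name of names) {
  console.log(`${name}=${process.env[name] || '(unset)'}`);
}
```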