Hello everyone.
The following code results in a memory leak:
import * as io from 'socket.io-client';
import Bluebird from 'bluebird';
import express from 'express';
import socketIo from 'socket.io';
import http from 'http';
import data from './socket.io.json';

describe('Socket.io', () => {
    it('200 thousand requests', async () => {
        const limit = 200 * 1000;

        // Run this test with the --expose-gc Node option so global.gc() is available
        // (in WebStorm: Edit Configurations -> Node options -> --expose-gc).
        setInterval(() => {
            global.gc();
            console.error(new Date(), process.memoryUsage());
        }, 1000);

        // Server
        const app = express();
        const server = http.createServer(app);
        server.listen(20017, 'localhost');
        const ioMain = socketIo.listen(server);
        ioMain.sockets.on('connection', (socket) => {
            socket.on('some_route', async (args) => {
                return;
            });
        });

        // Client
        const socket = io.connect('ws://localhost:20017', {
            transports: ['websocket'],
            rejectUnauthorized: false,
            query: { key: 'key' }
        });

        await Bluebird.delay(3 * 1000);
        for (let i = 0; i < limit; i++) {
            socket.emit('some_route', ['some_data', 7777, data]);
        }
        await Bluebird.delay(3 * 1000);
    });
});
If you run this test with a limit of 200 thousand requests, the memoryUsage log looks like this:
2019-08-15T07:57:26.345Z { rss: 101449728,
  heapTotal: 69914624,
  heapUsed: 28566952,
  external: 31683 }
2019-08-15T07:57:27.345Z { rss: 91463680,
  heapTotal: 69914624,
  heapUsed: 27574720,
  external: 20968 }
2019-08-15T07:57:28.349Z { rss: 91475968,
  heapTotal: 69914624,
  heapUsed: 26643376,
  external: 20968 }
2019-08-15T07:57:34.580Z { rss: 1773096960,
  heapTotal: 921309184,
  heapUsed: 866143944,
  external: 819505496 }
Or, if you run this test with a limit of 800 thousand requests:
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
<--- Last few GCs --->
[5377:0x102802800] 13083 ms: Scavenge 1396.7 (1424.6) -> 1396.2 (1425.1) MB, 2.0 / 0.0 ms (average mu = 0.155, current mu = 0.069) allocation failure
[5377:0x102802800] 13257 ms: Mark-sweep 1396.9 (1425.1) -> 1396.4 (1425.1) MB, 173.1 / 0.0 ms (average mu = 0.093, current mu = 0.028) allocation failure scavenge might not succeed
<--- JS stacktrace --->
==== JS stack trace =========================================
0: ExitFrame [pc: 0x3b4c160dbe3d]
Security context: 0x167f40a1e6e9 <JSObject>
1: hasBinary [0x167f40c16b71] [/Users/denis/api/node_modules/has-binary2/index.js:~30] [pc=0x3b4c1617e245](this=0x167fb3f9ad81 <JSGlobal Object>,obj=0x167f2e2dd279 <Object map = 0x167f3307a4f1>)
2: hasBinary [0x167f40c16b71] [/Users/denis/api/node_modules/has-binary2/index.js:~30] [pc=0x3b4c1617e0fa](this=0...
1: 0x10003c597 node::Abort() [/usr/local/bin/node]
2: 0x10003c7a1 node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
3: 0x1001ad575 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
4: 0x100579242 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
5: 0x10057bd15 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/usr/local/bin/node]
6: 0x100577bbf v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/local/bin/node]
7: 0x100575d94 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
8: 0x10058262c v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/usr/local/bin/node]
9: 0x1005826af v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/usr/local/bin/node]
10: 0x100551ff4 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/usr/local/bin/node]
11: 0x1007da044 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/local/bin/node]
12: 0x3b4c160dbe3d
13: 0x3b4c1617e245
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
You can get the socket.io.json data here:
https://pastebin.com/uUeZJe6x
socket.io and socket.io-client versions:
2.2.0
I think you might have a recursion issue going on here... you emit an "action" and receive an "action". Make sure the names are different.
I think I'm experiencing this issue too. I did not have the time to test it, but it appears that if we pass an async function to the socket.on method, it will keep a reference to it and never free the memory.
I've done a huge refactor of my code, but it was mostly logical equivalents, plus changing from promises to async/await.
The code is running fine, but the memory usage is increasing very fast. I've profiled it and it was related to async_hooks; I don't know much about it, but it seems like internal usage.
I also see a memory leak in my code. When clients are closed, the memory being used doesn't go back down.
I've noticed the same. Our WebSocket server runs in a k8s pod with a 4GB RAM limit, and it seems like k8s kills it every 3 weeks or so because RAM consumption grows from 500MB to 4GB.
In my case, the memory bursts to 1.5GB after 30K clients (max of 3K active) connected and transmitted a total of 900K messages in 10 minutes. And the memory didn't get released when all clients disconnected (it remained at 1.4GB even after calling the garbage collector manually).
I tried to debug the memory leak in different ways and, after a lot of effort (4 days of debugging), found out that disabling perMessageDeflate fixes the issue. From the ws module API docs:
The extension is disabled by default on the server and enabled by default on the client. It adds a significant overhead in terms of performance and memory consumption so we suggest to enable it only if it is really needed.
So the main question here is: why is perMessageDeflate true by default in Socket.IO?!
Hope this helps others.
How do you disable perMessageDeflate in socket.io?
Is this correct?
io = require('socket.io')({
    perMessageDeflate: false
});
You are missing the first argument, which should be the port:
io = require('socket.io')(3000, { perMessageDeflate: false });
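Side note: engine.io appears to forward this option to the underlying ws server, so instead of disabling compression entirely it should also be possible to pass ws-style options and only compress larger payloads. A minimal sketch, with a purely illustrative threshold value:

// Sketch: keep compression, but only for messages above a size threshold.
// The 32 KB value is illustrative; check the engine.io/ws docs for your versions.
const io = require('socket.io')(3000, {
    perMessageDeflate: {
        threshold: 32 * 1024
    }
});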
So the main question here is: why is perMessageDeflate true by default in Socket.IO?!
@masoudkazemi that's a good question, actually. You should be able to find the reasoning in the history of https://github.com/socketio/engine.io; I remember there was quite a lot of discussion about that.
I think it should be disabled by default though. Let's include it in Socket.IO v3 :+1:
Hi, I faced a similar problem and I want to share my findings.
The root cause of this seems to be _memory fragmentation_ (node issue 8871). In other words, there's no actual memory leak, but rather the memory gets allocated in such a way that RSS keeps growing while the actual heap memory keeps steady.
This means that, while disabling perMessageDeflate will definitely help, you may hit this same issue in other parts of your application.
There's a workaround for memory fragmentation: preload _jemalloc_ before starting your application (see nodejs/node#21973).
In my case it cut the initial memory footprint by half, and it keeps memory low after that.
Linking related issue: socketio/engine.io#559
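For reference, here's a minimal way to double-check that the preload actually took effect once the process is up (a Linux-only sketch; the library path in the comment is the Debian/Ubuntu libjemalloc2 default and will differ on other systems):

// Sketch: verify jemalloc was preloaded, e.g. when the process was started with something like
//   LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 node server.js
// (the path is an assumption for Debian/Ubuntu; adjust for your system)
const fs = require('fs');

const maps = fs.readFileSync('/proc/self/maps', 'utf8'); // lists libraries mapped into this process
console.log('jemalloc preloaded:', maps.includes('jemalloc'));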
Can someone test if this is still an issue in Node 14.7.0 or newer?
Is this correct?
io = require('socket.io')({ perMessageDeflate: false });
This would be correct:
import http from 'http';
import express from 'express';

const app = express();
const server = http.createServer(app);
require('socket.io').listen(server, { perMessageDeflate: false });
server.listen(3000); // start the HTTP server (port as in the earlier example)
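For completeness: the ws docs quoted above say the extension is enabled by default on the client as well, so a Node-based client may also want to opt out. socket.io-client passes its options down to engine.io-client, so something along these lines should work (a sketch; double-check the option name against your engine.io-client version):

import * as io from 'socket.io-client';

// Sketch: disable permessage-deflate on the client side too (Node client using the ws transport).
const socket = io.connect('ws://localhost:20017', {
    transports: ['websocket'],
    perMessageDeflate: false
});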
Just wanted to say that @pmoleri's idea worked for us. I set perMessageDeflate: false and used jemalloc via LD_PRELOAD, and we're no longer running out of memory. We're on Node 12, FTR.
I can confirm that this has a huge effect on Heroku too, using this buildpack: https://elements.heroku.com/buildpacks/gaffneyc/heroku-buildpack-jemalloc together with perMessageDeflate: false.