Ws: About the server performance

Created on 11 Aug 2019 · 10 Comments · Source: websockets/ws

  • [x] I've searched for any related issues and avoided creating a duplicate
    issue.

Description

Hi @lpinca,

I was looking at the different options out there to set up a WebSocket server on Node.js, and I found this benchmark (which looks quite outdated, though):

[image: benchmark chart comparing several Node.js WebSocket server implementations]

So I wanted to know if you have some (updated) numbers to help me choose a server.

Thanks

All 10 comments

My suggestion is to run your own benchmarks and choose accordingly. For very small messages of just a few bytes, ws is indeed worse than some other existing implementations. That difference shrinks as the message size grows.
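
As a rough starting point, here is a minimal round-trip benchmark sketch that uses only the public ws client API against the echo server shown later in this thread. The 64-byte payload and 100000 iterations simply mirror the numbers used below and are otherwise arbitrary; treat this as a sketch to adapt, not a definitive harness.

'use strict';

const WebSocket = require('ws');

const ITERATIONS = 100000;         // arbitrary; match your expected workload
const payload = Buffer.alloc(64);  // 64-byte binary message, as in the thread

const ws = new WebSocket('ws://localhost:8080', {
  perMessageDeflate: false
});

ws.on('open', function () {
  let count = 0;

  ws.on('message', function () {
    // The server echoes every message, so each 'message' event is one round trip.
    if (++count === ITERATIONS) {
      console.timeEnd(`${ITERATIONS} * 64`);
      ws.close();
    } else {
      ws.send(payload);
    }
  });

  console.time(`${ITERATIONS} * 64`);
  ws.send(payload);
});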

I will probably use uWS for the server side then, as I have a lot of small messages to pass, and ws for the client part. Hopefully everything works smoothly, since they both pass the same test suite.

Thanks!

For what it's worth, the bottleneck is not ws but Node.js's net.Socket. Also, if you use a UNIX domain socket instead of a TCP socket on the Node.js side (this is viable if you use a reverse proxy like NGINX), it is almost twice as fast.

Oh that's worth investigating, thanks a lot!

For what it's worth, the bottleneck is not ws but Node.js's net.Socket. Also, if you use a UNIX domain socket instead of a TCP socket on the Node.js side (this is viable if you use a reverse proxy like NGINX), it is almost twice as fast.

Almost a year later, is this still true? Just asking because I'm curious whether it would make sense to set up an HTTP server on a domain socket and use some sort of domain socket -> IP proxy.


server.js

'use strict';

const WebSocket = require('ws');
const { createServer } = require('http');

const server = createServer();
const wss = new WebSocket.Server({
  perMessageDeflate: false,
  clientTracking: false,
  server
});

// Echo every message back and stop the HTTP server when the client disconnects.
wss.on('connection', function (ws) {
  ws.on('message', function (message) {
    ws.send(message);
  });
  ws.on('close', function () {
    server.close();
  });
});

let options = { port: 8080 };
let message = `server listening on port ${options.port}`;

// With --ipc, listen on a UNIX domain socket instead of a TCP port.
if (process.argv[2] === '--ipc') {
  message = 'server listening on /tmp/ws.sock';
  options = { path: '/tmp/ws.sock' };
}

server.listen(options, function () {
  console.log(message);
});


client.js

'use strict';

const Sender = require('ws/lib/sender');
const { get } = require('http');
const { randomBytes } = require('crypto');

// Build one masked binary frame with a 64-byte payload and reuse it for every send.
const bufs = Sender.frame(randomBytes(64), {
  fin: true,
  rsv1: false,
  opcode: 2,
  mask: true,
  readOnly: false
});
const frame = Buffer.concat(bufs);

const options = {
  headers: {
    Connection: 'Upgrade',
    Upgrade: 'websocket',
    'Sec-WebSocket-Key': randomBytes(16).toString('base64'),
    'Sec-WebSocket-Version': 13
  }
};

if (process.argv[2] === '--ipc') {
  options.socketPath = '/tmp/ws.sock';
} else {
  options.port = 8080;
}

const request = get(options);

request.on('upgrade', function (response, socket) {
  let count = 0;
  let bytesRead = 0;

  socket.on('data', function (chunk) {
    bytesRead += chunk.length;

    // The echoed frame is 66 bytes: a 2-byte header plus the 64-byte payload
    // (the server does not mask its frames).
    if (bytesRead === 66) {
      bytesRead = 0;
      count++;

      if (count === 100000) {
        console.timeEnd('100000 * 64');
        socket.end();
      } else {
        socket.write(frame);
      }
    }
  });

  console.time('100000 * 64');
  socket.write(frame);
});

$ node client.js
100000 * 64: 3.757s
$ node client.js --ipc
100000 * 64: 2.055s
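
For reference, those runs work out to roughly 26,600 echo round trips of 64-byte messages per second over TCP versus roughly 48,700 per second over the UNIX domain socket.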

Is there a similar gain if what you have behind nginx's proxy is the ws server?

I did not test it, but yes, I think it's possible.

Wouldn't nginx simply front the incoming client TCP connection with a TCP socket and then forward messages over a UNIX socket to the back-end server?

Client -----tcp socket---------> nginx reverse proxy --------unix socket--------> back end server

How would that be faster than this:
Client --------tcp socket------> back end

Wouldn't nginx simply front the incoming client TCP connection with a TCP socket and then forward messages over a UNIX socket to the back-end server?

Client -----tcp socket---------> nginx reverse proxy --------unix socket--------> back end server

How would that be faster than this:
Client --------tcp socket------> back end

It can't; the proxy will always add a latency penalty. My question was about a UNIX socket vs. a TCP socket between the nginx proxy and the server. That is, there are other reasons out there that make the proxy a convenience.
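
For anyone who wants to experiment with the "domain socket -> IP proxy" idea without setting up NGINX, a minimal raw TCP forwarder can stand in for the proxy. This is only a hypothetical sketch: it assumes the echo server from above is started with --ipc so that it listens on /tmp/ws.sock, and it simply pipes bytes in both directions, which is enough for WebSocket traffic since the upgrade and all subsequent frames are just a byte stream.

'use strict';

const net = require('net');

// Accept TCP connections on port 8080 and forward the raw bytes to the
// ws server listening on the UNIX domain socket /tmp/ws.sock.
const proxy = net.createServer(function (client) {
  const upstream = net.connect({ path: '/tmp/ws.sock' });

  client.pipe(upstream);
  upstream.pipe(client);

  client.on('error', () => upstream.destroy());
  upstream.on('error', () => client.destroy());
});

proxy.listen(8080, function () {
  console.log('proxy listening on port 8080, forwarding to /tmp/ws.sock');
});

A real deployment would of course use nginx or HAProxy instead; the sketch only shows where the extra TCP hop sits relative to the UNIX socket.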
