Right now we open one connection per service. As we implement more and more services, we need more and more connections. Most services are stateless and used only occasionally, so they just sit there occupying a client connection, and browsers limit the number of concurrent connections.
We should explore whether it is possible to multiplex all services over a single WebSocket connection.
Things to consider:
What you are asking for is WebSocket multiplexing. As things stand, WebSockets run over TCP, and TCP runs over IP. Even having multiple WebSockets will not make them truly parallel, because each TCP-bound socket is already a multiplexed channel over the IP layer. All this to say that multiplexing will add some processing, but it should be very minimal.
TCP: http://www.inetdaemon.com/tutorials/internet/tcp/multiplexing.shtml
Websocket multiplexing:
https://github.com/sockjs/websocket-multiplex
(+testing: https://chawlasumit.wordpress.com/category/websocket-multiplexing/)
The browser maximum connections problem:
https://stackoverflow.com/questions/985431/max-parallel-http-connections-in-a-browser
Although I wonder if we will really encounter this problem? Shouldn't we have problems already if there was such a limitation?
You could address this issue by having a single socket that transfers multiple message types, with code on either side that routes each message to the right place. The reason we have multiple sockets (mostly all speaking JSON-RPC) is just that each need has been implemented as a separate socket rather than as a separate message type transferred over a single connection. The same amount of data would be transferred either way, just over fewer sockets, with everything that goes with that (file descriptors etc. on the server side). This will be especially needed if a multi-user model is planned; while it is just one user and their connections to the server, it will be OK. Just my thoughts anyhow; I have no idea how to implement this in the code, though.
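To make the idea concrete, here is a minimal sketch of routing several message types over one socket. All names (`MultiplexedMessage`, `MessageRouter`) are illustrative, not part of any existing API:

```typescript
// Envelope carried over the single shared socket.
interface MultiplexedMessage {
  type: string;      // identifies the target service, e.g. "filesystem"
  payload: unknown;  // the service-specific body (e.g. a JSON-RPC message)
}

class MessageRouter {
  private handlers = new Map<string, (payload: unknown) => void>();

  // Each service registers a handler for its own message type.
  register(type: string, handler: (payload: unknown) => void): void {
    this.handlers.set(type, handler);
  }

  // Called with every frame arriving on the shared socket.
  dispatch(raw: string): boolean {
    const msg = JSON.parse(raw) as MultiplexedMessage;
    const handler = this.handlers.get(msg.type);
    if (!handler) {
      return false; // unknown service: drop or log
    }
    handler(msg.payload);
    return true;
  }

  // Wrap an outgoing payload so the other side can route it.
  frame(type: string, payload: unknown): string {
    return JSON.stringify({ type, payload });
  }
}
```

Both sides would hold one `MessageRouter` each and feed it from the socket's message event; the services themselves would not need to change their payload format.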
websocket-multiplex already specifies a small protocol that implements what you just described: a specific message type that routes messages between topics (https://github.com/sockjs/websocket-multiplex#protocol)
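As I read that README, each frame is a comma-separated `<type>,<topic>,<payload>` string, where the type is one of `sub`, `msg` or `uns`. A rough sketch of that framing (treat the details as my reading of the README, not a verified spec):

```typescript
// Frame types used by the websocket-multiplex protocol (per its README).
type FrameType = 'sub' | 'msg' | 'uns';

function encodeFrame(type: FrameType, topic: string, payload = ''): string {
  // "sub" and "uns" frames carry no payload.
  return payload ? `${type},${topic},${payload}` : `${type},${topic}`;
}

function decodeFrame(frame: string): { type: FrameType; topic: string; payload: string } {
  const first = frame.indexOf(',');
  const second = frame.indexOf(',', first + 1);
  const type = frame.slice(0, first) as FrameType;
  if (second === -1) {
    return { type, topic: frame.slice(first + 1), payload: '' };
  }
  return {
    type,
    topic: frame.slice(first + 1, second),
    payload: frame.slice(second + 1), // payload may itself contain commas
  };
}
```

The nice property is that only the first two fields are parsed; the payload passes through opaquely, so it can carry JSON-RPC unchanged.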
That looks pretty good. Pub/sub and channels seem like the right direction.
That project is built for SockJS. We use native WebSockets.
Maybe with something like this : https://www.npmjs.com/package/websocket-multiplex-client
> Although I wonder if we will really encounter this problem? Shouldn't we have problems already if there was such limitations?
Yes, but the limit could bite us eventually, as we are adding new socket connections on a weekly basis. We already have over 15 language servers, so if you open enough tabs you could reach the limit. I agree it is not imminent, but we should keep an eye on it.
I was just wondering whether there really is a limitation, given the current WebSocket count we have in Theia. But multiplexing the connections would indeed be the way to go to prevent any problem.
From your link: _(it seems to use SockJS)_

```javascript
var sockjsClient = new sockjs_client("http://127.0.0.1:8088/multiplex",
                                     null,
                                     { rtt: 201 });
```
Maybe we will have to implement our own little multiplex/routing mechanism if you want to avoid SockJS...
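Such a mechanism could be quite small. A sketch of what our own routing layer over a native WebSocket might look like; `Transport` stands in for the socket so the logic is independent of the browser/`ws` API, and every name here is hypothetical:

```typescript
// Minimal transport abstraction: a native WebSocket satisfies this
// shape once wrapped (send + message listener registration).
interface Transport {
  send(data: string): void;
  onMessage(listener: (data: string) => void): void;
}

class Multiplexer {
  private channels = new Map<string, (payload: string) => void>();

  constructor(private transport: Transport) {
    // Route every incoming frame to the channel named in its envelope.
    transport.onMessage(data => {
      const { channel, payload } = JSON.parse(data) as { channel: string; payload: string };
      this.channels.get(channel)?.(payload);
    });
  }

  // Open a logical channel; returns a send function bound to it.
  open(channel: string, onMessage: (payload: string) => void): (payload: string) => void {
    this.channels.set(channel, onMessage);
    return payload => this.transport.send(JSON.stringify({ channel, payload }));
  }
}
```

Each service would call `open(...)` with its own channel id instead of creating its own WebSocket; from the service's point of view it still just has a send function and a message callback.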
I think it supports websocket/ws also
https://github.com/manuelstofer/websocket-multiplexer/tree/master/examples/ws
It would also be nice to look into using a single ws server per server process, instead of pretending that we have multiple, as we do now: https://github.com/theia-ide/theia/blob/e41a77e57de7f5948e3c61cfe5f5f724b6e2fac1/packages/core/src/node/messaging/connection.ts#L49
Regarding multiplexing: we could consider making it work at the JSON-RPC level.
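One way that could look (purely illustrative, not an existing Theia API): keep standard JSON-RPC envelopes and route by a service prefix in the method name, e.g. `filesystem/readFile`, so no extra framing layer is needed at all:

```typescript
// Standard JSON-RPC 2.0 request shape, with the convention that
// method names are "<service>/<method>".
interface JsonRpcRequest {
  jsonrpc: '2.0';
  id?: number;
  method: string;
  params?: unknown[];
}

function routeByService(
  request: JsonRpcRequest,
  services: Map<string, (method: string, params?: unknown[]) => unknown>
): unknown {
  const slash = request.method.indexOf('/');
  const service = request.method.slice(0, slash);
  const method = request.method.slice(slash + 1);
  const target = services.get(service);
  if (!target) {
    throw new Error(`Unknown service: ${service}`);
  }
  return target(method, request.params);
}
```

The trade-off versus a framing layer underneath: this keeps everything valid JSON-RPC on the wire, but it only works for services that actually speak JSON-RPC.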
You can see the limit being reached by opening 18 tabs in Chrome. With secure WebSockets it seems to happen earlier (I don't know why). Also, on initial load all WebSocket requests are queued up and resolved one after the other, which delays startup unnecessarily.