Socket.io: 1.0.0-pre "Session ID unknown"

Created on 23 Apr 2014  ·  38 Comments  ·  Source: socketio/socket.io

HTTP Response
{"code":1,"message":"Session ID unknown"}

DEBUG=socket.io:*

socket.io:socket joined room 2d1DHsPAtc1Sspm6AAAE +0ms
socket.io:server incoming connection with id rwLW7JMBp6E4TyexAAAG +3s
socket.io:client connecting to namespace / +3s
socket.io:namespace adding socket to nsp / +3s
socket.io:socket socket connected - writing packet +3s
socket.io:socket joining room rwLW7JMBp6E4TyexAAAG +0ms
socket.io:client writing packet {"type":0,"nsp":"/"} +1ms
socket.io:client writing packet {"type":2,"data":["news",{"hello":"world"}],"nsp":"/"} +0ms
socket.io:socket joined room rwLW7JMBp6E4TyexAAAG +1ms
socket.io:server incoming connection with id 03Izy4YOxaYm5-d-AAAH +1s
socket.io:client connecting to namespace / +1s
socket.io:namespace adding socket to nsp / +1s
socket.io:socket socket connected - writing packet +1s
socket.io:socket joining room 03Izy4YOxaYm5-d-AAAH +0ms
socket.io:client writing packet {"type":0,"nsp":"/"} +1ms
socket.io:client writing packet {"type":2,"data":["news",{"hello":"world"}],"nsp":"/"} +0ms

And the socket is disconnected immediately.

All 38 comments

Thanks for reporting @ceram1. Can you reproduce this problem with 1.0.2? If so, it would be interesting to see the client-side debug logs too and/or get some sample code to reproduce the issue. That would help a lot. :)

Ah.. sorry... The mail was in spam... I'll test it soon

+1
We've noticed this under heavy connection rate on a 2 node cluster (backed by socket.io-redis).

Hi guys,
socket.io 1.0.6
socket.io-redis

response:
{"code":1,"message":"Session ID unknown"}

It happens only with more than 1 instance on Heroku.
With one instance everything works fine (we love it); with 2 or more instances it fails with a 400 error.

Initialisation (the redis clients have to be created before they are passed to the adapter):

sub = redis.createClient(redisPort, redisHost, {return_buffers: true});
pub = redis.createClient(redisPort, redisHost, {return_buffers: true});
client = redis.createClient(redisPort, redisHost, {return_buffers: true});

io = require('socket.io')(compound.server);

io.adapter(redisSocket({
    host: redisHost,
    port: redisPort,
    pubClient: pub,
    subClient: sub
}));

If you want, I can share my screen via Skype and we can go through it. This error is all over the internet.

Anything new here?

I'm actually facing the same issue right now. Everything runs fine with one instance; 400 errors occur when more than one instance is running.

P.S. Mine is running version 1.0.6.

+1

Okay guys, if you want to run a cluster, the easiest way is to move to a library called socketcluster.
It's based on engine.io, but it's close enough to socket.io.
https://github.com/TopCloud/socketcluster

+1, if I put more than one instance behind an ELB, I hit this problem.

+1

Hmm, I can't reproduce this error (I don't have the source for it anymore, as it was a failed test).
Maybe environment settings are a factor.

I know what the problem is. Requests such as polling must always hit the same instance, addressed by port.
But in my case that doesn't work because I'm on Heroku, and Heroku does not support access to an instance by port; its own load balancer always chooses the instance.
The problem is here:
in socket.io/node_modules/engine.io/lib/server.js, the "verify" function checks a "clients" variable that exists separately on each instance. So if a request arrives at a different instance, it returns 400.
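
For context, that sid check behaves roughly like the sketch below. This is a paraphrase of the engine.io 1.x source, not the exact code; the point is that "clients" is plain in-process state:

// Paraphrased sketch of "verify" in engine.io/lib/server.js (1.x):
Server.prototype.verify = function (req) {
    var sid = req._query.sid;
    if (sid) {
        // "clients" is an in-memory object private to this process, so a sid
        // issued by another worker/dyno is simply not found here...
        if (!this.clients.hasOwnProperty(sid)) {
            // ...and the request is rejected with error code 1, which the
            // client receives as {"code":1,"message":"Session ID unknown"}
            return Server.errors.UNKNOWN_SID;
        }
    }
    // ... transport and handshake checks omitted ...
};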

Ah, I didn't cluster in my test code. Maybe that's why I can't reproduce it.

Here is another way to fix this problem. However, for most use cases this solution is not great...

The best solution I have come up with is to give each instance (socket server) a different port and store the address of the instance, along with the number of clients connected to it, in redis. Then have the client ask a separate node app (the socket manager) for a socket server address before connecting or reconnecting. The socket manager then picks the socket.io server with the fewest connections. It's not pretty, but it does work and it can be scaled. A sketch of the idea follows.
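
A minimal sketch of such a manager, assuming each socket server reports its connection count into a redis sorted set (the key name, addresses and port here are made up for illustration):

var express = require('express');
var redis = require('redis').createClient();

var app = express();

// Each socket server updates its own entry on every connect/disconnect,
// using its current connection count as the score, e.g.:
//   redis.zadd('socket:servers', io.engine.clientsCount, 'ws1.example.com:3001');

// The manager hands out the address with the fewest connections.
app.get('/socket-server', function(req, res) {
    redis.zrange('socket:servers', 0, 0, function(err, addrs) {
        if (err || !addrs.length) return res.status(503).send('no socket servers');
        res.send(addrs[0]); // the client then connects with io('http://' + address)
    });
});

app.listen(3100);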

https://github.com/Automattic/socket.io/issues/1636

Just to say, I had the same problem. Then I realized that I am proxying requests through nginx. I added the appropriate nginx config, and all was well after that.

http://socket.io/docs/using-multiple-nodes/

You need ip_hash in the upstream server definition, plus some headers; see the sketch below.
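
Something along these lines should do it (a sketch based on that doc; upstream name, ports and addresses are placeholders):

upstream io_nodes {
    ip_hash;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;

    location / {
        # headers required for the websocket upgrade handshake
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_pass http://io_nodes;
    }
}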

@dux That worked on Heroku? I don't think we can change the nginx configuration.

Oops, I did not know this is a Heroku-exclusive thread. No, this is a vanilla nginx frontend server solution.

If someone finds a solution for Heroku, please post it here, it'll be very helpful :)

@miklacko after lots of trial and error I ended up with this configuration (bear in mind that I haven't had time to test whether messages are shared between browsers, but what I could test is that I have a stable connection between the client and the server with a clustered Express 4.x and socket.io 1.x).

express.js

var sio_redis = require('socket.io-redis'),
    url = require('url'),
    http = require('http'),
       // other imports... 

// Some express set up

// CookieParser should be above session
    app.use(cookieParser());

    var sessionStore = new mongoStore({
        db: db.connection.db,
        collection: config.sessionCollection
    });
    // Express MongoDB session storage
    var sessionMiddleware = session({
        secret: config.sessionSecret,
        store: sessionStore
    });

    app.use(sessionMiddleware);

    // Create Server and Socket.io
    var server = http.createServer(app);
    var sio = require('socket.io')(server);

    if(process.env.REDISCLOUD_URL){
        var redisURL = url.parse(process.env.REDISCLOUD_URL);
        redisURL.password = redisURL.auth.split(':')[1];
        var pub = require('redis').createClient(redisURL.port, redisURL.hostname, {auth_pass: redisURL.password, return_buffers: true});
        var sub = require('redis').createClient(redisURL.port, redisURL.hostname, {auth_pass: redisURL.password, return_buffers: true});
        sio.adapter(sio_redis({pubClient: pub, subClient: sub}));
    } else {
        sio.adapter(sio_redis({ host: 'localhost', port: 6379 }));
    }

    // Authenticate Socket.io using Cookie Auth
    sio.use(function(socket, next) {
        var handshake = socket.handshake;

        if (handshake.headers.cookie) {
            var req = {
                headers: {
                    cookie: handshake.headers.cookie
                }
            };

            cookieParser(config.sessionSecret)(req, null, function(err) {
                if (err) {
                    return next(err);
                }
                var sessionID = req.signedCookies['connect.sid'] || req.cookies['connect.sid'];
                sessionStore.get(sessionID, function (err, session) {
                    if (err) {
                        return next(err);
                    }

                    if (session) {
                        next();
                    } else {
                        return next(new Error('Invalid Session'));
                    }
                });
            });
        } else {
            next(new Error('Missing Cookies'));
        }
    });

    // other express middleware set up

    // Handle connection and disconnection
    sio.sockets.on('connection', function(socket) {
        console.log('user connected');
        socket.on('disconnect', function() {
            console.log('user disconnected');
        });
    });

    return server;

server.js (entire file)

'use strict';
/**
 * Module dependencies.
 */
var init = require('./config/init')(),
    config = require('./config/config'),
    mongoose = require('mongoose'),
    cluster = require('cluster'),
    _ = require('lodash');


// Bootstrap db connection
var db = mongoose.connect(config.db);

// Init the express application
var server = require('./config/express')(db);

// Bootstrap passport config
require('./config/passport')();

// Start server.
var numCPUs = Math.ceil(require('os').cpus().length / 2);
if (cluster.isMaster) {
    var workers = [];

    // Helper function for spawning the worker at index 'i'.
    var spawn = function(i) {
        workers[i] = cluster.fork();
        console.log('worker ' + workers[i].process.pid + ' created');

        // Optional: Restart worker on exit
        // workers[i].on('exit', function(code, signal) {
        //     console.log('respawning worker', i);
        //     spawn(i);
        // });
    };

    // Register once, outside spawn(), so each fork doesn't add a duplicate listener.
    cluster.on('exit', function(worker, code, signal) {
        console.log('worker ' + worker.process.pid + ' died');
    });

    // Spawn workers.
    for (var i = 0; i < numCPUs; i++) {
        spawn(i);
    }
} else {
    server.listen(config.port, function () {
        console.log('server started on ' + config.port + ' port');
    });
}

index.html (the most important part here)

<script src="/socket.io/socket.io.js"></script>
    <script>
        var socket = io.connect({transports: ['websocket']});
    </script>

We need to tell the client to use only the websocket transport with Heroku, because xhr-polling does not work. I don't know why yet.

I hope I can test the communication between browsers through socket/redis today. I will share the outcome of that.

@gonzalodiaz Thank you very much! I'll try it next week and I'll let you know if I find any errors or improvements.

@gonzalodiaz it's working perfectly. The only bad thing is that old browsers don't support websockets: http://caniuse.com/#feat=websockets.

Thank you again.

@miklacko glad to know it worked for you. It worked nicely for me as well :) Let me know if you find a solution for old browsers. It's not an issue for me right now, but it would be nice to have that backup.

This problem happens when the client uses the long-polling transport and the server runs in cluster mode.

Using sticky sessions will fix this problem.
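
If you control the server yourself (so, not on Heroku), a minimal setup with the sticky-session npm module could look like this sketch (the port is a placeholder):

var http = require('http');
var sticky = require('sticky-session');

var server = http.createServer();
var io = require('socket.io')(server);

io.on('connection', function(socket) {
    console.log('user connected on worker ' + process.pid);
});

// sticky.listen() forks the workers and routes each client IP to the same
// worker, so polling requests keep hitting the process that knows their sid.
if (!sticky.listen(server, 3000)) {
    // master process
    server.once('listening', function() {
        console.log('server started on port 3000');
    });
}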

@nauu Heroku doesn't support sticky sessions.

The Heroku routing infrastructure does not support “sticky sessions”. 
Requests from clients will be distributed randomly to all dynos running your application.

https://devcenter.heroku.com/articles/java-faq

@gonzalodiaz can Heroku run a cluster and listen on different ports?

If it can, you can use nginx to reverse-proxy the requests from clients and use nginx's sticky-session module.

@nauu No, Heroku doesn't support listening on different ports. Heroku sets THE port in an env variable and internally distributes the load between the dynos. It's really a pain.

So regarding this bug on Heroku, is there no solution if you want to use something like cluster while keeping all of socket.io's transports, not just websockets?

Thanks!

You can simulate heroku locally this way:

1- npm install foreman -g
2- echo "web: node app.js" > Procfile
3- nf start -x 3000 web=3

This will start 3 instances of your server (app.js) on ports 5001, 5002, and 5003, but you can reach them through the reverse proxy on port 3000.

Launch this and try to establish a websocket connection, and you will see the same issue as on Heroku. node-foreman is a good way to simulate Heroku's environment locally. The important thing to keep in mind is that, although you can access each local instance directly, you only have access to the load balancer on Heroku (port 3000 in the example above).

What is a good solution?

@ceram1 Did you solve the problem yet?

+1

@gonzalodiaz Thank you for mentioning transports: ['websocket']

For those who are having this issue behind an Amazon ELB: make sure you enable application-controlled session stickiness (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html). This solved the problem for me.
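
If you prefer the CLI over the console, something like this should enable it on a classic ELB (a sketch; the balancer name, policy name and port are placeholders, and "io" is the handshake cookie socket.io sets by default):

aws elb create-app-cookie-stickiness-policy \
    --load-balancer-name my-elb \
    --policy-name io-stickiness \
    --cookie-name io

aws elb set-load-balancer-policies-of-listener \
    --load-balancer-name my-elb \
    --load-balancer-port 80 \
    --policy-names io-stickiness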

Guys, read this article: https://devcenter.heroku.com/articles/node-websockets
At the end you will find "Apps using Socket.io should enable session affinity".
Then proceed to https://devcenter.heroku.com/articles/session-affinity
Then run this with the Heroku Toolbelt CLI: heroku features:enable http-session-affinity --app APP
And I'm happy with working socket.io on scaled dynos.

For me, it happened with nginx, SSL and HTTP/2 while the transport was polling, so the working config is:

const ioSocket = io('', {
    // Send auth token on connection; you will need to DI the Auth service above
    // 'query': 'token=' + Auth.getToken()
    path: '/socket.io',
    transports: ['websocket'],
    secure: true,
});

@p3x-robot this method fixed it for me.
