hi everyone.
There were many connections in the FIN_WAIT2 state after my application had been running for about 3 days. When the number of FIN_WAIT2 connections reaches some limit (I don't know what the boundary is), other users find it very difficult to connect to the socket server.
netstat -anl | grep 10001 | awk '/^tcp/ {t[$NF]++}END{for(state in t){print state, t[state]} }'
FIN_WAIT2 804
LISTEN 1
CLOSE_WAIT 1
TIME_WAIT 31
ESTABLISHED 192
LAST_ACK 1
FIN_WAIT1 4
My environment is Ubuntu 12.04, Node.js v0.10.22, and socket.io v0.9.16 on an AWS EC2 small instance. I have never changed the Ubuntu TCP/IP configuration.
Is this a socket.io problem, or should I tune the TCP/IP configuration?
Thanks.
I have solved this problem. It is an OS security limit.
Ubuntu limits the number of files each user may have open concurrently, and every socket needs one file descriptor to establish a connection. Once the limit (1024 by default) is reached, no new socket connections can be established, so you need to raise the open file limit.
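To confirm the descriptor limit is really what is blocking new connections, a quick check along these lines helps (the pgrep pattern "node" is an assumption; adjust it to match your process):

ulimit -n                             # soft limit for the current shell/user
pid=$(pgrep -f node | head -n 1)      # assumes the process name matches "node"
ls /proc/$pid/fd | wc -l              # descriptors the process actually holds
grep "open files" /proc/$pid/limits   # limits as the kernel sees them for that process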
sudo vim /etc/security/limits.conf
* soft nofile 1024
* hard nofile 2048
root soft nofile 4096
root hard nofile 8192
user1 soft nofile 2048
user1 hard nofile 2048
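Note that limits.conf is applied by PAM when a session is opened, so the new values only take effect after you log out and back in. You can verify what the current shell actually got with:

ulimit -Sn    # soft limit
ulimit -Hn    # hard limit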
But my application is started at server boot, before the limits.conf settings are applied (they are loaded by PAM when a user logs in, not at boot).
So I raise the open file limit manually at server boot instead.
sudo vim /etc/rc.local
ulimit -n 8192
After that the server can handle more than 1024 concurrently open files.
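For reference, a minimal /etc/rc.local along those lines might look like the sketch below; the node path and script are placeholders, not taken from this issue. The point is only that ulimit runs before the app is launched, so the process inherits the raised limit.

#!/bin/sh -e
# rc.local runs as root at boot; the raised limit applies to every
# process started from this script.
ulimit -n 8192
# Placeholder start command; replace with however you launch your app.
/usr/bin/node /home/ubuntu/myapp/server.js &
exit 0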
That's a good insight @kejyun, thanks!