It appears that when I hit 99 open kernels, I repeatedly get the error:
[E 14:01:28.963 NotebookApp] Exception in callback (<socket.socket fd=6, family=AddressFamily.AF_INET, type=2049, proto=6, laddr=('127.0.0.1', 9705)>, <function wrap.<locals>.null_wrapper at 0x7f6b41c5d268>)
Traceback (most recent call last):
File "/ebio/abt3_projects/software/miniconda3/lib/python3.6/site-packages/tornado/ioloop.py", line 888, in start
handler_func(fd_obj, events)
File "/ebio/abt3_projects/software/miniconda3/lib/python3.6/site-packages/tornado/stack_context.py", line 277, in null_wrapper
File "/ebio/abt3_projects/software/miniconda3/lib/python3.6/site-packages/tornado/netutil.py", line 264, in accept_handler
File "/ebio/abt3_projects/software/miniconda3/lib/python3.6/socket.py", line 205, in accept
This error was generated roughly every 0.5 s, so the errors piled up quickly.
I'm using jupyter 1.0.0 py36_0 conda-forge
$ conda info
Current conda install:
platform : linux-64
conda version : 4.3.29
conda is private : False
conda-env version : 4.3.29
conda-build version : not installed
python version : 3.6.3.final.0
requests version : 2.13.0
root environment : /ebio/abt3_projects/software/miniconda3 (writable)
default environment : /ebio/abt3_projects/software/miniconda3
envs directories : /ebio/abt3_projects/software/miniconda3/envs
/ebio/abt3/nyoungblut/.conda/envs
package cache : /ebio/abt3_projects/software/miniconda3/pkgs
/ebio/abt3/nyoungblut/.conda/pkgs
channel URLs : https://conda.anaconda.org/bioconda/linux-64
https://conda.anaconda.org/bioconda/noarch
https://conda.anaconda.org/conda-forge/linux-64
https://conda.anaconda.org/conda-forge/noarch
https://repo.continuum.io/pkgs/main/linux-64
https://repo.continuum.io/pkgs/main/noarch
https://repo.continuum.io/pkgs/free/linux-64
https://repo.continuum.io/pkgs/free/noarch
https://repo.continuum.io/pkgs/r/linux-64
https://repo.continuum.io/pkgs/r/noarch
https://repo.continuum.io/pkgs/pro/linux-64
https://repo.continuum.io/pkgs/pro/noarch
https://conda.anaconda.org/leylabmpi/linux-64
https://conda.anaconda.org/leylabmpi/noarch
https://conda.anaconda.org/r/linux-64
https://conda.anaconda.org/r/noarch
https://conda.anaconda.org/qiime2/linux-64
https://conda.anaconda.org/qiime2/noarch
config file : /ebio/abt3/nyoungblut/.condarc
netrc file : None
offline mode : False
user-agent : conda/4.3.29 requests/2.13.0 CPython/3.6.3 Linux/4.4.67 debian/stretch/sid glibc/2.23
UID:GID : 6354:350
Probably related: https://github.com/zeromq/pyzmq/issues/1170
I am getting this issue with as few as 8-10 notebooks open. All very small.
Also getting this error with only a few notebooks open.
Same issue when opening 10+ notebooks: I got zmq.error.ZMQError: Too many open files and OSError: [Errno 24] Too many open files on macOS.
Same issue here. I open 2 notebooks and after some arbitrary amount of time, this error always shows up:
OSError: [Errno 24] Too many open files
Exception in callback BaseAsyncIOLoop._handle_events(6, 1)
handle: <Handle BaseAsyncIOLoop._handle_events(6, 1)>
Traceback (most recent call last):
File "/home/henrique/miniconda3/envs/my_jupyterlab/lib/python3.7/asyncio/events.py", line 88, in _run
File "/home/henrique/.local/lib/python3.7/site-packages/tornado/platform/asyncio.py", line 122, in _handle_events
File "/home/henrique/.local/lib/python3.7/site-packages/tornado/stack_context.py", line 300, in null_wrapper
File "/home/henrique/.local/lib/python3.7/site-packages/tornado/netutil.py", line 249, in accept_handler
File "/home/henrique/miniconda3/envs/my_jupyterlab/lib/python3.7/socket.py", line 212, in accept
➜ ~ conda info
active environment : my_jupyterlab
active env location : /home/henrique/miniconda3/envs/my_jupyterlab
shell level : 1
user config file : /home/henrique/.condarc
populated config files : /home/henrique/.condarc
conda version : 4.6.14
conda-build version : not installed
python version : 3.7.3.final.0
base environment : /home/henrique/miniconda3 (writable)
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/free/linux-64
https://repo.anaconda.com/pkgs/free/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /home/henrique/miniconda3/pkgs
/home/henrique/.conda/pkgs
envs directories : /home/henrique/miniconda3/envs
/home/henrique/.conda/envs
platform : linux-64
user-agent : conda/4.6.14 requests/2.21.0 CPython/3.7.3 Linux/4.15.0-54-generic ubuntu/16.04.6 glibc/2.23
UID:GID : 1000:1000
netrc file : None
offline mode : False
After running ulimit -n 4096, it has not crashed again so far. The limit was set to 1024 by default.
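For reference, the same limit can be checked and raised from inside a Python process with the stdlib resource module (Unix only). This is just a sketch; 4096 is the value used above, not a recommendation:

```python
import resource

# Current soft/hard limits on open file descriptors (RLIMIT_NOFILE)
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft}, hard={hard}")

# Raise the soft limit toward 4096. An unprivileged process may only
# raise it up to the hard limit, so clamp when the hard limit is finite.
if soft < 4096:
    target = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
```

Note that this only affects the current process and its children, so it would have to run inside (or before launching) the notebook server, whereas ulimit -n in the launching shell covers everything started from that shell.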
FYI: I think we determined that when this happens in JupyterLab (maybe the same problem?), the cause is Jupyter opening every font file on the system. Maybe someone over here knows what to do with that info?
Hello,
We are facing the exact same error on Ubuntu, running JupyterLab 1.0.4 with JupyterHub (both 0.9.6 and 1.0.0); both give the same errors after a few notebooks are started.
OSError: [Errno 24] Too many open files
Exception in callback BaseAsyncIOLoop._handle_events(6, 1)
handle: <Handle BaseAsyncIOLoop._handle_events(6, 1)>
Traceback (most recent call last):
File "/usr/lib/python3.6/asyncio/events.py", line 145, in _run
self._callback(*self._args)
File "/usr/local/lib/python3.6/dist-packages/tornado/platform/asyncio.py", line 138, in _handle_events
handler_func(fileobj, events)
File "/usr/local/lib/python3.6/dist-packages/tornado/netutil.py", line 260, in accept_handler
File "/usr/lib/python3.6/socket.py", line 205, in accept
OSError: [Errno 24] Too many open files
We already tried raising the ulimit from 40000 to 500000, without any luck.
ulimit -n
500000
If you are still seeing this in 1.0.4, can you please use lsof to try to narrow down where the issue is coming from? For example, see https://github.com/jupyterlab/jupyterlab/issues/6727#issuecomment-515706397. Also, can you try in a fresh, clean environment with no extensions and only minimal dependencies (e.g., conda create -n jlab3748 -c conda-forge jupyterlab=1.0.4)?
Also, the JupyterLab issue for this discussion is at https://github.com/jupyterlab/jupyterlab/issues/6727. However, if you are seeing this in the classic notebook as well (not JLab), then it's probably not a JupyterLab issue.
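To make that narrowing-down easier, here is a small stdlib-only sketch of the kind of check lsof gives you. count_open_fds is a hypothetical helper, not part of Jupyter; run it repeatedly against the server's PID while opening notebooks and see whether the count keeps growing instead of plateauing:

```python
import os
import subprocess

def count_open_fds(pid: int) -> int:
    """Count the file descriptors held by a process.

    Uses /proc/<pid>/fd where available (Linux); otherwise falls back
    to parsing lsof output, as suggested above.
    """
    fd_dir = f"/proc/{pid}/fd"
    if os.path.isdir(fd_dir):
        return len(os.listdir(fd_dir))
    out = subprocess.run(
        ["lsof", "-p", str(pid)],
        capture_output=True, text=True, check=True,
    ).stdout
    return max(len(out.splitlines()) - 1, 0)  # first lsof line is a header

# Example: sample our own process
print(count_open_fds(os.getpid()))
```

The full lsof listing (rather than just the count) is what tells you *which* descriptors are accumulating — sockets, font files, pipes, etc.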
We're having the same issue here.
jupyter --version
jupyter core : 4.5.0
jupyter-notebook : 6.0.0
qtconsole : not installed
ipython : 7.7.0
ipykernel : 5.1.2
jupyter client : 5.3.1
jupyter lab : 1.0.9
nbconvert : 5.6.0
ipywidgets : not installed
nbformat : 4.4.0
traitlets : 4.3.2
I am writing a Python client that executes Jupyter notebooks. I notice that after each kernel start and stop, sockets (file descriptors) are leaked, which eventually leads to a "too many open files" error in the Jupyter server.
Starting jupyter server
jupyter server --log-level=DEBUG
Starting a python kernel
curl -X POST "http://localhost:8888/api/kernels"
Then stopping the kernel
curl -X DELETE "http://localhost:8888/api/kernels/<kernel_id>"
Counting open files with lsof: lsof -p <pid> | wc -l
After each kernel start/stop cycle, the output of lsof -p <pid> | wc -l increases by 8.
python3.7 59227 mac-user 46u unix 0x51b57e97e7cf0d5d 0t0 ->0x51b57e97e7cf15f5
python3.7 59227 mac-user 47u unix 0x51b57e97e7cf15f5 0t0 ->0x51b57e97e7cf0d5d
python3.7 59227 mac-user 48u unix 0x51b57e97e7cf2595 0t0 ->0x51b57e97e7cf2725
python3.7 59227 mac-user 49u unix 0x51b57e97e7cf2725 0t0 ->0x51b57e97e7cf2595
python3.7 59227 mac-user 50u KQUEUE count=0, state=0xa
python3.7 59227 mac-user 51u unix 0x51b57e97e7cf0eed 0t0 ->0x51b57e97e7cf2e2d
python3.7 59227 mac-user 52u unix 0x51b57e97e7cf2e2d 0t0 ->0x51b57e97e7cf0eed
python3.7 59227 mac-user 53u KQUEUE count=0, state=0xa
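For anyone wanting to script that reproduction, a minimal stdlib-only sketch of the same two curl calls follows. The base URL is an assumption (a local server with auth disabled; if yours uses a token, append ?token=... to each URL), and kernel_url is a hypothetical helper:

```python
import json
import urllib.request
from typing import Optional

BASE = "http://localhost:8888"  # assumption: adjust host/port for your server

def kernel_url(base: str, kernel_id: Optional[str] = None) -> str:
    """Build the /api/kernels URL, optionally for one specific kernel."""
    url = f"{base}/api/kernels"
    return f"{url}/{kernel_id}" if kernel_id else url

def start_kernel(base: str = BASE) -> str:
    """POST /api/kernels and return the new kernel's id."""
    req = urllib.request.Request(
        kernel_url(base), data=b"{}", method="POST",
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]

def stop_kernel(kernel_id: str, base: str = BASE) -> None:
    """DELETE /api/kernels/<kernel_id> to shut the kernel down."""
    req = urllib.request.Request(kernel_url(base, kernel_id), method="DELETE")
    urllib.request.urlopen(req).close()
```

Looping start_kernel followed by stop_kernel while watching lsof -p <pid> | wc -l on the server process should show the same +8 growth per cycle if the descriptors are leaking.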
(jupyter_server) f01898783380$ conda list
# packages in environment at /Users/npalania/opt/miniconda2/envs/jupyter_server:
#
# Name Version Build Channel
appnope 0.1.0 pypi_0 pypi
attrs 19.3.0 pypi_0 pypi
backcall 0.1.0 pypi_0 pypi
bleach 3.1.5 pypi_0 pypi
ca-certificates 2020.1.1 0
certifi 2020.4.5.1 py37_0
decorator 4.4.2 pypi_0 pypi
defusedxml 0.6.0 pypi_0 pypi
entrypoints 0.3 pypi_0 pypi
importlib-metadata 1.6.0 pypi_0 pypi
ipykernel 5.3.0 pypi_0 pypi
ipython 7.14.0 pypi_0 pypi
ipython-genutils 0.2.0 pypi_0 pypi
jedi 0.17.0 pypi_0 pypi
jinja2 2.11.2 pypi_0 pypi
jsonschema 3.2.0 pypi_0 pypi
jupyter-client 6.1.3 pypi_0 pypi
jupyter-core 4.6.3 pypi_0 pypi
jupyter-server 0.3.0 dev_0 <develop>
libcxx 10.0.0 1
libedit 3.1.20181209 hb402a30_0
libffi 3.3 h0a44026_1
markupsafe 1.1.1 pypi_0 pypi
mistune 0.8.4 pypi_0 pypi
nbconvert 5.6.1 pypi_0 pypi
nbformat 5.0.6 pypi_0 pypi
ncurses 6.2 h0a44026_1
notebook 6.0.3 pypi_0 pypi
openssl 1.1.1g h1de35cc_0
packaging 20.4 pypi_0 pypi
pandocfilters 1.4.2 pypi_0 pypi
parso 0.7.0 pypi_0 pypi
pexpect 4.8.0 pypi_0 pypi
pickleshare 0.7.5 pypi_0 pypi
pip 20.1.1 pypi_0 pypi
prometheus-client 0.7.1 pypi_0 pypi
prompt-toolkit 3.0.5 pypi_0 pypi
ptyprocess 0.6.0 pypi_0 pypi
pygments 2.6.1 pypi_0 pypi
pyparsing 2.4.7 pypi_0 pypi
pyrsistent 0.16.0 pypi_0 pypi
python 3.7.7 hf48f09d_4
python-dateutil 2.8.1 pypi_0 pypi
pyzmq 19.0.1 pypi_0 pypi
readline 8.0 h1de35cc_0
send2trash 1.5.0 pypi_0 pypi
setuptools 46.4.0 pypi_0 pypi
six 1.14.0 pypi_0 pypi
sqlite 3.31.1 h5c1f38d_1
terminado 0.8.3 pypi_0 pypi
testpath 0.4.4 pypi_0 pypi
tk 8.6.8 ha441bb4_0
tornado 6.0.4 pypi_0 pypi
traitlets 4.3.3 pypi_0 pypi
wcwidth 0.1.9 pypi_0 pypi
webencodings 0.5.1 pypi_0 pypi
wheel 0.34.2 py37_0
xz 5.2.5 h1de35cc_0
zipp 3.1.0 pypi_0 pypi
zlib 1.2.11 h1de35cc_3
Referring to this issue and this issue: upgrading pyzmq to 19.0.0 and notebook to 6.0.1 didn't fix the problem.