Notebook: iopub_data_rate_limit

Created on 2 Nov 2017 · 10 comments · Source: jupyter/notebook

I ran into this error:

IOPub data rate exceeded.
The notebook server will temporarily stop sending output
to the client in order to avoid crashing it.
To change this limit, set the config variable
`--NotebookApp.iopub_data_rate_limit`.

While trying to run TfidfVectorizer like so:

import sklearn.feature_extraction.text

lemmas = TIP_with_rats['s_lemmas_IP'].apply(lambda x: ' '.join(x))  # TIP_with_rats is my DataFrame; join each lemma list into one string
vect = sklearn.feature_extraction.text.TfidfVectorizer()
features = vect.fit_transform(lemmas)   # sparse TF-IDF matrix
feature_names = vect.get_feature_names()
dense = features.todense()              # dense matrix (can be very large)
denselist = dense.tolist()
print(denselist)                        # printing the full matrix is what trips the rate limit

After a minute of googling, I found that the solution seems to be to start Jupyter with jupyter notebook --NotebookApp.iopub_data_rate_limit=10000000

However, when I do that using Anaconda Prompt, it opens a different Jupyter window with a different list of files (it looks like system files, stuff with a .exe extension). My existing Jupyter Notebook and JupyterLab don't register the increased data rate limit, and Anaconda Prompt returns permission errors if I try to open any of the files, add anything new, or upload existing Jupyter notebooks.

I can't seem to find any documentation on this - any ideas on how to fix?

All 10 comments

I'm not familiar with Anaconda Prompt...

Have you tried jupyter notebook --NotebookApp.iopub_data_rate_limit=10000000 in the regular console/terminal?

  • When you launch Jupyter from the command prompt, it shows you the folder you were in before you launched it. You can move around folders in the command prompt with the cd command (change directory).
  • You can also set the same option in a config file, so you don't need to pass it on the command line each time (a minimal sketch of the config file follows this list).
  • Lastly, the rate limit is there for a reason - if you're hitting it, you're probably trying to print something very large, which might make your browser slow down or crash. If denselist is a very long list, you might want to look at the first hundred elements, instead of the whole thing: print(denselist[:100])
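
For reference, a minimal sketch of the config-file route (assuming the default config location, ~/.jupyter/jupyter_notebook_config.py; running jupyter notebook --generate-config creates the file if it doesn't exist yet):

c.NotebookApp.iopub_data_rate_limit = 10000000  # bytes per second; the default is 1000000 (1 MB/s)

Restart the notebook server after editing the file so the new limit takes effect.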

@gnestor: 'Anaconda Prompt' is the way many Windows users launch Jupyter from a command line - it's a standard Windows command prompt with some environment modifications to ensure Anaconda is on PATH. I'm guessing @LizMGagne usually launches Jupyter from the Anaconda GUI launcher.

@gnestor: @takluyver is correct in that I typically launch Jupyter from the Anaconda GUI. I experience the same issue with the regular terminal.

@takluyver: I've tried setting this up in the config file but it doesn't seem to stick. I'm also trying to print something quite large, though in theory I could print in chunks even though it would be less efficient. Even specifying [:100] returns the error - I have to go down to about [:25] to print.

The default data rate limit is 1MB/s, so if even 100 items is too much, that suggests each item is over 10KB in size. Does that sound right?
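
A rough way to check (assuming the denselist variable from the snippet in the original post; the limit applies to the output text streamed to the browser):

chunk = repr(denselist[:100])                # the text that print(denselist[:100]) would emit
print(len(chunk) / 1e6, "MB for 100 rows")   # much more than ~1 MB will trip the default 1 MB/s limit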

@takluyver that's possible.

I run the programs using Anaconda Prompt; I don't use jupyter notebook directly.

Is there any way to set the data rate limit there?

@10mes44 What command do you use to run the jupyter notebook?

I use the Anaconda Navigator GUI to launch the notebook. Is there any way to modify the config with that?

Not sure this is still of interest, but what @LizMcQuillan described as "opening a different window" means that the file tree will be different when opening Jupyter notebook from the Anaconda prompt. You will normally end up somewhere in the Windows system files if you do not navigate to the "user" directory (or more specifically: your Jupyter notebook directory) in the command line first.

Windows users can use "cd" and "dir" to navigate to different directories.

For me, the correct way to open Jupyter with the new data rate limit from the Anaconda Prompt in Windows 10 is:

(base) C:\Users\mobarget\Google Drive\Jupyter Notebook>jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10

I use the Anaconda Navigator GUI to launch the notebook. Is there any way to modify the config with that?

Not sure what you mean by that. But I do not think there is a way to change the data limit in any Anaconda GUI. However, opening the Prompt or PowerShell isn't too difficult. In Windows, you can type "Anaconda" in the task bar search box, and the Prompt/PowerShell will come up. Make sure to run them as "administrator".
