I got the following error when saving a notebook.
File Save Error
'_xsrf' argument missing from POST
Hi @dclong, did this happen when refreshing a JupyterLab page after restarting the notebook server?
If you restart the server and it gets a new token that differs from the previous one, then the client will attempt to use the previous token, which will be rejected by the server. The server will then expect to see the _xsrf authentication, which would not have been provided by the client because it is using token authentication. We cannot provide both forms of authentication to the server.
I believe the way to avoid this is to use a password (jupyter notebook password).
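The fallback described above can be sketched in plain Python. This is a hypothetical illustration of the decision logic, not Jupyter's or tornado's actual code; the function name and return values are made up:

```python
# Illustrative sketch of the authentication fallback described above:
# if the client's token is stale or absent (e.g. the server restarted and
# minted a new token), the server falls back to XSRF-cookie checking, and
# a POST without a matching _xsrf value is rejected with 403.

def authorize_post(client_token, server_token, xsrf_cookie, xsrf_argument):
    """Return (status, message) for a hypothetical POST request."""
    if client_token is not None and client_token == server_token:
        return (200, "OK: token authentication succeeded")
    # Token missing or stale: fall back to XSRF double-submit checking.
    if xsrf_argument is None:
        return (403, "'_xsrf' argument missing from POST")
    if xsrf_argument != xsrf_cookie:
        return (403, "XSRF cookie does not match POST argument")
    return (200, "OK: XSRF check passed")

# A stale token plus no _xsrf argument reproduces the error in this thread:
print(authorize_post("old-token", "new-token", "abc123", None))
# (403, "'_xsrf' argument missing from POST")
```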
@blink1073
I'm using JupyterLab via JupyterHub. I don't know whether JupyterHub restarted the notebook server or not. JupyterHub is authenticated using password, however, I'm not sure how the underlying notebook server is authenticated. I need to read docs about JupyterHub on this.
Does this give a string value when entered into the browser web console? JSON.parse(document.getElementById('jupyter-config-data').textContent).token.
Also, does this give an answer that is not -1: document.cookie.indexOf('_xsrf')?
JSON.parse(document.getElementById('jupyter-config-data').textContent).token returns an empty string.
document.cookie.indexOf('_xsrf') returns 0
But as I mentioned, I'm using JupyterHub and the issue is gone right now. I'll try these two commands immediately if I come across the same issue again.
Great, thanks!
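Relatedly, for anyone driving the notebook REST API from a script rather than a browser: tornado's XSRF protection accepts the cookie value echoed back in an X-Xsrftoken header, so a client can copy it over. A minimal stdlib sketch (the helper name is made up; only the header name comes from tornado's documented behavior):

```python
# Minimal helper (a sketch, not part of Jupyter) for clients that talk to
# the notebook server directly: extract the _xsrf cookie from a raw Cookie
# header and echo it back in the X-Xsrftoken header tornado accepts.
from http.cookies import SimpleCookie

def xsrf_header_from_cookie(cookie_header):
    """Build the extra request header dict from a raw Cookie string."""
    jar = SimpleCookie()
    jar.load(cookie_header)
    if "_xsrf" not in jar:
        return {}
    return {"X-Xsrftoken": jar["_xsrf"].value}

print(xsrf_header_from_cookie("_xsrf=abc123; username=me"))
# {'X-Xsrftoken': 'abc123'}
```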
FYI, I had the same experience of a 403 error mentioning "'_xsrf' argument missing from POST" in the jupyter.log. Like @dclong I was using JupyterHub and I confirmed with the admins that it had not been restarted around the time of the problem.
It appeared to happen spontaneously: saving was working fine, and then 10 minutes later I started getting the 403 errors and couldn't save. I had been using the notebook for several days.
Anecdotally I'm under the impression that saving becomes flaky if the notebook loses the network connection unexpectedly, e.g. if I close my laptop without remembering to first close the browser window running the notebook from a remote JupyterLab / JupyterHub. When I come back to the notebook in that case sometimes it works, and sometimes it has problems saving after further updates. I've never had a problem when I've remembered to cleanly close the window (while leaving the notebook running on the remote server) and then reconnect from a new window later. I can't back this up with reproducible examples, but I mention this in case it is a clue.
does anyone have a method to recover from this? I would really like to not lose my work when this happens.
@xguse At least in my case restarting the notebook server without closing the notebook window allowed me to save the notebook.
I also hit "'_xsrf' argument missing from POST" when exporting a draw.io diagram to an image.
Encountered the same problem. Here's how to reproduce (at least in Firefox on Ubuntu):
pkill firefox
I get the same error when trying to export draw.io to image format.
JSON.parse(document.getElementById('jupyter-config-data').textContent).token returns a long string. I believe it's a token?
document.cookie.indexOf('_xsrf') returns 0
Here are other outputs from the web console:

Can someone please explain what may be causing this, and any possible workarounds? How would I export a draw.io file to an image in this case?
Had the same error spontaneously using Jupyter Notebook. I shut down all the running notebooks; after a few minutes everything returned to normal, the autosave message from the notebook appeared, and I restarted the kernels and continued my work.
I lost two days of work because of this. When I restarted, I could not see my work, and I had closed my old browser window after the restart. Is there a way to recover from this?
I had the same issue today. Any chance someone is working on this bug?
Thanks
@bdoury - was it with drawio for you too?
Edit: never mind, I see it's a much larger issue than drawio
@jasongrout, no it was not. To me it looks like the issue started while I was trying to open a connection using cx_Oracle. After restarting my Python environment and JupyterLab, the issue disappeared and I was not able to reproduce it. Sorry that I cannot be more specific.
I can also reproduce the error via Amazon EC2 and JupyterLab.
It happens after my EC2 instance is stopped (while my JupyterLab is still up). I am also using the token, as opposed to a password.
I sometimes get this if I am returning to a jupyter notebook (not lab) that has been running for some time, particularly if I am working on my Windows 10 laptop. I have found that if I open the same notebook again in a new window then it fixes the problem, and I can then close the newly opened notebook and continue working in the previously opened one.
There is a hand-wavy fix to this problem.
I fixed this by simply closing and opening again my JupyterLab tab on Mozilla Firefox. I can't really find the root cause of the issue. I did have the notebook and Jupyter lab open for over 2 days which may have contributed to that error.
This happened to me a few times, and the problem fixed itself after a while from my laptop disconnecting and reconnecting to wifi. I essentially closed my laptop and opened it again a few times over the span of a few hours and eventually the notebook was able to reconnect and none of my work was lost. This definitely isn't a fix to the issue, but if you're in a pinch and worried you'll lose work, give this a try.
I just ran into this issue. Possibly also from reconnecting wifi and/or laptop suspend.
However, after doing either A) "Download .ipynb" (which got me a few-hours-old version) or B) "Make a copy" (which opens the old version in a new window), the issue somehow resolved itself. The next "Save" was successful and stored the latest version. Maybe it updated the token somehow? So this is a possible workaround to avoid losing work.
I just ran into this issue right now. Restarting JupyterLab works just fine. Copy-paste all your code into a temp file first (don't lose your code!).
UPD: It seems my brain turned on and I found that my NoJS add-on was blocking JupyterLab.
Console log error:
[W 12:29:23.308 LabApp] Could not determine jupyterlab build status without nodejs
Just want to confirm that "Download .ipynb" allowed the next save to work, as it did for @jonnor. I left the notebook open over the weekend on a Windows 10 laptop.
@Zohaggie the workaround you provided works flawlessly. Thanks!
'_xsrf' argument missing from POST error just happened to me, running locally on my Mac.
The notebook wouldn't save, then I got "Kernel error" when trying to restart the kernel. I downloaded as .ipynb, but when opening that notebook, it gave the same error. It also didn't have my last work on a long markdown cell with lots of math.
My Jupyter session had been running for several days, and I had that notebook open for at least 3 days; fortunately, I had been pushing my work to GitHub. Only one markdown cell was lost, but I copied it to the clipboard before exiting.
Just had this error. What worked for me was to open a different notebook. Then save works for everything again.
Thanks everyone for the comments here, hopefully that will help us to debug it.
I just received this error. Working on jupyter notebook locally which has been running for several days.
I had pressed save manually and naively thought that closing and reopening from a save would fix the issue. However, I lost several hours of work, as the notebook reverted to one from a few hours earlier.
I was wondering if the manual saves could be found anywhere in the log or am I SOL?
Using safari 12.0 Mac OS Sierra with uBlock Origin extension running, in case this is somehow due to the browser. (However these notebooks are all local.)
@afshin Is it possible that this problem will be fixed by https://github.com/jupyterlab/jupyterlab/pull/6005/commits/8626663c62cb63f8510d4453a3ed83e9159a91d8?
It is possible, yes. Let's loop back when that PR is in.
I've encountered the same error, the notebook had been open for a few days, and the error appeared during a routine autosave.
The notebook in question was dealing with some map data visualisation. It happened during a WiFi connection drop as others mentioned here, however, my notebook was entirely local.
Can you try now that the 1.0a3 prerelease has been released?
I got this error on Firefox when I accidentally deleted the Jupyterlab browser tab and then restored it with Ctrl-Shift-T ("Undo the close of a window.").
Encountered the same issue on a local notebook that'd been open for only a few hours. Jupyter version 4.4.0. My code messed only with local files and the error came up while I was out for lunch, Notebook was idle. Had trouble with restarting Kernel as well.
_Edit: Downloaded the file into a new folder, opened up the new file and the issue resolved on the old one, which was still open._
For reference, this is the PR that should have fixed the behavior: https://github.com/jupyterlab/jupyterlab/pull/6141. It will be in the next (alpha 4) release.
Good news.
Reloading the page (F5) worked for a Jupyter notebook using Edge, Windows 10 Pro, 64-bit, with the Anaconda prompt still open, of course.
The version of the notebook server is: 5.7.4
The server is running on this version of Python: Python 3.7.1
Reloading the page took quite a long time, exactly the time it would need to open the file at a fresh start, as it is quite large. When it finally reappeared, even unsaved changes from the last seconds were still there, including changes I could no longer save due to the "'_xsrf' argument missing from POST" error. So this is not a reload of the file, just of the page. All of the objects that I had loaded were also still in memory.
P.S.: Of course, I first downloaded the file as .ipynb and also as HTML, to be safe in any case.
P.P.S.: A colleague who does not know The IT Crowd was just about to add, "Have you tried turning it off and on again?"
I confirm that lorenzznerol's solution is working.
Is there any way to recover lost work after closing the browser? I hit the same bug, worked on for three more days with no autosave being generated, then closed the browser and opened it again.
I am using Jupyterhub with Jupyter lab. I see that the browser is making a request to /user/*/api/sessions and the cookie sent has the _xsrf set to a string. However, the server responds with:
{"message": "'_xsrf' argument missing from POST", "reason": null}
Not sure if the server is expecting the _xsrf string as part of the json object.
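As far as I can tell from tornado's documented XSRF scheme, the token is not read from the JSON body: it must arrive either as a form argument named _xsrf or in an X-Xsrftoken / X-Csrftoken header, and it must match the _xsrf cookie. A rough sketch of that lookup (illustrative only, not tornado's actual code):

```python
# Sketch of where tornado looks for the submitted XSRF token on a POST
# (based on tornado's documented double-submit scheme). A JSON body that
# merely carries the cookie is not enough; the token must be re-sent as a
# form argument or header.

def find_xsrf_token(form_arguments, headers):
    """Return the submitted XSRF token, or None if the client sent none."""
    token = form_arguments.get("_xsrf")
    if token is None:
        token = headers.get("X-Xsrftoken") or headers.get("X-Csrftoken")
    return token

# A request that only carries the cookie still counts as "missing":
print(find_xsrf_token({}, {"Cookie": "_xsrf=abc123"}))  # None
```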
I experienced the same issue today. I'm using JupyterLab with Python 3.6 in a conda environment. My only recourse was to download the .dio file, open it on draw.io online, and then export it to PNG. It doesn't solve the problem, but it works for those who, like me, need the image and don't have time to test a lot.
Same here: jupyter-lab was running overnight. I logged in remotely in the meantime to check the current state. Then, back in my office and continuing to work on it, the error appeared out of the blue.
Python 3.7.4
I just noted that it happened simultaneously in a jupyter notebook that was running on a different port.
I had this problem as well and lost my data.
However, I think my problem was that the directory I was browsing in the parent Jupyter session looked messed up, showing 'Self%20Study' instead of just 'Self Study'.
I couldn't create a new file either, and was getting the same "'_xsrf' argument missing from POST" error.
I didn't restart the session. I clicked up to the parent directory and back into the 'Self Study' directory, and that fixed the error. Unfortunately, not before I had lost my work.
Hope that helps someone.
The following solution was the only which worked for me without losing my work:
Just open another notebook on the same kernel, and the issue is magically gone; you can again save the notebooks that were previously showing the _xsrf error.
After that I hit the "Save" menu entry and the error was gone. Credits go to https://stackoverflow.com/a/55601395/1444073
I had this problem today for the first time. I can't save the notebook, rename it, run it, or restart the kernel.
The solution of opening another notebook with the same kernel didn't work initially. It said "An error occurred when opening notebook. '_xsrf' argument missing from POST."
The recent terminal readout looks something like this:
[W 11:27:04.650 NotebookApp] 403 POST /api/sessions (127.0.0.1): '_xsrf' argument missing from POST
[W 11:27:04.650 NotebookApp] '_xsrf' argument missing from POST
[W 11:27:04.650 NotebookApp] 403 POST /api/sessions (127.0.0.1) 1.25ms referer=http://localhost:8888/notebooks/Documents/.../Untitled.ipynb?kernel_name=conda-env-torch-py
UPDATE:
But when I opened a new browser window (i.e. http://localhost:8888/tree/Documents/...) and opened another copy of the affected notebook, the original notebook kernel started working again. So I could close the new copy and then save the original one with the recent changes.
Mac OS X 10.13.6
I've had the same thing happening, also after using the cx_Oracle module in Jupyter Notebook. The issue disappeared after restarting Jupyter Notebook.
Hi
I just faced the same issue. It wasn't allowing a save or a new file creation, with the error message '_xsrf' argument missing from POST.
One quick dirty way to manage the situation was:
a. Right click on the existing browser tab
b. Click Duplicate
c. It opened up the exact same contents and lines of code etc
d. Since this was opened just now and we are immediately saving it, there is probably little opportunity for the token to change (refer to the comments above from @blink1073), and thus I was able to perform both actions: File Save and New File.
Hope this helps.
@chucknotech, no good for me. I received a 'Forbidden' error when trying to duplicate.
Not sure what your exact issue is. My previous solution was just quick and dirty. You have to consider what your environment is. For example, my problem was related to tornado. I uninstalled Anaconda and reinstalled it, and most of the problems were solved. If you want to test without the uninstall and reinstall, here is one thing to try at the Anaconda prompt:
a. pip install tornado==5.1.1
If it's a tornado problem (if it's a kernel connection issue, it's mostly a tornado issue).
F5 is all you need; please see my comment from summer 2019. I do not know why it was downvoted, as it clearly worked for me, and one other person confirmed it as well. I cannot guarantee that F5 always works, of course, but it seems the downvotes came from people who did not try it, since none of them left a comment explaining their downvote.
I sometimes get this if I am returning to a jupyter notebook (not lab) that has been running for some time, particularly if I am working on my Windows 10 laptop. I have found that if I open the same notebook again in a new window then it fixes the problem, and I can then close the newly opened notebook and continue working in the previously opened one.
The problem just happened to me, and this solution worked for me. Thanks @ChrisPalmerNZ !
The only way I can come up with is to restart Jupyter, open the same file, and copy the unsaved contents into the freshly opened file. Finish, and shut the previous one down.
Got this with jupyter lab and the Diagram editor jupyterlab-drawio when trying to save a drawing.
I accidentally fixed this on Windows by going to the Anaconda prompt and typing Ctrl+X. I meant to type Ctrl+C to kill the process, but then the process started printing that it was saving the files, and the "'_xsrf' argument missing from POST" banner at the top of the notebooks disappeared...
There is a method you could try that works for me. Keep your unsavable notebook open (so you don't lose your work), then go to the Jupyter file browser, navigate to the folder containing the notebook facing this issue, and open it in another page. Wait until its kernel is connected, then close this new window. When you come back to your old notebook, you will find that the issue is fixed. Hope that helps.
Easy solution:
Copy the URL link for logging in with a token from when you connected for the first time. It should look like this:
http://localhost:8888/?token=f1a75b69989e62e...
Paste the link into a new tab in the browser where you're working and close the old "Home" tab. Everything should work now :)
The problem is a nightmare. I lost my work.
@youweiliang I lost mine. It just started happening randomly.
Similar issue for me (running JupyterLab 2.1.4).
In my case I'm getting a "The network connection was lost." message when saving the notebook, but I'm able to run the kernel, add new notebooks, and basically do everything apart from saving the notebook files.
It's probably more related to the size of the notebook; if I remove the Plotly graphs, then it is saved.
Although the notebook with graphs is only 4 MB, so not too big.
This security is ridiculous. It doesn't do anything for anyone.
Assuming that you are running JupyterLab behind a reverse proxy, you need to take the _xsrf cookie value and set the X-Xsrftoken header. For example, in the context of openresty you could do:
rewrite_by_lua_block {
    local xsrf = ngx.var.cookie__xsrf;
    ngx.req.set_header("X-Xsrftoken", xsrf);
}
Doing the above works for me.