When debugging SPFx web parts and extensions locally, the web server started by gulp serve should return the requested files.
Instead, the SPFx Web Server delivers an HTTP 400 error rather than the requested files. This seems to be caused by ASP.NET Core cookies being sent along to the SPFx Web Server: on the development environment, the web server hosting the Web Part and the two web servers hosting the ASP.NET Core Web Services are all reachable via https://localhost (on different ports), so the cookies set for the ASP.NET Core Web Services are also sent to the SPFx Web Server. That the cookies cause the issue is supported by the observation that, after deleting all cookies for "localhost", the Web Part loads for exactly one request. After that, the ASP.NET Core applications have set their authentication cookies again and the SPFx Web Server returns "Bad Request".
If only one of the Web Services is running, everything works (the cookie size stays small enough). If both Web Services are running, the cookie size doubles (two authentication tokens instead of one) and the SPFx Web Server rejects the requests.
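The size arithmetic behind that observation can be sketched as follows. As it turns out further down in the thread, Node caps request headers at 8 KB; the cookie names and sizes below are made-up placeholders, not measured values:

```javascript
// Two auth cookies that each fit under an 8 KB header limit on their
// own, but exceed it when sent together in one request.
// Cookie names and sizes are hypothetical placeholders.
const LIMIT = 8 * 1024; // Node's header cap, see the root cause later in the thread

const cookieA = 'ServiceA.Auth=' + 'x'.repeat(5000);
const cookieB = 'ServiceB.Auth=' + 'y'.repeat(5000);

const oneService = Buffer.byteLength(`Cookie: ${cookieA}\r\n`);
const bothServices = Buffer.byteLength(`Cookie: ${cookieA}; ${cookieB}\r\n`);

console.log(oneService < LIMIT);   // true: one service's cookie fits
console.log(bothServices > LIMIT); // true: both together exceed the cap
```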
Thank you for reporting this issue. We will be triaging your incoming issue as soon as possible.
I think you're misunderstanding what gulp serve is doing... it's not creating an "SPFx Web Server".
That command simply starts up a Node.js based web server that serves static files... there's nothing dynamic going on... it just responds to HTTP GET requests. It's not going to process any auth requests or cookies.
Your scenario isn't clear from your post... maybe explaining in more detail what you're trying to do would provide helpful context...
I understand what happens behind the scenes and that it only serves static content. Nonetheless, I think the term "SPFx Web Server" is appropriate, as it is a static web server that hosts SPFx-related files.
Let me try to explain the scenario and the issue again:
Our scenario is that we have to develop a custom application for a customer using SharePoint 2019. The application uses SPFx Web Parts for the front end and two different Web Services in the back end. These Web Services are individual ASP.NET Core solutions. When we debug all components together, we have three web servers running on our local dev machines, using different ports: the Node.js-based web server for the SPFx Web Parts and the two ASP.NET Core Web Services (running in IIS Express).
The Web Services both use Azure AD authentication and - since SPFx 1.4 does not support Azure AD authentication the way later versions do - we have to use the iframe workaround mentioned above: a hidden iframe tries to load a web page in the ASP.NET Core project, is redirected to Azure for authentication, and receives the cookie after a successful login (this succeeds because SharePoint uses Azure authentication as well). In the end, we have two sets of authentication cookies for "localhost", as each Web Service has its own authentication (to avoid collisions, the cookie name has to be changed in both ASP.NET Core Web Services so that they do not use the default cookie name and overwrite each other).
When the SharePoint page with the SPFx Web Part loads for the first time, the Web Part's resources load correctly, the Web Part renders correctly, the authentication works, two sets of authentication cookies are stored for "localhost", and the Web Services are invoked. When the page is reloaded, cookies for "localhost" already exist and are sent to the Node.js web server when the browser requests the files for the Web Part. Unfortunately, the Node.js web server responds only with an HTTP 400 result (according to the network tab in the browser). The log output of the running gulp serve process does not show this request at all.
When the cookies for localhost are deleted before the request, everything starts to work again for a single refresh of the page.
The 400 definitely sounds like it is related to the cookie, such as a cookie mismatch, a duplicate cookie, or an invalid cookie.
Can you confirm that the cookies are being passed correctly? E.g., is the cookie that was sent to web server/port A the first time the same cookie that is sent the second time you make the request? Can you confirm the correct cookie is being used on the server side? I wonder if the opposite cookie is being sent and causing the 400.
Edit - Removed the first part of my response as I misread your comments.
What does "reload" mean? Because at the bottom you say that when you refresh the page, it works. So what is the process of reloading? Is this a reload of your gulp server?
With reload/refresh I mean a refresh in the browser (either F5 or CTRL+F5). It starts to work when the cookies are deleted before this refresh.
I will investigate the other points tomorrow when I am back at work.
we have to use the iframe workaround mentioned above: a hidden iframe tries to load a web page in the ASP.net Core project, is redirected to Azure for authentication and gets the cookie after successful login (this is successful, because the SharePoint uses Azure authentication as well). At the end, we have two sets of authentication cookies for "localhost" as each Web Service has their own authentication (to avoid collisions, the cookie name has to be changed in both ASP.net Core Web Services so that both Web Services do not use the default cookie name and override each other).
This sounds suspiciously unlike an OAuth2-compliant handshake. You should not be taking any cookies and placing them anywhere.
Just a short sign of life - I have not forgotten the issue but my plan for this week has been changed. I hope that I can check the cookie issue on Friday.
I was able to investigate the issue a bit further (after we hit a very similar issue in a non-SharePoint project), and the root cause is a security patch in Node from November of last year (see https://github.com/nodejs/node/commit/186035243fad247e3955fa0c202987cae99e82db). This patch limits the maximum header size to 8 KB. As the ASP.NET cookies with Azure authentication are pretty big, this can trigger the issue very quickly on development systems.
A workaround for development purposes is to switch to b2clogin (see https://docs.microsoft.com/de-de/azure/active-directory-b2c/b2clogin; this did not work for me) or to use a different (dummy) user that might have a smaller token (this worked for me).
So from my point of view this is an underlying issue with Node.js and not specific to SPFx.
Issues that have been closed & had no follow-up activity for at least 7 days are automatically locked. Please refer to our wiki for more details, including how to remediate this action if you feel this was done prematurely or in error: Issue List: Our approach to locked issues