Ingress-nginx: Ingress with claims auth (multiple attributes) does not work with the Edge browser

Created on 2 Dec 2018 · 14 comments · Source: kubernetes/ingress-nginx

Is this a request for help? Yes

What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.): No duplicates

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

NGINX Ingress controller version: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.4", GitCommit:"bf9a868e8ea3d3a8fa53cbb22f566771b3f8068b", GitTreeState:"clean", BuildDate:"2018-10-25T19:06:30Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Environment: AKS on Azure, Kubernetes 1.11.4

  • Cloud provider or hardware configuration: Azure
  • OS (e.g. from /etc/os-release): Ubuntu 16.04
  • Kernel (e.g. uname -a): 4.15.0-1030-azure
  • Install tools:
  • Others:

What happened:
Created a simple .NET Core app using B2C on Azure - all browsers work fine except Edge, which shows the following:

Hmmm… can't reach this page. Try this: Make sure you've got the right web address: https://edgeb2cbug.thegbsguy.com. Search for "https://edgeb2cbug.thegbsguy.com" on Bing. Refresh the page. Details: The connection to the website was reset. Error Code: INET_E_DOWNLOAD_FAILURE

What you expected to happen:
The website to be reachable, as in other browsers.

How to reproduce it (as minimally and precisely as possible):
Please see the attached Word document:

NGINX B2C AZURE REPRO STEPS.docx

Anything else we need to know:
Interesting findings:

  • If I use Fiddler with HTTPS, Edge works with no issue.
  • If you reduce the list of application claims, it also works.
  • If you run the application standalone (not via the repro setup), it also works.

I suspect a combination of the NGINX controller redirecting traffic and somehow getting stuck in a loop.

Any chance you can help me mitigate this behavior?

Thanks in advance.


Most helpful comment

Hi all,

I had the same issue and fixed it by setting the "http2-max-field-size" value to 16k (the default is 4k).

The problem disappears with Fiddler because Fiddler doesn't support HTTP/2 yet, so the connection falls back to HTTP/1.1.

Hope this will help ;)
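
In case it helps others apply this: the key goes in the ConfigMap that the ingress controller watches. A minimal sketch, assuming the controller runs in the ingress-nginx namespace and watches a ConfigMap named nginx-configuration (both are assumptions that depend on your install; the name must match the controller's --configmap argument):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumption: must match the controller's --configmap argument
  namespace: ingress-nginx    # assumption: use the namespace the controller runs in
data:
  # Raise the per-field size limit for HTTP/2 requests (default 4k) so large auth cookies fit.
  http2-max-field-size: "16k"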

All 14 comments

I'm seeing this exact same problem - I thought I was going crazy. Did you make any progress on getting to the bottom of the problem?

Some additional information based on what I've seen:

1) This affects both Internet Explorer and Edge.
2) The browser makes multiple attempts to fetch the page - inspecting the requests shows them all stuck at "Pending" with no response information.
3) In the nginx logs, I do see each of the failed requests coming in, though the request information (method and path) is missing and the response code is "000". These requests don't show up in the application logs, so they seem to be terminated here. Here's an example of one entry in the nginx logs (IP addresses obfuscated):

xx.xx.xx.xx - [xx.xx.xx.xx] - - [12/Feb/2019:13:33:02 +0000] "-" 000 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36 Edge/18.17763" 4419 0.000 [] - - - - aaabd5ec460c97d1c15f3b2a78c778ac

As @digeler noted, running Fiddler to try to understand more about the connection problems causes things to work. I'm wondering if there's some sort of negotiation problem going on here that causes the connection to get dropped.

Ok, I managed to work around this in a similar way. This problem isn't specifically about claims; it has something to do with cookie sizes, or probably more generally header sizes. I'm authenticating against a custom IdentityServer build and was previously saving the access token in the cookie. Turning this option off reduces the cookie size significantly, and things start working again.

@mikegoatly what was your work-around?

Our work-around was to enable ssl passthrough on nginx and handle the ssl termination on the pod side.
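
For reference, a minimal sketch of that approach, assuming the controller was started with the --enable-ssl-passthrough flag and using placeholder names; with passthrough the TLS stream is proxied straight to the backend pod, which must terminate TLS itself:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp                # hypothetical name
  annotations:
    # Skip TLS termination at the controller and pass the raw TLS stream to the backend.
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: myapp.example.com    # passthrough routes by SNI, so a host is required
      http:
        paths:
          - backend:
              serviceName: myapp     # hypothetical service exposing the pod's TLS port
              servicePort: 443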

@havarnov "Work around" is probably a bit strong. Basically, I reduced the size of the cookie header by no longer including the auth token in it - fortunately, I'm early enough in the development of the system that I hadn't made use of it yet, so it didn't have any impact for me.

In essence this would have the same effect as cutting out some of the claims as in the originally reported issue.

It's good to know that SSL passthrough is another option, though. I think I've read somewhere that it has some performance implications, but even if those were negligible, it would definitely increase the complexity of our deployment, so I'll avoid it if I possibly can.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

I have the exact same problem.
I'm running an on-premises Kubernetes setup (v1.14.1) with a custom IdentityServer4 microservice and a Vue SPA that uses IdentityServer4 for login.
Everything works fine in Chrome and Firefox, but not in IE and Edge.

I'm using the following NGINX Ingress controller as my reverse proxy
Release: 0.24.1
Build: git-ce418168f
Repository: https://github.com/kubernetes/ingress-nginx

The only special settings I have used are these:
ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
and the SSL termination:
tls:
  - hosts:
      - mylocalhostname
    secretName: ingress-tls-secret
rules:
  - host:
    http:
      paths:
        - backend:
            serviceName: servicename
            servicePort: 80

The proxy-buffer-size setting is required for Chrome and Firefox to be allowed through for some users.
This seems to come down to how many claims get added to the user's authentication cookie.
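
Put together, the annotations live in metadata while tls and rules live in spec, so the relevant parts of the Ingress look roughly like this (a sketch; the name is a placeholder and the rule host is assumed to match the TLS host):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress               # hypothetical name
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
spec:
  tls:
    - hosts:
        - mylocalhostname
      secretName: ingress-tls-secret
  rules:
    - host: mylocalhostname         # assumption: same host as the TLS entry
      http:
        paths:
          - backend:
              serviceName: servicename
              servicePort: 80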

I have observed that when Edge/IE makes its multiple attempts to fetch the page, it keeps doubling the header content.
So a header starts out like this:
Accept-Encoding: gzip, deflate, br
and ends up like this:
Accept-Encoding: gzip, deflate, br, gzip, deflate, br, gzip, deflate, br, gzip, deflate, br, gzip, deflate, br, gzip, deflate, br, gzip, deflate, br

The same happens for all the request headers, including the cookie.
I have observed the same logs as @mikegoatly in the nginx log output:
x.x.x.x - [x.x.x.x] - - [07/Jun/2019:12:19:33 +0000] "-" 000 0 "https://mylocalhostname/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36 Edge/17.17134" 7677 0.000 [] - - - - 57324f9c898951397def69610370d09b

Using Fiddler2, I have observed that all the problems go away and IE/Edge both work.
If I bypass nginx and let the K8s services handle the SSL termination themselves, everything works fine as well.

Hi all,

I had the same issue and fixed it by setting the "http2-max-field-size" value to 16k (the default is 4k).

The problem disappears with Fiddler because Fiddler doesn't support HTTP/2 yet, so the connection falls back to HTTP/1.1.

Hope this will help ;)

Can confirm that using these settings works:
http2-max-field-size: "16k"
http2-max-header-size: "64k"

Thanks @cpunella

Can confirm the addition of:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {Helm Deployment Name}-nginx-ingress-controller
  namespace: ingress
data:
  http2-max-field-size: "16k"

resolved our issues.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
