Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG / HELP
NGINX Ingress controller version: 0.18.0
Kubernetes version (use kubectl version): 1.11.1
Environment: Test/Acc/Prod
What happened: The ingress controller returns a 400 error for a certain GET request when the request URL/header is "too long".
What you expected to happen:
The request is passed on to the correct service and pod.
How to reproduce it (as minimally and precisely as possible):
We use Keycloak for authentication. When a user logs in, a GET request is made with an access token generated by Keycloak. The access token grants the user certain rights within the application. Users have roles that give them other/more permissions. When a user has many roles, the access token gets significantly longer, which causes the 400 on the ingress controller. With a user that has fewer roles it works fine and we can see the GET request being passed on to the right service.
A request header with 2299 characters works, but one with 4223 doesn't.
Anything else we need to know:
We already tried adjusting the header buffer sizes etc. from 4k to 8k and 16k, but that didn't change anything.
I set the ingress controller log level to debug, but it doesn't give any more information on the 400 error.
@bramvdklinkenberg please post the error that appears in the ingress pod log.
If you are using SSL, the parameter to adjust is http2_max_field_size
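A minimal sketch of that change through the controller ConfigMap (the ConfigMap name, namespace, and 16k value below are assumptions for illustration, not this cluster's actual settings):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed name; use the ConfigMap your controller was deployed with
  namespace: ingress-nginx
data:
  # Rendered into nginx.conf as "http2_max_field_size 16k;" for TLS/HTTP2 server blocks
  http2-max-field-size: "16k"
```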
Hi @aledbf , see attached file. This is a part of the logging.
nginx.log
I also edited http2_max_field_size from 4k to 16k, but that didn't help.
@bramvdklinkenberg how are you adjusting the size of the headers?
Please use https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#large-client-header-buffers to set 4 16k
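A sketch of that suggestion, extending the data section of the same ConfigMap as above:

```yaml
data:
  # Rendered into nginx.conf as "large_client_header_buffers 4 16k;":
  # 4 buffers of 16k each for long request lines and headers
  large-client-header-buffers: "4 16k"
```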
Hi @aledbf , I edit the controller ConfigMap and, just to be sure, delete the controller pod afterwards. When I check the nginx.conf in the pod I can see that the setting has been changed.
I also tried the large-client-header-buffers setting with 16k and 32k, but that didn't help either.
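For anyone reproducing this, the edit-and-verify cycle could look like this (the pod names are hypothetical placeholders):

```sh
# Edit the controller ConfigMap, recreate the pod, then inspect the rendered config
kubectl -n ingress-nginx edit configmap nginx-configuration
kubectl -n ingress-nginx delete pod nginx-ingress-controller-abc12   # hypothetical pod name
kubectl -n ingress-nginx exec nginx-ingress-controller-def34 -- \
  grep -E 'large_client_header_buffers|http2_max_field_size' /etc/nginx/nginx.conf
```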
@bramvdklinkenberg two things:
From the log I see
2018/09/14 11:12:28 [debug] 183#183: *13 http write filter 0000000000000000e/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36" 4170 0.002 [infoplus-service-notification-8080] 10.244.5.19:8080 5 0.004 400
the 400 at the end means your application is returning that code, not NGINX.
2018/09/14 11:12:28 [debug] 183#183: *13 http vts handler
From this and the format of the log, it seems you are not using 0.18.0. Please confirm this.
@aledbf ah ok, will look into that to see if we can find the issue in the application.
It turns out I was still using 0.14.0... sorry. I had everything upgraded to 0.18.0 except this cluster.
@aledbf , we also made changes in Tomcat (header size) and it works now! Thanks for the help!!
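For anyone landing here with the same symptom: the Tomcat-side change is typically the maxHttpHeaderSize attribute on the HTTP connector in conf/server.xml (a sketch only; the port, protocol, and chosen size are assumptions, not the reporter's exact configuration):

```xml
<!-- conf/server.xml: raise the request header limit (Tomcat's default is 8192 bytes)
     so large Keycloak access tokens in the request headers are accepted -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxHttpHeaderSize="65536" />
```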