NGINX Ingress controller version: 0.10.0
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.3-rancher3", GitCommit:"772c4c54e1f4ae7fc6f63a8e1ecd9fe616268e16", GitTreeState:"clean", BuildDate:"2017-11-27T19:51:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
What happened:
Session affinity using a cookie does not work, even when these 3 annotations are set on the Ingress (see the manifest in the reproduction steps below).
I think the Nginx configuration, generated from the Ingress resource, is incorrect (see below).
What you expected to happen:
A cookie "route" should be set in response to the 1st request (that isn't the case).
Then, all other requests should provide this cookie.
The IngressController should then forward all these requests to the same backend pod.
How to reproduce it (as minimally and precisely as possible):
Below is a complete configuration.
1) create an echo service:
#######################################################################################################################
# Deployment with at least 2 pods used as backend servers.
#######################################################################################################################
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: echo-server
spec:
  # at least 2 backends to test sticky sessions
  replicas: 2
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
      - name: echo-server
        image: gcr.io/google_containers/echoserver:1.8
        ports:
        - containerPort: 8080
#######################################################################################################################
# Service to access backend pods
#######################################################################################################################
---
apiVersion: v1
kind: Service
metadata:
  name: echo-server
spec:
  # the service is only exposed internally; an Ingress is used to access it.
  type: ClusterIP
  ports:
  - name: http
    port: 8080
  selector:
    app: echo-server
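Assuming both manifests above are saved in one file (the name echo-server.yml is arbitrary), they can be applied with:
$ kubectl apply -f echo-server.yml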
2) create the IngressController and all other mandatory resources:
# Official IngressController based on Nginx
# https://github.com/kubernetes/ingress-nginx/blob/0.10.0/deploy/README.md
#######################################################################################################################
# Create namespace
# Source: https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml
#######################################################################################################################
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
#######################################################################################################################
# Create default backend deployment and service
# Source: https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml
#######################################################################################################################
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
#######################################################################################################################
# Create ConfigMap with Nginx configuration.
# Source: https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml
#######################################################################################################################
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
#######################################################################################################################
# Create ConfigMap with Nginx configuration for TCP services.
# Source: https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml
#######################################################################################################################
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
#######################################################################################################################
# Create ConfigMap with Nginx configuration for UDP services.
# Source: https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml
#######################################################################################################################
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
#######################################################################################################################
# Create IngressController without RBAC (= Role Based Access Control).
# Source: https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/without-rbac.yaml
#######################################################################################################################
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      initContainers:
      - command:
        - sh
        - -c
        - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        image: alpine:3.6
        imagePullPolicy: IfNotPresent
        name: sysctl
        securityContext:
          privileged: true
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.0
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --annotations-prefix=nginx.ingress.kubernetes.io
        # only process Ingresses annotated with this class
        - --ingress-class=nginx
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
#######################################################################################################################
# Expose the IngressController as a "NodePort" service.
# Statically set the published ports (nodePort attributes): 30080 for HTTP / 30443 for HTTPS.
# Source: https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml
#######################################################################################################################
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    # publish HTTP on port 30080
    nodePort: 30080
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    # publish HTTPS on port 30443
    nodePort: 30443
    protocol: TCP
  selector:
    app: ingress-nginx
3) create the Ingress to access the echo-service through this IngressController:
#######################################################################################################################
# Ingress to access echo service.
#######################################################################################################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-server
  annotations:
    # define the class so that this Ingress is only processed by the IngressController named "nginx-ingress-controller".
    kubernetes.io/ingress.class: "nginx"
    # define sticky session annotations as described here:
    # https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/affinity/cookie
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: echo-server
    http:
      paths:
      - backend:
          serviceName: echo-server
          servicePort: 8080
4) Declare "echo-server" in your /etc/hosts file. The IP address should be that of one of your nodes:
11.22.33.44 echo-server
5) In your browser, open a web developer console to see request and response headers.
6) Access the URL http://echo-server:30080 and check the headers:
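The same check can also be done with curl (using the /etc/hosts entry from step 4); per the expected behavior above, the very first response should already carry a Set-Cookie header named "route":
$ curl -I http://echo-server:30080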
Anything else we need to know:
You can retrieve the generated Nginx configuration like this:
1) Find your IngressController pod:
$ kubectl get po -n ingress-nginx
NAME READY STATUS RESTARTS AGE
default-http-backend-66b447d9cf-tswgb 1/1 Running 0 30m
nginx-ingress-controller-8fcd569fc-r5sk4 1/1 Running 0 30m
2) Dump the nginx configuration to a file on your computer:
$ kubectl exec nginx-ingress-controller-8fcd569fc-r5sk4 -n ingress-nginx -- cat /etc/nginx/nginx.conf > /tmp/nginx.conf
3) In /tmp/nginx.conf you can see the following upstream servers:
upstream sticky-default-echo-server-8080 {
    sticky hash=sha1 name=route httponly;
    keepalive 32;
    server 10.42.210.114:8080 max_fails=0 fail_timeout=0;
    server 10.42.31.243:8080 max_fails=0 fail_timeout=0;
}
upstream default-echo-server-8080 {
    # Load balance algorithm; empty for round robin, which is the default
    least_conn;
    keepalive 32;
    server 10.42.210.114:8080 max_fails=0 fail_timeout=0;
    server 10.42.31.243:8080 max_fails=0 fail_timeout=0;
}
But the server block never references the "sticky-default-echo-server-8080" upstream:
## start server echo-server
server {
    server_name echo-server;
    listen 80;
    listen [::]:80;
    set $proxy_upstream_name "-";
    location / {
        port_in_redirect off;
        set $proxy_upstream_name "default-echo-server-8080";
        set $namespace "default";
        set $ingress_name "echo-server";
        set $service_name "";
        client_max_body_size "1m";
        proxy_set_header Host $host;
        # Pass the extracted client certificate to the backend
        proxy_set_header ssl-client-cert "";
        proxy_set_header ssl-client-verify "";
        proxy_set_header ssl-client-dn "";
        # Allow websocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Real-IP $the_real_ip;
        proxy_set_header X-Forwarded-For $the_real_ip;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;
        proxy_set_header X-Original-URI $request_uri;
        proxy_set_header X-Scheme $pass_access_scheme;
        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy "";
        # Custom headers to proxied server
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_buffering off;
        proxy_buffer_size "4k";
        proxy_buffers 4 "4k";
        proxy_request_buffering "on";
        proxy_http_version 1.1;
        proxy_cookie_domain off;
        proxy_cookie_path off;
        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
        proxy_pass http://default-echo-server-8080;
        proxy_redirect off;
    }
}
## end server echo-server
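A quick way to confirm this is to grep the dumped configuration for proxy_pass directives; "sticky-default-echo-server-8080" never shows up:
$ grep proxy_pass /tmp/nginx.conf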
I have never used Nginx before, but I think the issue is here.
Hitting the same issue; it seems to be related to TLS. My two TLS Ingresses work with sticky sessions, but a new one without TLS doesn't.
Never mind, it works on 0.10.2; just be sure to actually have more than one endpoint for testing (it doesn't send out cookies if you don't).
It does not work with 0.10.2 either.
Note that my example does not use TLS.
Besides, 2 endpoints are set (see "replicas: 2" in the Deployment named "echo-server").
Verification: the IngressController has been updated to v0.10.2:
$ kubectl describe po/nginx-ingress-controller-58b498d76c-zxzfd -n ingress-nginx
...
Containers:
  nginx-ingress-controller:
    Container ID:  docker://bb661a20bb275c1649953d135aa0ffe9c0b5c1846039a5f0bc28dc0b8a865633
    Image:         quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2
...
Verification: 2 endpoints are set for the service:
$ kubectl describe svc/echo-server | grep Endpoints
Endpoints: 10.42.114.158:8080,10.42.37.143:8080
Verification: the Nginx configuration is still wrong. The "sticky-default-echo-server-8080" upstream is still never referenced in the "echo-server" server block.
$ kubectl exec nginx-ingress-controller-58b498d76c-zxzfd -n ingress-nginx -- cat /etc/nginx/nginx.conf
...
upstream sticky-default-echo-server-8080 {
    sticky hash=sha1 name=route httponly;
    keepalive 32;
    server 10.42.37.143:8080 max_fails=0 fail_timeout=0;
    server 10.42.114.158:8080 max_fails=0 fail_timeout=0;
}
upstream default-echo-server-8080 {
    # Load balance algorithm; empty for round robin, which is the default
    least_conn;
    keepalive 32;
    server 10.42.37.143:8080 max_fails=0 fail_timeout=0;
    server 10.42.114.158:8080 max_fails=0 fail_timeout=0;
}
...
## start server echo-server
server {
    server_name echo-server;
    listen 80;
    listen [::]:80;
    set $proxy_upstream_name "-";
    location / {
        port_in_redirect off;
        set $proxy_upstream_name "default-echo-server-8080";
        set $namespace "default";
        set $ingress_name "echo-server";
        set $service_name "";
        client_max_body_size "1m";
        proxy_set_header Host $host;
        # Pass the extracted client certificate to the backend
        proxy_set_header ssl-client-cert "";
        proxy_set_header ssl-client-verify "";
        proxy_set_header ssl-client-dn "";
        # Allow websocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Real-IP $the_real_ip;
        proxy_set_header X-Forwarded-For $the_real_ip;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;
        proxy_set_header X-Original-URI $request_uri;
        proxy_set_header X-Scheme $pass_access_scheme;
        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy "";
        # Custom headers to proxied server
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_buffering off;
        proxy_buffer_size "4k";
        proxy_buffers 4 "4k";
        proxy_request_buffering "on";
        proxy_http_version 1.1;
        proxy_cookie_domain off;
        proxy_cookie_path off;
        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
        proxy_pass http://default-echo-server-8080;
        proxy_redirect off;
    }
}
## end server echo-server
Hello,
It works for me as of 0.10.2.
Jeff
@jfpucheu: Jeff, does it work when you apply my sample configuration (just replacing image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.0 with image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2), or did you change something?
If it works, what is your environment (cloud provider or hardware configuration? OS? install tools?) and Kubernetes version?
I too am consistently running into this issue. In my scenario the nginx.conf file appears to be built properly with the sticky-... upstream, and the pod's nginx console log outputs the $proxy_upstream_name correctly (e.g. sticky-...) (details below).
Though all configurations seem correct, a cookie is no longer returned. To ensure the cookie was not simply being dropped by an intermediary, I've evaluated a curl response from the pod itself, and similarly no cookie is returned.
At this point I'm running out of ideas. Maybe a load-order problem? Maybe even a compilation/build-specific issue with the plugin? Thoughts?
nginx configuration
http {
    ...
    upstream sticky-stage-1-s1cas-cas-8080 {
        sticky hash=md5 name=INGRESSCOOKIE httponly;
        keepalive 32;
        server 10.44.69.2:80 max_fails=0 fail_timeout=0;
    }
    ...
    server {
        ...
        set $proxy_upstream_name "-";
        ...
        location / {
            port_in_redirect off;
            set $proxy_upstream_name "sticky-stage-1-s1cas-cas-8080";
            set $namespace "stage-1";
            set $ingress_name "s1cas-cas";
            set $service_name "s1cas-cas";
            ...
console logs
$ kubectl --namespace stage-1 logs -f --tail 100 s1cashttps-nginx-ingress-controller-759b6c89cc-glcjm
73.xxx.xxx.xxx - [73.xxx.xxx.xxx] - - [31/Jan/2018:23:11:59 +0000] "GET /login HTTP/2.0" 200 3424 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.119 Safari/537.36" 281 0.008 [sticky-stage-1-s1cas-cas-8080] 10.44.69.2:80 3424 0.008 200
73.xxx.xxx.xxx - [73.xxx.xxx.xxx] - - [31/Jan/2018:23:12:01 +0000] "GET /login HTTP/2.0" 200 3410 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.119 Safari/537.36" 22 0.007 [sticky-stage-1-s1cas-cas-8080] 10.44.69.2:80 3410 0.007 200
curl from within the pod
$ kubectl --namespace stage-1 exec -it s1cashttps-nginx-ingress-controller-759b6c89cc-glcjm bash
$ curl -o /dev/null --http1.1 --resolve accounts.domain.tld:443:127.0.0.1:443 https://accounts.domain.tld/login -v
* Server certificate:
...
* issuer: C=GB; ST=Greater Manchester; L=Salford; O=COMODO CA Limited; CN=COMODO RSA Domain Validation Secure Server CA
* SSL certificate verify ok.
} [5 bytes data]
> GET /login HTTP/1.1
> Host: accounts.domain.tld
> User-Agent: curl/7.52.1
> Accept: */*
>
{ [5 bytes data]
< HTTP/1.1 200 OK
< Date: Wed, 31 Jan 2018 23:40:07 GMT
< Content-Type: text/html;charset=utf-8
< Content-Length: 8023
< Connection: keep-alive
< Vary: Accept-Encoding
< Pragma: no-cache
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Cache-Control: no-cache
< Cache-Control: no-store
< Set-Cookie: JSESSIONID=mpw7r01ef0y4i03jzon87mt;Path=/;Secure;HttpOnly
< Vary: Accept-Encoding
< Strict-Transport-Security: max-age=15724800;
<
{ [3684 bytes data]
* Curl_http_done: called premature == 0
100 8023 100 8023 0 0 144k 0 --:--:-- --:--:-- --:--:-- 145k
* Connection #0 to host accounts.domain.tld left intact
@icereval The cookie is only being sent if you have more than 1 endpoint. You only have one.
@lorenz, that makes sense! I've retested with the replicas back at their normal size, and it's working as expected now.
I had initially turned the replicas down to simplify troubleshooting; well, apparently too far, oops...
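For anyone else doing the same, scaling the backend back up is just (the deployment name below is a placeholder for your own backend Deployment):
$ kubectl --namespace stage-1 scale deployment <backend-deployment> --replicas=2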
Thanks for the quick response, and I can confirm 0.10.2 is working with my simple test setup:
$ curl -o /dev/null -s --location -D - https://account.domain.tls/login
HTTP/2 200
server: nginx
date: Thu, 01 Feb 2018 02:32:55 GMT
content-type: text/html;charset=utf-8
content-length: 8043
vary: Accept-Encoding
set-cookie: INGRESSCOOKIE=26b7af4429c0e7f7b19058dfb72886d0; Path=/; HttpOnly
pragma: no-cache
expires: Thu, 01 Jan 1970 00:00:00 GMT
cache-control: no-cache
cache-control: no-store
set-cookie: JSESSIONID=1584kdt4d4867sea3d6d7uhnl;Path=/;Secure;HttpOnly
vary: Accept-Encoding
strict-transport-security: max-age=15724800;
Finally, I found why my sample configuration does not work!
I confirm there is a bug in the Nginx IngressController.
The issue is related to the definition of the Ingress and the absence of the "path:" directive.
Below are 2 Ingresses that illustrate the different behaviors:
ingress-without-path.yml (sticky session does not work):
$ cat ingress-without-path.yml
#######################################################################################################################
# Ingress to access echo service.
#######################################################################################################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-server
  annotations:
    # define the class so that this Ingress is only processed by the IngressController named "nginx-ingress-controller".
    kubernetes.io/ingress.class: "nginx"
    # define sticky session annotations as described here:
    # https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/affinity/cookie
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: echo-server
    http:
      paths:
      - backend:
          serviceName: echo-server
          servicePort: 8080
ingress-with-path.yml (sticky session works):
$ cat ingress-with-path.yml
#######################################################################################################################
# Ingress to access echo service.
#######################################################################################################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-server
  annotations:
    # define the class so that this Ingress is only processed by the IngressController named "nginx-ingress-controller".
    kubernetes.io/ingress.class: "nginx"
    # define sticky session annotations as described here:
    # https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/affinity/cookie
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: echo-server
    http:
      paths:
      - path: /
        backend:
          serviceName: echo-server
          servicePort: 8080
As you can see, the only difference is the presence of the directive 'path: /' in the second one:
$ diff ingress-without-path.yml ingress-with-path.yml
22c22,23
<       - backend:
---
>       - path: /
>         backend:
First, let's use the Ingress without the "path" directive:
$ kubectl apply -f ingress-without-path.yml
ingress "echo-server" created
As we can see, there is no "Set-Cookie" header in the response:
$ curl -I -XGET http://echo-server:30080
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Thu, 01 Feb 2018 15:10:38 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Now, let's use the Ingress with the "path" directive:
$ kubectl delete ing echo-server
ingress "echo-server" deleted
$ kubectl apply -f ingress-with-path.yml
ingress "echo-server" created
The "Set-Cookie" header is correcly set!
$ curl -I -XGET http://echo-server:30080
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Thu, 01 Feb 2018 15:10:53 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Set-Cookie: route=7df3b7d7fb18d7cb908aad7837dbbfcb600cb7d7; Path=/; HttpOnly
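To confirm affinity end to end, the returned cookie can be replayed; the echoserver image prints the serving pod's hostname in its response body, which should now stay the same across requests (the cookie value below is the one from the response above):
$ curl -s --cookie "route=7df3b7d7fb18d7cb908aad7837dbbfcb600cb7d7" http://echo-server:30080 | grep Hostname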
Referring to the official Kubernetes documentation (https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting), it should be possible to define an Ingress without the "path" directive:
The following Ingress tells the backing loadbalancer to route requests based on the Host header.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
That's why my Ingress ingress-without-path.yml is valid.
And so, the bug is located in the "ingress-nginx" project.
Thank you for finding this bug! I can finally run a multi-pod cluster with the Meteor Node.js framework without having problems with image uploads. Images were being uploaded to all nodes, but now with session affinity I have no problems.
I can reproduce this bug in quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
Also, if I define 'path', session affinity works fine :(
@sylmarch why do you need the default backend Deployment and Service? It seems like a waste of resources.
@gWOLF3 it might be useful when your Ingress handles multiple hosts and all requests for one specific host are routed to a single webapp. So the path: / directive should be implicit in this case.