NGINX Ingress controller version:
Image:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.16
Environment variables:
POD_NAME: ingress-6np6n
POD_NAMESPACE: kube-system
Commands:
-
Args:
/nginx-ingress-controller
--default-backend-service=$(POD_NAMESPACE)/default-http-backend
--configmap=$(POD_NAMESPACE)/nginx-custom-configuration
--tcp-services-configmap=$(POD_NAMESPACE)/nginx-tcp-ingress-configmap
--enable-ssl-passthrough
Kubernetes version (use kubectl version):
kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Don't have access to the server, only kubectl.
uname -a: NA
What happened:
This works perfectly for curl and Chrome:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example
annotations:
ingress.kubernetes.io/ssl-passthrough: "true"
spec:
rules:
- host: example
http:
paths:
- backend:
serviceName: example
servicePort: 443
What you expected to happen:
Until you create a web page that connects to that service and also loads img tiles from another service on the same cluster using the same SSL certificate. Then Chrome wants to reuse the HTTP/2 connection, resulting in requests getting sent to the wrong pod. Note that curl keeps working for both services because it doesn't try to reuse the HTTP/2 connection from the previous curl command. Is there a workaround for this, other than running two different Kubernetes clusters so Chrome doesn't reuse the HTTP/2 connection?
How to reproduce it (as minimally and precisely as possible):
Set up two container services on the same cluster using type: ClusterIP and the same SSL certificate, both using ingress.kubernetes.io/ssl-passthrough: "true" with the Ingress template above, and make a web page that loads from both services over HTTP/2.
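The repro setup can be sketched as two passthrough Ingresses sharing one certificate. Hostnames, resource names, and service names below are hypothetical placeholders, not taken from any real deployment:

```yaml
# Two services on the same cluster, both terminating TLS themselves with the
# same certificate; the controller only routes the initial ClientHello by SNI.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - backend:
          serviceName: app
          servicePort: 443
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tiles
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: tiles.example.com
    http:
      paths:
      - backend:
          serviceName: tiles
          servicePort: 443
```

Because both hostnames resolve to the same ingress IP and are covered by the same certificate, an HTTP/2 browser is allowed to coalesce them onto one connection.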
When you use ingress.kubernetes.io/ssl-passthrough: "true" we just pipe the incoming TCP traffic; NGINX is not involved at all. Your application should provide HTTP/2 support.
Closing. This works as expected. Please check your application's HTTP/2 support.
How is Chrome supposed to know which HTTP/2 server pod to reach when both services share the same TCP connection to the Chrome HTTP/2 client, even though they have different subdomains? Just piping the incoming TCP connection is exactly the problem :) The ingress controller should check and say "wait a minute, this can't be the same TCP connection because it's a different subdomain" and open a new connection for the other subdomain.
The ingress controller should check and say "wait a minute, this can't be the same TCP connection because it's a different subdomain" and open a new connection for the other subdomain.
Unfortunately we cannot do that. The Go code does not inspect the traffic, only the TLS ClientHello message for the SNI extension: https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/tcp.go#L113
OK thanks, I will do some more TLS digging to find a workaround without disabling ssl-passthrough. I'm sure more people will hit this issue sooner or later. Maybe with some certificate hocus-pocus the Chrome client can be forced into a new TLS handshake, e.g. a different certificate for every service.
@gertcuykens I face the same issue and at first it was driving me nuts until I found this thread.
Now I at least know where it comes from, but I don't have a solution yet (other than restarting Chrome for each site as a workaround).
Exact same issue! I am using another ingress (HAProxy), but the same setup (ssl-passthrough + same certs), and traffic basically hits the same pod (the one the initial HTTP/2 session was opened to). Is the only option to offload SSL?
We discovered a related issue where we have multiple ssl-passthrough upstreams that only use different hostnames.
Unfortunately, as @aledbf makes clear: nginx-ingress does not inspect the connection after the initial handshake, no matter if the Host header changes. This completely breaks our applications, and we needed to disable HTTP/2 on the backends to force new connections.
We face a similar issue without ssl-passthrough: same wildcard cert, but I think the cert is irrelevant.
Our setup is as follows: our domain (example.com) is served by an NGINX (non-Kubernetes). Behind this server are several k8s clusters (a.example.com, b.example.com, c.example.com).
Via curl I can access every cluster. Via browsers I can only access one cluster; on the others I get a 404 from the default backend. I investigated and saw that if I visit a.example.com first in a browser session and then go to b.example.com in the same session, the log of a.example.com shows "Host b.example.com not served" (or something similar). The connection goes to the wrong cluster.
I disabled HTTP/2 via the ConfigMap key use-http2: "false", and now I can access all the clusters in one browser session.
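For reference, that workaround is a key in the controller's ConfigMap; the ConfigMap name must match the --configmap flag, and nginx-custom-configuration / kube-system below are taken from the deployment args at the top of this issue (adjust to your own setup):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-custom-configuration
  namespace: kube-system
data:
  # Forces HTTP/1.1 between browser and ingress, so clients cannot
  # coalesce different hostnames onto one connection.
  use-http2: "false"
```

This trades HTTP/2's multiplexing performance for correct per-hostname routing.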
How can we fix that and use http2?