0.25.0 introduced a regression causing issues with ExternalName services: https://github.com/kubernetes/ingress-nginx/issues/4324
I upgraded from 0.24.1 to 0.26.1; however, I am still seeing issues with ExternalName services.
Service:
```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": 443
      }
    ],
    "type": "ExternalName",
    "sessionAffinity": "None",
    "externalName": "xxx"
  }
}
```
Ingress:
"annotations": {
"nginx.ingress.kubernetes.io/backend-protocol": "HTTPS",
"nginx.ingress.kubernetes.io/upstream-vhost": "xxx"
}
Error log:
```
2019/10/04 13:11:13 [error] 2258#2258: *728312 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 3.4.5.6, server: yyy, request: "POST /bar HTTP/1.1", upstream: "https://1.2.3.4:80/bar", host: "xxx"
```
From what I can see, even though the Service's targetPort is 443, nginx is trying to connect to port 80 (the upstream in the log above is `https://1.2.3.4:80/bar`).
Also, I was using the following to support SNI:
```nginx
proxy_ssl_server_name on;
proxy_ssl_name $proxy_host;
```
From what I can see, the `$proxy_host` variable has changed since 0.25.x/0.26.x. Can you confirm the best way to get the upstream host name so that SNI keeps working?
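For reference, a minimal sketch of attaching those SNI directives per Ingress via the `configuration-snippet` annotation instead of a global template change; the literal host name (`xxx` here, standing in for the real external name) is an assumption, since `$proxy_host` no longer resolves the way it did before 0.25.x:

```yaml
metadata:
  annotations:
    # Injected into the generated location block. proxy_ssl_name is
    # pinned to a literal host rather than $proxy_host, whose value
    # changed in recent releases.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_server_name on;
      proxy_ssl_name "xxx";
```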
This is the commit that broke it - https://github.com/kubernetes/ingress-nginx/commit/c7d2444cf4a9eef81aed3ff05728753d3e0889d7#diff-980db9e4b88704f12338bd074839f94e
How did this even pass the tests?
It just took down a couple of our services.
Previously the upstream port was set to the targetPort of the Service; now it is set to ing.Port, i.e. the Ingress port.
It should be using the port specified as targetPort in the Service that the Ingress points to.
@aledbf
This is now horribly broken: in all cases the expectation is that your Ingress port matches your Service port 1:1.
If that is the intention, please document it in the README.
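To illustrate the 1:1 expectation being described, a minimal sketch (all names hypothetical) in which the Ingress servicePort, the Service port, and the actual upstream port agree, which appears to be the only combination that still works:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-external        # hypothetical
spec:
  type: ExternalName
  externalName: backend.example.com
  ports:
  - name: https
    protocol: TCP
    port: 443        # must match the Ingress servicePort below
    targetPort: 443  # apparently ignored when it differs from port
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress         # hypothetical
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-external
          servicePort: 443      # 1:1 with the Service port above
```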
@ElvinEfendi Can you kindly take a look, and confirm if the issue is a regression?
@ElvinEfendi @aledbf
Any updates please?
@michaelgeorgeattard someone needs to step in and start helping. Please check https://github.com/kubernetes/ingress-nginx/issues/4404
@aledbf For the sake of completeness, I have replicated the issue in 0.27.0.
Can you kindly mark this issue as a bug so it can be tackled eventually?
/kind bug
I wonder when this is going to be fixed, since 0.29 is out now. Our services are going down because of this. Is there any way to prioritise it?
> Is there any way to prioritise this?
Yes, there is. If someone can provide a way to reproduce this, something like https://gist.github.com/aledbf/266940de7569a1163b9e1c085aa4e771 helps with the hardest part.
From the definition:
"name": "http", "protocol": "TCP", "port": 80, "targetPort": 443
"nginx.ingress.kubernetes.io/backend-protocol": "HTTPS",
In this case the port should be 443, not 80. What is the point of using 80 as the port here?
Try it with any external name. Here is what we are facing:
Nginx ingress controller version: 0.26.2
Ingress file:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: presto-external-ingress
  namespace: xxxx
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "true"
    # certmanager.k8s.io/cluster-issuer: "letsencrypt-xxxxx"
    nginx.ingress.kubernetes.io/proxy-body-size: 200m
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: xxxx-basic-secret
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  tls:
  - hosts:
    - dev-admin-query.xxx.com
    secretName: xxxx
  rules:
  - host: dev-admin-query.xxx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: presto-proxy-service
          servicePort: 80
```
Service:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: presto-proxy-service
  namespace: xxxx
  resourceVersion: "19040038"
spec:
  externalName: 192.168.xx.xxx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  sessionAffinity: None
  type: ExternalName
status:
  loadBalancer: {}
```
We are now getting a 502 status code on the URL. It used to work before the nginx upgrade.
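If the 1:1 port expectation described above is indeed the new behaviour, a possible workaround (untested, and assuming the proxy at 192.168.xx.xxx really listens on 8080) would be to expose 8080 on the Service directly and point the Ingress backend at `servicePort: 8080`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: presto-proxy-service
  namespace: xxxx
spec:
  type: ExternalName
  externalName: 192.168.xx.xxx
  ports:
  # Match the real upstream port so the controller connects to :8080, not :80.
  - port: 8080
    protocol: TCP
    targetPort: 8080
```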
@aledbf
We need to confirm whether it is fixed in the latest release; otherwise we will just downgrade our ingress controller.
Thanks a lot @aledbf. That was the quickest fix I have seen yet.