NGINX Ingress controller version:
0.26.1
Kubernetes version (use kubectl version):
client: 1.14.7
server: 1.14.10-gke.0
Environment:
Kernel (e.g. uname -a): 4.14.138+
What happened:
tl;dr
I'm getting a 502 Bad Gateway when trying to talk to a TLS-only service that sits behind an nginx ingress controller. In the nginx logs, I see the error: upstream SSL certificate does not match "upstream_balancer" while SSL handshaking to upstream.
More detail
I have an internal ingress sitting between two backend services: C (the "client" service that sends requests) and S (the "server" service that handles those requests). I'm using the nginx ingress controller for this ingress to get consistent-hash routing from C to S based on an HTTP header provided by C.
                      /-> S pod 1
C --> nginx ingress -|
                      \-> S pod 2
Including the Kubernetes service objects, here's what the network looks like:
C pod --> "sticky S" svc --> nginx ingress controller svc --> nginx ingress controller pod --> S pod
I want to use TLS between C and S. Since I'm relying on the nginx ingress controller to read an HTTP header, I can't just use TLS passthrough. I need to have the nginx ingress controller terminate TLS from C, read the header, and then initiate a new TLS connection to S when it proxies the request.
When I curl the "sticky S" service (which routes to the ingress controller) from C, I get a 502 error:
$ curl --key $KEY_FILE --cacert $CERT_FILE "https://svc-s-sticky.default.svc.cluster.local/_status/healthz"
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
In the nginx logs, I see the following line:
2020/01/14 01:26:02 [error] 41#41: *199 upstream SSL certificate does not match "upstream_balancer" while SSL handshaking to upstream, client: 10.44.5.7, server: svc-s-sticky.default.svc.cluster.local, request: "GET /_status/healthz HTTP/2.0", upstream: "https://10.44.2.168:9001/_status/healthz", host: "svc-s-sticky.default.svc.cluster.local"
This seems to be caused by the following line in the generated nginx conf:
proxy_pass https://upstream_balancer;
Obviously, the cert that S is serving does not match the name "upstream_balancer" - that's an nginx-internal upstream name, and my service doesn't know anything about it. What is the suggested workaround here?
What you expected to happen:
I should be able to use TLS to an upstream service. I should get a 200 OK response when I curl the "sticky S" service. I should not have to change my certificates to claim that their hostname is "upstream_balancer".
How to reproduce it:
Here's my Ingress spec:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: s-sticky
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/upstream-hash-by: "$http_x_header_name"
    nginx.ingress.kubernetes.io/upstream-hash-by-subset: "false"
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/ingress-nginx-tls-key-v1"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        # this is the sticky svc that forwards to the ingress controller
        - svc-s-sticky.default.svc.cluster.local
      secretName: svc-s-tls-key-v2
  rules:
    - host: svc-s-sticky.default.svc.cluster.local
      http:
        paths:
          - path: /
            backend:
              serviceName: svc-s # this is the (non-sticky) svc in front of the S pods
              servicePort: 443
The sticky service looks like this:
kind: Service
apiVersion: v1
metadata:
  name: svc-s-sticky
  annotations:
    cloud.google.com/app-protocols: '{"https": "HTTPS"}'
spec:
  type: ExternalName
  # this is the service in front of the ingress controller
  externalName: ingress-nginx.ingress-nginx.svc.cluster.local
The generated nginx conf on the ingress controller pod is:
## start server svc-s-sticky.default.svc.cluster.local
server {
    server_name svc-s-sticky.default.svc.cluster.local ;
    listen 80 ;
    listen 443 ssl http2 ;
    set $proxy_upstream_name "-";
    ssl_certificate_by_lua_block {
        certificate.call()
    }
    # PEM sha: bd4963e8f770fe1af553f004941cb3962d90e194
    proxy_ssl_certificate /etc/ingress-controller/ssl/default-ingress-nginx-tls-key-v1.pem;
    proxy_ssl_certificate_key /etc/ingress-controller/ssl/default-ingress-nginx-tls-key-v1.pem;
    proxy_ssl_trusted_certificate /etc/ingress-controller/ssl/default-ingress-nginx-tls-key-v1.pem;
    proxy_ssl_ciphers DEFAULT;
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    proxy_ssl_verify on;
    proxy_ssl_verify_depth 1;
    location / {
        set $namespace "default";
        set $ingress_name "s-sticky";
        set $service_name "svc-s";
        set $service_port "443";
        set $location_path "/";
        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = true,
                ssl_redirect = true,
                force_no_ssl_redirect = false,
                use_port_in_redirects = false,
            })
            balancer.rewrite()
            plugins.run()
        }
        header_filter_by_lua_block {
            plugins.run()
        }
        body_filter_by_lua_block {
        }
        log_by_lua_block {
            balancer.log()
            monitor.call()
            plugins.run()
        }
        port_in_redirect off;
        set $balancer_ewma_score -1;
        set $proxy_upstream_name "default-svc-s-443";
        set $proxy_host $proxy_upstream_name;
        set $pass_access_scheme $scheme;
        set $pass_server_port $server_port;
        set $best_http_host $http_host;
        set $pass_port $pass_server_port;
        set $proxy_alternative_upstream_name "";
        client_max_body_size 1m;
        proxy_set_header Host $best_http_host;
        # Pass the extracted client certificate to the backend
        # Allow websocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Request-ID $req_id;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Host $best_http_host;
        proxy_set_header X-Forwarded-Port $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;
        proxy_set_header X-Scheme $pass_access_scheme;
        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy "";
        # Custom headers to proxied server
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_buffering off;
        proxy_buffer_size 4k;
        proxy_buffers 4 4k;
        proxy_max_temp_file_size 1024m;
        proxy_request_buffering on;
        proxy_http_version 1.1;
        proxy_cookie_domain off;
        proxy_cookie_path off;
        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0;
        proxy_next_upstream_tries 3;
        proxy_pass https://upstream_balancer;
        proxy_redirect off;
    }
}
## end server svc-s-sticky.default.svc.cluster.local
/kind bug
Fixed it! Looks like I needed to set the proxy_ssl_name directive to tell nginx to verify the upstream certificate against a custom name rather than upstream_balancer. There isn't a dedicated ingress-nginx annotation for this, so I had to use the configuration-snippet annotation:
nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_ssl_name "svc-s.default.svc.cluster.local";
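For reference, here's a minimal sketch of how that snippet slots into the Ingress metadata from the reproduction above (only the relevant annotations are shown; the rest of the spec is unchanged). The name passed to proxy_ssl_name is assumed to match a subject name on the certificate that S actually serves:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: s-sticky
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/ingress-nginx-tls-key-v1"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    # Override the name nginx checks the upstream certificate against,
    # instead of the default taken from proxy_pass ("upstream_balancer").
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_name "svc-s.default.svc.cluster.local";
With proxy-ssl-verify left on, the upstream certificate is still validated against the CA bundle from the proxy-ssl-secret; only the expected hostname changes.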