NGINX Ingress controller version: 0.30.0
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-502bfb", GitCommit:"502bfb383169b124d87848f89e17a04b9fc1f6f0", GitTreeState:"clean", BuildDate:"2020-02-07T01:31:02Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Environment:
OS (e.g. from /etc/os-release):
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
Kernel (uname -a): Linux ip-192-168-37-119.us-west-2.compute.internal 4.14.146-119.123.amzn2.x86_64 #1 SMP Mon Sep 23 16:58:43 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

What happened:
I created an NLB on AWS using a Service of type LoadBalancer with the following configuration:
# Creates a Network Load Balancer in AWS
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    # by default the type is elb (classic load balancer)
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  # this setting makes sure the source IP address is preserved
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: https
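Note: this Service manifest alone does not attach any TLS certificate to the NLB. When a certificate from AWS Certificate Manager is attached via the Service (rather than directly on the listener in the AWS console), it is typically done with annotations along these lines (a sketch; the ARN is a placeholder):

  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # placeholder ARN for the ACM certificate covering foo.bar.com
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:..."
    # which Service ports should terminate TLS at the load balancer
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"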
This creates an NLB with a listener on port 443 and an SSL certificate created by AWS Certificate Manager for foo.bar.com. It directs traffic to an ingress controller with the following configs:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  forwarded-for-header: "X-Forwarded-For"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for connections to drain
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
I also deployed the following Ingress for my application:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: bolt
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - foo.bar.com
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /job/submit
            backend:
              serviceName: server
              servicePort: 80
          - path: /job/status
            backend:
              serviceName: server
              servicePort: 80
          - path: /job/logs
            backend:
              serviceName: server
              servicePort: 80
          - path: /health
            backend:
              serviceName: server
              servicePort: 80
When I make a request to the /health endpoint, I get the following:
* Trying 1.2.3.4...
* TCP_NODELAY set
* Connected to foo.bar.com (1.2.3.4) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=foo.bar.com
* start date: Mar 3 00:00:00 2020 GMT
* expire date: Apr 3 12:00:00 2021 GMT
* subjectAltName: host "foo.bar.com" matched cert's "foo.bar.com"
* issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
* SSL certificate verify ok.
> GET /health HTTP/1.1
> Host: foo.bar.com
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Server: nginx/1.17.8
< Date: Tue, 03 Mar 2020 23:05:54 GMT
< Content-Type: text/html
< Content-Length: 255
< Connection: close
<
<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
<hr><center>nginx/1.17.8</center>
</body>
</html>
* Closing connection 0
What you expected to happen:
I would expect to get a 200 response.
How to reproduce it:
I don't know.
Anything else we need to know:
/kind bug
Related to #5051
The NLB is taking care of SSL termination, so you are sending unencrypted data to nginx on port 443, which nginx treats as an SSL-enabled port.
You can fix your problem by following this pattern: https://github.com/kubernetes/ingress-nginx/pull/5374
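For context, that pattern boils down to letting the NLB terminate TLS with the ACM certificate and pointing the 443 listener at the controller's plain-HTTP port. A minimal sketch (placeholder ARN; not the PR verbatim):

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # terminate TLS at the NLB with the ACM certificate (placeholder ARN)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:..."
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    # traffic from the NLB to the pods is plain TCP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      # decrypted traffic is plain HTTP, so send it to the http container port
      targetPort: http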

But anyway, I don't think there is an easy way (or any way) to set up nginx-ingress behind an NLB that handles SSL termination and still have the proper X-Forwarded-Port/X-Forwarded-Proto headers set to 443/https.
I recommend using a Classic Load Balancer instead and terminating SSL at the AWS Classic Load Balancer.
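Regarding the X-Forwarded-Proto/Port concern: one possible workaround (a sketch, not something verified in this thread) is ingress-nginx's proxy-set-headers option, which merges headers from a ConfigMap into every proxied request. Hard-coding the scheme and port is only safe if every NLB listener terminates TLS:

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  # hard-code what the client actually used at the NLB; only valid
  # when all NLB listeners terminate TLS on port 443
  X-Forwarded-Proto: "https"
  X-Forwarded-Port: "443"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-set-headers: "ingress-nginx/custom-headers"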
NLB doesn't terminate TLS.
It needs to be done at the ingress level.
I had the same problem and fixed it by changing the ingress-nginx Service targetPort for https from https to http.

For anyone stuck, I got it to work with the stable/nginx-ingress helm chart with these configurations right out of the box: https://github.com/helm/charts/tree/master/stable/nginx-ingress#aws-l4-nlb-with-ssl-redirection
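The gist of that chart configuration, roughly (a sketch of the linked README's approach rather than a verbatim copy; the ARN is a placeholder):

controller:
  config:
    # TLS is terminated at the NLB, so don't let nginx redirect to HTTPS
    ssl-redirect: "false"
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:..."
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    targetPorts:
      http: http
      # decrypted HTTPS traffic goes to the plain-HTTP container port
      https: http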
NLB has supported TLS termination since 24 Jan 2019: https://aws.amazon.com/blogs/aws/new-tls-termination-for-network-load-balancers/
I don't get why the ingress-nginx documentation at https://kubernetes.github.io/ingress-nginx/deploy/#aws refers to an ALB for TLS termination.
I think the documentation is outdated and was written before NLB supported TLS.
I had the same issue as in the subject and fixed it by changing the targetPort for port 443 from https to http:
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: http
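One caveat with this targetPort change (an assumption based on the controller's redirect behavior, not something reported in this thread): because nginx now only ever sees plain HTTP, an Ingress with a tls: section can bounce requests into an endless HTTPS redirect. The usual companion change is to disable the global redirect in the controller ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # the NLB already terminated TLS; don't redirect plain HTTP to HTTPS
  ssl-redirect: "false"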