What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):
NGINX Ingress controller version:
v0.16.2
Kubernetes version (use kubectl version):
v1.9.6
Environment:
AWS
What happened:
With this Service, which creates an ELB handling TLS termination:
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "#snip"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
labels:
k8s-addon: ingress-nginx.addons.k8s.io
name: ingress-nginx
namespace: ingress-nginx
spec:
externalTrafficPolicy: Cluster
ports:
- name: https
port: 443
protocol: TCP
targetPort: http
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app: ingress-nginx
type: LoadBalancer
And these nginx settings requesting force-ssl-redirect:
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-configuration
namespace: ingress-nginx
data:
client-body-buffer-size: 32M
hsts: "true"
proxy-body-size: 1G
proxy-buffering: "off"
proxy-read-timeout: "600"
proxy-send-timeout: "600"
server-tokens: "false"
force-ssl-redirect: "true"
upstream-keepalive-connections: "50"
use-proxy-protocol: "true"
Requesting http://example.com results in a 308 redirect loop. With force-ssl-redirect: false it works fine, but there is no HTTP -> HTTPS redirect.
What you expected to happen:
I expect http://example.com to be redirected to https://example.com by the ingress controller.
How to reproduce it (as minimally and precisely as possible):
Spin up an example with the settings above, a default backend, an ACM cert and a dummy Ingress for it to attach to. Then attempt to request the http:// endpoint.
Hi folks, still experiencing this issue in 0.17.1 it seems
I am seeing this issue as well. Could the destination port provided via the PROXY protocol not be used to determine if the incoming connection was made over HTTP or HTTPS?
When using the L7/HTTP ELB the X-Forwarded-Proto header is used to determine this: https://github.com/kubernetes/ingress-nginx/blob/master/rootfs/etc/nginx/template/nginx.tmpl#L253-L257.
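To make the failure mode concrete, here is a rough Python sketch of the redirect decision (the function is purely illustrative, not the actual controller code): with an L7 ELB, nginx can read X-Forwarded-Proto, but behind a TCP/SSL ELB every connection it sees is plain HTTP, so force-ssl-redirect fires even for requests that reached the ELB over HTTPS.

```python
def redirect_decision(arrived_https_at_elb, elb_mode, force_ssl_redirect):
    """Sketch of the controller's redirect decision (illustrative only)."""
    if elb_mode == "http":
        # L7 ELB: nginx can trust the X-Forwarded-Proto header it sends
        scheme_seen = "https" if arrived_https_at_elb else "http"
    else:
        # TCP/SSL ELB: TLS is already stripped and no X-Forwarded-Proto
        # is added, so every connection looks like plain HTTP to nginx
        scheme_seen = "http"
    return force_ssl_redirect and scheme_seen != "https"

# L7 ELB: the HTTPS request is recognised, no redirect
assert redirect_decision(True, "http", True) is False
# TCP ELB: even the HTTPS request looks like HTTP, so nginx redirects
# again, the browser follows over HTTPS, and the loop repeats
assert redirect_decision(True, "tcp", True) is True
```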
Same situation: SSL terminating at the ELB using an ACM cert.
After thinking about this over the weekend I got it to work this morning.
I had my ELB set up with the wrong protocols: I had it set to TCP and SSL... It needs to be HTTP and HTTPS.
So...
Make sure the ELB is set to load balance the HTTP and HTTPS protocols, not SSL or TCP, etc...
Double check that both HTTP and HTTPS balance to the same internal port.
Set your SSL Cert on your HTTPS 443 Load Balancer Port
in your nginx configmap:
use-proxy-protocol: false
force-ssl-redirect: true
Example below:
{
"apiVersion": "v1",
"kind": "ConfigMap",
"metadata": {
"name": "nginx-configuration",
"namespace": "ingress-nginx",
"selfLink": "/api/v1/namespaces/ingress-nginx/configmaps/nginx-configuration",
"uid": "c8eddbd7-a17a-11e8-a3e5-12ca8f067004",
"resourceVersion": "1265268",
"creationTimestamp": "2018-08-16T17:35:36Z",
"labels": {
"app": "ingress-nginx"
}
},
"data": {
"client-body-buffer-size": "32M",
"force-ssl-redirect": "true",
"hsts": "true",
"proxy-body-size": "1G",
"proxy-buffering": "off",
"proxy-read-timeout": "600",
"proxy-send-timeout": "600",
"redirect-to-https": "true",
"server-tokens": "false",
"ssl-redirect": "true",
"upstream-keepalive-connections": "50",
"use-proxy-protocol": "false"
}
}
Thank you @boxofnotgoodery, this works for us!
Let me share my settings that finally work. "redirect-to-https": "true", does not seem to be needed. Thank you @boxofnotgoodery .
In ConfigMap:
data:
client-body-buffer-size: 32M
hsts: "true"
proxy-body-size: 1G
proxy-buffering: "off"
proxy-read-timeout: "600"
proxy-send-timeout: "600"
server-tokens: "false"
ssl-redirect: "true"
force-ssl-redirect: "true"
upstream-keepalive-connections: "50"
use-proxy-protocol: "false"
Also in Service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
I think the suggestion above misses the point a bit (at least for my use case) as it will mean you cannot use web sockets and gRPC when the ELB runs in HTTP mode. It will have to run in TCP/SSL mode (ideally with the proxy protocol) in order for those features to be supported.
I agree with Tenzer, I am also trying to enable force ssl redirect while using websockets and get a 308 redirect loop when enabled. Currently I cannot enable ssl redirect until there is a fix for this. If anyone has a suggestion please let me know.
Same problem. I have to use WebSocket, so I'm not able to use HTTP and HTTPS in ELB ports, only TCP.
This also happens to me on 0.19: when we have an ELB on TCP (to use with web sockets), it results in a redirect loop similar to @Tenzer, @dthomason and @okgolove.
I've fixed it using this answer:
https://stackoverflow.com/a/51936678/2956620
Looks like a crutch, but it works :)
Same issue for me
Same issue is happening
I gave the workaround in the Stack Overflow post a try and got it working as well. I'll try to point out the changes you have to make for the workaround, to make it a bit clearer why it works.
I'll start with the configuration of the service/load balancer/ELB:
---
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
# Enable PROXY protocol
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
# Specify SSL certificate to use
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:[...]
# Use SSL on the HTTPS port
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
# We are using a target port of 8080 here instead of 80, this is to work around
# https://github.com/kubernetes/ingress-nginx/issues/2724
# This goes together with the `http-snippet` in the ConfigMap.
targetPort: 8080
- name: https
port: 443
targetPort: http
Three things to point out here:
- The PROXY protocol is enabled with the service.beta.kubernetes.io/aws-load-balancer-proxy-protocol annotation.
- Because the ELB forwards plain TCP, it won't send an X-Forwarded-Proto header with the requests.
- The HTTP listener targets port 8080 instead of 80, to work around this issue.

In the nginx-configuration ConfigMap I ended up with this:
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
data:
use-proxy-protocol: "true"
# Work around for HTTP->HTTPS redirect not working when using the PROXY protocol:
# https://github.com/kubernetes/ingress-nginx/issues/2724
# It works by getting Nginx to listen on port 8080 on top of the standard 80 and 443,
# and making any requests sent to port 8080 be responded to by this code, rather than
# the normal port 80 handling.
ssl-redirect: "false"
http-snippet: |
map true $pass_access_scheme {
default "https";
}
map true $pass_port {
default 443;
}
server {
listen 8080 proxy_protocol;
return 308 https://$host$request_uri;
}
This does the following:
- Enables the PROXY protocol with the use-proxy-protocol line.
- The http-snippet contains two bits of Nginx configuration. The map statement is used to overrule the value $pass_access_scheme would otherwise be set to; the map configured in the http-snippet is injected further down in the Nginx configuration, and tricks Nginx into thinking all connections were made over HTTPS.
- The server directive sets up Nginx to listen on port 8080 as well as port 80, and any request made to that port will receive a 308 (Permanent Redirect) response, forwarding it to the HTTPS version of the URL.

An extra thing I changed which wasn't mentioned on Stack Overflow: I changed the ports section of the Deployment from this:
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
to this:
ports:
- name: http
containerPort: 80
- name: http-workaround
containerPort: 8080
This makes the ports the Kubernetes pod accepts connections on match what we need.
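The port wiring above can be summarised in a small sketch (illustrative Python, not part of the controller; the ports are taken from the manifests above):

```python
# Port wiring in the workaround:
#   ELB :80  (plain TCP)       -> nginx :8080 -> always 308 to https://
#   ELB :443 (TLS termination) -> nginx :80   -> normal ingress handling
ELB_TO_NGINX = {80: 8080, 443: 80}

def handle(elb_port, host, uri):
    nginx_port = ELB_TO_NGINX[elb_port]
    if nginx_port == 8080:  # the extra server block from the http-snippet
        return (308, f"https://{host}{uri}")
    return (200, None)      # served by the regular port-80 virtual hosts

assert handle(80, "example.com", "/app") == (308, "https://example.com/app")
assert handle(443, "example.com", "/app") == (200, None)
```

Because the redirect server only listens on 8080, the regular port-80 handling never sees plain-HTTP traffic, which is what breaks the loop.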
I hope this is useful for other people. I don't know if it would be worth adding to the documentation somewhere or if it could inspire a more slick workaround.
@Tenzer Thank you so much. Been stuck in this for a couple days, your solution works like a charm.
I have just updated my previous comment and added the following to the http-snippet:
map true $pass_port {
default 443;
}
I found this to be necessary for Jenkins to not complain about the port number it received in the X-Forwarded-Port header not matching what the client was seeing, so just a minor thing.
Thanks a ton @Tenzer. This is a great solution for anyone using a TCP/SSL load balancer that still wants HTTP redirects. Our data scientists will be very happy to have Jupyterhub, which requires web sockets, up and running in the k8s cluster.
@trueinviso Thank you! I have a similar setup (terminating TLS at the load balancer) and reverting from 0.22.0 to 0.21.0 fixed the infinite redirect loop for me.
@trueinviso If you aren't using the PROXY protocol (with use-proxy-protocol: "true" in the config map) then your issue isn't related to what this GitHub issue is about.
@Tenzer I'll move it to the other issue referencing the 308 redirect.
@Tenzer Thank you so much. Been stuck in this for a couple days, your solution works like a charm.
Same here. Thanks @Tenzer!
@Tenzer's workaround worked like magic for us, until we tried to upgrade the nginx image to version 0.22.0.
I believe some work regarding the "use-forwarded-headers" setting was merged in that version, and it might be the cause.
I'd appreciate any help with this, as it is blocking us from upgrading...
@assafShapira could you please provide some more information on what behaviour you are seeing with version 0.22.0? I'm using the workaround with an Nginx ingress controller on version 0.22.0 and I'm not aware of any problems with it.
Sorry, wrong version number: it works well on 0.22.0 and 0.23.0, but it breaks on 0.24.0.
I can also confirm it's not working on 0.24.1 and 0.25.0
Okay, but the question of how it breaks still remains.
I'm getting into 308 loop
and in the browser, I'm getting ERR_TOO_MANY_REDIRECTS
I've tried to upgrade an Nginx ingress controller to both version 0.24.0, 0.24.1 and 0.25.0 and from what I can see the problem is the X-Forwarded-Port and X-Forwarded-Proto are respectively set to "80" and "http", meaning the backend server may think (if it checks these) that the request was served over HTTP, when it actually reached the AWS ELB over HTTPS. This is what the following code block in the original work around was fixing:
http-snippet: |
map true $pass_access_scheme {
default "https";
}
map true $pass_port {
default 443;
}
This work around doesn't work any more as the two maps aren't used further down in the generated config file. Each server {} block instead has a list of variables which are set based on variables provided by Nginx:
https://github.com/kubernetes/ingress-nginx/blob/28cc3bb5e2f147d79f2fa7852838afbe9974a020/rootfs/etc/nginx/template/nginx.tmpl#L816-L820
These are then used inside the location / {} block to set the headers sent to the backend:
https://github.com/kubernetes/ingress-nginx/blob/28cc3bb5e2f147d79f2fa7852838afbe9974a020/rootfs/etc/nginx/template/nginx.tmpl#L1205-L1206
I've tried various ways to change the value of these headers so the port number is instead 443 and the protocol is "https", but to no avail:
- location-snippet in the ConfigMap: this is the most promising, but I've only been able to append values onto the headers, not replace them.
- server-snippet in the ConfigMap.
- http-snippet in the ConfigMap.
- nginx.ingress.kubernetes.io/server-snippet in the Ingress definition.

I have tried to set the $pass_port and $pass_access_scheme variables to other values, used proxy_set_header to send other values to the backend, and even more_set_input_headers from OpenResty: https://github.com/openresty/headers-more-nginx-module. None of them seems to have any effect on the passed headers, which seems odd to me.
In a test Nginx instance I tried to create a minimal configuration to create a test case for this, but I haven't been able to reproduce it:
events {}
http {
server {
listen 8081;
set $pass_access_scheme $scheme;
# Position 1
location / {
# Position 2
set $pass_access_scheme https;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_pass http://127.0.0.1:8080;
# Position 3
}
# Position 4
}
}
Regardless of which of the four positions noted by the comments I put set $pass_access_scheme https; in, the backend server will get X-Forwarded-Proto: https sent as a request header.
As long as we haven't got a way to overrule the X-Forwarded-Port and X-Forwarded-Proto headers sent to the backend, I'm not sure the workaround will work in ingress-nginx versions newer than 0.23.0 :(
I'd be very interested to hear if anybody else can come up with a work around for changing the values of those headers.
Oh, one possible solution I contemplated but discarded was to copy the Nginx configuration template file and make the necessary alterations directly to that, but it seems like overkill for changing the values of two headers and like a fragile solution: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/.
Thank you very much(!) for the detailed explanation.
I agree that completely overriding the Nginx configuration template is overkill.
I'll try to dig into that further and will update if I find a solution.
@Tenzer thanks a lot, both for the original solution and for the info.
Unsure if this helps in your case but this fixed the issue for me: https://github.com/kubernetes/ingress-nginx/issues/1957#issuecomment-462826897
That would unfortunately not help in this case as the AWS ELB doesn't generate any headers when the PROXY protocol is in use.
@Tenzer QQ about your solution above: isn't the Service using NodePorts?
spec:
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
# We are using a target port of 8080 here instead of 80, this is to work around
# https://github.com/kubernetes/ingress-nginx/issues/2724
# This goes together with the `http-snippet` in the ConfigMap.
targetPort: 8080
- name: https
port: 443
targetPort: http
I'm having something like:
ports:
- name: http
nodePort: 32056
port: 80
protocol: TCP
targetPort: 8080
- name: https
nodePort: 30463
port: 443
protocol: TCP
targetPort: http
What to do in this case?
Ok, I managed to make it work (using the nginx-ingress helm chart).
If anyone's curious what Helm chart values you need, here's what I have, based on @Tenzer's comment:
controller:
service:
targetPorts:
https: http
http: 9000
annotations:
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 3600
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ ADD SSL CERT HERE }}
config:
use-proxy-protocol: "true"
# Work around for HTTP->HTTPS redirect not working when using the PROXY protocol:
# https://github.com/kubernetes/ingress-nginx/issues/2724
# It works by getting Nginx to listen on port 9000 on top of the standard 80 and 443,
# and making any requests sent to port 9000 be responded to by this code, rather than
# the normal port 80 handling.
ssl-redirect: "false"
http-snippet: |
map true $pass_access_scheme {
default "https";
}
map true $pass_port {
default 443;
}
server {
listen 9000 proxy_protocol;
return 307 https://$host$request_uri;
}
@costimuraru do you mind posting an example of what an ingress using this setup would look like? I can't get this to work in 0.26.1
Edit
I was testing with an NLB and that was not working. Tested the above settings with an ELB and it worked. Any guidance on getting it to work with an NLB?
@costimuraru 's solution worked for me, however it's passing x-forwarded-proto: http and X-Forwarded-Port: 80. I would expect to see https and 443. Is anyone else experiencing this?
@mjhuber I've been battling this exact same issue today. Because of this, our Spring Boot app behind nginx thinks it receives HTTP calls and performs redirects to http instead of https.
No luck so far...
@mjhuber Yes, see my comment further up: https://github.com/kubernetes/ingress-nginx/issues/2724#issuecomment-511891970
I'm struggling a bit with the workarounds proposed by @Tenzer and @costimuraru: setting up nginx like that means every call (even an HTTPS one) is redirected with a 308 status. Some clients drop headers on redirect (e.g. Authorization, to avoid sending it to an unknown redirect target), which effectively breaks communication with services behind nginx working this way.
Hi, @costimuraru ,
I'd like to know which chart version you use.
About port 9000: do you open another port 9000 in deployment.spec.template.spec.containers[].ports?
On chart: 1.27.0
Image: 0.26.1
I have values like this
---
controller:
image:
repository: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
tag: "0.26.1"
config:
ssl-redirect: "false"
force-ssl-redirect: "false"
use-proxy-protocol: "true"
http-snippet: |
server {
listen 8080 proxy_protocol;
return 308 https://$host$request_uri;
}
service:
labels:
access: "true"
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: |-
CERT-ARN
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
targetPorts:
http: 8080
https: 80
It works for me.
Using 0.25.0
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: somebucket
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: someprefix
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:23432423423:certificate/6faf394b-ee43-4dfd8-dfafd-234532142314
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx
namespace: ingress-nginx
spec:
clusterIP: 10.255.5.144
externalTrafficPolicy: Cluster
ports:
- name: http
nodePort: 31222
port: 80
protocol: TCP
targetPort: http
- name: https
nodePort: 31333
port: 443
protocol: TCP
targetPort: http
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
sessionAffinity: None
type: LoadBalancer
Ingress for app
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
external-dns.alpha.kubernetes.io/hostname: dinghy-ping.mydomain.net
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true" <-- causes 308 redirect issue
labels:
app: dinghy-ping
chart: dinghy-ping-0.2.1
release: dinghy-ping
name: dinghy-ping
namespace: kube-addons
spec:
rules:
- host: dinghy-ping.mydomain.net
http:
paths:
- backend:
serviceName: dinghy-ping
servicePort: http
path: /
$ curl dinghy-ping.mydomain.net
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>
Simply removing force-ssl-redirect, or setting it to false in the app's Ingress, makes it work:
nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
Ideally, we would like to keep the force-ssl-redirect in place.
I just found that ssl-redirect works.
nginx.ingress.kubernetes.io/ssl-redirect: "true"
Not sure what the difference is between force-ssl-redirect and ssl-redirect; they seem to have similar behavior.
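As I understand the documented behaviour, ssl-redirect only applies to servers that have a TLS section configured on the Ingress, while force-ssl-redirect redirects even without one. A small illustrative sketch of that difference (not the controller code):

```python
def should_redirect(scheme, tls_configured, ssl_redirect, force_ssl_redirect):
    """Sketch of the documented annotation semantics (illustrative only)."""
    if scheme == "https":
        return False
    if force_ssl_redirect:
        return True          # redirects even without a TLS section
    return ssl_redirect and tls_configured  # only when TLS is configured

# ssl-redirect with a TLS section: redirects
assert should_redirect("http", True, True, False) is True
# ssl-redirect without a TLS section: does nothing
assert should_redirect("http", False, True, False) is False
# force-ssl-redirect: redirects regardless of TLS configuration
assert should_redirect("http", False, False, True) is True
```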
I have a solution for NLB
ingress-nginx value
controller:
config:
ssl-redirect: "false" # we use `special` port to control ssl redirection
server-snippet: |
listen 8000;
containerPort:
http: 80
https: 443
special: 8000
service:
targetPorts:
http: http
https: special
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "your-arn"
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
And add this annotation to your app
ingress:
annotations:
nginx.ingress.kubernetes.io/server-snippet: |
if ( $server_port = 80 ) {
return 308 https://$host$request_uri;
}
This will create port 8000 on the nginx pod, and the Service will use this port for HTTPS requests. The server-snippet just checks whether the port is 80 (which is also 80 on the NLB); if so, it responds with status 308, and on other ports it does nothing.
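The routing can be sketched as follows (illustrative Python, not part of the controller; ports taken from the values above):

```python
# Port wiring in the NLB setup:
#   NLB :80  -> nginx :80   ($server_port == 80 -> 308 redirect)
#   NLB :443 -> nginx :8000 ("special" port, request served normally)
def server_snippet(server_port, host, uri):
    if server_port == 80:
        return (308, f"https://{host}{uri}")
    return (200, None)

assert server_snippet(80, "app.example.com", "/") == (308, "https://app.example.com/")
assert server_snippet(8000, "app.example.com", "/") == (200, None)
```

The trick is that $server_port is purely local information, so it works even though the NLB provides no X-Forwarded-* headers.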
Finally, this worked for me with the 1.27.0 chart.
Thanks to everyone here; it was a mix of everything above.
controller:
ingressClass: "
replicaCount: 1
containerPort:
http: 80
https: 443
special: 8000
config:
ssl-redirect: "false"
force-ssl-redirect: "false"
use-proxy-protocol: "true"
server-snippet: |
listen 8000;
http-snippet: |
server {
listen 8000 proxy_protocol;
return 307 https://$host$request_uri;
}
updateStrategy:
rollingUpdate:
maxUnavailable: 0
maxSurge: 1
type: RollingUpdate
service:
targetPorts:
http: http
https: special
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "
service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*' ## So the load balancer sends the source IP address to the ingress.
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
type: "LoadBalancer"
extraArgs:
enable-ssl-chain-completion: "false"
v: 1
rbac:
create: true
serviceAccount:
create: true
@annapurnam does your redirect propagate X-Forwarded-For headers? Mine seem to be dropped
due to the http-snippet:
server {
listen 8000 proxy_protocol;
return 307 https://$host$request_uri;
}
@BobbyJohansen My solution does not use X-Forwarded headers, because the NLB does not have them. I'm using only server port 8000 (special).
> I have a solution for NLB [...]
I've just tested this solution with EKS 1.15 and works very well. Thank you @KongZ
Using the latest helm chart here is what I did to make it work
controller:
config:
ssl-redirect: "false" # we use `special` port to control ssl redirection
server-snippet: |
listen 8000;
if ( $server_port = 80 ) {
return 308 https://$host$request_uri;
}
containerPort:
http: 80
https: 443
special: 8000
service:
targetPorts:
http: http
https: special
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<CERTIFICATE>"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
I opened a PR and put my solution into the nginx ingress controller Helm chart, so it will be generally available for everyone :)
When using the special targetPort, it forwards 8000 as the port instead of 443, which causes issues with some services that use X-Forwarded-Port.
@walkafwalka the special port is required only for environments which require an L4 load balancer, such as AWS Network Load Balancer. L4 does not provide X-Forwarded headers, which is why I came up with a solution using $server_port. If you are using an L7 load balancer, or another environment where X-Forwarded headers are available, you do not need any special configuration; the default nginx ingress controller already detects the headers and works properly.
ssl-redirect: "false" # we use `special` port to control ssl redirection
server-snippet: |
  listen 8000;
  if ( $server_port = 80 ) {
    return 308 https://$host$request_uri;
  }
Did you set externalTrafficPolicy: "Local" or "Cluster" ?
And do I need to annotate every deployment ingress also with:
nginx.ingress.kubernetes.io/server-snippet: |
listen 8000;
if ( $server_port = 80 ) {
return 308 https://$host$request_uri;
}
?
@dardanos
@ssh2n Local or Cluster doesn't matter for SSL redirection.
If you want all services to have ssl-redirection, you just put this on server-snippet
listen 8000;
if ( $server_port = 80 ) {
return 308 https://$host$request_uri;
}
But if you prefer to select which services require SSL redirection, then you need only
listen 8000;
And leave the 308 redirection to the nginx.ingress.kubernetes.io/server-snippet annotation.
controller.config.server-snippet adds config to every nginx server, while the nginx.ingress.kubernetes.io/server-snippet annotation adds it only to the annotated server.
Hey!
I'm new to Kubernetes and so I'm not familiar with helm.
Fortunately after banging my head a bit I was able to adapt the great solutions in here to a regular yaml config.
Sharing in case it saves someone else some time!
# Download the mandatory nginx config as per usual
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
# Add a container port to the deployment that comes with the nginx mandatory yaml file
# This port will be used for redirecting as @KongZ describes above
kubectl patch deployment -n ingress-nginx nginx-ingress-controller --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/ports/-","value":{"name": "https-to-http", "containerPort": 8000, "protocol": "TCP"}}]'
Then for my production overlay I use patchesStrategicMerge to add this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
# https://github.com/kubernetes/ingress-nginx/issues/2724#issuecomment-593769295
nginx.ingress.kubernetes.io/server-snippet: |
listen 8000;
if ( $server_port = 80 ) {
return 308 https://$host$request_uri;
}
---
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
spec:
ports:
# https://github.com/kubernetes/ingress-nginx/issues/2724#issuecomment-593769295
# https://superuser.com/a/1519548
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https-to-http
I'm just patching because my prod environment needs the redirect but my development one does not. If that isn't a concern, you can drop the above straight into your Service.
I got stuck on this for a very long time as a beginner - so if you're reading this and you're also stuck feel free to ping!
@KongZ Right, I am using an NLB. However, the NGINX controller is still a L7 reverse proxy that forwards its own X-Forwarded-* headers. Here is a snippet from my NGINX:
set $pass_server_port $server_port;
set $best_http_host $http_host;
set $pass_port $pass_server_port;
...
proxy_set_header X-Forwarded-Port $pass_port;
And because we are serving HTTPS over port 8000, it is forwarding the port as 8000 instead of 443.
@ssh2n every deployment you do with helm should have the annotation.
Thanks @dardanos, that was a bit confusing, so I switched back to the classic L7 setup :)
@walkafwalka, I ran into the same issue as you with apps that depend on X-Forwarded-Port. The solution below sets $proxy_port instead of the default $server_port. In my case, Jenkins with Keycloak redirection was getting port 8000. This solved it:
location-snippet: |
set $pass_server_port $proxy_port;
server-snippet: |
listen 8000;
if ( $server_port = 80 ) {
return 308 https://$host$request_uri;
}
ssl-redirect: "false"
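For clarity, here is a sketch of the reported effect (purely illustrative values, not nginx internals): without the override, the backend sees the internal "special" listen port in X-Forwarded-Port; with the location-snippet, it sees the expected external HTTPS port instead.

```python
# Illustrative only: models the header the backend receives before and
# after applying the location-snippet override described above.
def forwarded_port_header(overridden, internal_port=8000, external_port=443):
    port = external_port if overridden else internal_port
    return {"X-Forwarded-Port": str(port)}

assert forwarded_port_header(False) == {"X-Forwarded-Port": "8000"}  # confuses apps
assert forwarded_port_header(True) == {"X-Forwarded-Port": "443"}    # after the fix
```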
I came across this although I am NOT using service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*".
I have an NLB listening on HTTPS & HTTP which forwards requests as HTTP to NGINX which I have configured in turn to forward all traffic to port http (80).
My ingress is configured with "nginx.ingress.kubernetes.io/force-ssl-redirect": "true" for SSL redirection and I am getting stuck in a redirect loop.
The issue was closed without recommending what workaround to apply in which context.
It also doesn't mention whether or how it will be addressed without a workaround.
For my specific case, I assume that because I am NOT using service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*", NGINX keeps thinking it should redirect. But, even when I do configure service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" it still gets stuck in a redirect.
The nginx.com website documents an annotation that I haven't seen mentioned elsewhere, namely nginx.org/redirect-to-https and even with that, things didn't work for me.
Also having service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" for my NLB doesn't seem to enable ProxyProtocol v2 on the listeners but I haven't tested it with an ELB.
So in total I have two issues:
My configuration looks like this using Helm charts:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
name: whatever
namespace: default
spec:
chart:
repository: https://kubernetes.github.io/ingress-nginx/
name: ingress-nginx
version: 2.11.1
values:
config:
proxy-real-ip-cidr:
- "10.2.0.0/20"
controller:
service:
targetPorts:
https: http
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<AWS_ARN>"
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
and the ingress object itself:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
name: whatever
namespace: default
spec:
rules:
- host: <CUSTOM_HOST>
http:
paths:
- path: /
backend:
serviceName: <WHATEVER>
servicePort: 80
I'd kindly appreciate your advice.
@abjrcode It is in my answer; it is the complete solution. Just configure the ingress-nginx values file and your app's Ingress according to my comment.
https://github.com/kubernetes/ingress-nginx/issues/2724#issuecomment-593769295
Thank you @KongZ for your suggestion. I will provide some more guidance for people coming across this and more options as I had a chance to take a thorough look at the code.
There are two choices for load balancers, at least when it comes to AWS.
I am assuming you want to terminate TLS at the load balancer level and we're dealing strictly with HTTPS & HTTP. If you are interested in TCP, UDP then please check this insightful comment on this very issue.
The Classic ELB (which will presumably be completely deprecated at some point), probably for historical reasons, actually forwards the X-Forwarded-* headers.
The NGINX controller supports these and can do redirection based on those headers. Here's how your configuration would look with Helm:
```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: <RELEASE_NAME>
  namespace: <NAMESPACE>
spec:
  chart:
    repository: https://kubernetes.github.io/ingress-nginx/
    name: ingress-nginx
    version: 2.11.1
  values:
    config:
      ssl-redirect: "false" # We don't need this as NGINX isn't using any TLS certificates itself
      use-forwarded-headers: "true" # NGINX will now decide whether to redirect based on these headers
    controller:
      service:
        targetPorts:
          https: http # NGINX will never get HTTPS traffic, TLS is handled by the load balancer
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<CERTIFICATE_ARN>"
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-type: "elb"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
```
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: <INGRESS_NAME>
  namespace: <NAMESPACE>
spec:
  rules:
    - host: <CUSTOM_HOST>
      http:
        paths:
          - path: /
            backend:
              serviceName: <WHATEVER>
              servicePort: <SOME_PORT>
```
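To illustrate the mechanism behind `use-forwarded-headers` (a simplified sketch, not the controller's actual generated template): NGINX trusts the `X-Forwarded-Proto` header the ELB sets, so only requests that arrived at the ELB over plain HTTP get redirected, and the loop disappears.

```nginx
# Illustrative sketch only. The ELB sets X-Forwarded-Proto to "http" or
# "https" depending on which listener the client used.
map $http_x_forwarded_proto $needs_redirect {
    default 1;   # missing header or "http": redirect to HTTPS
    https   0;   # TLS already terminated at the ELB: serve normally
}

server {
    listen 80;
    if ($needs_redirect) {
        return 308 https://$host$request_uri;
    }
}
```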
There are two choices when it comes to NLBs. Unfortunately, at least from my point of view, the preferred option isn't available at the time of this writing because of this open issue:
```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: <RELEASE_NAME>
  namespace: <NAMESPACE>
spec:
  chart:
    repository: https://kubernetes.github.io/ingress-nginx/
    name: ingress-nginx
    version: 2.11.1
  values:
    config:
      ssl-redirect: "false" # We don't need this as NGINX isn't using any TLS certificates itself
      use-proxy-protocol: "true" # NGINX will now decide whether to redirect based on the PROXY protocol information
    controller:
      service:
        targetPorts:
          https: http # NGINX will never get HTTPS traffic, TLS is handled by the load balancer
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<CERTIFICATE_ARN>"
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
          service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
```
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: <INGRESS_NAME>
  namespace: <NAMESPACE>
spec:
  rules:
    - host: <CUSTOM_HOST>
      http:
        paths:
          - path: /
            backend:
              serviceName: <WHATEVER>
              servicePort: <SOME_PORT>
```
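For context on what the PROXY protocol actually gives NGINX to work with: the v1 header is a single text line carrying the client's original source and destination addresses and ports, so the listener the client hit (80 vs 443) is in principle recoverable from the destination port. A hypothetical illustration (not ingress-nginx code):

```python
# Hypothetical sketch: parse a PROXY protocol v1 header line to recover the
# destination port the client connected to on the load balancer, which is
# enough to tell an HTTP (80) connection from an HTTPS (443) one.

def parse_proxy_v1(line: str) -> dict:
    # Header format: "PROXY <proto> <src-ip> <dst-ip> <src-port> <dst-port>\r\n"
    parts = line.strip().split(" ")
    if len(parts) != 6 or parts[0] != "PROXY":
        raise ValueError("not a valid PROXY protocol v1 header")
    _, proto, src, dst, src_port, dst_port = parts
    return {
        "protocol": proto,
        "source": (src, int(src_port)),
        "destination": (dst, int(dst_port)),
    }

hdr = parse_proxy_v1("PROXY TCP4 203.0.113.10 10.0.0.5 51234 443\r\n")
# The destination port tells us which load balancer listener the client used:
scheme = "https" if hdr["destination"][1] == 443 else "http"
```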
Please check @KongZ's comment on this issue.
Thanks @KongZ, it works fine with NLB.
Here are the changes I made, for those who do not use the Helm chart.
I deployed ingress-nginx from https://github.com/kubernetes/ingress-nginx/blob/controller-v0.34.1/deploy/static/provider/aws/deploy.yaml
Run `kubectl edit configmaps -n ingress-nginx ingress-nginx-controller` and add the following (the `data` section does not exist by default):

```yaml
data:
  server-snippet: |
    listen 8000;
  ssl-redirect: "false"
```
Complete ConfigMap as a reference:

```yaml
apiVersion: v1
data:
  server-snippet: |
    listen 8000;
  ssl-redirect: "false"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":null,"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/version":"0.34.1","helm.sh/chart":"ingress-nginx-2.11.1"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"}}
  creationTimestamp: "2020-08-03T17:29:25Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.34.1
    helm.sh/chart: ingress-nginx-2.11.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
```
Run `kubectl edit deployments -n ingress-nginx ingress-nginx-controller` and add the following lines in the `ports:` section:

```yaml
- containerPort: 8000
  name: special
  protocol: TCP
```
More lines from the Deployment for reference:

```yaml
livenessProbe:
  failureThreshold: 5
  httpGet:
    path: /healthz
    port: 10254
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
name: controller
ports:
  - containerPort: 80
    name: http
    protocol: TCP
  - containerPort: 443
    name: https
    protocol: TCP
  - containerPort: 8000
    name: special
    protocol: TCP
  - containerPort: 8443
    name: webhook
    protocol: TCP
```
When you save and exit, the Deployment will create a new ingress-nginx pod.
Finally, add the following annotation lines to your app's Ingress:

```yaml
nginx.ingress.kubernetes.io/server-snippet: |
  if ( $server_port = 80 ) {
    return 308 https://$host$request_uri;
  }
```
Complete app Ingress YAML:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apple-ingress
  namespace: apple
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-snippet: |
      if ( $server_port = 80 ) {
        return 308 https://$host$request_uri;
      }
spec:
  rules:
    - host: apple.mydomain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: apple-service
              servicePort: 5678
```
Then apply it with `kubectl apply -f ingress-apple.yml`.
And let's test it:

```
$ curl -I http://apple.mydomain.com
HTTP/1.1 308 Permanent Redirect
Server: nginx/1.19.1
Date: Tue, 04 Aug 2020 07:47:59 GMT
Content-Type: text/html
Content-Length: 171
Connection: keep-alive
Location: https://apple.mydomain.com/

$ curl -I https://apple.mydomain.com
HTTP/1.1 200 OK
Server: nginx/1.19.1
Date: Tue, 04 Aug 2020 07:48:20 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 15
Connection: keep-alive
X-App-Name: http-echo
X-App-Version: 0.2.3
```
I gave the workaround in the Stack Overflow post a try and got it working as well. I'll try to point out the changes you have to make for the workaround, to make it a bit clearer, and explain why it works.
I'll start with the configuration of the service/load balancer/ELB:
Three things to point out here: the `service.beta.kubernetes.io/aws-load-balancer-proxy-protocol` annotation enables the PROXY protocol on the ELB, and because the listeners are plain TCP, the ELB no longer supplies an `X-Forwarded-Proto` header with the requests.
In the `nginx-configuration` ConfigMap I ended up with this:
This does the following: it enables the PROXY protocol with the `use-proxy-protocol` line, and the `http-snippet` contains two bits of Nginx configuration. The `map` statement is used to overrule the value that `$pass_access_scheme` would otherwise get set here: https://github.com/kubernetes/ingress-nginx/blob/da32401c665c646954f79b61e9aa60ac562eb7b7/rootfs/etc/nginx/template/nginx.tmpl#L290-L294
This was necessary for me as some applications behind the ingress controller needed to know if they were served over HTTP or HTTPS, either so they could enforce being served over HTTPS, or in order to generate correct URLs for links and assets.
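Combining the pieces this comment describes, the ConfigMap might look roughly like this (a sketch reconstructed from the description, not the commenter's exact file; the `8080` port follows the `server` directive mentioned in this comment):

```yaml
# Sketch only: reconstructed from the description in this comment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
  http-snippet: |
    # Overrule $pass_access_scheme so backends believe they were served over HTTPS
    map true $pass_access_scheme {
      default "https";
    }
    # Plain-HTTP listener whose only job is to redirect to HTTPS
    server {
      listen 8080 proxy_protocol;
      return 308 https://$host$request_uri;
    }
```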
The `map` configured in the `http-snippet` is injected further down in the Nginx configuration, and tricks Nginx into thinking all connections were made over HTTPS.
The `server` directive sets up Nginx to listen on port 8080 as well as port 80, and any request made to that port will receive a 308 (Permanent Redirect) response forwarding it to the HTTPS version of the URL.
An extra thing I changed which wasn't mentioned on Stack Overflow was the `ports` section of the Deployment, which I changed so that the ports the Kubernetes pod accepts connections on match what we need.
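A hypothetical sketch of the kind of `ports` change being described (the port names and the `8080` value are assumptions matching the redirect listener described in this comment):

```yaml
# Before (assumed stock Deployment):
ports:
  - containerPort: 80
    name: http
    protocol: TCP
  - containerPort: 443
    name: https
    protocol: TCP
# After: also accept plain-HTTP connections on the redirect port.
ports:
  - containerPort: 80
    name: http
    protocol: TCP
  - containerPort: 8080
    name: http-redirect   # hypothetical name
    protocol: TCP
  - containerPort: 443
    name: https
    protocol: TCP
```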
I hope this is useful for other people. I don't know if it would be worth adding to the documentation somewhere or if it could inspire a slicker workaround.