We terminate our SSL traffic at the AWS ELB, which the nginx ingress supports. But we also require the HSTS header to always be set and HTTPS redirection to be enabled by default.
This can be a bit tricky, because nginx needs to know which traffic arriving from the ELB was originally HTTPS and which was HTTP.
In our current setup we use a dedicated port in nginx whose only purpose is redirecting all traffic to its HTTPS equivalent:
```nginx
server {
    listen 8000 proxy_protocol;
    server_tokens off;
    return 301 https://$host$request_uri;
}
```
This enables us to route all ELB traffic on port 80 to port 8000 of the ingress, and SSL-terminated traffic on port 443 to port 80 of the ingress. So far this did not require major changes to the ingress, but it no longer supports the HSTS headers. Currently we have modified the ingress controller to always set the header (removed the conditionals). It would be great if both features could be integrated into the controller by default; this would allow us to switch back to the default nginx.tmpl.
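To illustrate the routing described above, this is roughly what the Service port mapping looks like (a sketch only; the Service name and selector labels are assumptions):

```yaml
# Sketch: ELB port 80 (plain HTTP)  -> ingress port 8000 (redirect-only server)
#         ELB port 443 (TLS ended)  -> ingress port 80   (regular server)
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx  # assumed name
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 8000
    - name: https
      port: 443
      targetPort: 80
```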
If this is a change that is welcome, I will be happy to contribute some code, but I will need some assistance.
This is cool, but I think some minor things are still missing. It is still not possible to add HSTS headers when you're not using SSL termination in ingress (this would be an important feature in combination with ForceSSLRedirect).
Edit: I had a look into the code. There is one thing I would like to improve:
This "https redirect" feature works on a per-location basis, which means I have to ensure it is enabled for each ingress. Ingresses are controlled by our developers; if one of them forgets to add this option, traffic could be served in plain text.
Wouldn't it be a good idea to also support the HTTPS redirect and HSTS in a global context?
@aledbf Could you reopen this?
When I add:
```yaml
annotations:
  kubernetes.io/ingress.class: "nginx"
  ingress.kubernetes.io/force-ssl-redirect: "true"
```
to my Ingress config for a service, it just loops over and over again, because we do SSL termination at the ELB and therefore pass traffic from the ELB to our ingress controller via HTTP.
@foxylion is the above issue why you've had to do what you're doing? I may end up doing the same....
@kylegato Back then the annotation did not exist. Since then I have had no time to actually test the new annotation, especially because it does not solve everything (HSTS headers are still missing).
For the moment I use the separate port and a modified template that always sets the HSTS header (I removed the if statements in the template).
@kylegato @foxylion I am having the same issue where SSL termination at the ELB + ingress.kubernetes.io/force-ssl-redirect: "true" results in an infinite redirect loop. I am on version 0.9.0-beta.3. Did you ever find a solution?
@asmith60 I am just going to rely on HSTS for now.
@kylegato The main problem with that solution is that unless the client has used HTTPS at least once, they can still use HTTP.
Has anyone managed to get the ingress.kubernetes.io/force-ssl-redirect annotation to work with SSL termination on an ELB? Whatever I do I end up with an infinite HTTPS->HTTPS redirect loop (with 0.9-beta.8), so like others I'm having to override and modify the nginx.conf template, making updates pretty painful.
Does anyone have feedback on the preferred way to force HSTS for SSL terminated environments?
https://github.com/kubernetes/ingress/pull/917
For those experiencing the redirect loop, make sure you enabled the HTTP backend protocol in ingress service via the corresponding annotations:
```yaml
annotations:
  service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
  service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ...
```
This will cause the ELB to operate in Layer 7, and thus inject the X-Forwarded-* headers required for the redirect logic to work.

Also please note the redirect won't work if you have Proxy Protocol enabled as well; in that case, the ELB would need to operate on TCP/SSL (Layer 4), so those headers won't be available to NGINX, causing the redirect to loop.
@danielfm Your solution worked for me. Thanks 😄
This is an example of configuring an ELB (L7): https://github.com/kubernetes/ingress/tree/master/examples/tls-termination/elb-nginx
We are currently also running nginx ingress behind an ELB. The ELB is responsible for SSL termination and we are running into difficulties similar to the ones discussed in this thread.
This is currently working nicely to force the redirect: ingress.kubernetes.io/force-ssl-redirect.
Will 0.9.0 include a similar option to force HSTS even when nginx isn't terminating SSL?
EDIT:
For others researching this topic: I was able to get the desired behavior by adding this setting to the ingress ConfigMap:
```yaml
http-snippet: 'add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;'
```
I believe that this option is first available in 0.9.0-beta.14. Adjust values as necessary.
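For context, a minimal sketch of where that key lives, assuming the controller reads a ConfigMap named `ingress-nginx` (the name must match the controller's `--configmap` argument):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx  # assumed name; match your --configmap flag
data:
  # http-snippet is injected into the http {} block of the generated
  # nginx.conf, so the header is added for every server.
  http-snippet: 'add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;'
```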
So is there actually no way to have SSL termination plus the redirect with proxy-protocol on an L4 ELB?
@aledbf Would adding an HSTS annotation be of interest to the project? I'm willing to work on a PR that implements this feature.
Would adding a HSTS annotation be of interest to the project?
Yes
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
As far as I understand, part of this ticket is addressed by adding force-ssl-redirect to the configmap (or ingress).
However, HSTS does not work in such cases, because nginx.tmpl explicitly requires an SSL certificate to be set:
```
{{ if (and (not (empty $server.SSLCertificate)) $all.Cfg.HSTS) }}
if ($scheme = https) {
    more_set_headers "Strict-Transport-Security: max-age={{ $all.Cfg.HSTSMaxAge }}{{ if $all.Cfg.HSTSIncludeSubdomains }}; includeSubDomains{{ end }}{{ if $all.Cfg.HSTSPreload }}; preload{{ end }}";
}
{{ end }}
```
I see two changes here:

1. Introduce a force-hsts configuration flag:

```
{{ if (or $all.Cfg.ForceHSTS (and (not (empty $server.SSLCertificate)) $all.Cfg.HSTS)) }}
```

2. However, we should also check $pass_access_scheme instead of $scheme, because it will be either the forwarded protocol, or the scheme itself if we're not behind a proxy, so it should always be right:

```
if ($pass_access_scheme = https) {
```
force variables (like force-ssl-redirect and the proposed force-hsts) will make things very confusing at some point.
/remove-lifecycle stale
Here is a summary of how we are currently doing TLS offloading (including HSTS headers):
We need a custom nginx.tmpl, otherwise the upstream services would receive the wrong X-Forwarded headers. In our case this is done via a custom-built Docker image.
Our Dockerfile:
```dockerfile
FROM quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.11.0

# We use a patch file to add our customizations to the current nginx.tmpl.
# If something is no longer patchable, the patch call will fail.
ADD nginx.tmpl.patch /etc/nginx/template/nginx.tmpl.patch
RUN patch /etc/nginx/template/nginx.tmpl < /etc/nginx/template/nginx.tmpl.patch
```
The patch file:
```patch
diff --git a/controller/service/nginx.tmpl b/controller/service/nginx.tmpl
index 6fefb5e..4e7f4c4 100644
--- a/controller/service/nginx.tmpl
+++ b/controller/service/nginx.tmpl
@@ -712,8 +720,8 @@ stream {
             proxy_set_header X-Real-IP $the_real_ip;
             proxy_set_header X-Forwarded-For $the_real_ip;
             proxy_set_header X-Forwarded-Host $best_http_host;
-            proxy_set_header X-Forwarded-Port $pass_port;
-            proxy_set_header X-Forwarded-Proto $pass_access_scheme;
+            proxy_set_header X-Forwarded-Port 443;
+            proxy_set_header X-Forwarded-Proto https;
             proxy_set_header X-Original-URI $request_uri;
             proxy_set_header X-Scheme $pass_access_scheme;
```
And finally our (simplified) configuration:

```yaml
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
data:
  use-proxy-protocol: "true"
  http-snippet: |
    # redirect HTTP traffic to HTTPS
    server {
      listen 8000 proxy_protocol;
      server_tokens off;
      return 301 https://$host$request_uri;
    }
  server-snippet: |
    more_set_headers "Strict-Transport-Security: max-age=31536000; includeSubDomains; preload";
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: '$((elb-certificate-arn))'
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 8000
    - name: https
      port: 443
      targetPort: 80
---
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: ingress-nginx
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
spec:
  replicas: 2
  minReadySeconds: 10
  template:
    metadata:
      labels:
        app: ingress-nginx
        k8s-addon: ingress-nginx.addons.k8s.io
    spec:
      terminationGracePeriodSeconds: 300
      containers:
        - name: ingress-nginx
          image: meisterplan/ingress-nginx-controller:$((image-tag))
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              protocol: TCP
            - containerPort: 8000
              protocol: TCP
          lifecycle:
            preStop:
              exec:
                command: ["sleep", "15"]
          args:
            - /nginx-ingress-controller
            - --default-backend-service=infrastructure/ingress-error-pages
            - --configmap=infrastructure/ingress-nginx
```
I removed a lot of configuration (we use many of the additional features); these should be the relevant parts for TLS offloading.
I hope this helps. 😄
Just to let you know that I'm still facing issues trying to redirect from HTTP to HTTPS with TLS termination. I'm using nginx-ingress controller version 0.23, I have already enabled use-forwarded-headers: "true" in my config, and I have the annotation nginx.ingress.kubernetes.io/force-ssl-redirect: "true" in my ingress rule. Does anyone know if I'm missing something else here?
As @sbrandtb suggested, an option like "behind-tls-termination" would be useful; then lots of HTTPS-related options would work as expected.
Or even better, there could be an option for nginx to trust the X-Forwarded-Proto header and use it to determine whether the client used HTTPS for the request.
This header is e.g. set by AWS ELB:
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/x-forwarded-headers.html
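As an illustration of that idea, here is a minimal sketch of a server-snippet that trusts X-Forwarded-Proto to drive both the redirect and the HSTS header. This is an assumption of how such a setup could look, not a built-in controller feature, and it is only safe when nginx is reachable exclusively through the load balancer (otherwise clients could spoof the header):

```nginx
# Assumption: the ELB terminates TLS and always sets X-Forwarded-Proto.
# Redirect anything that arrived at the ELB as plain HTTP.
if ($http_x_forwarded_proto = http) {
    return 301 https://$host$request_uri;
}
# Always emit HSTS; more_set_headers comes from the headers-more module
# bundled with the ingress controller.
more_set_headers "Strict-Transport-Security: max-age=31536000; includeSubDomains";
```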
Changes:

- externalTrafficPolicy: Local

With these changes the source IP address is preserved, there are no issues with proxy-protocol when using the TCP/UDP feature, and the redirect to HTTPS works OOTB.
Please test the deploy-tls-termination.yaml manifest and let me know of any issues.