Ingress-nginx: Ingress with app-root and force-ssl-redirect is not redirecting to HTTPS

Created on 12 Oct 2018 · 16 Comments · Source: kubernetes/ingress-nginx

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

NGINX Ingress controller version:
0.19.0

Kubernetes version (use kubectl version):
Server Version: v1.10.5

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie)
  • Kernel (e.g. uname -a): 4.4.121-k8s
  • Install tools: kops

What happened:
The ingress controller is set up with TLS termination at the ELB. The Ingress is configured with app-root and force-ssl-redirect, but the request is not redirected to HTTPS correctly.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/app-root: "/_plugin/kibana"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: elasticsearch
          servicePort: 80
$ curl -I -k https://foo.example.com/
HTTP/1.1 302 Moved Temporarily
Content-Length: 161
Content-Type: text/html
Date: Fri, 12 Oct 2018 18:45:25 GMT
Location: http://foo.example.com/_plugin/kibana
Server: nginx/1.15.3
Connection: keep-alive

What you expected to happen:

I expect the redirect to target HTTPS.

lifecycle/rotten

Most helpful comment

But the documentation of the annotations states: "When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: "true" annotation in the particular resource."

It specifically mentions that you terminate on the ELB and that you don't need a TLS cert.

I'm having the same issue, just without app-root, and I just can't get the redirect to work.

All 16 comments

@chmking The force redirect only works if you specify the TLS section of the Ingress, as follows:

spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: foobar

But this will make the HTTP port unavailable. Maybe you can set something like the annotation nginx.ingress.kubernetes.io/permanent-redirect: https://foo.example.com/_plugin/kibana in this vhost, but I don't know if this is going to generate a loop :) Or use only the ssl-redirect directive.
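For concreteness, the suggestion above applied to the manifest from the report would look roughly like this; the secret name is a placeholder, and the referenced Secret (type kubernetes.io/tls) does not need to be the same certificate the ELB uses:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/app-root: "/_plugin/kibana"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - foo.example.com
    secretName: kibana-tls   # placeholder; any TLS Secret for this host
  rules:
  - host: foo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: elasticsearch
          servicePort: 80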

BTW, this is specified in the template here: {{ if (or $location.Rewrite.ForceSSLRedirect (and (not (empty $server.SSLCert.PemFileName)) $location.Rewrite.SSLRedirect)) }}

But the documentation of the annotations states: "When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: "true" annotation in the particular resource."

It specifically mentions that you terminate on the ELB and that you don't need a TLS cert.

I'm having the same issue, just without app-root, and I just can't get the redirect to work.

Same problem here, can anyone help?

@rikatz Does this mean I need a cert for the Ingress that is separate from the cert (ACM in my case) used for the load balancer if I want to be able to use force-ssl-redirect?

Hi guys, is there any workaround / solution for this issue? I am also affected and just wanted to check if there is new information to share. Thanks

@motarski I was able to get mine to work by rolling back to the previous version; here's what I did.

Update:

Someone came up with a better solution in this other thread.

@Hermain did you manage to solve your issue? I've got the same problem.

I am working on this, I will push a PR hopefully in the upcoming few days.

@nzoueidi Any news on this issue?

This should be okay as long as we are using use-forwarded-headers: "true" in the L7 configmap: https://github.com/kubernetes/ingress-nginx/blob/master/deploy/provider/aws/patch-configmap-l7.yaml#L11
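A minimal sketch of that setting as a ConfigMap, assuming the default controller ConfigMap name and namespace from the upstream deploy manifests: with TLS terminated at the ELB, use-forwarded-headers makes the controller trust the X-Forwarded-Proto header, which is how force-ssl-redirect can distinguish HTTP from HTTPS requests and avoid redirecting to http://.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed default name from the deploy manifests
  namespace: ingress-nginx
data:
  # Trust X-Forwarded-* headers set by the ELB so the controller
  # knows whether the original client request was HTTPS.
  use-forwarded-headers: "true"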

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Anyone have a solution? Same issue here.

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Any news on this issue?
