NGINX Ingress controller version: nginx-ingress-controller:0.14.0
Kubernetes version (use kubectl version):
```
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:46:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
```
Environment:
Kernel (e.g. uname -a): linux
What happened:
We are trying to switch from one load balancer per application service to Ingress rules. Following the AWS documentation (https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md), the ingress controller, the default backend, and the ELB were created. The problem arises when we try to forward from the Ingress Controller ELB to the application service (the applications hold their certificates internally, in a Java keystore): basically, an HTTPS-to-HTTPS hop. We tried different annotations on the ingress rules and changed the configuration of both nginx and its ELB.
What you expected to happen:
By calling https://<DNS-Ingress-Controller-ELB>/application-name we wanted to be forwarded to the application as if it had been called through its own ELB.
How to reproduce it (as minimally and precisely as possible):
Deploy the NGINX Ingress Controller with the default YAMLs specified in the documentation (see above).
Example Ingress Rule (truncated in the original report):
```
apiVersion: v1
items:
```
Ingress Controller Service (truncated in the original report):
```
apiVersion: v1
items:
```
@girbea what's the issue exactly?
@aledbf The problem is that I still get 400 Bad Request: The plain HTTP request was sent to HTTPS port (nginx/1.13.12), which, as far as I understand, means that nginx assumes TLS termination and calls the application over HTTP. What I need is for it to act as a "router" between the applications, keeping every connection on HTTPS.
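For reference, on 0.14.0 the controller talks plain HTTP to backends unless told otherwise; the annotation that switches that hop to HTTPS is nginx.ingress.kubernetes.io/secure-backends (later releases replace it with nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"). A minimal sketch, where the host, service name, and port are placeholders:
```
# Sketch only: host, serviceName, and servicePort below are placeholders
# for the Java application's HTTPS-serving Service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-https-backend
  annotations:
    kubernetes.io/ingress.class: nginx
    # Open an HTTPS connection to the backend instead of terminating TLS
    # at the controller and proxying plain HTTP.
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: application-service
          servicePort: 8443
```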
@girbea, please post the logs from the ingress controller so we can see the reason. One possible issue is the SSL ciphers nginx uses to establish the connection with your Jetty app server. Which JDK are you using?
logs-from-nginx-ingress-controller-in-nginx-ingress-controller-74679f96b8-kxcj2.txt
@aledbf Attached are the logs, extracted from the Kubernetes Dashboard. The JDK the applications use is 8u151.
@girbea I have the same issue. Did you ever fix it?
FYI, adding the following annotations fixed it for me:
```
annotations:
  nginx.ingress.kubernetes.io/ssl-redirect: "true"
  nginx.ingress.kubernetes.io/rewrite-target: /
  nginx.ingress.kubernetes.io/secure-backends: "true"
```
@PierrickI3 did you add these annotations to the ingress controller service?
@minherz Yes, here is an example:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/secure-backends: "true"
  name: <GIVE IT A NAME HERE>
  namespace: kube-system
spec:
  rules:
  - host: xxxxxxxx.com
    http:
      paths:
      - backend:
          serviceName: web
          servicePort: 443
        path: /
  - host: yyyyyyy.com
    http:
      paths:
      - backend:
          serviceName: kong-proxy-ssl
          servicePort: 8443
        path: /
  tls:
  - hosts:
    - xxxxxxxx.com
    - yyyyyyy.com
    secretName: letsencrypt
```
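Worth noting: both backends in this example listen on TLS ports (443 and 8443), which is exactly the case secure-backends addresses; without it the controller would proxy plain HTTP to those ports and the TLS handshake with the backend would fail.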
@PierrickI3 The annotations in this example are almost an exact match for the ones posted in the issue description, which were claimed not to work. The only difference is that your example adds one more annotation, kubernetes.io/ingress.class: nginx.
Can you elaborate on what the solution was, relative to the materials posted in the issue description?
I'm having the same issue... and seem to have the settings recommended here:
```
helm upgrade myingress stable/nginx-ingress \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-internal"="yes" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-ssl-cert"="arn:aws:acm:us-east-1:XXXXXXXXXXXX:certificate/2097f0bc-d43d-4525-822d-03b9b6240840" \
  --set controller.publishService.enabled=true \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol"="http" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-ssl-ports"="https"
```
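For reference, those --set flags should render roughly the following annotations on the controller's Service (a sketch assuming stable/nginx-ingress chart defaults; the actual Service name depends on the release). Note that the annotation key recognized by Kubernetes for selecting TLS listener ports is service.beta.kubernetes.io/aws-load-balancer-ssl-ports:
```
# Sketch of the controller Service the helm command above should produce.
apiVersion: v1
kind: Service
metadata:
  name: myingress-nginx-ingress-controller  # name depends on the release
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "yes"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:XXXXXXXXXXXX:certificate/2097f0bc-d43d-4525-822d-03b9b6240840"
    # The ELB terminates TLS and speaks plain HTTP to the node ports:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # Standard key for TLS listener ports (the command above uses
    # "aws-ssl-ports", which does not appear to be a recognized key):
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
```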
and deployment:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  rules:
  - host: via-ingress.svcs.domain.com
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
```
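One thing worth noting in the manifests above: the Ingress sets secure-backends: "true", but the nginx container behind it serves only plain HTTP on port 80, so the controller would attempt a TLS handshake against a non-TLS listener. A sketch of a consistent pairing, assuming the backend really is plain HTTP:
```
# Sketch: for a plain-HTTP backend like the nginx Deployment above, leave
# the controller-to-backend hop on HTTP (no secure-backends) and let
# force-ssl-redirect enforce HTTPS on the client side.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: via-ingress.svcs.domain.com
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
```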
controller logs:
```
I0926 16:43:08.423411 11 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx", UID:"7acd55bc-c1a8-11e8-b047-0ab3ca8a7912", APIVersion:"extensions/v1beta1", ResourceVersion:"6633295", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/nginx
I0926 16:43:08.423582 11 controller.go:171] Configuration changes detected, backend reload required.
I0926 16:43:08.504400 11 controller.go:187] Backend successfully reloaded.
I0926 16:43:08.506685 11 controller.go:204] Dynamic reconfiguration succeeded.
10.10.42.167 - [10.10.42.167] - - [26/Sep/2018:16:43:17 +0000] "GET / HTTP/1.1" 400 271 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:62.0) Gecko/20100101 Firefox/62.0" 377 0.000 [] - - - - dff5a1c0f1d1c5b2235ca289f56991d4
10.10.42.167 - [10.10.42.167] - - [26/Sep/2018:16:47:05 +0000] "GET / HTTP/1.1" 400 271 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:62.0) Gecko/20100101 Firefox/62.0" 377 0.000 [] - - - - 5fd6b2ea806758fa1af6a3d0d1689638
```
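For what it's worth, those 400s with an empty upstream field ([]) come from the controller itself rather than the backend: the ELB above terminates TLS and forwards plain HTTP, and if that traffic lands on the controller's HTTPS port the controller answers with exactly the "plain HTTP request was sent to HTTPS port" error reported earlier. With the stable/nginx-ingress chart, one common way to handle ELB-side TLS termination (an assumption about this setup, not a confirmed fix) is to point the Service's https port at the controller's plain-HTTP target port:
```
# values.yaml sketch for the stable/nginx-ingress chart: the ELB terminates
# TLS, so route both Service ports to the controller's http targetPort.
controller:
  service:
    targetPorts:
      http: http
      https: http
```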
@cdenneen did you solve the issue?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@aledbf: Reopened this issue.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.