BUG REPORT

NGINX Ingress controller version: 0.15.0
Kubernetes version: 1.10.3
Environment: Bare metal
Kernel (uname -a): 3.10.0-514.26.2.el7.x86_64

What happened:
We have a backend service that exposes both HTTP and HTTPS, on ports 80 and 443.
We want to enable SSL passthrough for HTTPS (443) to the backend's port 443,
and route plain HTTP to the backend's port 80, all through a single domain.
What you expected to happen:
HTTPS traffic (domain: tls.example.com): SSL passthrough to port 443
HTTP traffic (domain: tls.example.com): sent to port 80
How to reproduce it (as minimally and precisely as possible):
I've tried to split the Ingress object into two, as shown below.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  namespace: kubeapps
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: tls.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 443
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx2
  namespace: kubeapps
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: tls.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
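A possible workaround, assuming a second DNS name is acceptable: with --enable-ssl-passthrough the controller intercepts TLS connections by SNI before they reach NGINX's HTTP server, and when two Ingress objects share one host the controller merges their rules, so the plain-HTTP rule collides with the passthrough host. Serving the HTTP rule under its own hostname avoids the collision; this is only a sketch, and http.example.com below is a hypothetical name, not from the report.

```yaml
# Hypothetical workaround sketch: keep the ssl-passthrough Ingress on
# tls.example.com and move the plain-HTTP rule to a separate hostname,
# so the controller does not merge the two rules for a single host.
# "http.example.com" is an illustrative name.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-http
  namespace: kubeapps
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: http.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
```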
The generated nginx.conf server block for tls.example.com shows plain-HTTP traffic on port 80 being proxied to the 443 upstream:
server {
    server_name tls.example.com;

    listen 80;
    listen [::]:80;

    set $proxy_upstream_name "-";

    location / {

        log_by_lua_block {
        }

        port_in_redirect off;

        set $proxy_upstream_name "kubeapps-nginx-443";
        set $namespace           "kubeapps";
        set $ingress_name        "nginx";
        set $service_name        "nginx";

        client_max_body_size "1m";

        proxy_set_header Host $best_http_host;

        # Pass the extracted client certificate to the backend

        # Allow websocket connections
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        proxy_set_header X-Request-ID      $req_id;
        proxy_set_header X-Real-IP         $the_real_ip;
        proxy_set_header X-Forwarded-For   $the_real_ip;
        proxy_set_header X-Forwarded-Host  $best_http_host;
        proxy_set_header X-Forwarded-Port  $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;
        proxy_set_header X-Original-URI    $request_uri;
        proxy_set_header X-Scheme          $pass_access_scheme;

        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy "";

        # Custom headers to proxied server

        proxy_connect_timeout 5s;
        proxy_send_timeout    60s;
        proxy_read_timeout    60s;

        proxy_buffering         "off";
        proxy_buffer_size       "4k";
        proxy_buffers           4 "4k";
        proxy_request_buffering "on";

        proxy_http_version 1.1;

        proxy_cookie_domain off;
        proxy_cookie_path   off;

        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream       error timeout invalid_header http_502 http_503 http_504;
        proxy_next_upstream_tries 0;

        proxy_pass https://kubeapps-nginx-443;

        proxy_redirect off;
    }
}
Anything else we need to know:
--enable-ssl-passthrough is added to the controller's arguments.
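For completeness, the flag goes in the ingress controller Deployment's container args; a minimal sketch follows (the container name and the other args shown are illustrative context, only --enable-ssl-passthrough matters here):

```yaml
# Sketch of the relevant fragment of the ingress controller Deployment;
# only the args list is the point, the surrounding fields are illustrative.
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --enable-ssl-passthrough
```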
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
We too have this requirement.
Our current deployment of nginx ingress is at version 0.15.0. Somehow this works with ssl-passthrough for https, as well as being able to handle http. The backing service is configured on both ports 443 & 80. The ingress rule only allows for one backend servicePort to be specified for the given host & path, which we have set to 443 (https). It concerns me that there is no way to specify an additional servicePort with the ability to demarcate between http / https. Nevertheless, this configuration (possibly through luck) works.
Having tried the same ingress configuration on minikube (but with nginx ingress 0.26.1) only https works. Any traffic sent to the http ingress port seems to be redirected to port 443 of the backend service, rather than port 80.
/reopen
@magowant: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
@onesolpark please can you re-open this issue?
/reopen
@onesolpark: Reopened this issue.
In response to this:
/reopen
Please can one of the contributors to this project comment on what the expected behaviour of this should be, and whether it is a bug? Thanks
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
/reopen
@magowant: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen