Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/.): yes
What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.): whitelist
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
NGINX Ingress controller version:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.3-rancher3", GitCommit:"772c4c54e1f4ae7fc6f63a8e1ecd9fe616268e16", GitTreeState:"clean", BuildDate:"2017-11-27T19:51:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
We are running Rancher v1.6.12 locally with 3 virtual machine nodes.
Here is the configuration of the nodes:
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Kernel (uname -a): 3.10.0-693.5.2.el7.x86_64
Install tools:
What happened:
I added a whitelist on our Ingress resource using the following YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: testing
  namespace: testing
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "1.1.1.1/8"
spec:
  rules:
  - host: testing.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
I tried curl on the page and I was still able to access it.
some.ip.here - - [16/Feb/2018:01:37:45 +0000] "GET / HTTP/1.1" 200 58 "-" "curl/7.53.1" "my.ip.is.here"
What you expected to happen:
I should not be able to access it since I'm on a different IP.
How to reproduce it (as minimally and precisely as possible):
curl http://testing.com
Anything else we need to know:
@mvineza please use the issue template to provide context.
Check the logs to make sure you see the real source IP address of the clients
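For reference, one way to check those logs (a sketch; the ingress-nginx namespace and the app=nginx-ingress label below are assumptions that depend on how the controller was deployed):
kubectl -n ingress-nginx logs -l app=nginx-ingress --tail=50
Each access log line starts with the client address NGINX saw; if that is a node or load balancer IP instead of the real client, the whitelist cannot match.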
@aledbf It looks like when --ssl-passthrough is enabled, the nginx controller uses the proxy protocol for HTTPS, and use-proxy-protocol must be enabled for nginx to unwrap the client IP for use in the whitelist. Once the proxy protocol is enabled, it applies to both 80 and 443, so with --ssl-passthrough enabled the whitelist does not work unless use-proxy-protocol: "true" is also set. The problem for us is that our load balancer does not support the proxy protocol, so port 80 requests fail with curl: (52) Empty reply from server. @mvineza, can you confirm whether --ssl-passthrough is enabled?
Access log: 127.0.0.1 - [127.0.0.1] - - [16/Feb/2018:00:16:32 +0000] "GET / HTTP/1.1" 403 169 "-" "curl/7.58.0" 91 0.000
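For completeness, a minimal sketch of enabling use-proxy-protocol through the controller ConfigMap, assuming the load balancer in front actually sends the PROXY protocol header and that the ConfigMap is the nginx-configuration one the controller is started with (the namespace is an assumption):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx   # assumption: use the namespace the controller runs in
data:
  # only safe when the upstream load balancer speaks the PROXY protocol on both 80 and 443
  use-proxy-protocol: "true"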
FYI... if --ssl-passthrough is enabled, the nginx controller pipes HTTPS traffic straight to the pods, whereas HTTP traffic is handled by nginx directly.
Correct, we just pipe the TCP connection to the backend.
Do you think we can expose an option to not enable proxy protocol for HTTP? @aledbf
Basically our load balancer does not use the proxy protocol, and it's only --ssl-passthrough that makes nginx require the proxy protocol. HTTPS is fine; HTTP fails.
Or we need to have the controller handle port 80, then forward to maybe 81 and wrap it with proxy protocol if --ssl-passthrough is enabled.
@aledbf Done updating the issue using the template. I confirm that the IP shown in the nginx logs is the IP I am connecting from, which is my laptop.
@azweb76 It is not enabled. Here are the "deploy/nginx-ingress-controller" args:
- args:
  - /nginx-ingress-controller
  - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
  - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
  - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
  - --annotations-prefix=nginx.ingress.kubernetes.io
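(For anyone checking their own setup, one way to print the flags of a running controller is shown below; the deployment name and namespace are assumptions and may differ:)
kubectl -n ingress-nginx get deployment nginx-ingress-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}'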
This is also a problem for me.
I created the nginx ingress controller using helm and have a simple Ingress like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/whitelist-source-range: "xxx.xx.xxx.x/xx"
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
The ingress itself works fine, but no whitelisting kicks in. I was expecting the nginx-controller pod to be reloaded with a deny config, but there's nothing there. How does that work?
@fripoli Unless you are starting the controller with the flag _--annotations-prefix=ingress.kubernetes.io_, please change the whitelist annotation to: _nginx.ingress.kubernetes.io/whitelist-source-range_
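Applied to the manifest above, that would look roughly like this (a sketch assuming the default annotations prefix):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # default prefix; keep ingress.kubernetes.io/ only if the controller is started
    # with --annotations-prefix=ingress.kubernetes.io
    nginx.ingress.kubernetes.io/whitelist-source-range: "xxx.xx.xxx.x/xx"
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000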
thanks, that was the issue :)
Not working in 0.15.0
I'm using quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
I'm always getting a 403 when I add nginx.ingress.kubernetes.io/whitelist-source-range: "x.x.x.x", where x.x.x.x is the IP I get from https://whatismyipaddress.com.
/assign @antoineco
If I use a configmap like this one it works:
apiVersion: v1
data:
  enable-vts-status: "false"
  whitelist-source-range: "1.2.3.4"
kind: ConfigMap
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.13.2
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: default
I can confirm this; in my case it was my mistake.
I didn't add the nginx prefix.
@grebois could you confirm this is happening with the latest version when using the correct annotation prefix? (nginx.ingress.kubernetes.io/)
Also please provide more information about your environment, and make sure the NGINX access logs do display the expected external IP.
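One quick way to cross-check that (a sketch; the what-is-my-ip service, namespace and label are assumptions):
curl -s https://ifconfig.co            # prints the public IP you connect from
kubectl -n ingress-nginx logs -l app=nginx-ingress | grep <your-public-ip>
If the address from the first command never shows up in the access log, NGINX is seeing a load balancer or node IP instead of the client.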
@YvonneArnoldus that's most likely because NGINX interprets the incoming traffic as coming from a load balancer IP instead of your own IP, same comment as above.
related: #2567
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
@antoineco I wasn't able to test whether it works anymore; now I only get the IP of the load balancer, so that's a blocker, but I will follow up on this as soon as possible. I'm currently using 0.19.0.
@grebois: what IP do you see in the ingress logs (public or private)?
If private, it is probably the IP of the LB node. If you installed it with helm, try upgrading the ingress with helm upgrade <release-name> stable/nginx-ingress --set controller.service.externalTrafficPolicy=Local
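If the chart values are not convenient, the same setting can also be patched directly onto the controller Service (a sketch; the Service name and namespace are assumptions):
kubectl -n ingress-nginx patch svc nginx-ingress-controller \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
Note that externalTrafficPolicy: Local preserves the client source IP, but traffic is then only served by controller pods on the node that received it.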
Having the same issue.
In the logs I see a private address (behind NAT):
192.168.0.35 - [192.168.0.35] - - [09/Oct/2018:10:04:32 +0000] "GET / HTTP/1.1" 403 177 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/69.0.3497.81 Chrome/69.0.3497.81 Safari/537.36" 729 0.000 [monitoring-prometheus-k8s-web] - - - - 9a59fbcb47e8f4092e709fa60333503d
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thank God, this helped a lot.