Source client IP is not preserved:
The source client IP is not preserved, which causes problems when using the nginx ingress controller with the annotation nginx.ingress.kubernetes.io/whitelist-source-range.
Steps to reproduce: inject the proxy into your nginx controller and your application, then add the nginx.ingress.kubernetes.io/whitelist-source-range annotation to your application's Ingress.
To work around the issue you have to whitelist 127.0.0.1, which is not what I want, because then the whitelist no longer does anything.
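For context, the annotation in question sits on the Ingress resource; a minimal sketch (the hostname, service name, and CIDR are illustrative, loosely based on the log lines below):

```yaml
apiVersion: networking.k8s.io/v1beta1  # Ingress API version current at the time of this thread
kind: Ingress
metadata:
  name: web-ingress            # illustrative name
  annotations:
    kubernetes.io/ingress.class: nginx
    # nginx returns 403 for any request whose source IP falls outside this range.
    # Once the Linkerd proxy rewrites the source to 127.0.0.1, every request fails.
    nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/8
spec:
  rules:
  - host: linkerd.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: linkerd-web
          servicePort: 8084
```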
Logs from nginx-controller:
127.0.0.1 - [127.0.0.1] - - [27/Aug/2019:21:12:27 +0000] "GET /api/tps-reports?resource_type=namespace&all_namespaces=true&window=1m HTTP/2.0" 403 166 "https://linkerd.domain.com/namespaces/prod/deployments/api-status-prod" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36" 68 0.000 [linkerd-linkerd-web-8084] - - - - d6025324eb3a9d6fc5dbdf9ee1e3cccc
linkerd check output:
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API
linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ no invalid service profiles
linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date
control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match
Status check results are √
I tried linkerd uninject on the nginx-controller.
The nginx controller allows X-Forwarded-For by default.
I tested allowing my AKS subnet and even my entire ClusterIP range; it doesn't work.
externalTrafficPolicy is already set to Local.
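For reference, externalTrafficPolicy lives on the LoadBalancer Service in front of the ingress controller; a minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # illustrative name
  namespace: ingress-controllers
spec:
  type: LoadBalancer
  # Local preserves the client IP at the node level by routing only to
  # pods on the node that received the traffic. It does not help here,
  # because the Linkerd proxy inside the pod still rewrites the source.
  externalTrafficPolicy: Local
  selector:
    app: nginx-ingress             # illustrative label
  ports:
  - name: https
    port: 443
    targetPort: 443
```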
I am having this issue as well. I rolled Linkerd into our dev environment recently (within the last 24 hours) and noticed the whitelist on our ingress rule is now failing. All traffic is seen as coming from the proxy container with a source IP of 127.0.0.1.
$ linkerd version
Client version: stable-2.6.0
Server version: stable-2.6.0
$ linkerd check
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API
linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ no invalid service profiles
linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date
control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match
Status check results are √
Ingress rule where whitelist was previously working:
$ kubectl describe ing my-ingress
Name: my-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
cert-secret terminates latest.myapp.com
Rules:
Host Path Backends
---- ---- --------
latest.myapp.com
/my-path svc-light:http (<none>)
dark.latest.myapp.com
/my-path svc-dark:http (<none>)
Annotations:
kubernetes.io/ingress.class: nginx-internal
nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/8
Events: <none>
Log showing localhost IP only:
127.0.0.1 - [127.0.0.1] - - [11/Dec/2019:17:56:25 +0000] "GET /my-path/application/healthcheck HTTP/2.0" 403 153 "-" "curl/7.54.0" 71 0.000 [default-svc-light-http] - - - - 77a3fa6d80d96c6c850ef2747516e56d
127.0.0.1 - [127.0.0.1] - - [11/Dec/2019:17:56:27 +0000] "GET /my-path/application/healthcheck HTTP/2.0" 403 153 "-" "curl/7.54.0" 71 0.000 [default-svc-light-http] - - - - 9f86e096417eaaf69bd1f8586de48dc9
127.0.0.1 - [127.0.0.1] - - [11/Dec/2019:17:56:28 +0000] "GET /my-path/application/healthcheck HTTP/2.0" 403 153 "-" "curl/7.54.0" 71 0.000 [default-svc-light-http] - - - - bcd6132ce3fd9d5a8bbd520f4207c02b
@theharleyquin I tested recently with the edge version and it worked. Can you test?
This is super dependent on your provider. If they support the PROXY protocol, it'll work. If they add x-forwarded-for headers, it'll work. If they don't do either of those, it won't work. There's definitely more work that we can do in Linkerd to make it a little bit better.
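For reference, when the load balancer does support one of those mechanisms, it is switched on through the ingress-nginx ConfigMap. The keys below are real ingress-nginx options; the metadata (name, namespace) depends on your installation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration    # name/namespace depend on your ingress-nginx install
  namespace: ingress-nginx
data:
  # Enable only if the load balancer actually speaks PROXY protocol,
  # otherwise nginx will fail to parse incoming connections.
  use-proxy-protocol: "true"
  # Or, trust X-Forwarded-For headers set by an upstream proxy.
  use-forwarded-headers: "true"
```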
From previous issues related to X-Forwarded-For and PROXY: we are on Azure/AKS and don't know whether PROXY protocol is enabled. We will try the edge release to see if it makes a difference.
AKS doesn't do either unfortunately =/ They're doing DSR instead.
After doing some brainstorming with @wmorgan, a great workaround for this today is to just skip inbound ports (linkerd inject --skip-inbound-ports 80,443). Assuming HTTPS traffic, you'll miss out on incoming TCP bytes but that's about it.
@grampelberg this has done the trick for me!
kubectl get deploy -o yaml -n ingress-controllers | linkerd inject --skip-inbound-ports 80,443 - | kubectl apply -f -
or
metadata:
  annotations:
    config.linkerd.io/skip-inbound-ports: 80,443
    linkerd.io/inject: enabled
This kept the source IP in the nginx logs and allowed the whitelist to stay active, while still keeping mTLS on traffic from nginx to the app container.
I tested and it worked on my side too. Thank you, @grampelberg!
Great workaround!