We have a service that uses a GeoIP database to look up the country of incoming requests. It's currently not receiving the remote IPs.
After injecting an nginx ingress controller, the log variables $remote_addr and $the_real_ip are set to 127.0.0.1, because the incoming connections come from the linkerd proxy. The l5d-remote-ip header is also not set, so it cannot be logged using $http_l5d_remote_ip.
If the ingress controller is not injected, $remote_addr and $the_real_ip are set to the correct remote IP.
I expect this is because the nginx ingress controller does the TLS termination, so the linkerd proxy only sees a stream of encrypted TCP.
The ingress controller's Service has:

```yaml
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
```
All application ingresses are configured with the following annotation:
```yaml
annotations:
  nginx.ingress.kubernetes.io/upstream-vhost: $service_name.$namespace.svc.cluster.local
```
A basic diagram of the setup and flow:

To reproduce: inject an nginx ingress controller and configure the access logs to contain $remote_addr and $the_real_ip (see https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/log-format/).
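For reference, a custom JSON log format like the one below can be set via the ingress-nginx ConfigMap. This is only a sketch: `log-format-upstream` and `log-format-escape-json` are real ingress-nginx ConfigMap keys, but the ConfigMap name/namespace and the exact field list here are assumptions, not our full config:

```yaml
# Sketch: custom JSON access-log format for ingress-nginx.
# The metadata below is an assumption about the install's naming.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  log-format-escape-json: "true"
  log-format-upstream: >-
    {"time": "$time_iso8601", "remote_addr": "$remote_addr",
     "the_real_ip": "$the_real_ip", "l5d_remote_ip": "$http_l5d_remote_ip",
     "vhost": "$host", "path": "$uri", "status": "$status"}
```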
With the ingress controller injected:

```json
{
  "time": "2019-09-07T09:59:23+00:00",
  "time_msec": "1567850363.157",
  "request_id": "2a90c1e7b2c8de78581ece7590a8eaaa",
  "l5d_remote_ip": "",
  "remote_addr": "127.0.0.1",
  "the_real_ip": "127.0.0.1",
  "remote_user": "",
  "request_proto": "HTTP/2.0",
  "method": "GET",
  "vhost": "api.myapp.example.com",
  "path": "/Debug/headers",
  "request_query": "",
  "request_length": 54,
  "request_duration": "0.010",
  "upstream_connect_time": "0.000",
  "status": "200",
  "upstream_status": "200",
  "response_body_bytes": "467",
  "upstream_name": "ns-myapp-http",
  "upstream_ip": "10.33.65.109:80",
  "upstream_response_time": "0.008",
  "upstream_response_length": "479",
  "http_referrer": "",
  "http_user_agent": "curl/7.58.0",
  "ingress_namespace": "ns",
  "ingress_name": "myapp",
  "service_name": "myapp",
  "service_port": "http"
}
```
The same request with the ingress controller not injected:

```json
{
  "time": "2019-09-07T10:12:05+00:00",
  "time_msec": "1567851125.208",
  "request_id": "bad9d67eb8517e2bf934845daef6af88",
  "l5d_remote_ip": "",
  "remote_addr": "1.2.3.4",
  "the_real_ip": "1.2.3.4",
  "remote_user": "",
  "request_proto": "HTTP/2.0",
  "method": "GET",
  "vhost": "api.myapp.example.com",
  "path": "/Debug/headers",
  "request_query": "",
  "request_length": 54,
  "request_duration": "0.005",
  "upstream_connect_time": "0.004",
  "status": "200",
  "upstream_status": "200",
  "response_body_bytes": "408",
  "upstream_name": "ns-myapp-http",
  "upstream_ip": "10.33.64.76:80",
  "upstream_response_time": "0.004",
  "upstream_response_length": "420",
  "http_referrer": "",
  "http_user_agent": "curl/7.58.0",
  "ingress_namespace": "ns",
  "ingress_name": "myapp",
  "service_name": "myapp",
  "service_port": "http"
}
```
linkerd check output:

```
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API

linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ no invalid service profiles

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match

Status check results are √
```
Implement the PROXY protocol within the linkerd proxy?
This is tough because the incoming stream is encrypted ... so it isn't possible to just add the x-forwarded-for header. Luckily, nginx supports the PROXY protocol.
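If something in front of nginx did send the PROXY protocol header, nginx can be told to expect it. A minimal sketch of the relevant setting (`use-proxy-protocol` is a real ingress-nginx ConfigMap key; the ConfigMap name/namespace below are assumptions about the install):

```yaml
# Sketch: make ingress-nginx parse a PROXY protocol header on
# incoming connections. Only works if every hop in front (the LB
# and, here, the linkerd proxy) actually forwards that header.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
```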
Yep, any thoughts on getting proxy protocol into the linkerd proxy?
It shouldn't be hard, just needs some definition and scoping around when it is used and how it is configured.
The only place I've needed to use proxy protocol is on ingress controllers. Having a pod annotation linkerd.io/proxy-protocol: enabled would make most sense from a user's perspective, which I guess the injector can pick up and configure the linkerd-proxy accordingly?
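As a sketch of what that might look like on an ingress controller Deployment (to be clear, `linkerd.io/proxy-protocol` is hypothetical; it is a proposed annotation, not something the injector understands today):

```yaml
# Hypothetical: linkerd.io/proxy-protocol does not exist yet.
# linkerd.io/inject is the real injection annotation.
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
        linkerd.io/proxy-protocol: enabled
```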
We had a longer conversation about this yesterday. If AKS actually uses the PROXY protocol for its LBs, then we'd pass it on without modification and everything should work. I've not tested yet, or looked into what AKS LBs actually support ... but it might be a good route to go down.
Unfortunately, Azure load balancers don't support the PROXY protocol.
Hmm, how were you getting the correct IP before then?
The service was running on-prem
What was the LB situation there? Was it adding x-forwarded-for? From the research I've done, a PROXY protocol header from any LB that uses it would be forwarded transparently, so it should work.
Watching this issue - we use CF orange network and have services (not yet meshed) that use x-forwarded-for for GeoIP lookup.
@sdickhoven @dwoldemariam1 FYI
We have the same situation with 127.0.0.1 in remote_addr, but we don't terminate TLS on the LB; it is terminated on nginx-ingress.
^ Same.
Watching this issue, I would love to mesh our nginx controllers but we need the ability to preserve the source IP of the clients.
See #3334 for a workaround. The long term solution will be to have the Linkerd proxy support PROXY protocol and x-forwarded-for.
We'd like to plan for when we could use linkerd with our applications that rely on these headers. Any (rough) idea when this work is planned or targeted?
@halcyondude I also had this problem, and the workaround as stated in #3334 is to add the following in the Nginx deployment spec:

```yaml
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/skip-inbound-ports: "80,443" # the workaround
        linkerd.io/inject: enabled
```
Does setting up skip-inbound-ports reduce the mTLS guarantees of linkerd for service-to-service communication?
@paymog in this context, it's only the nginx pod to upstream pod traffic. See also https://linkerd.io/2/reference/proxy-configuration/