Linkerd2: Injected nginx ingress controller doesn't have access to the remote client IP

Created on 7 Sep 2019  ·  17 Comments  ·  Source: linkerd/linkerd2

Bug Report

What is the issue?

We have a service that uses a GeoIP database to look up the country of incoming requests. It's currently not receiving the remote IPs.

After injecting an nginx ingress controller, the log variables $remote_addr and $the_real_ip are set to 127.0.0.1, because incoming connections now arrive from the Linkerd proxy. The l5d-remote-ip header is also not set, so it cannot be logged via $http_l5d_remote_ip.

If the ingress controller is not injected, $remote_addr and $the_real_ip are set to the correct remote IP.

I expect this is because the nginx ingress controller performs the TLS termination, so the Linkerd proxy only sees a stream of encrypted TCP.

The ingress controller has:

spec:
  externalTrafficPolicy: Local
  type: LoadBalancer

All application ingresses are configured with the following annotation:

  annotations:
    nginx.ingress.kubernetes.io/upstream-vhost: $service_name.$namespace.svc.cluster.local

A basic diagram of the setup and flow:

[Image: Linkerd remote ip localhost]

How can it be reproduced?

Inject an nginx ingress controller and configure the logs to contain $remote_addr and $the_real_ip; see https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/log-format/
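To reproduce the log output shown below, the access-log format can be customized via the ingress-nginx controller's ConfigMap using its documented `log-format-upstream` key. A minimal sketch follows; the ConfigMap name and namespace are assumptions and must match whatever the controller's `--configmap` flag points at:

```yaml
# Sketch: have ingress-nginx log the client address variables.
# ConfigMap name/namespace are assumptions for illustration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  log-format-upstream: >-
    {"remote_addr": "$remote_addr",
     "the_real_ip": "$the_real_ip",
     "l5d_remote_ip": "$http_l5d_remote_ip"}
```

With the controller injected, all three fields come out as 127.0.0.1 or empty, as in the first log entry below; without injection, the real client IP appears, as in the second.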

Logs, error output, etc

{
    "time": "2019-09-07T09:59:23+00:00",
    "time_msec": "1567850363.157",
    "request_id": "2a90c1e7b2c8de78581ece7590a8eaaa",
    "l5d_remote_ip": "",
    "remote_addr": "127.0.0.1",
    "the_real_ip": "127.0.0.1",
    "remote_user": "",
    "request_proto": "HTTP/2.0",
    "method": "GET",
    "vhost": "api.myapp.example.com",
    "path": "/Debug/headers",
    "request_query": "",
    "request_length": 54,
    "request_duration": "0.010",
    "upstream_connect_time": "0.000",
    "status": "200",
    "upstream_status": "200",
    "response_body_bytes": "467",
    "upstream_name": "ns-myapp-http",
    "upstream_ip": "10.33.65.109:80",
    "upstream_response_time": "0.008",
    "upstream_response_length": "479",
    "http_referrer": "",
    "http_user_agent": "curl/7.58.0",
    "ingress_namespace": "ns",
    "ingress_name": "myapp",
    "service_name": "myapp",
    "service_port": "http"
}
{
    "time": "2019-09-07T10:12:05+00:00",
    "time_msec": "1567851125.208",
    "request_id": "bad9d67eb8517e2bf934845daef6af88",
    "l5d_remote_ip": "",
    "remote_addr": "1.2.3.4",
    "the_real_ip": "1.2.3.4",
    "remote_user": "",
    "request_proto": "HTTP/2.0",
    "method": "GET",
    "vhost": "api.myapp.example.com",
    "path": "/Debug/headers",
    "request_query": "",
    "request_length": 54,
    "request_duration": "0.005",
    "upstream_connect_time": "0.004",
    "status": "200",
    "upstream_status": "200",
    "response_body_bytes": "408",
    "upstream_name": "ns-myapp-http",
    "upstream_ip": "10.33.64.76:80",
    "upstream_response_time": "0.004",
    "upstream_response_length": "420",
    "http_referrer": "",
    "http_user_agent": "curl/7.58.0",
    "ingress_namespace": "ns",
    "ingress_name": "myapp",
    "service_name": "myapp",
    "service_port": "http"
}

linkerd check output

kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API

linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ no invalid service profiles

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match

Status check results are √

Environment

  • Kubernetes Version: 1.14.6
  • Cluster Environment: AKS
  • Host OS:
  • Linkerd version: stable-2.5.0

Possible solution

Implement the PROXY protocol within the Linkerd proxy?

Additional context

Labels: area/proxy, help wanted, priority/P1

All 17 comments

This is tough because the incoming stream is encrypted ... so it isn't possible to just add the x-forwarded-for header. Luckily, nginx supports the PROXY protocol.

Yep, any thoughts on getting proxy protocol into the linkerd proxy?

It shouldn't be hard, just needs some definition and scoping around when it is used and how it is configured.

The only place I've needed to use the PROXY protocol is on ingress controllers. Having a pod annotation like linkerd.io/proxy-protocol: enabled would make the most sense from a user's perspective, which I guess the injector could pick up to configure the linkerd-proxy accordingly?
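To be concrete, usage of such an opt-in might look like the sketch below. Note that `linkerd.io/proxy-protocol` is a hypothetical annotation proposed in this thread, not an implemented Linkerd feature:

```yaml
# Hypothetical sketch: linkerd.io/proxy-protocol does not exist in Linkerd;
# it is this thread's proposal for opting an ingress pod into PROXY protocol.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
        linkerd.io/proxy-protocol: enabled  # proposed, not implemented
```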

We had a longer conversation about this yesterday. If AKS actually uses the PROXY protocol for its LBs, then we'd pass it on without modification and everything should work. I've not tested yet, or looked into what AKS LBs actually support ... but it might be a good route to go down.

Unfortunately, Azure load balancers don't support the PROXY protocol.

Hmm, how were you getting the correct IP before then?

The service was running on-prem

What was the LB situation there? Was it adding x-forwarded-for? From the research I've done, traffic from any LB that speaks the PROXY protocol will be forwarded transparently, so it should work.
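For completeness: when the load balancer does speak the PROXY protocol, ingress-nginx can be told to expect it via its documented `use-proxy-protocol` ConfigMap option, which is what lets it recover the original client IP from the prepended header. A sketch, with the ConfigMap name and namespace again assumed:

```yaml
# Documented ingress-nginx setting: parse the PROXY protocol header that the
# load balancer prepends, restoring the original client IP for $remote_addr.
# ConfigMap name/namespace are assumptions for illustration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
```

This only helps if every hop in front of nginx (LB, and the Linkerd proxy if one were inserted) passes the PROXY header through, which is the crux of the feature request here.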

Watching this issue - we use CF orange network and have services (not yet meshed) that use x-forwarded-for for GeoIP lookup.

@sdickhoven @dwoldemariam1 FYI

We have the same situation, with 127.0.0.1 in $remote_addr. But we don't terminate TLS on the LB; it is terminated on nginx-ingress.

^ Same.

Watching this issue, I would love to mesh our nginx controllers but we need the ability to preserve the source IP of the clients.

See #3334 for a workaround. The long term solution will be to have the Linkerd proxy support PROXY protocol and x-forwarded-for.

Would like to be able to plan for when we could use linkerd for our applications that use these headers. Any (rough) idea when this work is planned for and/or targeting?

@halcyondude I also had this problem and the workaround as stated in #3334 is to add the following in the Nginx deployment spec:

spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/skip-inbound-ports: 80,443 # the workaround
        linkerd.io/inject: enabled

Does setting skip-inbound-ports reduce the mTLS guarantees of Linkerd for service-to-service communication?

@paymog in this context, it’s only the nginx pod to upstream pod traffic. See also https://linkerd.io/2/reference/proxy-configuration/

