Istio: unable to preserve source IP

Created on 17 May 2018 · 58 comments · Source: istio/istio

Tried the ISTIO_INBOUND_INTERCEPTION_MODE: TPROXY env var

and the annotation sidecar.istio.io/interceptionMode=TPROXY

and made sure the proxy runs as root. However, I still see 127.0.0.1 as the source IP.

$ kubectl exec -it echoserver-fd4ff9bc9-zfxwh -c istio-proxy sh
# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 19:29 ?        00:00:00 /pause
root        55     0  0 19:29 ?        00:00:00 nginx: master process nginx -g daemon off;
nobody      60    55  0 19:29 ?        00:00:00 nginx: worker process
root        61     0  0 19:29 ?        00:00:00 /usr/local/bin/pilot-agent proxy sidecar --configPath /etc/istio/proxy --binaryPath /usr/local/bin/envoy --serviceCluster istio-proxy --drainDuration 45s 
root        75    61  0 19:29 ?        00:00:00 /usr/local/bin/envoy -c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster istio-proxy --s
root        85     0  0 19:30 pts/0    00:00:00 sh
root        89    85  0 19:30 pts/0    00:00:00 ps -ef
# exit

~/Downloads/istio-release-0.8-20180515-17-26/install/kubernetes ⌚ 15:29:52
$ curl 169.60.83.12:80/                                        
CLIENT VALUES:
client_address=127.0.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://169.60.83.12:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
cache-control=max-stale=0
content-length=0
host=169.60.83.12
user-agent=curl/7.54.0
x-b3-sampled=1
x-b3-spanid=b1398c24c1785342
x-b3-traceid=b1398c24c1785342
x-bluecoat-via=ccc09ce496fc2951
x-envoy-decorator-operation=guestbook
x-envoy-expected-rq-timeout-ms=15000
x-envoy-external-address=129.42.208.183
x-forwarded-for=9.27.120.57, 129.42.208.183
x-forwarded-proto=http
x-request-id=fbfaff74-7a05-91fb-9731-cd436a480956
BODY:
-no body in request-%                                                                                                                                                                                     

~/Downloads/istio-release-0.8-20180515-17-26/install/kubernetes ⌚ 15:30:10
$ kubectl get pods
NAME                                     READY     STATUS    RESTARTS   AGE
echoserver-fd4ff9bc9-zfxwh               2/2       Running   0          56s
guestbook-service-64f4fc5fbc-rd55j       2/2       Running   0          10d
guestbook-ui-7b48846f9-fgtt6             2/2       Running   0          10d
helloworld-service-v1-f4f4dfd56-cqr7z    2/2       Running   0          10d
helloworld-service-v2-78b9497478-cz64x   2/2       Running   0          10d
mysql-7b877b4cf4-z2nrl                   2/2       Running   0          15d
redis-848b98bc8b-h878m                   2/2       Running   0          15d

~/Downloads/istio-release-0.8-20180515-17-26/install/kubernetes ⌚ 15:30:13
$ kubectl describe pod echoserver-fd4ff9bc9-zfxwh 
Name:           echoserver-fd4ff9bc9-zfxwh
Namespace:      default
Node:           10.188.52.41/10.188.52.41
Start Time:     Thu, 17 May 2018 15:29:17 -0400
Labels:         pod-template-hash=980995675
                run=echoserver
Annotations:    sidecar.istio.io/interceptionMode=TPROXY
                sidecar.istio.io/status={"version":"c883147438ec6b276f8303e997b74ece3067ebb275c09015f195492aab8f445a","initContainers":["istio-init","enable-core-dump"],"containers":["istio-proxy"],"volumes":["istio-...
Status:         Running
IP:             172.30.53.20
Controlled By:  ReplicaSet/echoserver-fd4ff9bc9
Init Containers:
  istio-init:
    Container ID:  docker://fd6f1965f5da1e3b36ff4524a996a9731492351da812793a996e3f1f8246fd50
    Image:         gcr.io/istio-release/proxy_init:release-0.8-20180515-17-26
    Image ID:      docker-pullable://gcr.io/istio-release/proxy_init@sha256:a591ef52693e48885a1d47ee9a3f85c1fc2cf639bfb09c5b295b443e964d7f5e
    Port:          <none>
    Args:
      -p
      15001
      -i
      *
      -x

      -b
      8080,
      -d

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 17 May 2018 15:29:23 -0400
      Finished:     Thu, 17 May 2018 15:29:25 -0400
    Ready:          True
    Restart Count:  0
    Environment:
      ISTIO_META_INTERCEPTION_MODE:     TPROXY
      ISTIO_INBOUND_INTERCEPTION_MODE:  TPROXY
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lg1j3 (ro)
  enable-core-dump:
    Container ID:  docker://a5f756c96cb1f97a4b5e8f5651163e2119e035af0b3c8247159f6cdb7e524f6d
    Image:         gcr.io/istio-release/proxy_init:release-0.8-20180515-17-26
    Image ID:      docker-pullable://gcr.io/istio-release/proxy_init@sha256:a591ef52693e48885a1d47ee9a3f85c1fc2cf639bfb09c5b295b443e964d7f5e
    Port:          <none>
    Command:
      /bin/sh
    Args:
      -c
      sysctl -w kernel.core_pattern=/etc/istio/proxy/core.%e.%p.%t && ulimit -c unlimited
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 17 May 2018 15:29:26 -0400
      Finished:     Thu, 17 May 2018 15:29:27 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lg1j3 (ro)
Containers:
  echoserver:
    Container ID:   docker://3164cf612999fb5a7feb1a677551fc1578dc7cda00cac26143f3bfb2b7dc8365
    Image:          gcr.io/google_containers/echoserver:1.4
    Image ID:       docker-pullable://gcr.io/google_containers/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb
    Port:           8080/TCP
    State:          Running
      Started:      Thu, 17 May 2018 15:29:28 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lg1j3 (ro)
  istio-proxy:
    Container ID:  docker://47d7140c535af93cccbe6105338331028ca823abdaebbb6e3d31ae5bec87b606
    Image:         gcr.io/istio-release/proxyv2:release-0.8-20180515-17-26
    Image ID:      docker-pullable://gcr.io/istio-release/proxyv2@sha256:5f0836dfc280e0536d875a541e68ed512af73f62017c3c74f0f4981002ef601d
    Port:          <none>
    Args:
      proxy
      sidecar
      --configPath
      /etc/istio/proxy
      --binaryPath
      /usr/local/bin/envoy
      --serviceCluster
      istio-proxy
      --drainDuration
      45s
      --parentShutdownDuration
      1m0s
      --discoveryAddress
      istio-pilot.istio-system:15007
      --discoveryRefreshDelay
      10s
      --zipkinAddress
      zipkin.istio-system:9411
      --connectTimeout
      10s
      --statsdUdpAddress
      istio-statsd-prom-bridge.istio-system:9125
      --proxyAdminPort
      15000
      --controlPlaneAuthPolicy
      NONE
    State:          Running
      Started:      Thu, 17 May 2018 15:29:29 -0400
    Ready:          True
    Restart Count:  0
    Environment:
      POD_NAME:                         echoserver-fd4ff9bc9-zfxwh (v1:metadata.name)
      POD_NAMESPACE:                    default (v1:metadata.namespace)
      INSTANCE_IP:                       (v1:status.podIP)
      ISTIO_META_POD_NAME:              echoserver-fd4ff9bc9-zfxwh (v1:metadata.name)
      ISTIO_META_INTERCEPTION_MODE:     TPROXY
      ISTIO_INBOUND_INTERCEPTION_MODE:  TPROXY
    Mounts:
      /etc/certs/ from istio-certs (ro)
      /etc/istio/proxy from istio-envoy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lg1j3 (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  istio-envoy:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  Memory
  istio-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio.default
    Optional:    true
  default-token-lg1j3:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-lg1j3
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                   Message
  ----    ------                 ----  ----                   -------
  Normal  Scheduled              1m    default-scheduler      Successfully assigned echoserver-fd4ff9bc9-zfxwh to 10.188.52.41
  Normal  SuccessfulMountVolume  1m    kubelet, 10.188.52.41  MountVolume.SetUp succeeded for volume "istio-envoy"
  Normal  SuccessfulMountVolume  1m    kubelet, 10.188.52.41  MountVolume.SetUp succeeded for volume "istio-certs"
  Normal  SuccessfulMountVolume  1m    kubelet, 10.188.52.41  MountVolume.SetUp succeeded for volume "default-token-lg1j3"
  Normal  Pulled                 1m    kubelet, 10.188.52.41  Container image "gcr.io/istio-release/proxy_init:release-0.8-20180515-17-26" already present on machine
  Normal  Created                1m    kubelet, 10.188.52.41  Created container
  Normal  Started                59s   kubelet, 10.188.52.41  Started container
  Normal  Started                56s   kubelet, 10.188.52.41  Started container
  Normal  Pulled                 56s   kubelet, 10.188.52.41  Container image "gcr.io/istio-release/proxy_init:release-0.8-20180515-17-26" already present on machine
  Normal  Created                56s   kubelet, 10.188.52.41  Created container
  Normal  Pulled                 54s   kubelet, 10.188.52.41  Container image "gcr.io/google_containers/echoserver:1.4" already present on machine
  Normal  Created                54s   kubelet, 10.188.52.41  Created container
  Normal  Started                54s   kubelet, 10.188.52.41  Started container
  Normal  Pulled                 54s   kubelet, 10.188.52.41  Container image "gcr.io/istio-release/proxyv2:release-0.8-20180515-17-26" already present on machine
  Normal  Created                54s   kubelet, 10.188.52.41  Created container
  Normal  Started                53s   kubelet, 10.188.52.41  Started container

Labels: area/networking · kind/enhancement · lifecycle/staleproof

All 58 comments

@costinm @ldemailly any suggestions for things I missed?

Probably something worth documenting in the release notes as a limitation, since this is a pretty common use case.

maybe something missing from #4654

cc @rlenglet

istio-init should have a -m TPROXY arg if injected with that mode.

In fact, your istio-init args don't contain any -m option. That's odd, since the templates unconditionally set a -m arg:
https://github.com/istio/istio/blob/master/pilot/pkg/kube/inject/mesh.go#L29
https://github.com/istio/istio/blob/master/install/kubernetes/helm/istio/charts/sidecar-injector/templates/configmap.yaml#L24
@linsun are you using automatic injection or istioctl?
I would suspect that the injector's template is broken.

Without the -m TPROXY option, redirection will be done using iptables REDIRECT and that alone explains why your sidecar is still working, but you're losing the source IP address.
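
For reference, a minimal sketch of the relevant istio-init fragment with the mode passed explicitly via -m TPROXY (adapted from the injected manifest later in this thread, not the reporter's actual config):

      initContainers:
      - name: istio-init
        image: gcr.io/istio-release/proxy_init:release-0.8-20180515-17-26
        args:
        - -p
        - "15001"
        - -m
        - TPROXY      # equivalent to setting ISTIO_INBOUND_INTERCEPTION_MODE=TPROXY
        - -i
        - '*'
        - -b
        - 8080,
        securityContext:
          capabilities:
            add:
            - NET_ADMIN   # TPROXY redirection needs CAP_NET_ADMIN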

Oh I see that you set the ISTIO_INBOUND_INTERCEPTION_MODE env variable, which should have the same effect.
@linsun can you please paste the logs from your istio-init container? That should give every command that was run to setup iptables.

I think I removed the -m flag because it was pointing to REDIRECT... I can change that to -m TPROXY, but you seem to indicate that is not needed.

$ kubectl logs echoserver-fd4ff9bc9-zfxwh -c istio-init
id: 'istio-proxy': no such user
Environment:
------------
ENVOY_PORT=
ISTIO_INBOUND_INTERCEPTION_MODE=TPROXY
ISTIO_INBOUND_TPROXY_MARK=
ISTIO_INBOUND_TPROXY_ROUTE_TABLE=
ISTIO_INBOUND_PORTS=
ISTIO_LOCAL_EXCLUDE_PORTS=
ISTIO_SERVICE_CIDR=
ISTIO_SERVICE_EXCLUDE_CIDR=

Variables:
----------
PROXY_PORT=15001
PROXY_UID=1337,0
INBOUND_INTERCEPTION_MODE=TPROXY
INBOUND_TPROXY_MARK=1337
INBOUND_TPROXY_ROUTE_TABLE=133
INBOUND_PORTS_INCLUDE=8080,
INBOUND_PORTS_EXCLUDE=
OUTBOUND_IP_RANGES_INCLUDE=*
OUTBOUND_IP_RANGES_EXCLUDE=

+ iptables -t nat -N ISTIO_REDIRECT
+ iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-port 15001
+ '[' -n 8080, ']'
+ '[' TPROXY = TPROXY ']'
+ iptables -t mangle -N ISTIO_DIVERT
+ iptables -t mangle -A ISTIO_DIVERT -j MARK --set-mark 1337
+ iptables -t mangle -A ISTIO_DIVERT -j ACCEPT
+ ip -f inet rule add fwmark 1337 lookup 133
+ ip -f inet route add local default dev lo table 133
+ iptables -t mangle -N ISTIO_TPROXY
+ iptables -t mangle -A ISTIO_TPROXY '!' -d 127.0.0.1/32 -p tcp -j TPROXY --tproxy-mark 1337/0xffffffff --on-port 15001
+ table=mangle
+ iptables -t mangle -N ISTIO_INBOUND
+ iptables -t mangle -A PREROUTING -p tcp -j ISTIO_INBOUND
+ '[' 8080, == '*' ']'
+ for port in '${INBOUND_PORTS_INCLUDE}'
+ '[' TPROXY = TPROXY ']'
+ iptables -t mangle -A ISTIO_INBOUND -p tcp --dport 8080 -m socket -j ISTIO_DIVERT
+ iptables -t mangle -A ISTIO_INBOUND -p tcp --dport 8080 -j ISTIO_TPROXY
+ iptables -t nat -N ISTIO_OUTPUT
+ iptables -t nat -A OUTPUT -p tcp -j ISTIO_OUTPUT
+ iptables -t nat -A ISTIO_OUTPUT -o lo '!' -d 127.0.0.1/32 -j ISTIO_REDIRECT
+ for uid in '${PROXY_UID}'
+ iptables -t nat -A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
+ iptables -t nat -A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
+ for uid in '${PROXY_UID}'
+ iptables -t nat -A ISTIO_OUTPUT -m owner --uid-owner 0 -j RETURN
+ iptables -t nat -A ISTIO_OUTPUT -m owner --gid-owner 0 -j RETURN
+ iptables -t nat -A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
+ '[' -n '*' ']'
+ '[' '*' == '*' ']'
+ '[' -n '' ']'
+ iptables -t nat -A ISTIO_OUTPUT -j ISTIO_REDIRECT
+ set +o nounset
+ '[' -z '' ']'
+ ip6tables -F INPUT
+ ip6tables -A INPUT -m state --state ESTABLISHED -j ACCEPT
+ ip6tables -A INPUT -j REJECT

@rlenglet ^^

I think Romain did say clearly that you do need -m TPROXY.

OK, it did have the same effect as -m TPROXY. It is indeed setting up redirection using iptables TPROXY:

iptables -t mangle -A ISTIO_TPROXY '!' -d 127.0.0.1/32 -p tcp -j TPROXY --tproxy-mark 1337/0xffffffff --on-port 15001

I don't see a priori anything wrong then.
@linsun are you seeing 127.0.0.1 as source IP for inbound connections, or for outbound connections, or both?

I used the echo service; you could try it too. Attached below is the deployment YAML. I'm seeing 127.0.0.1 as the source IP for inbound. Not sure if this service prints out the outbound side.

$ cat echoserver.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: echoserver
  name: echoserver
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: echoserver
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: echoserver
    spec:
      containers:
      - image: gcr.io/google_containers/echoserver:1.4
        imagePullPolicy: IfNotPresent
        name: echoserver
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

and this is my injected file:

$ cat echoserver-injected.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: echoserver
  name: echoserver
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: echoserver
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
        sidecar.istio.io/status: '{"version":"c883147438ec6b276f8303e997b74ece3067ebb275c09015f195492aab8f445a","initContainers":["istio-init","enable-core-dump"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
        sidecar.istio.io/interceptionMode: 'TPROXY'
      creationTimestamp: null
      labels:
        run: echoserver
    spec:
      containers:
      - image: gcr.io/google_containers/echoserver:1.4
        imagePullPolicy: IfNotPresent
        name: echoserver
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - args:
        - proxy
        - sidecar
        - --configPath
        - /etc/istio/proxy
        - --binaryPath
        - /usr/local/bin/envoy
        - --serviceCluster
        - istio-proxy
        - --drainDuration
        - 45s
        - --parentShutdownDuration
        - 1m0s
        - --discoveryAddress
        - istio-pilot.istio-system:15007
        - --discoveryRefreshDelay
        - 10s
        - --zipkinAddress
        - zipkin.istio-system:9411
        - --connectTimeout
        - 10s
        - --statsdUdpAddress
        - istio-statsd-prom-bridge.istio-system:9125
        - --proxyAdminPort
        - "15000"
        - --controlPlaneAuthPolicy
        - NONE
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: ISTIO_META_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: ISTIO_META_INTERCEPTION_MODE
          value: TPROXY
        - name: ISTIO_INBOUND_INTERCEPTION_MODE
          value: TPROXY
        image: gcr.io/istio-release/proxyv2:release-0.8-20180515-17-26
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        resources: {}
        securityContext:
          privileged: false
          readOnlyRootFilesystem: true
          # runAsUser: 1337
        volumeMounts:
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
      dnsPolicy: ClusterFirst
      initContainers:
      - args:
        - -p
        - "15001"
        # - -u
        # - "1337"
        # - -m
        # - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - 8080,
        - -d
        - ""
        env:
        - name: ISTIO_META_INTERCEPTION_MODE
          value: TPROXY
        - name: ISTIO_INBOUND_INTERCEPTION_MODE
          value: TPROXY
        image: gcr.io/istio-release/proxy_init:release-0.8-20180515-17-26
        imagePullPolicy: IfNotPresent
        name: istio-init
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
      - args:
        - -c
        - sysctl -w kernel.core_pattern=/etc/istio/proxy/core.%e.%p.%t && ulimit -c
          unlimited
        command:
        - /bin/sh
        image: gcr.io/istio-release/proxy_init:release-0.8-20180515-17-26
        imagePullPolicy: IfNotPresent
        name: enable-core-dump
        resources: {}
        securityContext:
          privileged: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - name: istio-certs
        secret:
          optional: true
          secretName: istio.default
status: {}

(Random plug: that echoserver is a 44 MB image; you can use fortio's 3 MB image instead, hit /debug, and get the incoming IP.)
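
For example, a rough sketch of that swap; the image name, the server subcommand, and the /debug endpoint are taken from fortio's docs rather than from this thread:

$ kubectl run fortio --image=fortio/fortio --port=8080 -- server
$ curl http://<fortio-pod-ip>:8080/debug
# /debug echoes the request back, including the remote (source) address fortio observed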

@linsun Are you saying that the app itself sees 127.0.0.1 as the source IP? That's completely normal, since all inbound connections it sees come from the sidecar Envoy proxy. The app will always see 127.0.0.1 for inbound connections.
The problem that is solved by TPROXY is the source IP that Envoy sees for inbound connections.
So that Mixer reports, etc. contain the correct source IP for inbound connections.

Can you confirm that it is the app itself that sees 127.0.0.1?

Fixing the source IP that apps see for inbound connections is another problem.

There might be enough building blocks in Envoy to bind the upstream connections to the original downstream source IP. See for instance the work done to support the FREEBIND socket option, which allows binding to arbitrary IPs per cluster, e.g. https://github.com/envoyproxy/envoy/tree/master/configs/freebind

Actually we will need more from Envoy. We'd need an "original-src-cluster", similar to the existing original-dst-cluster, which would bind upstream connections to the original source IP addresses using FREEBIND.

@rlenglet yes, the app sees 127.0.0.1, but we would like to see the actual source IP of the requester (which is my laptop).

BTW, I got 127.0.0.1 as the client address before I used TPROXY. I had thought switching to TPROXY would allow the app to see the client address of the requester.

@rlenglet I am confused: in the above PR description you wrote "Contrary to REDIRECT, TPROXY doesn't perform NAT, and therefore preserves both source and destination IP addresses and ports of inbound connections. One benefit is that the source.ip attributes reported by Mixer for inbound connections will always be correct, unlike when using REDIRECT." What is TPROXY mode doing if it doesn't actually preserve the IP?

Or is it somehow only available to the Mixer filter and then lost?

Or is it somehow only available to the Mixer filter and then lost?

Correct, that’s the current status.

There are 2 parts to this problem:
(1) Envoy / Mixer / ... needs to see the real original source IP address.
(2) Envoy needs to bind to the original source IP address for upstream connections, so the app sees it.

TPROXY solves (1).
(2) requires (1).
(2) is not yet done, and will require a new feature from Envoy (“original-src-cluster”).

@rlenglet are you saying original-src-cluster isn't available in Envoy yet? And once it is available in Envoy, we need to make it available in Istio Pilot so that users can indicate whether they want the source IP to be preserved?

cc @ijsnellf @cmluciano

@rlenglet are you saying original-src-cluster isn't available in Envoy yet? And once it is available in Envoy, we need to make it available in Istio Pilot so that users can indicate whether they want the source IP to be preserved?

Correct. But we should be able to always enable that feature in Istio sidecars. The SO_FREEBIND option doesn't require any capabilities.

We had to make TPROXY optional because it requires CAP_NET_ADMIN. That won't be the case for original-src-cluster.

(2) is not yet done, and will require a new feature from Envoy (“original-src-cluster”).

@rlenglet Can you expand on the missing features upstream that you mention here?

From the Envoy docs, I thought this was already supported in the original destination filter.

@cmluciano original-dst-filter is not relevant to this issue. What it does is restore the original destination IP address of downstream connections. This can then be combined with original-dst-cluster to set the destination IP address of upstream connections to the original destination IP address of the downstream connection.

But neither original-dst-filter nor original-dst-cluster deals with source IP addresses. Currently, whenever Envoy opens an upstream connection, it doesn't bind to any specific IP address. Therefore, in the Istio sidecar case, the TCP/IP stack may implicitly bind upstream connections to one of 2 IP addresses:

  1. In the case of an inbound connection, the upstream connection has 127.0.0.1 as its destination IP address, so Linux chooses to bind the connection to 127.0.0.1, and the upstream connection therefore has 127.0.0.1 as its source IP address.
  2. In the case of an outbound connection, the upstream connection is routed outside of the pod, so Linux chooses to bind the connection to the pod's IP address, and the upstream connection therefore has the pod's IP address as its source IP address.

Case 2 (outbound connections) already has the right source IP address.

The problem we want to address is case 1. To solve that problem, we need Envoy to bind each upstream connection to the source IP address of the corresponding downstream connection. There is currently nothing in Envoy that allows that.

I nicknamed that feature "original-src-cluster", because it is symmetric to the existing "original-dst-cluster" feature.

We also need to be careful to use that original-src-cluster only in case 1 (inbound clusters), and not in case 2 (outbound clusters).

For HTTP, you need to use
x-envoy-external-address=129.42.208.183
x-forwarded-for=9.27.120.57, 129.42.208.183

It works for both REDIRECT and TPROXY.

Since requests go through the ingress, the app will not really get the client address in the TCP connection.

With both TPROXY and REDIRECT, Envoy can see the immediate client (the ingress or the real client), but it terminates the connection and creates a separate TCP connection to the app.

There is a special header for 'plain TCP' connections (same as HAProxy); I don't know if we support it in the Istio APIs.

Filed https://github.com/envoyproxy/envoy/issues/3481 for the original-src-cluster feature.

Thanks @rlenglet, let's follow up over there for now.

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 2 weeks unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

This issue has been automatically closed because it has not had activity in the last month and a half. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted". Thank you for your contributions.

This issue has been automatically marked as stale because it has not had activity in the last 90 days. It will be closed in the next 30 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

not stale

I noticed envoyproxy/envoy#3481 is closed... is this scenario supported by Istio now, or what is still missing? @cmluciano

@linsun This likely is still missing on the Istio side but is supported on the Envoy side. The parameter is source_address (https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/core/address.proto#envoy-api-msg-core-bindconfig).

This should be set in the upstream_bind_config from CDS.
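
For illustration, a minimal hand-written sketch of such an upstream bind config on a cluster, using the v2 core.BindConfig fields (the cluster name is illustrative, and this is not config generated by Pilot):

clusters:
- name: example_inbound_cluster
  connect_timeout: 1s
  upstream_bind_config:
    source_address:
      address: 0.0.0.0       # would have to become the original downstream source IP
      port_value: 0
    freebind: true           # SO_FREEBIND, so binding to a non-local address is allowed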

@linsun @rlenglet I also encountered this problem; modifying the InterceptionMode to TPROXY has no effect. Is there a solution now?

@linsun @rlenglet I also encountered this problem; modifying the InterceptionMode to TPROXY has no effect. Is there a solution now?

I don't think there has been any effort on the Istio side re: this issue.

@linsun Are you saying that the app itself sees 127.0.0.1 as the source IP? That's completely normal, since all inbound connections it sees come from the sidecar Envoy proxy. The app will always see 127.0.0.1 for inbound connections.
The problem that is solved by TPROXY is the source IP that Envoy sees for inbound connections.
So that Mixer reports, etc. contain the correct source IP for inbound connections.

Can you confirm that it is the app itself that sees 127.0.0.1?

@rlenglet I'm confused; I thought REDIRECT only changes the destination IP and doesn't touch the source IP. However, I'm not very familiar with iptables. Is there anything I missed here?

@zhaohuabing REDIRECT may translate the source IP.

I had a similar issue getting the real client IP address within the mesh (not using an ingress gateway). I have a scenario where podA talks to podB, and podB wants to know the real IP address of podA. With Istio, the communication looks like podA -> istio-proxy -> podB-k8s-service -> istio-proxy -> podB. Since istio-proxy terminates the mTLS connection and establishes a new connection to podB, podB sees the client address as 127.0.0.1, as reported in the original description. Since Istio also sanitizes the headers, the X-Forwarded-For header is removed as well. To work around this, I was able to use Envoy's variable substitution (thanks @snowp from the Envoy Slack channel for the tip) and add a custom header to the route with the real client IP address, as below:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-virtual-service
spec:
  gateways:
    - mesh
  hosts:
    - your.hostname.com
  http:
  - route:
    - destination:
        host: your.hostname.com
    headers:
      request:
        add:
          X-Real-Client-Ip: "%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%"

Do we have any update on this?
I'm using Istio 1.4.0 with the interceptionMode set to TPROXY (and even with externalTrafficPolicy: Local for the istio-ingressgateway), but my app container in the pod still sees the source IP as 127.0.0.1.

I'm interested in preserving the source IP address for the TCP connection (not passing it as an HTTP header).

Is there any workaround (like with bootstrapOverride and configuring the proxy) to get this working? If yes, please let me know how to achieve it.

I got the TCP client source IP in the pod container with the following steps:

  1. Edit the "istio" ConfigMap and add "interceptionMode: TPROXY" under defaultConfig (see the sketch below).
  2. Add the "sidecar.istio.io/interceptionMode: TPROXY" annotation to the pod spec.
  3. Delete the ISTIO_TPROXY rule on the sidecar proxy:
     -A ISTIO_TPROXY ! -d 127.0.0.1/32 -p tcp -j TPROXY --on-port 15001 --on-ip 0.0.0.0 --tproxy-mark 0x539/0xffffffff
  4. Add a PREROUTING rule for the application service (to avoid the Kubernetes NAT).

If there is a better/more elegant way to achieve this, please let me know.
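
For step 1 above, a minimal sketch of the mesh-config edit, assuming the stock "istio" ConfigMap in the istio-system namespace (field names per a standard install, not copied from the original comment):

apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |-
    # ... keep the rest of the existing mesh config ...
    defaultConfig:
      # ... keep the other existing defaultConfig fields ...
      interceptionMode: TPROXY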

Delete the ISTIO_TPROXY rule on the sidecar proxy

@raj-nag This disables redirection to the proxy. So basically, you're disabling Istio for this pod.
If this is really what you're trying to achieve, you can disable redirection for the whole pod or only for some ports.
But this is not what this issue is about.

This issue is about a different requirement: redirect inbound traffic to Envoy, and have Envoy use the original downstream IP address as the source address in the upstream connection.

@rlenglet I think you are right. Thanks a lot. I guess I was excited about seeing the source IP in the pod container and didn't notice that it bypassed Istio.

I also need the fix for the original issue, and in addition I need a way to match the source IP (or subnet) in a policy rule to redirect the traffic to a specific Envoy.

Could you please confirm whether this can or can't be achieved with Istio 1.4 (so that I can stop trying)?

Could you please confirm whether this can or can't be achieved with Istio 1.4 (so that I can stop trying)?

No, Istio doesn't support this yet.

Is there any plan to support this feature in Istio, or any workaround for this now?

No, there is currently no plan nor any workaround.

I tried EnvoyFilter with Istio 1.5:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: nginx-original-src
  namespace: default
spec:
  workloadSelector:
    labels:
      app: nginx
  configPatches:
  - applyTo: LISTENER
    match:
      context: SIDECAR_INBOUND
      listener:
        portNumber: 80
    patch:
      operation: MERGE
      value:
        listenerFilters:
        - name: envoy.listener.original_src
          config:
            mark: 133

The generated listener config seemed OK:

        "listenerFilters": [
            {
                "name": "envoy.listener.tls_inspector"
            },
            {
                "name": "envoy.listener.original_src",
                "config": {
                    "mark": 133
                }
            }
        ],
        "listenerFiltersTimeout": "0.100s",
        "continueOnListenerFiltersTimeout": true,
        "trafficDirection": "INBOUND"
    }
]

According to the Envoy documentation, it should work in a sidecar deployment after applying the following commands:

iptables  -t mangle -I PREROUTING -m mark     --mark 123 -j CONNMARK --save-mark
iptables  -t mangle -I OUTPUT     -m connmark --mark 123 -j CONNMARK --restore-mark
ip6tables -t mangle -I PREROUTING -m mark     --mark 123 -j CONNMARK --save-mark
ip6tables -t mangle -I OUTPUT     -m connmark --mark 123 -j CONNMARK --restore-mark
ip rule add fwmark 123 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
ip -6 rule add fwmark 123 lookup 100
ip -6 route add local ::/0 dev lo table 100
echo 1 > /proc/sys/net/ipv4/conf/eth0/route_localnet

But when curling the workload, it returns the error 'upstream connect error or disconnect/reset before headers. reset reason: connection failure'.

And in the istio-proxy logs:

[2020-04-17T13:18:39.407Z] "GET / HTTP/1.1" 503 UF "-" "-" 0 91 1002 - "-" "curl/7.67.0" "e49c1a6c-1c61-9190-88fc-55e29e58f375" "nginx" "127.0.0.1:80" inbound|80|http|nginx.default.svc.k8s.gmem.cc - 172.27.155.121:80 172.27.155.65:48298 - default

any help would be appreciated.

@linsun / @rlenglet is this issue still open, or is there a clear explanation of how to configure Istio to preserve the source IPs of ingress connections?
We have a problem getting the source IP in our Kafka deployment.

@FrimIdan This issue is still open. @gmemcc's attempt to use original_src is a priori going in the right direction.

@rlenglet original_src did work, but an extra iptables rule is needed, for example:

iptables -t mangle -I OUTPUT 1 -s 127.0.0.1/32 ! -d 127.0.0.1/32 \
    -j MARK --set-xmark 0x539/0xffffffff

With the original_src listener filter enabled, Envoy will connect to the local process with the real client IP instead of 127.0.0.1, but the responding packets will be routed through eth0 by default.

Policy routing is already set up in TPROXY mode, so only this one rule is required.

If this solution is OK, I can submit a PR.

PR is welcome

@gmemcc +1, this solution looks good! Please go ahead with a PR. I'm expecting this to be a significant PR, since it will touch the iptables code (and the same change will also need to be replicated into the istio/cni repo's istio-iptables.sh script), Pilot to set up original_src, etc.

My main concern is that we have zero automated test coverage of TPROXY mode, so I'd appreciate it if you could at least document how to test this manually. At a minimum we should have unit test coverage for those changes in istio/istio.
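
A rough manual check could look like this; the Service name and client pod are illustrative, not from this thread:

# deploy the echoserver from earlier in this issue with the TPROXY annotation,
# plus a Service named "echoserver" exposing port 8080, then from another
# mesh-enabled pod:
$ kubectl exec -it <client-pod> -c <app-container> -- curl -s http://echoserver:8080/ | grep client_address
# with source IP preservation working end to end, client_address should show the
# client pod's IP instead of 127.0.0.1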

I think it is safe to close this issue, given that the PR was merged? Please re-open if not.

This is not addressed yet. #28457 is to resolve it.
