TL;DR the Helm chart for ingress-nginx should set externalTrafficPolicy: "Local" to enable source IP capture with X-Forwarded-For and X-Real-IP.
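In Helm-values terms, the proposed default would look roughly like this (a sketch against the stable/nginx-ingress chart; the `controller.service.externalTrafficPolicy` key name is assumed from that chart's values.yaml):

```yaml
# values.yaml override for the stable/nginx-ingress chart (sketch)
controller:
  service:
    # "Local" preserves the client source IP: kube-proxy routes only to
    # endpoints on the node that received the traffic, so no SNAT occurs.
    externalTrafficPolicy: "Local"
```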
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/.): No
What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.): forwarded headers, x-forwarded-for
Is this a BUG REPORT or FEATURE REQUEST? (choose one): bug
NGINX Ingress controller version: https://hub.helm.sh/charts/stable/nginx-ingress/1.12.1
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T14:25:20Z", GoVersion:"go1.12.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.8", GitCommit:"7d3d6f113e933ed1b44b78dff4baf649258415e5", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:16Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
$ curl https://staging.example.com/i/echo -i
HTTP/2 200
date: Tue, 06 Aug 2019 09:40:53 GMT
content-type: text/plain
set-cookie: __cfduid=snip; expires=Wed, 05-Aug-20 09:40:53 GMT; path=/; domain=.example.com; HttpOnly
vary: Accept-Encoding
strict-transport-security: max-age=15724800; includeSubDomains; preload
referrer-policy: no-referrer
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 50200d473ce9caec-ARN
Request served by echo-d577bb9d5-gwxpj
HTTP/1.1 GET /i/echo
Host: staging.example.com
X-Original-Forwarded-For: 84.217.103.138
User-Agent: curl/7.54.0
X-Real-Ip: 10.0.3.209
Cf-Visitor: {"scheme":"https"}
Cf-Connecting-Ip: 84.217.103.138
Cdn-Loop: cloudflare
X-Forwarded-For: 10.0.3.209
X-Forwarded-Port: 443
Accept-Encoding: gzip
Cf-Ipcountry: SE
X-Original-Uri: /i/echo
X-Scheme: https
Cf-Ray: 50200d473ce9caec-ARN
Accept: */*
X-Request-Id: 24fe2a1ec3eb68c6e7cd3a4956afc8bd
X-Forwarded-Host: staging.example.com
X-Forwarded-Proto: https
Environment:
uname -a): none

What happened:
I was expecting this config map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  # https://support.cloudflare.com/hc/en-us/articles/200170986-How-does-Cloudflare-handle-HTTP-Request-headers-
  # https://danielmiessler.com/blog/getting-real-ip-addresses-using-cloudflare-nginx-and-varnish/
  # Alt https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#proxy-real-ip-cidr
  # + use-proxy-protocol
  # https://www.cloudflare.com/ips/
  # https://stackoverflow.com/questions/56489147/how-to-restore-original-client-ip-from-cloudflare-with-nginx-ingress-controller
  use-forwarded-headers: "true"
  forwarded-for-header: "Cf-Connecting-Ip"
  proxy-real-ip-cidr: "173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/12,172.64.0.0/13,131.0.72.0/22,2400:cb00::/32,2606:4700::/32,2803:f800::/32,2405:b500::/32,2405:8100::/32,2a06:98c0::/29,2c0f:f248::/32"
to rewrite the X-Forwarded-For header seen by the backend to the value of Cf-Connecting-Ip, but it does not.
EDIT: setting proxy-real-ip-cidr: "0.0.0.0/0" makes it all work, but that is insecure. The problem is that $remote_addr is taken to be 10.0.3.206, i.e. a cluster-internal IP, despite the controller Service resource having the runtime state described here: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
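For reference, the insecure workaround mentioned above looks like this (trusting every peer as a proxy, which lets any direct client spoof the forwarded header; shown only to demonstrate the bug):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
data:
  use-forwarded-headers: "true"
  forwarded-for-header: "Cf-Connecting-Ip"
  # Insecure: 0.0.0.0/0 treats every address as a trusted proxy,
  # so any client can forge X-Forwarded-For. Do not use in production.
  proxy-real-ip-cidr: "0.0.0.0/0"
```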
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
New nginx ingress, set up cloudflare proxy in front, https via let's encrypt on both, then the above configmap.
Anything else we need to know:
Full repro config:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - update
  - watch
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - ""
  resourceNames:
  - ingress-controller-leader-nginx
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - create
  - get
  - update
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - update
  - watch
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: kube-system
---
apiVersion: v1
data:
  referrer-policy: no-referrer
  x-content-type-options: nosniff
  x-frame-options: SAMEORIGIN
  x-xss-protection: 1; mode=block
kind: ConfigMap
metadata:
  name: nginx-ingress-add-headers
  namespace: kube-system
---
apiVersion: v1
data:
  add-headers: kube-system/nginx-ingress-add-headers
  forwarded-for-header: Cf-Connecting-Ip
  hsts-include-subdomains: "true"
  hsts-preload: "true"
  proxy-real-ip-cidr: 173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/12,172.64.0.0/13,131.0.72.0/22,2400:cb00::/32,2606:4700::/32,2803:f800::/32,2405:b500::/32,2405:8100::/32,2a06:98c0::/29,2c0f:f248::/32
  use-forwarded-headers: "true"
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  loadBalancerIP: 35.228.195.116
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller-metrics
  namespace: kube-system
spec:
  ports:
  - name: metrics
    port: 9913
    targetPort: metrics
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller-stats
  namespace: kube-system
spec:
  ports:
  - name: stats
    port: 18080
    targetPort: stats
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    component: default-backend
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-default-backend
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: nginx-ingress
    component: default-backend
    release: nginx-ingress
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  minReadySeconds: 0
  replicas: 3
  revisionHistoryLimit: 10
  strategy: {}
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: controller
        release: nginx-ingress
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=kube-system/nginx-ingress-default-backend
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=kube-system/nginx-ingress-controller
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 18080
          name: stats
          protocol: TCP
        - containerPort: 10254
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 33
      dnsPolicy: ClusterFirst
      hostNetwork: false
      serviceAccountName: nginx-ingress
      terminationGracePeriodSeconds: 60
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    component: default-backend
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-default-backend
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: default-backend
        release: nginx-ingress
    spec:
      containers:
      - args: null
        image: k8s.gcr.io/defaultbackend-amd64:1.5
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: nginx-ingress-default-backend
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 6
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 0
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 5
        resources: {}
        securityContext:
          runAsUser: 65534
      terminationGracePeriodSeconds: 60
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
      release: nginx-ingress
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  maxReplicas: 3
  metrics:
  - resource:
      name: cpu
      targetAverageUtilization: 50
    type: Resource
  - resource:
      name: memory
      targetAverageUtilization: 50
    type: Resource
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: nginx-ingress-controller
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Why should I be responsible for bumping your issues? If you want to leave issues unresolved, please just close them without acting on them.
/remove-lifecycle stale
I am also having this problem and I think it would be a nice feature.
Updating the config to externalTrafficPolicy: "Local" immediately causes connection timeouts. I think this is because I have multiple nodes and a number of Deployments with a single replica; if the destination pod is not on the same node as the nginx-ingress pod, the connection times out.
I'm using nginx-ingress instead of ingress-nginx, but I believe the underlying issue is the same.
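For what it's worth, the timeouts are expected behavior with "Local": kube-proxy stops forwarding load-balancer traffic off-node, so any node not running an ingress-controller pod drops connections instead of SNAT-forwarding them; the cloud LB is supposed to detect this via the Service's healthCheckNodePort and stop sending to those nodes. A sketch of the Service from the repro config above, adjusted accordingly (assumes the cloud provider honors the health-check port, as GKE does):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  type: LoadBalancer
  # Preserves the client source IP. Only nodes running a controller pod
  # will pass the LB health check (see spec.healthCheckNodePort, which
  # Kubernetes allocates automatically); traffic to other nodes is dropped.
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
```

Note this only requires the *controller* pods (not every backend) to be spread across nodes; running the controller as a DaemonSet or with enough replicas avoids the timeouts.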