Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Feature Request
If this is a FEATURE REQUEST, please:
I'd like to be able to use the name of another service in my cluster as the auth-url. Being able to do ingress.kubernetes.io/auth-url: "http://auth-service" would be convenient.
I naively tried doing this and it didn't work: the nginx-ingress pod's DNS wasn't able to resolve the name.
As a workaround, I'm currently setting a fixed clusterIP on the auth-service and using that as the URL.
For example:
apiVersion: v1
kind: Service
metadata:
  name: auth-service
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    k8s-app: auth-pod
  clusterIP: 10.11.240.88
And then set ingress.kubernetes.io/auth-url: "http://10.11.240.88"
@jobevers please use the full name: auth-service.default.svc.cluster.local
Closing. Please reopen if the error persists after you use the full service name
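For example, the annotation on the Ingress would look like this (assuming the service lives in the default namespace; adjust the namespace to match your cluster):

```yaml
metadata:
  annotations:
    ingress.kubernetes.io/auth-url: "http://auth-service.default.svc.cluster.local"
```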
Sorry for the long delay in getting back to this. I tried using the full name and wasn't able to get it to work. I think the resolv.conf might not be right for the ingress controller (scroll to the bottom).
What I did:
Set the auth-url: ingress.kubernetes.io/auth-url: http://demo-auth-service.default.svc.cluster.local
Then, when making a request, I see in the logs:
2017/11/15 22:28:55 [error] 455#455: *35 demo-auth-service.default.svc.cluster.local could not be resolved (3: Host not found), client: 108.26.195.69, server: [...], request: "GET /review HTTP/2.0", subrequest: "/_external-auth-L3BhdGllbnQtcmV2aWV3", host: [...]
So, I start debugging:
Save the ingress controller pod name:
export INGRESS=$(kubectl -n kube-system get pods | grep ingress | head -1 | cut -f 1 -d " ")
Looking at the resulting nginx config:
kubectl -n kube-system exec -it $INGRESS -- cat /etc/nginx/nginx.conf | less
I see a bunch of sections that look like:
location = /_external-auth-L3BhdGllbnQtcmV2aWV3 {
    internal;
    set $proxy_upstream_name "internal";
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_pass_request_headers on;
    proxy_set_header Host demo-auth-service.default.svc.cluster.local;
    proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
    proxy_set_header X-Auth-Request-Redirect $request_uri;
    proxy_ssl_server_name on;
    client_max_body_size "500m";
    set $target http://demo-auth-service.default.svc.cluster.local;
    proxy_pass $target;
}
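If I'm reading the generated config right, this also explains why the lookup happens at request time: when proxy_pass is given a variable ($target), nginx defers DNS resolution until the request and uses its resolver directive, not the system resolver's search domains. A sketch of what that relevant piece looks like, with kube-dns as the resolver (the 10.19.240.10 address is the kube-dns ClusterIP from my cluster; yours will differ, and the location name here is illustrative):

```nginx
# With a variable in proxy_pass, nginx resolves the hostname at request
# time via the resolver directive below; pointing it at kube-dns is what
# would let cluster-internal names like *.svc.cluster.local resolve.
resolver 10.19.240.10 valid=30s;

location = /_external-auth-example {
    set $target http://demo-auth-service.default.svc.cluster.local;
    proxy_pass $target;
}
```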
Check if I can access the demo-auth-service:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#how-do-i-test-if-it-is-working
$ kubectl exec -ti busybox -- nslookup demo-auth-service
Server: 10.19.240.10
Address 1: 10.19.240.10 kube-dns.kube-system.svc.cluster.local
Name: demo-auth-service
Address 1: 10.19.250.98 demo-auth-service.default.svc.cluster.local
Checking whether I can access the auth service from the ingress pod. It works if I use the IP address:
$ kubectl -n kube-system exec -it $INGRESS -- curl http://10.19.250.98
{
"description": "Authorization header is missing",
"error": "Authorization Required"
}
But not if I use the name:
$ kubectl -n kube-system exec -it $INGRESS -- curl http://demo-auth-service.default.svc.cluster.local
curl: (6) Could not resolve host: demo-auth-service.default.svc.cluster.local
command terminated with exit code 6
Curious, I checked the resolv.conf:
$ kubectl -n kube-system exec -it $INGRESS -- cat /etc/resolv.conf
nameserver 169.254.169.254
search c.pd-playground.internal google.internal
which looks different from the busybox pod, where I was able to resolve the auth service:
$ kubectl exec -ti busybox -- cat /etc/resolv.conf
nameserver 10.19.240.10
search default.svc.cluster.local svc.cluster.local cluster.local c.pd-playground.internal google.internal
options ndots:5
Setting dnsPolicy: ClusterFirstWithHostNet should solve the issue, if the ingress controller is using hostNetwork.
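A minimal sketch of that change in the controller's Deployment (the metadata names are illustrative; only hostNetwork and dnsPolicy are the point here):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  template:
    spec:
      hostNetwork: true
      # With hostNetwork, the default ClusterFirst policy falls back to the
      # node's resolv.conf; ClusterFirstWithHostNet restores kube-dns.
      dnsPolicy: ClusterFirstWithHostNet
```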
Same problem. The above doesn't seem to fix it.
What's curious is that the ingress pod can resolve its own address without a problem: if I exec /bin/sh on the ingress controller pod, I can curl ingress-nginx.ingress-nginx (its service name and namespace), but I get a 500 on an authenticated endpoint and the logs show the same name-lookup error. If I do the same via just the IP of the ingress (we aren't using hostnames anywhere yet), it works fine and redirects all the way to the OAuth login page.
I tried editing the ingress controller's Deployment to change the dnsPolicy; it didn't break anything, but it didn't fix it either. I verified that the Pod was recreated with the updated dnsPolicy.
Side note: why is nginx trying to look up that name just to issue a 302? That seems to be the core of the problem, and the lookup looks wholly unnecessary.
Has it been fixed already?