When using a variable in proxy_pass to proxy to an internal OpenShift service from an nginx container, NGINX can't resolve the service's DNS name, because proxy_pass with a variable requires a resolver directive. For instance:
location /api/ {
    set $pass_url http://service.namespace.svc:8080$request_uri;
    proxy_pass $pass_url;
}
When using standard Kubernetes, I can use kube-dns.kube-system.svc.cluster.local as the resolver:
resolver kube-dns.kube-system.svc.cluster.local;
But OpenShift doesn't provide this service. I've tried using the IP from the container's /etc/resolv.conf, which is just one of the nodes in my cluster that is running the DNS server, but nginx still can't resolve the name.
The weirdest part is that nslookup service.namespace.svc from inside the container's terminal uses the nameserver in /etc/resolv.conf and works fine.
Is there an equivalent to the Kubernetes DNS hostname in OpenShift that I could use?
openshift v3.11.0+d0c29df-98
kubernetes v1.11.0+d4cacc0
Steps to reproduce: run ps aux | grep nginx to find the master PID, reload NGINX via kill -HUP <pid>, then request /api/. NGINX can't resolve the internal hostname. Expected behavior: NGINX proxies to the internal service.
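A possible explanation for the nslookup discrepancy (an assumption, not confirmed in this thread): nslookup applies the search domains from /etc/resolv.conf, while nginx's resolver does not, so a partially qualified name like service.namespace.svc resolves in a shell but not in nginx. A sketch combining a literal resolver IP with the fully qualified name (172.30.0.2 is a placeholder for whatever /etc/resolv.conf actually contains):

```nginx
location /api/ {
    # Placeholder IP: use the nameserver from the container's /etc/resolv.conf
    resolver 172.30.0.2 valid=10s;
    # Fully qualified name, since nginx's resolver ignores search domains
    set $pass_url http://service.namespace.svc.cluster.local:8080$request_uri;
    proxy_pass $pass_url;
}
```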
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
I am facing the exact same problem. So I'll just watch this issue, if anybody comes up with a solution. :)
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
having the same issue, unable to use kube-dns.kube-system.svc.cluster.local as the resolver
@ainiml: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
having the same issue, unable to use kube-dns.kube-system.svc.cluster.local as the resolver
/reopen
Having the same problem
@omnibrain: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Having the same problem
Has anyone resolved this? I read that you can use the DNS server on the master at port 8053, but nginx still fails to resolve (a manual curl from inside the nginx container does work).
I wonder if there's been a change to Docker. I used to be able to ping other containers by their Docker container names, but that no longer works.