Minikube: DNS lookup not working when starting minikube with --dns-domain

Created on 3 Jul 2017 · 20 comments · Source: kubernetes/minikube

Minikube version 0.19.1

Environment:

  • OS: OS X 10.11.4
  • VM driver: virtualbox
  • ISO version: minikube-1.13.1-3.iso
  • Install tools:
  • Others:

What happened:
If I start minikube with --dns-domain, the DNS entries are still registered under the domain cluster.local.

What you expected to happen:
I would expect the DNS entries to be registered under the domain provided in the startup parameters.

How to reproduce it (as minimally and precisely as possible):
Start minikube with --dns-domain test.cluster.local
Run nslookup kubernetes.default.svc.test.cluster.local 10.0.0.1 -> does not work
Run nslookup kubernetes.default.svc.cluster.local 10.0.0.1 -> works, but should not

Anything else we need to know:
To me it looks like this is caused by the hard-coded cluster.local in https://github.com/kubernetes/minikube/blob/master/deploy/addons/kube-dns/kube-dns-controller.yaml
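For illustration, grepping that manifest for the literal suffix shows the kind of hard-coded arguments involved (a sketch against a repo checkout; the exact matches below are approximate and may differ between minikube versions):

$ grep -n 'cluster.local' deploy/addons/kube-dns/kube-dns-controller.yaml
# typical matches (approximate):
#   --domain=cluster.local.
#   --server=/cluster.local/127.0.0.1#10053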

kind/bug

All 20 comments

Looks like we need to propagate the --dns-domain flag into the DNS addon. I think this should be doable on minikube start.

Also, if I start minikube with --extra-config=kubelet.ClusterDomain=mydomain.com, the DNS records are still registered with the cluster.local suffix.
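A quick way to see this from the outside (a sketch; assumes the addon's deployment is named kube-dns in the kube-system namespace) is to inspect the container args for domain flags:

$ kubectl -n kube-system get deployment kube-dns -o yaml | grep -i domain
# typically still shows --domain=cluster.local. regardless of the flags above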

Given that modern distros claim .local for Avahi (grmbl grmbl), changing the DNS name is kinda important for routing into the cluster (it's not like we can use ELBs in minikube). I feel this should be a high-priority issue.

This is my workaround.

# enable dns
$ minikube start
$ minikube addons enable kube-dns
$ minikube addons list
$ eval $(minikube docker-env)

$ kubectl get svc kube-dns -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   19d

$ minikube ssh
$ sudo vi /etc/resolv.conf
# add this line to resolv.conf:
#   nameserver 10.96.0.10

$ nslookup kube-dns.kube-system.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system.svc.cluster.local
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
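If you prefer not to edit the file by hand, a non-interactive sketch of the same change (assumes minikube ssh passes a quoted command through to the VM shell, which recent versions do):

$ minikube ssh "echo 'nameserver 10.96.0.10' | sudo tee -a /etc/resolv.conf"
$ minikube ssh "nslookup kube-dns.kube-system.svc.cluster.local"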

@reymont
How were you able to run minikube addons enable kube-dns after minikube stop?

$ minikube addons enable kube-dns
    minikube is not currently running so the service cannot be accessed

EDIT:
The add-on is enabled:

$  minikube addons list
- default-storageclass: enabled
- kube-dns: enabled
...

The DNS service you are seeing does not appear for me:

$ kubectl get service -n kube-system
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.102.204.117   <none>        80:30000/TCP   7m

@trobert2 Yes, you are right. minikube addons enable kube-dns must be executed after minikube start. I've fixed it. Thank you.

EDIT:

Make sure the kube-dns pod's STATUS is Running.

kubectl get pods -n kube-system
NAME                            READY     STATUS    RESTARTS   AGE
kube-dns-86f6f55dd5-gkd2q       3/3       Running   30         24d
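For scripting that check, kubectl wait (available in kubectl 1.11+) can block until the pod is ready; k8s-app=kube-dns is the label the addon normally uses, so this is a sketch under that assumption:

$ kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s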

I can reproduce it with v1.9.0 too.
The inter-pod communication doesn't work.

minikube version: v0.25.0

kubectl apply -f registry-gateway-deployment.yml -f registry-gateway-service.yaml --record
kubectl apply -f auth-deployment.yaml -f auth-service.yaml --record

kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
auth-68c7556479-7wpdn         1/1       Running   0          2h
infra-apps-6ccccd558b-wk7pb   1/1       Running   0          2h

kubectl exec auth-68c7556479-7wpdn -it -- bash

I am trying to do an nslookup for the infra-apps service from the auth pod.

nslookup infra-apps.default.svc.cluster.local

nslookup: can't resolve '(null)': Name does not resolve


Name:      infra-apps.default.svc.cluster.local
Address 1: 10.98.32.142 infra-apps.default.svc.cluster.local
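Note that the lookup above did in fact succeed; the can't resolve '(null)' line is a known cosmetic quirk of some busybox nslookup builds. A throwaway pod running busybox 1.28 (whose nslookup is commonly recommended for DNS debugging; dnstest is just a placeholder name) gives cleaner output:

$ kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- \
    nslookup infra-apps.default.svc.cluster.local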

When you enter the VM with minikube ssh and edit /etc/resolv.conf, any subsequent deployment change restores the nameserver to 10.0.2.3, as anyone will find, so that method is invalid.

Does anyone have another?
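One way to cope with that is to re-apply the change from the host after every minikube start rather than trying to keep it inside the VM (a sketch; assumes minikube ssh accepts a quoted command):

$ minikube start
$ minikube ssh "echo 'nameserver 10.96.0.10' | sudo tee -a /etc/resolv.conf"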

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

In your minikube VM, change /etc/systemd/resolved.conf from:

DNS=

to:

DNS=

then:

sudo systemctl restart systemd-resolved
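As a concrete sketch (8.8.8.8 is only an example upstream resolver here; substitute whichever DNS server you actually need):

$ minikube ssh
$ sudo sed -i 's/^#\?DNS=.*/DNS=8.8.8.8/' /etc/systemd/resolved.conf
$ sudo systemctl restart systemd-resolved
$ nslookup kubernetes.io    # verify resolution works again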

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

The issue is still present in 0.30.0.

In your minikube VM, change /etc/systemd/resolved.conf from:

DNS=

to:

DNS=

then:

sudo systemctl restart systemd-resolved

How do you get this sort of change to persist? Ideally, I'd like to just copy the /etc/systemd/resolved.conf from the host.

Experienced the same error with minikube + Win10.

My workaround (following @gitdr's suggestion):

$ minikube ssh      # from Git Bash on Windows
$ sudo su -         # within the minikube VM
# cat << EOF > /etc/systemd/resolved.conf
> [Resolve]
> DNS=8.8.8.8
> FallbackDNS=8.8.8.8 8.8.4.4
> EOF

# systemctl restart systemd-resolved
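To verify from inside the VM afterwards (any resolvable public name will do):

# nslookup kubernetes.io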

-hth

/remove-lifecycle rotten

/reopen

@ThisIsQasim: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

I can confirm that this error still persists with the hyperkit driver on macOS. Editing systemd-resolved fixes it.

The issue is still present in minikube v1.12.3. This worked for me:

minikube start --dns-domain='custom.domain' --extra-config='kubelet.cluster-domain=custom.domain'
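To check that the custom domain actually took effect, one option is a throwaway busybox 1.28 pod (a common choice for DNS debugging; dnstest is just a placeholder name):

kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- \
    nslookup kubernetes.default.svc.custom.domain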