Minikube: DNS not working inside minikube pods since 23.6

Created on 11 Dec 2017 · 27 comments · Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Bug report

Please provide the following details:

Environment:

Minikube version (use minikube version): v0.24.1

  • OS (e.g. from /etc/os-release): Mac OS 10.12.6
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.23.6.iso
  • Install tools: Homebrew
  • Others: Kubernetes version 1.7.5, containers built using Alpine 3.6

What happened:

I first noticed this issue when I wasn't able to reach Docker Hub from inside a pod, but it looks like DNS is failing globally. I can ping 8.8.8.8, but anything that needs DNS resolution fails.

/networking # nslookup registry-1.docker.io
nslookup: can't resolve '(null)': Name does not resolve
/networking # nslookup google.com
nslookup: can't resolve '(null)': Name does not resolve
/ # cat /etc/resolv.conf
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

Pinging 10.0.0.10 also fails.

This used to work when I was on version 0.21. It has been broken since I upgraded to 0.23.6 and 0.24.1.

What you expected to happen:

DNS to work.

How to reproduce it (as minimally and precisely as possible):

Start a pod using this YAML, with the API version added:

apiVersion: v1
kind: Pod
metadata:
  generateName: networking-
  labels:
    test: test
spec:
  containers:
  - image: ceridwen/networking:v1
    imagePullPolicy: Always
    name: networking
    readinessProbe:
      tcpSocket:
        port: 5000
      initialDelaySeconds: 5
      periodSeconds: 1
  restartPolicy: Always

I also saw this using the docker:stable-dind image.

Output of minikube logs (if applicable):

I didn't see anything in the logs that looked relevant, I can post them if needed.

Labels: area/dns, co/kube-dns, co/virtualbox, kind/bug, os/macos

Most helpful comment

Same issue:

Linux Mint 18.3
Minikube: 0.25.0
Kubernetes: v1.9.2

All 27 comments

I cannot reproduce it on minikube 0.24.1 (ISO 0.23.6), Fedora 27, VirtualBox 5.2.2 and Kubernetes 1.8.0.

$ kubectl -n kube-system get svc kube-dns 
NAME       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   10.96.0.10   <none>        53/UDP,53/TCP   3m

$ kubectl run -ti --image=busybox --restart=Never busybox 
If you don't see a command prompt, try pressing enter.

/ # ping -c4 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=61 time=29.491 ms
64 bytes from 8.8.8.8: seq=1 ttl=61 time=29.145 ms
64 bytes from 8.8.8.8: seq=2 ttl=61 time=29.454 ms
64 bytes from 8.8.8.8: seq=3 ttl=61 time=28.902 ms

--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 28.902/29.248/29.491 ms

/ # ping -c4 www.google.com
PING www.google.com (216.58.202.164): 56 data bytes
64 bytes from 216.58.202.164: seq=0 ttl=61 time=20.488 ms
64 bytes from 216.58.202.164: seq=1 ttl=61 time=62.319 ms
64 bytes from 216.58.202.164: seq=2 ttl=61 time=21.130 ms
64 bytes from 216.58.202.164: seq=3 ttl=61 time=20.603 ms

--- www.google.com ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 20.488/31.135/62.319 ms

/ # cat /etc/resolv.conf 
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

/ # nslookup google.com 10.96.0.10
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      google.com
Address 1: 2800:3f0:4001:816::200e
Address 2: 216.58.202.174 gru06s30-in-f174.1e100.net

I see the same behavior with k8s v1.8.0 (minikube version v0.24.1, ISO version 0.23.6). Maybe it is macOS-specific?

I'm having similar issues on Mac; I outlined them at the bottom of this bug:
https://github.com/kubernetes/minikube/issues/2240

Seems the cause was tracked down here:
https://github.com/kubernetes/minikube/issues/2280

I can reproduce the issue with k8s v1.7.5 on Linux. It doesn't seem to be a Mac-only issue.

I believe the issue is related to upgrading an existing minikube cluster. I helped someone debug this in Slack. The DNS IP in the pods' /etc/resolv.conf changed during the upgrade, but the actual kube-dns service's clusterIP wasn't updated to match.
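The mismatch is easy to check directly. A minimal sketch, using the two values reported in this thread (10.0.0.10 from the broken pod's /etc/resolv.conf, 10.96.0.10 from a working cluster's kube-dns service); the cluster commands that would produce these values on a live cluster are shown as comments:

```shell
# Sketch of the nameserver/clusterIP mismatch check. On a live cluster you
# would obtain the two values with (not run here):
#   kubectl exec <pod> -- cat /etc/resolv.conf
#   kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'

pod_nameserver="10.0.0.10"   # nameserver from the broken pod's /etc/resolv.conf
svc_cluster_ip="10.96.0.10"  # clusterIP of a working kube-dns service

if [ "$pod_nameserver" != "$svc_cluster_ip" ]; then
  echo "MISMATCH: pods query $pod_nameserver but kube-dns listens on $svc_cluster_ip"
else
  echo "OK: pod nameserver matches the kube-dns clusterIP"
fi
```

When the two addresses differ, every in-pod lookup times out against a nameserver that nothing is serving, which matches the symptoms in this report.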

I completely tear down and recreate the cluster for my use case and still have the issue.

It has to do with pulling the latest version of the minikube ISO (0.23.6). After downgrading to 0.23.5 we no longer have this issue.

2 of us at our company are able to reproduce on Mac OSX 10.12.6 using k8s v1.8.0.

~Doesn't matter which bootstrapper we use (kubeadm or localkube) or which VM driver.~ We've tried every possible combo of both.

EDIT: It does work with kubeadm and the default Virtualbox driver.

You can specify the actual Minikube ISO version like so:

--iso-url=https://storage.googleapis.com/minikube/iso/minikube-v0.23.5.iso

As an aside, is there any reason the ISO semantic version is different from the minikube semantic version? That threw us for a loop as well...
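Putting the workaround together, the full pinned-ISO invocation looks like the sketch below. The URL pattern comes from the flag shown above; the echo stands in for actually running minikube, which is not done here:

```shell
# Sketch: pin the minikube ISO to v0.23.5 via --iso-url, independent of the
# minikube release version. The echo stands in for running minikube.
ISO_VERSION="v0.23.5"
ISO_URL="https://storage.googleapis.com/minikube/iso/minikube-${ISO_VERSION}.iso"
echo "minikube start --iso-url=${ISO_URL}"
```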

OK. @corymsmith and I have both verified that you don't need to downgrade the Minikube ISO on Mac OS X if you are using VirtualBox and kubeadm with Minikube v0.24.1 (Minikube ISO v0.23.6).

We've not been able to get kubeadm to play nicely with any other hypervisor on OS X.

DNS is broken for me as well after upgrading to minikube 0.24.1 (it probably broke in a preceding version) on macOS 10.12.6.

Solved by recreating the cluster with minikube delete (warning: this will remove all of your data) followed by minikube start.

@norbertmocsnik it works for you now because you pulled Kubernetes 1.8 when recreating your cluster. If you pull 1.7.5, you'll continue to have the same problem.

I'm using 0.25.0 and Kubernetes 1.9.0 on MacOS and also have this issue. kube-dns is enabled and running.

Same problem on:

Minikube: v0.25.0
Kubernetes: v1.9.2
Mac OS: v10.13.3

I'm having the same issue on Windows 10.

Minikube: 0.25.0
Kubernetes: v1.9.0

Same issue:

Linux Mint 18.3
Minikube: 0.25.0
Kubernetes: v1.9.2

Same here

macOS: 10.12.6
minikube: v0.26.1
kubernetes: v1.10.0
kubectl: v1.10.1

Tried with vm-driver xhyve, virtualbox, and hyperkit; same result.
dns-addon: kube-dns; container base image: alpine:3.7

On Minishift's end we have similar issues when we use the Minikube ISO.

/cc: @praveenkumar @LalatenduMohanty

Running under macOS I've had the same trouble; the fix is stopping minikube and starting it again whenever the laptop sleeps or locks. A little painful, but at least it gets things working again.

Running:

macOS: 10.12.6
minikube: v0.26.1
kubernetes: v1.10.0
kubectl: v1.10.1

I was facing the same problem and solved it with this:

$ kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
  • minikube version: v0.25.0
  • kubernetes version: 1.8
  • kubectl version: 1.9.2
  • mac osx: 10.13.4

I was getting CrashLoopBackOff for the kube-dns-7bb84f958d-6sglb pod, with these logs:

$ kubectl logs kube-dns-7bb84f958d-6sglb -n kube-system -c kubedns
I0426 13:47:09.970362       1 dns.go:48] version: 1.14.4-2-g5584e04
I0426 13:47:09.971049       1 server.go:66] Using configuration read from ConfigMap: kube-system:kube-dns
I0426 13:47:09.971091       1 server.go:113] FLAG: --alsologtostderr="false"
I0426 13:47:09.971101       1 server.go:113] FLAG: --config-dir=""
I0426 13:47:09.971106       1 server.go:113] FLAG: --config-map="kube-dns"
I0426 13:47:09.971109       1 server.go:113] FLAG: --config-map-namespace="kube-system"
I0426 13:47:09.971112       1 server.go:113] FLAG: --config-period="10s"
I0426 13:47:09.971117       1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0426 13:47:09.971121       1 server.go:113] FLAG: --dns-port="10053"
...
I0426 13:47:13.986287       1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
E0426 13:47:13.987510       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:default" cannot list endpoints at the cluster scope
E0426 13:47:13.987889       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:default" cannot list services at the cluster scope
E0426 13:47:13.989914       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
...

I hope this helps.
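The one-line `kubectl create clusterrolebinding` above can also be expressed as a manifest. A sketch of the equivalent YAML, assuming the rbac.authorization.k8s.io/v1 API (available on the Kubernetes versions discussed in this thread):

```yaml
# Equivalent manifest for the clusterrolebinding command above (a sketch;
# apply with `kubectl apply -f <file>`).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: add-on-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
```

This grants cluster-admin to the kube-system default service account, which is why the "cannot list endpoints/services/configmaps at the cluster scope" errors in the logs above go away; it is a broad workaround rather than a minimal RBAC fix.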

I don't even have kube-dns pods even though the addon is enabled.

What did I do?

  • I started minikube (just as it is, no further settings).
  • I enabled the kube-dns addon.
  • I did minikube stop && minikube delete.
  • I upgraded k8s to v1.10.0 and started minikube again.
    The addon is still listed as enabled, but no pods are there, and I'm unable to toggle it:
$ minikube addons disable kube-dns
minikube is not currently running so the service cannot be accessed

(same for enable)

I had the very same issue as @chilcano posted https://github.com/kubernetes/minikube/issues/2302#issuecomment-384653456.

To solve my issue I also created the cluster role binding, but in my case it was for the service account kube-system:kube-dns.

In short: kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:kube-dns

The DNS issue is still not resolved in my case, but the logs now look normal:

kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns

I0831 18:43:01.026922       1 dns.go:48] version: 1.14.4-2-g5584e04
I0831 18:43:01.035551       1 server.go:66] Using configuration read from ConfigMap: kube-system:kube-dns
I0831 18:43:01.035882       1 server.go:113] FLAG: --alsologtostderr="false"
I0831 18:43:01.035913       1 server.go:113] FLAG: --config-dir=""
I0831 18:43:01.036071       1 server.go:113] FLAG: --config-map="kube-dns"
I0831 18:43:01.036084       1 server.go:113] FLAG: --config-map-namespace="kube-system"
I0831 18:43:01.036087       1 server.go:113] FLAG: --config-period="10s"
I0831 18:43:01.036093       1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0831 18:43:01.036146       1 server.go:113] FLAG: --dns-port="10053"
I0831 18:43:01.036206       1 server.go:113] FLAG: --domain="cluster.local."
I0831 18:43:01.036295       1 server.go:113] FLAG: --federations=""
I0831 18:43:01.036314       1 server.go:113] FLAG: --healthz-port="8081"
I0831 18:43:01.036460       1 server.go:113] FLAG: --initial-sync-timeout="1m0s"
I0831 18:43:01.036474       1 server.go:113] FLAG: --kube-master-url=""
I0831 18:43:01.036479       1 server.go:113] FLAG: --kubecfg-file=""
I0831 18:43:01.036482       1 server.go:113] FLAG: --log-backtrace-at=":0"
I0831 18:43:01.036533       1 server.go:113] FLAG: --log-dir=""
I0831 18:43:01.036539       1 server.go:113] FLAG: --log-flush-frequency="5s"
I0831 18:43:01.036542       1 server.go:113] FLAG: --logtostderr="true"
I0831 18:43:01.036544       1 server.go:113] FLAG: --nameservers=""
I0831 18:43:01.036547       1 server.go:113] FLAG: --stderrthreshold="2"
I0831 18:43:01.036551       1 server.go:113] FLAG: --v="2"
I0831 18:43:01.036630       1 server.go:113] FLAG: --version="false"
I0831 18:43:01.036676       1 server.go:113] FLAG: --vmodule=""
I0831 18:43:01.036791       1 server.go:176] Starting SkyDNS server (0.0.0.0:10053)
I0831 18:43:01.037181       1 server.go:198] Skydns metrics enabled (/metrics:10055)
I0831 18:43:01.037587       1 dns.go:147] Starting endpointsController
I0831 18:43:01.037634       1 dns.go:150] Starting serviceController
I0831 18:43:01.042604       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0831 18:43:01.042636       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0831 18:43:01.346127       1 sync_configmap.go:107] ConfigMap kube-system:kube-dns was created
I0831 18:43:01.843537       1 dns.go:171] Initialized services and endpoints from apiserver
I0831 18:43:01.843581       1 server.go:129] Setting up Healthz Handler (/readiness)
I0831 18:43:01.843593       1 server.go:134] Setting up cache handler (/cache)
I0831 18:43:01.843605       1 server.go:120] Status HTTP port 8081

minikube version: v0.28.2
kubernetes version: 1.10.0
kubectl version: 1.11.2
Ubuntu 16.04.5 LTS
VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): none
ISO version: (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO):

I have the same issue:

minikube version: v0.29.0
kubernetes version: 1.10.0
kubectl version: 1.12.0
Ubuntu 18.04.1 LTS
VM Driver: none
kube-dns addon: enabled

I have a CrashLoopBackOff on coredns with this log:

.:53
2018/09/28 14:54:48 [INFO] CoreDNS-1.2.2
2018/09/28 14:54:48 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b
2018/09/28 14:54:48 [INFO] plugin/reload: Running configuration MD5 = 486384b491cef6cb69c1f57a02087363
2018/09/28 14:54:48 [FATAL] plugin/loop: Seen "HINFO IN 9173194449250285441.798654804648663467." more than twice, loop detected

@lenlen I think that is related to https://github.com/coredns/coredns/issues/2087 when minikube runs with --vm-driver=none.

Edit: it is fixed in k8s 1.11: https://github.com/kubernetes/kubeadm/issues/845

With systemd it seems we still need to take care of this kubelet parameter: https://github.com/kubernetes/kubeadm/issues/845
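A minimal sketch of the failure mode behind that "plugin/loop ... loop detected" error, assuming a systemd-resolved host with --vm-driver=none: the stub address 127.0.0.53 and the kubelet --resolv-conf flag are standard, but the check itself is illustrative and simulated with a sample resolv.conf line rather than reading the real file:

```shell
# With --vm-driver=none on a systemd-resolved host, /etc/resolv.conf points
# at the local stub resolver (127.0.0.53), so CoreDNS forwards queries back
# to itself and its loop plugin aborts. Simulated with a sample line here.
resolv_line="nameserver 127.0.0.53"
ns=$(printf '%s\n' "$resolv_line" | awk '/^nameserver/ {print $2}')
case "$ns" in
  127.*)
    echo "loopback nameserver $ns detected"
    echo "workaround: run kubelet with --resolv-conf=/run/systemd/resolve/resolv.conf"
    ;;
  *)
    echo "nameserver $ns is not a loopback address"
    ;;
esac
```

On a real host, inspect /etc/resolv.conf itself instead of the sample string.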

@lenlen Did you find a way to resolve this? Also hitting the same issue now..

I previously had a crash loop happening on kube-dns but fixed it using @chilcano's workaround here.

@tjohnston-cd The problem was https://github.com/kubernetes/minikube/issues/2575#issuecomment-430252377

Closing as obsolete. minikube now uses CoreDNS rather than kube-dns. If you are seeing similar behavior, it's probably due to a different root cause, so please open a new bug. Thanks!
