Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Please provide the following details:
Environment: Ubuntu 18.04, fresh install
Minikube version (use minikube version): v0.32.0
VM driver (use cat ~/.minikube/machines/minikube/config.json | grep DriverName): None
ISO version (use cat ~/.minikube/machines/minikube/config.json | grep -i ISO, or minikube ssh cat /etc/VERSION): ?
What happened:
CoreDNS CrashLoopBackOff. Log shows:
[FATAL] plugin/loop: Seen "HINFO IN xxxxxxxxx." more than twice, loop detected
Related issue in CoreDNS: https://github.com/coredns/coredns/issues/2087
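A quick way to check whether this loop condition is present on the host (a sketch; the 127.0.0.53 stub address and the /etc/resolv.conf path assume a stock Ubuntu install running systemd-resolved):

```shell
# has_loopback_resolver FILE - succeed if FILE lists a 127.x nameserver.
# On Ubuntu 18.04, systemd-resolved writes "nameserver 127.0.0.53" into
# /etc/resolv.conf; when kubelet hands that file to the CoreDNS pod with
# --vm-driver=none, CoreDNS forwards queries back to the host and the
# loop plugin aborts with the FATAL message above.
has_loopback_resolver() {
  grep -qE '^nameserver[[:space:]]+127\.' "$1"
}

has_loopback_resolver /etc/resolv.conf \
  && echo "loopback resolver found - CoreDNS loop likely with --vm-driver=none" \
  || echo "no loopback resolver in /etc/resolv.conf"
```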
What you expected to happen:
Expected it to work!
How to reproduce it (as minimally and precisely as possible):
Deploy minikube on Ubuntu 18.04 with "None" driver
Output of minikube logs (if applicable):
Anything else we need to know:
Solution:
Add instructions to disable systemd-resolved and use dnsmasq instead. This worked for me:
sudo apt-get install dnsmasq
sudo systemctl stop systemd-resolved
sudo systemctl disable systemd-resolved
sudo nano /etc/NetworkManager/NetworkManager.conf
# add under [main]
# dns=dnsmasq
sudo cp /etc/resolv.conf /etc/resolv.conf.bak
sudo rm /etc/resolv.conf; sudo ln -s /var/run/NetworkManager/resolv.conf /etc/resolv.conf
sudo systemctl start dnsmasq
sudo systemctl restart NetworkManager
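For reference, after the edit above the [main] section of /etc/NetworkManager/NetworkManager.conf should look roughly like this (the plugins line varies by install and is shown only as an example):

```ini
[main]
plugins=ifupdown,keyfile
dns=dnsmasq
```

With dns=dnsmasq set, NetworkManager spawns its own dnsmasq instance and writes a resolv.conf that points at it, which is why /etc/resolv.conf is re-linked to /var/run/NetworkManager/resolv.conf in the steps above.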
Thanks for the info! coredns/coredns#2087 and https://stackoverflow.com/questions/53075796/coredns-pods-have-crashloopbackoff-or-error-state/53414041#53414041 were very helpful in understanding this issue. The basic problem, as I understand it, is that your machine is already running a DNS server, and that it causes a feedback loop with CoreDNS.
I'm still not sure of the best way to resolve this. You mention that you are on Ubuntu 18.04, which means the answer linked above may be applicable.
Do you mind sharing the output of:
ps -afe | grep kubelet
and:
systemctl list-unit-files | grep enabled | egrep -i 'resolv|dns'
At a minimum, minikube should be able to detect this awkward configuration and warn about it instead of generating a confusing error.
@tstromberg I have added the solution that worked for me to the bottom of my issue. I believe Ubuntu 18.04 runs systemd-resolved by default, which is the cause, and that it can be disabled; a simple note somewhere would suffice.
Here is the output from my current setup (which works after disabling systemd-resolved). When I attempt another clean setup, I will post the output of the "broken" clean install.
>> ps -afe | grep kubelet
root 1444 1 7 Jan08 ? 1-03:11:26 /usr/bin/kubelet --authorization-mode=Webhook --client-ca-file=/var/lib/minikube/certs/ca.crt --fail-swap-on=false --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --cgroup-driver=cgroupfs --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --hostname-override=minikube --feature-gates=DevicePlugins=true
kubeflow 21728 21655 0 22:45 pts/0 00:00:00 grep --color=auto kubelet
root 24881 24861 4 Jan18 ? 06:26:16 kube-apiserver --authorization-mode=Node,RBAC --enable-admission-plugins=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --feature-gates=DevicePlugins=true --advertise-address=172.17.37.244 --allow-privileged=true --client-ca-file=/var/lib/minikube/certs/ca.crt --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-key-file=/var/lib/minikube/certs/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/var/lib/minikube/certs/apiserver.crt --tls-private-key-file=/var/lib/minikube/certs/apiserver.key
>> systemctl list-unit-files | grep enabled | egrep -i 'resolv|dns'
dns-clean.service enabled
dnsmasq.service enabled
pppd-dns.service enabled
I just started playing with minikube and bumped into this while running 18.10. Should we at least start documenting these bits so people know what to do? Maybe a troubleshooting or FAQ section?
On my Ubuntu 18.04, starting minikube with a kubelet.resolv-conf option fixed this. This basically ports the fix from https://github.com/coredns/coredns/issues/2087 into the minikube config.
minikube --vm-driver=none start \
--extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
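The reason this flag helps: systemd-resolved keeps the real upstream nameservers in /run/systemd/resolve/resolv.conf, while /etc/resolv.conf only lists the local 127.0.0.53 stub. A small helper (a sketch; the paths assume a systemd-resolved host) can pick the right file before starting minikube:

```shell
# pick_resolv_conf - print the resolv.conf that contains real upstream
# nameservers. /run/systemd/resolve/resolv.conf exists only on hosts
# running systemd-resolved; elsewhere /etc/resolv.conf is already usable.
pick_resolv_conf() {
  if [ -e /run/systemd/resolve/resolv.conf ]; then
    echo /run/systemd/resolve/resolv.conf
  else
    echo /etc/resolv.conf
  fi
}
```

It could then be used as: minikube start --vm-driver=none --extra-config=kubelet.resolv-conf="$(pick_resolv_conf)"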
That worked for me - same symptoms as above. Running Ubuntu 18.04.1.
Note that minikube will let you do a "soft" restart, i.e. without a stop, but this did not solve the problem for me. The reason is that a soft restart does not actually restart the minikube kubelet when vm-driver=none.
The problem was fixed after I did minikube stop and then minikube start (with the extra-config arg described in another reply). Interestingly, none of the kube-system namespace pods get restarted by a minikube stop/start cycle. This makes sense for vm-driver=none, because all the pods are actually processes running straight on the host.
Closing this, as the issue should have been solved by https://github.com/kubernetes/minikube/pull/4465.
Please reopen if the issue is still there.
@medyagh I am still facing the same issue
Minikube version = 1.2.0
Ubuntu 18.04.2 LTS
sudo minikube --vm-driver=none start --cpus 8 --memory 8048
kubectl logs coredns-7559cdd6f8-tg9c5
.:53
2019/07/29 02:21:59 [INFO] CoreDNS-1.2.2
2019/07/29 02:21:59 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b
2019/07/29 02:21:59 [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
2019/07/29 02:21:59 [FATAL] plugin/loop: Seen "HINFO IN 400104270526716248.6901523470932003903." more than twice, loop detected
Edit: It seems like something is wrong in my setup. I tried this on a clean Ubuntu VM and it works fine; I will need to debug my setup now.
For running minikube inside an LXC container with Ubuntu 20.04 (Focal):
minikube start --vm-driver=none --extra-config kubeadm.ignore-preflight-errors=SystemVerification --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf