Minikube version (use minikube version): v0.23.0
OS: Linux xps 4.9.58 #1-NixOS SMP Sat Oct 21 15:21:39 UTC 2017 x86_64 GNU/Linux
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
ISO version (cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v0.23.6.iso
What happened:
Failed to pull image "docker-registry-luminous-parrot:4000/todo-list@sha256:c3fb64353659cad2e6e96af7b6d5e3e58340af74108a3e2b663f6df77debd872": rpc error: code = Unknown desc = Error response from daemon: Get https://docker-registry-luminous-parrot:4000/v2/: dial tcp: lookup docker-registry-luminous-parrot on 10.0.2.3:53: no such host
even though this service is available:

when I ssh into a pod in the same namespace:
# nslookup docker-registry-luminous-parrot
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: docker-registry-luminous-parrot.default.svc.cluster.local
Address: 10.0.0.178
when I read /etc/resolv.conf from minikube:
$ minikube ssh
$ cat /etc/resolv.conf
nameserver 10.0.2.3
It looks like minikube has the wrong DNS server; 10.0.0.10 resolves the service correctly.
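For reference, the cluster DNS address the VM ought to be using can be checked from outside minikube (a quick check; the kube-dns service name and namespace are the Kubernetes defaults):
$ kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'
10.0.0.10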
What you expected to happen:
I expect kubernetes to be able to pull the image based on that registry host name.
How to reproduce it (as minimally and precisely as possible):
minikube start --insecure-registry 10.0.0.0/24 --disk-size 60g
helm init
helm install incubator/docker-registry
# push an image to the registry
# try to create a deployment with the image using the registry
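The last step amounts to something like this (a sketch; the image and registry names are taken from the error above, and how the image gets pushed is left out here just as in the steps):
$ kubectl run todo-list --image=docker-registry-luminous-parrot:4000/todo-list
The kubelet on the minikube node then fails the pull because it cannot resolve the registry's service name.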
As a test, on the minikube host, I updated /etc/systemd/resolved.conf, adding
DNS=10.0.0.10
and then did systemctl restart systemd-resolved.
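Concretely, that edit amounts to something like this inside the VM (a sketch that simply rewrites the file with the kube-dns address seen above; appending a DNS= line under the existing [Resolve] section works the same way):
$ minikube ssh
$ printf '[Resolve]\nDNS=10.0.0.10\n' | sudo tee /etc/systemd/resolved.conf
$ sudo systemctl restart systemd-resolved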
on minikube host:
$ nslookup docker-registry-luminous-parrot
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'docker-registry-luminous-parrot'
in a pod:
# nslookup docker-registry-luminous-parrot
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: docker-registry-luminous-parrot.default.svc.cluster.local
Address: 10.0.0.178
I too have an issue with this. A clean setup ends up with the DNS set to 10.0.2.3, which is not the IP of the kube-dns service (10.96.0.10). Any idea why this is happening?
This is my workaround:
https://github.com/kubernetes/minikube/issues/1674#issuecomment-354391917
I'm also having an issue with the wrong dns server being set in /etc/resolv.conf.
Same here.
Yeah, this bit me too. Is there any reason not to add kube-dns to resolv.conf (as the first entry)?
I guess it needs to be done as part of the kube-dns add-on, which perhaps complicates things. Otherwise it seems like a simple fix?
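The node's /etc/resolv.conf would then look something like this (addresses taken from earlier in this thread):
nameserver 10.0.0.10   # kube-dns first, so in-cluster service names resolve on the node
nameserver 10.0.2.3    # previous VirtualBox NAT resolver kept as a fallback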
@r2d4 What's the more-info-needed label about?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
I still see this issue with minikube v0.28.0 and kube 1.10.0.
Combining @andrewrk's and @reymont's solutions worked for me as a workaround.
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
I still have this issue that I can push to the internal registry but I can't use that image within a Deployment:
Warning Failed 4h6m (x2 over 4h6m) kubelet, minikube Failed to pull image "registry.kube-system.svc.cluster.local/k8spatterns/random-generator": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.kube-system.svc.cluster.local/v2/: dial tcp: lookup registry.kube-system.svc.cluster.local on 192.168.64.1:53: no such host
Actually, my question is: how is the registry addon supposed to work? Are images stored in this registry supposed to be usable as Pod images?
/remove-lifecycle rotten
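One workaround until node-level DNS is sorted out is to reference the registry by its ClusterIP instead of its service name, since an IP needs no DNS lookup (a sketch; it assumes the addon's Service is named registry in kube-system and that the ClusterIP range is covered by the --insecure-registry flag):
$ REGISTRY_IP=$(kubectl -n kube-system get svc registry -o jsonpath='{.spec.clusterIP}')
$ kubectl create deployment random-generator --image="$REGISTRY_IP/k8spatterns/random-generator"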
I have exactly the same issue (and question) as @rhuss.
Here's my solution that doesn't use the (now removed) kube-dns addon and doesn't use a hardcoded IP address.
Run this script from outside Minikube after every Minikube startup:
DNS=$(kubectl get service/kube-dns --namespace kube-system --template '{{.spec.clusterIP}}')
CONFIGURED=$(echo "[Resolve]\nDNS=$DNS" | base64)
CURRENT=$(minikube ssh "cat /etc/systemd/resolved.conf | base64" | tr -d "\r")
if [ "$CURRENT" != "$CONFIGURED" ]; then
minikube ssh "echo $CONFIGURED | base64 --decode | sudo tee /etc/systemd/resolved.conf"
minikube ssh "sudo systemctl restart systemd-resolved --wait"
echo "Configured and restarted"
else
echo "Already configured"
fi
I wonder if this is something that'd make sense as default Minikube behaviour?
The above required me to restart the coredns pods in kube-system to get everything happy again.
Also, the echo in the 'CONFIGURED' line requires a '-e'.
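In other words, something like this (a sketch of the two tweaks just mentioned):
CONFIGURED=$(echo -e "[Resolve]\nDNS=$DNS" | base64)
# and after the resolver restart, bounce the coredns pods so they pick up the change:
kubectl -n kube-system delete pod -l k8s-app=kube-dns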
I also needed to set:
VBoxManage modifyvm "permanent" --natdnshostresolver1 on
on the VM. I think this may be related to dnsmasq running on the host.
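Note that VBoxManage modifyvm only works while the VM is powered off, so the sequence is roughly (assuming the default VM name "minikube"; adjust if yours differs):
$ minikube stop
$ VBoxManage modifyvm "minikube" --natdnshostresolver1 on
$ minikube start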
I think this may be related to dnsmasq running on the host.
I'm using hyperkit instead of virtualbox; disabling dnsmasq and restarting minikube is what did the trick for me.
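For reference, disabling dnsmasq on a macOS host would look roughly like this (an assumption: dnsmasq was installed and started via Homebrew):
$ sudo brew services stop dnsmasq
$ minikube stop && minikube start --vm-driver=hyperkit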
This is a known issue in Kubernetes, but we can do better here:
https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues
Kubernetes installs do not configure the nodes’ resolv.conf files to use the cluster DNS by default, because that process is inherently distribution-specific. This should probably be implemented eventually.
Still an issue in v1.6.
No change yet - help wanted.