BUG REPORT:
Minikube version (use minikube version): v0.30.0
What VM driver are you using (cat ~/.minikube/machines/minikube/config.json | grep DriverName): VirtualBox
What ISO version are you using (cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.30.0
What happened: After a default installation (no specific arguments passed), minikube has both kube-dns and coredns deployments, with a single kube-dns service pointing at both implementations.
What you expected to happen: After a default installation, minikube addons list indicates coredns is enabled and kube-dns is disabled, so I expect the kube-dns service to target only the coredns pods.
How to reproduce it (as minimally and precisely as possible):
$ minikube start
[...]
$ kubectl -n kube-system get deploy | grep dns
coredns 1 1 1 1 8h
kube-dns 1 1 1 1 8h
Anything else we need to know: Everything seems to work fine with the two implementations running at the same time. If I delete the kube-dns deployment, everything still seems to work (not yet extensively tested, though).
Found the same here. The two are load balanced by the kube-dns service.
By matching queries with logs I found out that only coredns provides authoritative answers, while kube-dns only gives non-authoritative ones.
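A quick way to confirm that both implementations sit behind the one service (assuming, as is typical, that the coredns pods carry the same k8s-app=kube-dns label the kube-dns service selects on) is to compare the service endpoints with the pod IPs:
$ kubectl -n kube-system get endpoints kube-dns -o wide
$ kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
If the IPs of both the coredns and kube-dns pods show up in the endpoints list, queries are indeed being split between the two.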
Sorry, but I forgot a very important piece of information which can explain the issue (but not justify it):
minikube is started with --kubernetes-version=v1.10.8
As a result:
sudo /usr/bin/kubeadm alpha phase addon {{ .DNSAddon }} runs with DNSAddon='kube-dns' (DNSAddon='coredns' is used only if the Kubernetes version is >= 1.12.0) => kube-dns is installed.

This is in fact causing trouble for me, because coredns and kube-dns produce different results for SRV queries. One returns SRV records that point to A records containing 'hashed' IPs, while the other returns SRV records that point to A records containing 'dashed' IPs. Example at https://github.com/akka/akka-management/issues/344#issuecomment-429762710
Either is fine for me, but randomly seeing both in the same cluster leads to trouble.
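A rough way to observe the discrepancy from inside the cluster, assuming a headless service my-svc in namespace default with a TCP port named http (all hypothetical names), and using a common DNS debugging image:
$ kubectl run -it --rm dnsutils --image=tutum/dnsutils --restart=Never -- \
    dig SRV _http._tcp.my-svc.default.svc.cluster.local
Running this a few times should show the SRV targets alternate between the two naming styles, since the kube-dns service load-balances queries across both backends.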
This behavior definitely seems unwanted, and I'd be happy to review any PRs which you think may address this. The supposed default on a fresh install is to not install kube-dns:
$ minikube addons list | grep dns
- coredns: enabled
- kube-dns: disabled
However, the result with minikube start in v0.30.0 is a k8s v1.10.0 cluster that does in fact have both DNS services:
$ kubectl get pods --all-namespaces | grep dns
kube-system coredns-c4cffd6dc-4zdkf 1/1 Running 0 1m
kube-system kube-dns-86f4d74b45-ddc5g 3/3 Running 0 1m
@dlorenc - any insight into what might be going on here?
There was this PR to address a similar issue, but it has gone stale.
The supposed default on a fresh install is to not install kube-dns:
$ minikube addons list | grep dns
- coredns: enabled
- kube-dns: disabled
As a workaround, you can disable kube-dns by going to /etc/kubernetes/addons and deleting the kube-dns manifests.
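An untested sketch of that workaround (the kube-dns-*.yaml file names are a guess; check what is actually in the directory first):
$ minikube ssh
$ ls /etc/kubernetes/addons/
$ sudo rm /etc/kubernetes/addons/kube-dns-*.yaml
Assuming the addon manager reconciles that directory, the kube-dns resources should then be pruned; otherwise, delete the deployment directly as described further down.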
@rajansandeep I have no such directory. Can you post the names of the manifests so I can figure out where they're stored on my system?
@tstromberg I think there is a conflict between the kubeadm addon and the minikube addon for DNS management.
sudo /usr/bin/kubeadm alpha phase addon {{ .DNSAddon }}, which is executed right after kubeadm init (in pkg/minikube/bootstrapper/kubeadm/templates.go), takes care of DNS installation. But later on, the minikube addons are also deployed, and there are DNS addons in that list (minikube addons list | grep dns)...
Why are we seeing this only now? Because the switch from kube-dns to coredns did not happen at the same time for the kubeadm addon and the minikube addon :)
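One way to see which component created each deployment (untested, and it relies on minikube labeling the addons it deploys, which may vary by version):
$ kubectl -n kube-system get deploy --show-labels | grep dns
The kubeadm-installed deployment and the minikube-installed one should carry different labels, making the two installation paths visible.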
As I explained previously:
- the kubeadm DNS addon is coredns starting from K8S 1.12, otherwise it is kube-dns
- the minikube DNS addon is coredns starting from minikube 0.29, otherwise it is kube-dns
For example:
- minikube 0.28 installing K8S 1.10 will only install kube-dns
- minikube 0.30 installing K8S 1.12 will only install coredns
- but minikube 0.30 installing K8S 1.10 will install both :-(
I have no such directory. Can you post the names of the manifests so I can figure out where they're stored on my system?
@redshirtrob Sorry, I should have been more specific.
You'll find the directory after you ssh into minikube.
$ minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ cd /etc/kubernetes/addons/
$ ls
coreDNS-clusterrole.yaml coreDNS-configmap.yaml coreDNS-controller.yaml coreDNS-crbinding.yaml coreDNS-sa.yaml coreDNS-svc.yaml dashboard-dp.yaml dashboard-svc.yaml storage-provisioner.yaml storageclass.yaml
$
(In my case above, I only have CoreDNS installed)
Closing the loop here a bit: I don't have any yaml files in /etc/kubernetes/addons/ when this happens. I've had to manually delete the kube-dns deployment after starting Minikube.
Even though kube-dns was disabled in the config, it was still running.
Confirmed for me; deleting the kube-dns deployment solved the problem.
kubectl delete deployment kube-dns --namespace kube-system
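And to sanity-check that cluster DNS still resolves after the deletion (a quick probe using a common DNS debugging image):
$ kubectl run -it --rm dnsutils --image=tutum/dnsutils --restart=Never -- \
    nslookup kubernetes.default.svc.cluster.local
The kube-dns service stays in place and now only fronts the coredns pods, so lookups should keep working.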
I believe this is fixed in master by way of #3332 - and will be included in the next release.
Great!
Someone should document that this requires minikube start --kubernetes-version=1.12.x or later, since the default is still 1.10.0.
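For example (v1.12.0 here is only an illustration; any 1.12.x tag should do):
$ minikube start --kubernetes-version=v1.12.0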
Fixed in v0.32