Minikube: ln: failed to create symbolic link '/var/lib/minikube/etcd/minikube': File exists

Created on 23 Sep 2019 · 15 comments · Source: kubernetes/minikube

Subsequent error on minikube restart:

  minikube v1.4.0 on Darwin 10.14.3
  💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
  🏃  Using the running virtualbox "minikube" VM ...
  ⌛  Waiting for the host to be provisioned ...
  🐳  Preparing Kubernetes v1.16.0 on Docker 18.09.8 ...
  🔄  Relaunching Kubernetes using kubeadm ...
  E0920 14:15:18.905571   77264 kubeadm.go:415] failed to create compat symlinks: cmd failed: sudo ln -s /data/minikube /var/lib/minikube/etcd
  ln: failed to create symbolic link '/var/lib/minikube/etcd/minikube': File exists

: Process exited with status 1

_Originally posted by @vsethi in https://github.com/kubernetes/minikube/issues/5415#issuecomment-533848677_
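The failing step is the kubeadm compat symlink: minikube runs sudo ln -s /data/minikube /var/lib/minikube/etcd inside the VM, and a previous run has already left an entry at /var/lib/minikube/etcd/minikube. A minimal manual workaround, assuming the existing entry is a stale symlink rather than real etcd data (check with ls -l before removing anything):

minikube ssh
# Inside the VM: inspect the conflicting path first.
ls -l /var/lib/minikube/etcd/minikube
# If it is only a stale symlink, remove it so the restart can recreate it.
sudo rm /var/lib/minikube/etcd/minikube
sudo ln -s /data/minikube /var/lib/minikube/etcd/minikube
exit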

kind/bug lifecycle/stale priority/backlog

Most helpful comment

Hit this issue today. Let me know if there is information that would be helpful to include.

@seancarroll @dbamaster I am curious: does this issue happen consistently, or only sometimes?

If this issue is still happening for you, I wonder if deleting the ISO cache would solve your problem:

rm ~/.minikube/cache/iso/*

or if minikube delete and starting fresh would solve the problem?

All 15 comments

My suspicion is that this regression may be tied only to restarts on older ISOs.

Hit this issue today. Let me know if there is information that would be helpful to include.

Hit this issue today. Let me know if there is information that would be helpful to include.

@seancarroll @dbamaster I am curious: does this issue happen consistently, or only sometimes?

If this issue is still happening for you, I wonder if deleting the ISO cache would solve your problem:

rm ~/.minikube/cache/iso/*

or if minikube delete and starting fresh would solve the problem?
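For anyone following along, the full reset being suggested here is roughly the sequence below; note that minikube delete destroys the existing cluster and its workloads:

minikube delete                # remove the VM and all cluster state
rm -f ~/.minikube/cache/iso/*  # drop cached ISOs so a current one is re-downloaded
minikube start                 # recreate the cluster from scratch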

Since I first encountered it, it's been consistent. I deleted minikube, but that didn't help, so I tried reinstalling VirtualBox as well, but that hasn't resolved the issue for me either.

Interesting. As far as I knew, this issue should only happen on existing VMs with older ISOs, so minikube delete should have solved it.

> Hit this issue today. Let me know if there is information that would be helpful to include.

> @seancarroll @dbamaster I am curious: does this issue happen consistently, or only sometimes?
>
> If this issue is still happening for you, I wonder if deleting the ISO cache would solve your problem:
>
> rm ~/.minikube/cache/iso/*
>
> or if minikube delete and starting fresh would solve the problem?

It happened after the last update.
Deleting minikube is not an option for me: I have a few deployments there, and unfortunately I didn't save the actual files :(

I got this error:

💣  Error restarting cluster: addon phase: command failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml
stdout:
stderr: error execution phase addon/coredns: unable to fetch CoreDNS current installed version and ConfigMap.: multiple DNS addon deployments found: [{{ } {coredns kube-system /apis/apps/v1/namespaces/kube-system/deployments/coredns c0600e1f-a2ac-41ad-9fd3-2c40d6e8d0d9 2098 1 2019-10-26 13:15:06 +0000 UTC map[k8s-app:kube-dns] map[deployment.kubernetes.io/revision:1] [] [] []} {0xc0006a796c &LabelSelector{MatchLabels:map[string]string{k8s-app: kube-dns,},MatchExpressions:[]LabelSelectorRequirement{},} {{ 0 0001-01-01 00:00:00 +0000 UTC map[k8s-app:kube-dns] map[] [] [] []} {[{config-volume {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:coredns,},Items:[]KeyToPath{KeyToPath{Key:Corefile,Path:Corefile,Mode:nil,},},DefaultMode:420,Optional:nil,} nil nil nil nil nil nil nil nil nil}}] [] [{coredns k8s.gcr.io/coredns:1.6.2 [] [-conf /etc/coredns/Corefile] [{dns 0 53 UDP } {dns-tcp 0 53 TCP } {metrics 0 9153 TCP }] [] [] {map[memory:{{178257920 0} {} 170Mi BinarySI}] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{73400320 0} {} 70Mi BinarySI}]} [{config-volume true /etc/coredns }] [] &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:true,AllowPrivilegeEscalation:false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0006a7de0 Default map[beta.kubernetes.io/os:linux] coredns coredns false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [{CriticalAddonsOnly Exists } {node-role.kubernetes.io/master NoSchedule }] [] system-cluster-critical nil [] map[] []}} {RollingUpdate &RollingUpdateDeployment{MaxUnavailable:1,MaxSurge:25%,}} 0 0xc0006a7ec0 false 0xc0006a7ec4} {1 2 2 0 0 2 [{Available False 2019-10-26 13:15:29 +0000 UTC 2019-10-26 13:15:29 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing False 2019-10-26 13:25:30 +0000 UTC 2019-10-26 13:25:30 +0000 UTC ProgressDeadlineExceeded ReplicaSet "coredns-5644d7b6d9" has timed out progressing.}] }} {{ } {kube-dns kube-system /apis/apps/v1/namespaces/kube-system/deployments/kube-dns 6c669dd9-b474-11e8-b560-080027dd2040 995 1 2018-09-09 21:07:56 +0000 UTC map[k8s-app:kube-dns] map[deployment.kubernetes.io/revision:1] [] [] []} {0xc0006a7fd8 &LabelSelector{MatchLabels:map[string]string{k8s-app: kube-dns,},MatchExpressions:[]LabelSelectorRequirement{},} {{ 0 0001-01-01 00:00:00 +0000 UTC map[k8s-app:kube-dns] map[] [] [] []} {[{kube-dns-config {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:kube-dns,},Items:[]KeyToPath{},DefaultMode:420,Optional:*true,} nil nil nil nil nil nil nil nil nil}}] [] [{kubedns 
k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8 [] [--domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2] [{dns-local 0 10053 UDP } {dns-tcp-local 0 10053 TCP } {metrics 0 10055 TCP }] [] [{PROMETHEUS_PORT 10055 nil}] {map[memory:{{178257920 0} {} 170Mi BinarySI}] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{73400320 0} {} 70Mi BinarySI}]} [{kube-dns-config false /kube-dns-config }] [] &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/kubedns,Port:{0 10054 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readiness,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil nil /dev/termination-log File IfNotPresent nil false false false} {dnsmasq k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 [] [-v=2 -logtostderr -configDir=/etc/k8s/dns/dnsmasq-nanny -restartDnsmasq=true -- -k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] [{dns 0 53 UDP } {dns-tcp 0 53 TCP }] [] [] {map[] map[cpu:{{150 -3} {} 150m DecimalSI} memory:{{20971520 0} {} 20Mi BinarySI}]} [{kube-dns-config false /etc/k8s/dns/dnsmasq-nanny }] [] &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/dnsmasq,Port:{0 10054 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} nil nil nil /dev/termination-log File IfNotPresent nil false false false} {sidecar k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8 [] [--v=2 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV] [{metrics 0 10054 TCP }] [] [] {map[] map[cpu:{{10 -3} {} 10m DecimalSI} memory:{{20971520 0} {} 20Mi BinarySI}]} [] [] &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{0 10054 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000718350 Default map[] kube-dns kube-dns false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] &Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:beta.kubernetes.io/arch,Operator:In,Values:[amd64],},},MatchFields:[]NodeSelectorRequirement{},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,} default-scheduler [{CriticalAddonsOnly Exists } {node-role.kubernetes.io/master NoSchedule }] [] nil [] map[] []}} {RollingUpdate &RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:10%,}} 0 0xc0007183f0 false 0xc0007183f4} {1 1 1 0 0 1 [{Progressing True 2018-09-09 21:08:45 +0000 UTC 2018-09-09 21:08:01 +0000 UTC NewReplicaSetAvailable ReplicaSet "kube-dns-86f4d74b45" has successfully progressed.} {Available 
False 2019-10-26 13:15:29 +0000 UTC 2019-10-26 13:15:29 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}] }}]
To see the stack trace of this error execute with --v=5 or higher
: Process exited with status 1
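For reference, this failure is distinct from the symlink error: the addon phase found both a current coredns Deployment and a leftover kube-dns Deployment (created 2018-09-09) in kube-system, and "multiple DNS addon deployments found" means kubeadm will not choose between them. A possible, untested cleanup is to confirm both exist and then delete the stale legacy one:

kubectl -n kube-system get deployments             # both coredns and kube-dns should be listed
kubectl -n kube-system delete deployment kube-dns  # remove the stale legacy DNS addon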

> rm ~/.minikube/cache/iso/*

This didn't help in my case. However, minikube delete did work. :tada:

> Deleting minikube is not an option for me: I have a few deployments there, and unfortunately I didn't save the actual files :(

@dbamaster what about temporarily downgrading and doing kubectl get deployment FOO -o yaml > my-deployment-foo.yml to save your deployments?
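Building on that, a small shell sketch (the loop and backup file names are illustrative, not a minikube feature) that dumps every deployment in every namespace before deleting the cluster:

for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  for d in $(kubectl -n "$ns" get deployments -o jsonpath='{.items[*].metadata.name}'); do
    # One YAML file per deployment, e.g. backup-default-myapp.yml
    kubectl -n "$ns" get deployment "$d" -o yaml > "backup-$ns-$d.yml"
  done
done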


I did that, thank you. I will try to reinstall minikube and see how it goes.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@seancarroll @WtfJoke @dbamaster @tstromberg is this still an issue? I have not seen it in a while; could it have been fixed by the latest version?

@medyagh I only experienced this error once, and since then I haven't worked much with k8s/minikube. So I can't tell if it's really fixed, but I will let you know if I experience it again.

> @seancarroll @WtfJoke @dbamaster @tstromberg is this still an issue? I have not seen it in a while; could it have been fixed by the latest version?

Hey @medyagh, thanks for the follow-up; I will check and update.
It has been a long time since I uninstalled minikube from my laptop.

Regards,

FWIW, I had this problem today and minikube delete solved it for me.

I had this problem after upgrading macOS to Catalina, and minikube delete did solve the issue.
