Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Minikube version (use minikube version): v0.18.0
Environment:
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
ISO version (cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v0.18.0.iso
What happened:
DNS doesn't work in a container when run within my minikube cluster. I can ping hosts by IP address so there is internet connectivity. Also, if I run the container outside the cluster, pinging by hostname works.
What you expected to happen:
DNS should work
How to reproduce it (as minimally and precisely as possible):
Run a container and try to ping a host on the Internet by name, e.g. google.com.
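For example, something like the following should reproduce it (just a sketch; the pod name and busybox image are arbitrary):
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup google.com
kubectl run -it --rm dns-test --image=busybox --restart=Never -- ping -c 3 google.com
Inside the cluster the name lookup fails, while pinging a raw IP still works.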
Anything else we need to know:
Can't use the xhyve driver because of #1452 (so I haven't tested whether this affects clusters using the xhyve driver)
Can you post the output of 'minikube logs'? Can you also post the logs of the DNS pod?
Sure. Output of minikube logs is:
May 05 09:34:53 minikube localkube[3455]: :[]v1.AttachedVolume(nil)}}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'NodeAllocatableEnforced' 'Updated Node Allocatable limit across pods'
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.069383 3455 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.075030 3455 status_manager.go:140] Starting to sync pod status with apiserver
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.075049 3455 kubelet.go:1741] Starting kubelet main sync loop.
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.075065 3455 kubelet.go:1752] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.075594 3455 volume_manager.go:248] Starting Kubelet Volume Manager
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.083641 3455 trace.go:61] Trace "Create /api/v1/namespaces/default/services" (started 2017-05-05 09:34:52.356162046 +0000 UTC):
May 05 09:34:53 minikube localkube[3455]: [10.899µs] [10.899µs] About to convert to expected version
May 05 09:34:53 minikube localkube[3455]: [55.218µs] [44.319µs] Conversion done
May 05 09:34:53 minikube localkube[3455]: [710.877271ms] [710.822053ms] About to store object in database
May 05 09:34:53 minikube localkube[3455]: [727.305455ms] [16.428184ms] Object stored in database
May 05 09:34:53 minikube localkube[3455]: [727.317913ms] [12.458µs] Self-link added
May 05 09:34:53 minikube localkube[3455]: "Create /api/v1/namespaces/default/services" [727.388487ms] [70.574µs] END
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.097612 3455 factory.go:309] Registering Docker factory
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.140938 3455 leaderelection.go:189] successfully acquired lease kube-system/kube-controller-manager
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.141199 3455 factory.go:89] Registering Rkt factory
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.141213 3455 factory.go:54] Registering systemd factory
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.141380 3455 factory.go:86] Registering Raw factory
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.141503 3455 manager.go:1106] Started watching for new ooms in manager
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.143933 3455 oomparser.go:185] oomparser using systemd
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.145242 3455 manager.go:288] Starting recovery of all containers
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.165689 3455 event.go:217] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"17aa312d-3176-11e7-86bd-080027f2fcd9", APIVersion:"v1", ResourceVersion:"20", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.179581 3455 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.214537 3455 manager.go:293] Recovery completed
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.223734 3455 rkt.go:56] starting detectRktContainers thread
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.231128 3455 kubelet_node_status.go:77] Attempting to register node minikube
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.251251 3455 kubelet_node_status.go:80] Successfully registered node minikube
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.253018 3455 kuberuntime_manager.go:902] updating runtime config through cri with podcidr
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.254483 3455 docker_service.go:277] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.256317 3455 kubelet_network.go:326] Setting Pod CIDR: 10.180.1.0/24 ->
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.256949 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/cluster-admin
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.272115 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:discovery
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.275219 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:basic-user
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.278817 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/admin
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.285460 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/edit
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.289499 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/view
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.290933 3455 controllermanager.go:437] Started "endpoint"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.291751 3455 controllermanager.go:437] Started "horizontalpodautoscaling"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.292120 3455 controllermanager.go:437] Started "statefuleset"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.292543 3455 controllermanager.go:437] Started "replicationcontroller"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.292863 3455 horizontal.go:139] Starting HPA Controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.292908 3455 stateful_set.go:144] Starting statefulset controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.292944 3455 replication_controller.go:150] Starting RC Manager
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.293380 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:heapster
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.298412 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:node
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.301705 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.303555 3455 controllermanager.go:437] Started "garbagecollector"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.304347 3455 controllermanager.go:437] Started "job"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.304649 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.305024 3455 controllermanager.go:437] Started "disruption"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.305418 3455 controllermanager.go:437] Started "podgc"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.306127 3455 controllermanager.go:437] Started "resourcequota"
May 05 09:34:53 minikube localkube[3455]: E0505 09:34:53.306615 3455 util.go:45] Metric for serviceaccount_controller already registered
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.306784 3455 controllermanager.go:437] Started "serviceaccount"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.307267 3455 controllermanager.go:437] Started "daemonset"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.307692 3455 controllermanager.go:437] Started "cronjob"
May 05 09:34:53 minikube localkube[3455]: E0505 09:34:53.308031 3455 certificates.go:38] Failed to start certificate controller: open /etc/kubernetes/ca/ca.pem: no such file or directory
May 05 09:34:53 minikube localkube[3455]: W0505 09:34:53.308139 3455 controllermanager.go:434] Skipping "certificatesigningrequests"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.308402 3455 garbagecollector.go:111] Garbage Collector: Initializing
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.308687 3455 disruption.go:269] Starting disruption controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.308861 3455 resource_quota_controller.go:240] Starting resource quota controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.308967 3455 serviceaccounts_controller.go:122] Starting ServiceAccount controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.309071 3455 daemoncontroller.go:199] Starting Daemon Sets controller manager
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.309168 3455 cronjob_controller.go:95] Starting CronJob Manager
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.308045 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.378605 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.384442 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.387747 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.390386 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.393172 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.395648 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.398895 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.402033 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.404894 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.408719 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.409469 3455 garbagecollector.go:116] Garbage Collector: All resource monitors have synced. Proceeding to collect garbage
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.412584 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.415460 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.417426 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.420084 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.423785 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.426427 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.429045 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.431926 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.434585 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.437411 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.440344 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.442992 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.446118 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.448702 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.450843 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.454052 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.456954 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.459337 3455 storage_rbac.go:166] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.462700 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.465039 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.466752 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.470137 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:node
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.472805 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.475706 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.477836 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.480082 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.482435 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.485126 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.488038 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.490522 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.493883 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.496765 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.499128 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.502004 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.504870 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.513620 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.553467 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.593450 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.635287 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.674397 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.713079 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.753558 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.793514 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.812083 3455 controllermanager.go:437] Started "namespace"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.812198 3455 namespace_controller.go:189] Starting the NamespaceController
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.812978 3455 controllermanager.go:437] Started "deployment"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.813141 3455 deployment_controller.go:147] Starting deployment controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.813532 3455 controllermanager.go:437] Started "replicaset"
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.813607 3455 replica_set.go:155] Starting ReplicaSet controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.814132 3455 controllermanager.go:437] Started "ttl"
May 05 09:34:53 minikube localkube[3455]: W0505 09:34:53.814247 3455 controllermanager.go:421] "bootstrapsigner" is disabled
May 05 09:34:53 minikube localkube[3455]: W0505 09:34:53.814364 3455 controllermanager.go:421] "tokencleaner" is disabled
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.814463 3455 plugins.go:101] No cloud provider specified.
May 05 09:34:53 minikube localkube[3455]: W0505 09:34:53.814602 3455 controllermanager.go:449] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
May 05 09:34:53 minikube localkube[3455]: W0505 09:34:53.814731 3455 controllermanager.go:453] Unsuccessful parsing of service CIDR : invalid CIDR address:
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.814980 3455 nodecontroller.go:219] Sending events to api server.
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.814308 3455 ttlcontroller.go:117] Starting TTL controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.815291 3455 taint_controller.go:157] Sending events to api server.
May 05 09:34:53 minikube localkube[3455]: E0505 09:34:53.815786 3455 controllermanager.go:494] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
May 05 09:34:53 minikube localkube[3455]: W0505 09:34:53.815895 3455 controllermanager.go:506] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.815983 3455 controllermanager.go:519] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.816988 3455 attach_detach_controller.go:223] Starting Attach Detach Controller
May 05 09:34:53 minikube localkube[3455]: E0505 09:34:53.824927 3455 actual_state_of_world.go:461] Failed to set statusUpdateNeeded to needed true because nodeName="minikube" does not exist
May 05 09:34:53 minikube localkube[3455]: E0505 09:34:53.825137 3455 actual_state_of_world.go:475] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true because nodeName="minikube" does not exist
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.832415 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.872916 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.911237 3455 disruption.go:277] Sending events to api server.
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.915446 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.916407 3455 nodecontroller.go:633] Initializing eviction metric for zone:
May 05 09:34:53 minikube localkube[3455]: W0505 09:34:53.916472 3455 nodecontroller.go:956] Missing timestamp for Node minikube. Assuming now as a timestamp.
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.916491 3455 nodecontroller.go:872] NodeController detected that zone is now in state Normal.
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.916602 3455 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"17bf047d-3176-11e7-86bd-080027f2fcd9", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in NodeController
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.916641 3455 taint_controller.go:180] Starting NoExecuteTaintManager
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.953461 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
May 05 09:34:53 minikube localkube[3455]: I0505 09:34:53.993652 3455 storage_rbac.go:194] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
May 05 09:34:54 minikube localkube[3455]: I0505 09:34:54.034030 3455 storage_rbac.go:225] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
May 05 09:34:54 minikube localkube[3455]: I0505 09:34:54.074706 3455 storage_rbac.go:225] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
May 05 09:34:54 minikube localkube[3455]: I0505 09:34:54.121214 3455 storage_rbac.go:225] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
May 05 09:34:54 minikube localkube[3455]: I0505 09:34:54.153611 3455 storage_rbac.go:225] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
May 05 09:34:54 minikube localkube[3455]: I0505 09:34:54.193221 3455 storage_rbac.go:255] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
May 05 09:34:54 minikube localkube[3455]: I0505 09:34:54.235069 3455 storage_rbac.go:255] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
May 05 09:34:54 minikube localkube[3455]: I0505 09:34:54.276935 3455 storage_rbac.go:255] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
May 05 09:34:58 minikube localkube[3455]: I0505 09:34:58.116399 3455 reconciler.go:231] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/4fb35b6f38517771d5bfb1cffb784d97-addons" (spec.Name: "addons") pod "4fb35b6f38517771d5bfb1cffb784d97" (UID: "4fb35b6f38517771d5bfb1cffb784d97")
May 05 09:34:58 minikube localkube[3455]: I0505 09:34:58.218363 3455 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/4fb35b6f38517771d5bfb1cffb784d97-addons" (spec.Name: "addons") pod "4fb35b6f38517771d5bfb1cffb784d97" (UID: "4fb35b6f38517771d5bfb1cffb784d97").
May 05 09:34:58 minikube localkube[3455]: I0505 09:34:58.388046 3455 kuberuntime_manager.go:458] Container {Name:kube-addon-manager Image:gcr.io/google-containers/kube-addon-manager:v6.4-alpha.1 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:5 scale:-3} d:{Dec:<nil>} s:5m Format:DecimalSI} memory:{i:{value:52428800 scale:0} d:{Dec:<nil>} s:50Mi Format:BinarySI}]} VolumeMounts:[{Name:addons ReadOnly:true MountPath:/etc/kubernetes/ SubPath:}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 05 09:36:23 minikube localkube[3455]: W0505 09:36:23.484470 3455 conversion.go:110] Could not get instant cpu stats: different number of cpus
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.004367 3455 event.go:217] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"kubernetes-dashboard", UID:"621b67f2-3176-11e7-86bd-080027f2fcd9", APIVersion:"v1", ResourceVersion:"260", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-mmw9s
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.017566 3455 replication_controller.go:206] Observed updated replication controller kubernetes-dashboard. Desired pod count change: 1->1
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.019338 3455 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kubernetes-dashboard-mmw9s", UID:"621c4f04-3176-11e7-86bd-080027f2fcd9", APIVersion:"v1", ResourceVersion:"261", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kubernetes-dashboard-mmw9s to minikube
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.071594 3455 replication_controller.go:206] Observed updated replication controller kubernetes-dashboard. Desired pod count change: 1->1
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.152958 3455 event.go:217] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"kube-dns-v20", UID:"6231591f-3176-11e7-86bd-080027f2fcd9", APIVersion:"v1", ResourceVersion:"272", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-dns-v20-c4h89
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.176105 3455 reconciler.go:231] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/621c4f04-3176-11e7-86bd-080027f2fcd9-default-token-3zkzf" (spec.Name: "default-token-3zkzf") pod "621c4f04-3176-11e7-86bd-080027f2fcd9" (UID: "621c4f04-3176-11e7-86bd-080027f2fcd9")
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.186215 3455 replication_controller.go:206] Observed updated replication controller kube-dns-v20. Desired pod count change: 1->1
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.186612 3455 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-v20-c4h89", UID:"6232df82-3176-11e7-86bd-080027f2fcd9", APIVersion:"v1", ResourceVersion:"273", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-dns-v20-c4h89 to minikube
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.212699 3455 replication_controller.go:206] Observed updated replication controller kube-dns-v20. Desired pod count change: 1->1
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.277092 3455 reconciler.go:231] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/6232df82-3176-11e7-86bd-080027f2fcd9-default-token-3zkzf" (spec.Name: "default-token-3zkzf") pod "6232df82-3176-11e7-86bd-080027f2fcd9" (UID: "6232df82-3176-11e7-86bd-080027f2fcd9")
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.286865 3455 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/621c4f04-3176-11e7-86bd-080027f2fcd9-default-token-3zkzf" (spec.Name: "default-token-3zkzf") pod "621c4f04-3176-11e7-86bd-080027f2fcd9" (UID: "621c4f04-3176-11e7-86bd-080027f2fcd9").
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.367228 3455 kuberuntime_manager.go:458] Container {Name:kubernetes-dashboard Image:gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-3zkzf ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.398472 3455 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/6232df82-3176-11e7-86bd-080027f2fcd9-default-token-3zkzf" (spec.Name: "default-token-3zkzf") pod "6232df82-3176-11e7-86bd-080027f2fcd9" (UID: "6232df82-3176-11e7-86bd-080027f2fcd9").
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.492166 3455 kuberuntime_manager.go:458] Container {Name:kubedns Image:gcr.io/google_containers/kubedns-amd64:1.9 Command:[] Args:[--domain=cluster.local. --dns-port=10053] WorkingDir: Ports:[{Name:dns-local HostPort:0 ContainerPort:10053 Protocol:UDP HostIP:} {Name:dns-tcp-local HostPort:0 ContainerPort:10053 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI}]} VolumeMounts:[{Name:default-token-3zkzf ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz-kubedns,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readiness,Port:8081,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.492266 3455 kuberuntime_manager.go:458] Container {Name:dnsmasq Image:gcr.io/google_containers/kube-dnsmasq-amd64:1.4 Command:[] Args:[--cache-size=1000 --no-resolv --server=127.0.0.1#10053 --log-facility=-] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-3zkzf ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz-dnsmasq,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 05 09:36:58 minikube localkube[3455]: I0505 09:36:58.492301 3455 kuberuntime_manager.go:458] Container {Name:healthz Image:gcr.io/google_containers/exechealthz-amd64:1.2 Command:[] Args:[--cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null --url=/healthz-dnsmasq --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null --url=/healthz-kubedns --port=8080 --quiet] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:8080 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:52428800 scale:0} d:{Dec:<nil>} s:50Mi Format:BinarySI}] Requests:map[memory:{i:{value:52428800 scale:0} d:{Dec:<nil>} s:50Mi Format:BinarySI} cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI}]} VolumeMounts:[{Name:default-token-3zkzf ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 05 09:36:58 minikube localkube[3455]: W0505 09:36:58.507460 3455 docker_sandbox.go:263] Couldn't find network status for kube-system/kubernetes-dashboard-mmw9s through plugin: invalid network status for
May 05 09:36:58 minikube localkube[3455]: W0505 09:36:58.645976 3455 docker_sandbox.go:263] Couldn't find network status for kube-system/kube-dns-v20-c4h89 through plugin: invalid network status for
May 05 09:36:58 minikube localkube[3455]: W0505 09:36:58.650886 3455 docker_sandbox.go:263] Couldn't find network status for kube-system/kube-dns-v20-c4h89 through plugin: invalid network status for
May 05 09:36:58 minikube localkube[3455]: W0505 09:36:58.653632 3455 docker_sandbox.go:263] Couldn't find network status for kube-system/kubernetes-dashboard-mmw9s through plugin: invalid network status for
May 05 09:37:13 minikube localkube[3455]: W0505 09:37:13.585398 3455 conversion.go:110] Could not get instant cpu stats: different number of cpus
May 05 09:37:23 minikube localkube[3455]: W0505 09:37:23.610333 3455 conversion.go:110] Could not get instant cpu stats: different number of cpus
May 05 09:37:55 minikube localkube[3455]: W0505 09:37:55.978105 3455 docker_sandbox.go:263] Couldn't find network status for kube-system/kubernetes-dashboard-mmw9s through plugin: invalid network status for
May 05 09:37:55 minikube localkube[3455]: I0505 09:37:55.997217 3455 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/621c4f04-3176-11e7-86bd-080027f2fcd9-default-token-3zkzf" (spec.Name: "default-token-3zkzf") pod "621c4f04-3176-11e7-86bd-080027f2fcd9" (UID: "621c4f04-3176-11e7-86bd-080027f2fcd9").
May 05 09:37:56 minikube localkube[3455]: I0505 09:37:56.128130 3455 replication_controller.go:206] Observed updated replication controller kubernetes-dashboard. Desired pod count change: 1->1
May 05 09:37:56 minikube localkube[3455]: I0505 09:37:56.996495 3455 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/621c4f04-3176-11e7-86bd-080027f2fcd9-default-token-3zkzf" (spec.Name: "default-token-3zkzf") pod "621c4f04-3176-11e7-86bd-080027f2fcd9" (UID: "621c4f04-3176-11e7-86bd-080027f2fcd9").
May 05 09:38:04 minikube localkube[3455]: I0505 09:38:04.569263 3455 event.go:217] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"user", UID:"89c87dde-3176-11e7-86bd-080027f2fcd9", APIVersion:"extensions", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set user-2095183306 to 1
May 05 09:38:04 minikube localkube[3455]: I0505 09:38:04.578666 3455 event.go:217] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"user-2095183306", UID:"89c9684e-3176-11e7-86bd-080027f2fcd9", APIVersion:"extensions", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: user-2095183306-sc37t
May 05 09:38:04 minikube localkube[3455]: I0505 09:38:04.587589 3455 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"user-2095183306-sc37t", UID:"89cae620-3176-11e7-86bd-080027f2fcd9", APIVersion:"v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned user-2095183306-sc37t to minikube
May 05 09:38:04 minikube localkube[3455]: I0505 09:38:04.742152 3455 reconciler.go:231] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/89cae620-3176-11e7-86bd-080027f2fcd9-local-volume" (spec.Name: "local-volume") pod "89cae620-3176-11e7-86bd-080027f2fcd9" (UID: "89cae620-3176-11e7-86bd-080027f2fcd9")
May 05 09:38:04 minikube localkube[3455]: I0505 09:38:04.742242 3455 reconciler.go:231] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/89cae620-3176-11e7-86bd-080027f2fcd9-default-token-q1wd5" (spec.Name: "default-token-q1wd5") pod "89cae620-3176-11e7-86bd-080027f2fcd9" (UID: "89cae620-3176-11e7-86bd-080027f2fcd9")
May 05 09:38:04 minikube localkube[3455]: I0505 09:38:04.843797 3455 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/89cae620-3176-11e7-86bd-080027f2fcd9-local-volume" (spec.Name: "local-volume") pod "89cae620-3176-11e7-86bd-080027f2fcd9" (UID: "89cae620-3176-11e7-86bd-080027f2fcd9").
May 05 09:38:04 minikube localkube[3455]: I0505 09:38:04.853099 3455 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/89cae620-3176-11e7-86bd-080027f2fcd9-default-token-q1wd5" (spec.Name: "default-token-q1wd5") pod "89cae620-3176-11e7-86bd-080027f2fcd9" (UID: "89cae620-3176-11e7-86bd-080027f2fcd9").
May 05 09:38:04 minikube localkube[3455]: I0505 09:38:04.894097 3455 kuberuntime_manager.go:458] Container {Name:user Image:user Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:8080 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:DEBUG Value:True ValueFrom:nil}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:local-volume ReadOnly:false MountPath:/app SubPath:user} {Name:default-token-q1wd5 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 05 09:38:05 minikube localkube[3455]: W0505 09:38:05.020533 3455 docker_sandbox.go:263] Couldn't find network status for default/user-2095183306-sc37t through plugin: invalid network status for
May 05 09:38:05 minikube localkube[3455]: W0505 09:38:05.027217 3455 docker_sandbox.go:263] Couldn't find network status for default/user-2095183306-sc37t through plugin: invalid network status for
May 05 09:38:06 minikube localkube[3455]: W0505 09:38:06.039787 3455 docker_sandbox.go:263] Couldn't find network status for default/user-2095183306-sc37t through plugin: invalid network status for
May 05 09:38:06 minikube localkube[3455]: I0505 09:38:06.051889 3455 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/89cae620-3176-11e7-86bd-080027f2fcd9-default-token-q1wd5" (spec.Name: "default-token-q1wd5") pod "89cae620-3176-11e7-86bd-080027f2fcd9" (UID: "89cae620-3176-11e7-86bd-080027f2fcd9").
May 05 09:38:07 minikube localkube[3455]: I0505 09:38:07.058121 3455 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/89cae620-3176-11e7-86bd-080027f2fcd9-default-token-q1wd5" (spec.Name: "default-token-q1wd5") pod "89cae620-3176-11e7-86bd-080027f2fcd9" (UID: "89cae620-3176-11e7-86bd-080027f2fcd9").
May 05 09:38:55 minikube localkube[3455]: W0505 09:38:55.264432 3455 docker_sandbox.go:263] Couldn't find network status for kube-system/kube-dns-v20-c4h89 through plugin: invalid network status for
Output of kubedns (with kubectl logs --namespace=kube-system kube-dns-v20-c4h89 kubedns):
I0505 09:38:54.795827 1 dns.go:42] version: v1.6.0-alpha.0.680+3872cb93abf948-dirty
I0505 09:38:54.796458 1 server.go:107] Using https://10.0.0.1:443 for kubernetes master, kubernetes API: <nil>
I0505 09:38:54.801767 1 server.go:63] ConfigMap not configured, using values from command line flags
I0505 09:38:54.803907 1 server.go:113] FLAG: --alsologtostderr="false"
I0505 09:38:54.803946 1 server.go:113] FLAG: --config-map=""
I0505 09:38:54.803958 1 server.go:113] FLAG: --config-map-namespace="kube-system"
I0505 09:38:54.803968 1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0505 09:38:54.803976 1 server.go:113] FLAG: --dns-port="10053"
I0505 09:38:54.803986 1 server.go:113] FLAG: --domain="cluster.local."
I0505 09:38:54.804000 1 server.go:113] FLAG: --federations=""
I0505 09:38:54.804009 1 server.go:113] FLAG: --healthz-port="8081"
I0505 09:38:54.804051 1 server.go:113] FLAG: --kube-master-url=""
I0505 09:38:54.804103 1 server.go:113] FLAG: --kubecfg-file=""
I0505 09:38:54.804159 1 server.go:113] FLAG: --log-backtrace-at=":0"
I0505 09:38:54.804187 1 server.go:113] FLAG: --log-dir=""
I0505 09:38:54.804214 1 server.go:113] FLAG: --log-flush-frequency="5s"
I0505 09:38:54.804239 1 server.go:113] FLAG: --logtostderr="true"
I0505 09:38:54.804250 1 server.go:113] FLAG: --stderrthreshold="2"
I0505 09:38:54.804258 1 server.go:113] FLAG: --v="0"
I0505 09:38:54.804266 1 server.go:113] FLAG: --version="false"
I0505 09:38:54.804276 1 server.go:113] FLAG: --vmodule=""
I0505 09:38:54.804555 1 server.go:155] Starting SkyDNS server (0.0.0.0:10053)
I0505 09:38:54.804682 1 server.go:167] Skydns metrics not enabled
I0505 09:38:54.805261 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0505 09:38:54.805279 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0505 09:38:54.832160 1 server.go:126] Setting up Healthz Handler (/readiness)
I0505 09:38:54.832174 1 server.go:131] Setting up cache handler (/cache)
I0505 09:38:54.832179 1 server.go:120] Status HTTP port 8081
kubectl logs --namespace=kube-system kube-dns-v20-c4h89 healthz returns no output.
kubectl logs --namespace=kube-system kube-dns-v20-c4h89 dnsmasq returns:
dnsmasq[1]: started, version 2.76 cachesize 1000
dnsmasq[1]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
dnsmasq[1]: using nameserver 127.0.0.1#10053
dnsmasq[1]: read /etc/hosts - 7 addresses
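For completeness, one way to check which resolver the application pod actually sees (a rough sketch; user-2095183306-sc37t is the pod from the logs above, and the commands assume the image ships these tools):
kubectl exec user-2095183306-sc37t -- cat /etc/resolv.conf
kubectl exec user-2095183306-sc37t -- ping -c 3 google.com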
@boosh I had the same problem with the VirtualBox driver. After a few hours of testing and debugging, I found out that DNS was not being handled correctly by VirtualBox, and that was causing the error. I am using minikube version: v0.19.0
What I did to fix the issue was to activate natdnshostresolver1 on the host.
First stop the minikube VM and type the following command:
VBoxManage modifyvm "VM name" --natdnshostresolver1 on
Afterwards, you can start the VM and it should be working.
Source for info: https://forums.virtualbox.org/viewtopic.php?f=7&t=50368
Hope it will help you. Good luck
Thank you @ygotame. I am new to minikube v0.19.0 and attempted the tutorial with minikube start and minikube dashboard on Windows 7 with VirtualBox 5.1.22. Even after repeated delete and start, I could not get the dashboard to work. I stumbled upon your solution and it worked for me.
I had to trace back through various searches and issues to realize it was something to do with kube-dns and needed a fix for the VM.
Holy sh**t @ygotame, that worked!!!
I spent a day trying to figure out what was wrong on my Mac, and this worked.
I now see all services created:
VBoxManage modifyvm "minikube" --natdnshostresolver1 on
minikube start
kubectl cluster-info
Kubernetes master is running at https://192.168.99.104:8443
KubeDNS is running at https://192.168.99.104:8443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://192.168.99.104:8443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
I just encountered the same issue running:
The VBoxManage command listed above immediately remedied the problem.
I am getting the exact same error as @boosh, and even with @ygotame's and @shavo007's info it's still not running, guys :/ I'm about to throw the laptop out the window. I'm running:
I have done minikube delete, removed the .minikube folder, started, stopped, ran the VBoxManage command, started again, and ran minikube dashboard (which ran into a timeout). Here are the final logs:
What other logs should I provide? Update: after reinstalling all three (kubectl, minikube, and VirtualBox), things still don't work.
I found that the addon manager was trying to assign the kube-dns service to an in-use ClusterIP. I deleted the offending service (kubernetes-dashboard), the kube-dns service came up, and the kubernetes-dashboard service was then automatically recreated on a different ClusterIP.
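In case it helps anyone else hitting this, roughly what I did to spot and clear the conflict (a sketch only; the offending service may differ in your cluster):
kubectl get svc --all-namespaces -o wide    # look for two services claiming the same ClusterIP
kubectl -n kube-system delete svc kubernetes-dashboard    # delete the offender; the addon manager recreates it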
I am still having this issue, and the above VBoxManage command did NOT resolve it.
minikube version 0.22.2
kubernetes version 1.7.5
For reference, I have just had the same issue, but using xhyve rather than VirtualBox.
There is no DNS service in kube-system, so pods were failing because they couldn't resolve k8s itself. Also, the dashboard loads but instantly reloads, saying it can't connect.
minikube version: v0.24.1
Client Version: v1.8.4
Server Version: v1.7.5
Example image of dashboard error. (I am assuming here that this is because it relies on DNS in some way.)

A note for anyone else: this sort of fixed it for me:
minikube stop
minikube addons enable kube-dns
minikube start
Even though kube-dns was already shown as enabled in the minikube addons list.
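To double-check DNS after restarting, something like this should confirm it (a rough sketch; the pod name and image are arbitrary):
kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl run -it --rm dns-check --image=busybox --restart=Never -- nslookup kubernetes.default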
This is my workaround:
https://github.com/kubernetes/minikube/issues/1674#issuecomment-354391917
For now there is a simple workaround (tested with Hyper-V, but it should also work for OSX and Linux); try the following:
minikube addons enable kube-dns
minikube ssh
Edit /etc/resolv.conf and add nameserver 8.8.8.8 on a new line, using vi or another editor: sudo vi /etc/resolv.conf
Save the /etc/resolv.conf file (:x or :wq for vi)
Try pinging google.ca while still in a Minikube SSH session, and if that works, you're done...

I am facing the same issue with --vm-driver=none, where a Pod fails to establish a connection with a server running on the web. Exact error string: Get https://<the-server-name>.io: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).
I am waiting for any solution, suggestion, or workaround.
I tried deploying CoreDNS but it didn't help.
minikube version: v0.24.0
kubectl version: Client: 1.9.3 Server: 1.8.0
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
We have also experienced an issue with DNS on minikube where we had to override resolv.conf's contents. We were able to pinpoint this to an issue with the ISO between two versions. I suggest reopening if people still experience this.
--
Gerard Braad | http://gbraad.nl
[ Doing Open Source Matters ]
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
I am facing the same issue as @a4abhishek with minikube 0.25.2 and --vm-driver=none. If anybody has made advances on this, please let us know.
/remove-lifecycle rotten
Closing, as the original bug is based on a minikube version that is over a year old.
If you have issues with other drivers, please open them as a new bug, as they are likely quite different than what the original virtualbox user encountered. Thanks!