Minikube: Minikube node NotReady after install (--container-runtime=rkt)

Created on 6 May 2017 · 13 comments · Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Last known working minikube version
0.17.1

Minikube version
0.18.0, 0.19.0

Environment:

  • OS:"Debian GNU/Linux 8 (jessie)
  • VM Driver: virtualbox
  • ISO version: v0.18.0,v0.19.0

What happened:

After installing minikube, the node is stuck in NotReady and no pods can be started.

What you expected to happen:

Minicube working and pods to be started.

How to reproduce it (as minimally and precisely as possible):

$ minikube start --container-runtime=rkt
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
$ kubectl get node
NAME       STATUS     AGE       VERSION
minikube   NotReady   11m        v1.6.0

Anything else we need to know:

$ free -m
             total       used       free     shared    buffers     cached
Mem:         15966      14510       1455        871       1676       6646
-/+ buffers/cache:       6187       9778
Swap:        34331          0      34331
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/dm-2       260G  204G   55G  80% /
udev             10M     0   10M   0% /dev
tmpfs           3.2G  9.8M  3.2G   1% /run
tmpfs           7.8G  486M  7.4G   7% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda2        92M   37M   56M  40% /boot
/dev/sda1       197M  471K  197M   1% /boot/efi
tmpfs           1.6G  4.0K  1.6G   1% /run/user/119
tmpfs           1.6G   16K  1.6G   1% /run/user/1000
$ kubectl describe nodes
Name:           minikube
Role:           
Labels:         beta.kubernetes.io/arch=amd64
            beta.kubernetes.io/os=linux
            kubernetes.io/hostname=minikube
Annotations:        node.alpha.kubernetes.io/ttl=0
            volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:         <none>
CreationTimestamp:  Sat, 06 May 2017 08:16:21 +0200
Phase:          
Conditions:
  Type          Status  LastHeartbeatTime           LastTransitionTime          Reason              Message
  ----          ------  -----------------           ------------------          ------              -------
  OutOfDisk         False   Sat, 06 May 2017 08:16:21 +0200     Sat, 06 May 2017 08:16:21 +0200     KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure    False   Sat, 06 May 2017 08:16:21 +0200     Sat, 06 May 2017 08:16:21 +0200     KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure      False   Sat, 06 May 2017 08:16:21 +0200     Sat, 06 May 2017 08:16:21 +0200     KubeletHasNoDiskPressure    kubelet has no disk pressure
  Ready         False   Sat, 06 May 2017 08:16:21 +0200     Sat, 06 May 2017 08:16:21 +0200     KubeletNotReady         container runtime is down
Addresses:      192.168.99.102,192.168.99.102,minikube
Capacity:
 cpu:       2
 memory:    2048492Ki
 pods:      110
Allocatable:
 cpu:       2
 memory:    1946092Ki
 pods:      110
System Info:
 Machine ID:            6306153cb6724706aaeb7017391d9590
 System UUID:           6DB6D693-0D08-49B3-87E4-4F475C553C72
 Boot ID:           92925a85-3b91-4922-bb67-45bceaefa16c
 Kernel Version:        4.7.2
 OS Image:          Buildroot 2016.08
 Operating System:      linux
 Architecture:          amd64
 Container Runtime Version: rkt://1.24.0
 Kubelet Version:       v1.6.0
 Kube-Proxy Version:        v1.6.0
ExternalID:         minikube
Non-terminated Pods:        (0 in total)
  Namespace         Name        CPU Requests    CPU Limits  Memory Requests Memory Limits
  ---------         ----        ------------    ----------  --------------- -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests Memory Limits
  ------------  ----------  --------------- -------------
  0 (0%)    0 (0%)      0 (0%)      0 (0%)
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason          Message
  --------- --------    -----   ----            -------------   --------    ------          -------
  12s       12s     1   kube-proxy, minikube            Normal      Starting        Starting kube-proxy.
  10s       10s     1   kubelet, minikube           Normal      Starting        Starting kubelet.
  10s       10s     1   kubelet, minikube           Warning     ImageGCFailed       unable to find data for container /
  10s       10s     2   kubelet, minikube           Normal      NodeHasSufficientDisk   Node minikube status is now: NodeHasSufficientDisk
  10s       10s     2   kubelet, minikube           Normal      NodeHasSufficientMemory Node minikube status is now: NodeHasSufficientMemory
  10s       10s     2   kubelet, minikube           Normal      NodeHasNoDiskPressure   Node minikube status is now: NodeHasNoDiskPressure
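
The Ready condition above reports "container runtime is down", so a reasonable next step is to check rkt inside the VM. A minimal diagnostic sketch; the exact unit names for the kubelet's rkt backend on the minikube ISO are an assumption here:

$ minikube ssh
# Inside the VM: confirm the rkt binary itself responds
$ rkt version
$ sudo rkt list
# The kubelet's rkt integration relies on the rkt API service;
# see whether anything rkt-related is actually running (unit names assumed)
$ systemctl list-units | grep -i rkt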
kind/bug lifecycle/rotten

Most helpful comment

For anyone else searching for the same: minikube version 0.17.1 works with the rkt container engine.

All 13 comments

It seems to start OK if rkt is not used:

$ minikube start
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
$ kubectl get node
NAME       STATUS    AGE       VERSION
minikube   Ready     24m       v1.6.0

I have a similar kind of issue open, see #1401.

Indeed, it could be the same. The difference is that this one already fails with just the rkt setting.

For anyone else searching for the same: minikube version 0.17.1 works with the rkt container engine.
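
If you want to pin to that release, here is a sketch of the downgrade; the download URL follows the minikube release pattern of that era (verify it against the v0.17.1 release notes before using):

$ minikube delete
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.17.1/minikube-linux-amd64
$ chmod +x minikube && sudo mv minikube /usr/local/bin/
$ minikube start --container-runtime=rkt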

Same here.

@basicsaki Indeed, 0.17.1 works with rkt, thanks for the info.

~Doesn't work for me on version 0.17.1. I am on OS X.~

Edit: Works for me as well on 0.17.1. It just required waiting a few minutes.

$ minikube version
minikube version: v0.17.1
$ kubectl get nodes
NAME       STATUS    AGE       VERSION
minikube   Ready     5m        v1.5.3
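
Since readiness can lag a few minutes behind startup, it helps to watch the node rather than checking once. A minimal sketch using kubectl's watch flag:

$ kubectl get nodes -w
# -w keeps the connection open and prints a new line each time the
# node object changes, so you can see STATUS flip NotReady -> Ready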



$ minikube logs

-- Logs begin at Sun 2017-05-14 11:42:00 UTC, end at Sun 2017-05-14 11:46:27 UTC. --
May 14 11:44:34 minikube systemd[1]: Starting Localkube...
May 14 11:44:34 minikube localkube[3277]: I0514 11:44:34.694038    3277 start.go:77] Feature gates:%!(EXTRA string=)
May 14 11:44:34 minikube localkube[3277]: localkube host ip address: 10.0.2.15
May 14 11:44:34 minikube localkube[3277]: I0514 11:44:34.704036    3277 server.go:215] Using iptables Proxier.
May 14 11:44:34 minikube localkube[3277]: W0514 11:44:34.704902    3277 server.go:468] Failed to retrieve node info: Get http://127.0.0.1:8080/api/v1/nodes/minikube: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: W0514 11:44:34.705165    3277 proxier.go:249] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP
May 14 11:44:34 minikube localkube[3277]: W0514 11:44:34.705303    3277 proxier.go:254] clusterCIDR not specified, unable to distinguish between internal and external traffic
May 14 11:44:34 minikube localkube[3277]: I0514 11:44:34.705593    3277 server.go:227] Tearing down userspace rules.
May 14 11:44:34 minikube localkube[3277]: Starting etcd...
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.738977    3277 reflector.go:188] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.739283    3277 reflector.go:188] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: name = kubeetcd
May 14 11:44:34 minikube localkube[3277]: data dir = /var/lib/localkube/etcd
May 14 11:44:34 minikube localkube[3277]: member dir = /var/lib/localkube/etcd/member
May 14 11:44:34 minikube localkube[3277]: heartbeat = 100ms
May 14 11:44:34 minikube localkube[3277]: election = 1000ms
May 14 11:44:34 minikube localkube[3277]: snapshot count = 10000
May 14 11:44:34 minikube localkube[3277]: advertise client URLs = http://0.0.0.0:2379
May 14 11:44:34 minikube localkube[3277]: initial advertise peer URLs = http://0.0.0.0:2380
May 14 11:44:34 minikube localkube[3277]: initial cluster = kubeetcd=http://0.0.0.0:2380
May 14 11:44:34 minikube localkube[3277]: starting member fcf2ad36debdd5bb in cluster 7f055ae3b0912328
May 14 11:44:34 minikube localkube[3277]: fcf2ad36debdd5bb became follower at term 0
May 14 11:44:34 minikube localkube[3277]: newRaft fcf2ad36debdd5bb [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
May 14 11:44:34 minikube localkube[3277]: fcf2ad36debdd5bb became follower at term 1
May 14 11:44:34 minikube localkube[3277]: starting server... [version: 3.0.14, cluster version: to_be_decided]
May 14 11:44:34 minikube localkube[3277]: Starting apiserver...
May 14 11:44:34 minikube localkube[3277]: Starting controller-manager...
May 14 11:44:34 minikube localkube[3277]: Starting scheduler...
May 14 11:44:34 minikube localkube[3277]: Starting kubelet...
May 14 11:44:34 minikube localkube[3277]: Starting proxy...
May 14 11:44:34 minikube localkube[3277]: Starting storage-provisioner...
May 14 11:44:34 minikube localkube[3277]: I0514 11:44:34.757325    3277 config.go:527] Will report 10.0.2.15 as public IP address.
May 14 11:44:34 minikube localkube[3277]: I0514 11:44:34.758166    3277 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
May 14 11:44:34 minikube localkube[3277]: I0514 11:44:34.759151    3277 conntrack.go:66] Setting conntrack hashsize to 32768
May 14 11:44:34 minikube localkube[3277]: I0514 11:44:34.765672    3277 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
May 14 11:44:34 minikube localkube[3277]: I0514 11:44:34.769978    3277 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.771183    3277 controllermanager.go:125] unable to register configz: register config "componentconfig" twice
May 14 11:44:34 minikube localkube[3277]: storage-provisioner: Exit with error: Error getting server version: Get http://localhost:8080/version: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.775883    3277 server.go:78] unable to register configz: register config "componentconfig" twice
May 14 11:44:34 minikube localkube[3277]: I0514 11:44:34.792406    3277 feature_gate.go:189] feature gates: map[]
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.792907    3277 server.go:297] unable to register configz: register config "componentconfig" twice
May 14 11:44:34 minikube localkube[3277]: W0514 11:44:34.793158    3277 server.go:605] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Using default client config instead.
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.796438    3277 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119: Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.792916    3277 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:75: Failed to list *storage.StorageClass: Get http://127.0.0.1:8080/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.797058    3277 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103: Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.797135    3277 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.792677    3277 leaderelection.go:228] error retrieving resource lock kube-system/kube-controller-manager: Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.813102    3277 leaderelection.go:228] error retrieving resource lock kube-system/kube-scheduler: Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.813407    3277 event.go:208] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: getsockopt: connection refused' (may retry after sleeping)
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.813872    3277 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.PersistentVolumeClaim: Get http://127.0.0.1:8080/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.814084    3277 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481: Failed to list *extensions.ReplicaSet: Get http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.814573    3277 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:473: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.815041    3277 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:470: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.815258    3277 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466: Failed to list *api.PersistentVolume: Get http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.815624    3277 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.815930    3277 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.816239    3277 reflector.go:199] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:457: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: added member fcf2ad36debdd5bb [http://0.0.0.0:2380] to cluster 7f055ae3b0912328
May 14 11:44:34 minikube localkube[3277]: apply entries took too long [56.866151ms for 1 entries]
May 14 11:44:34 minikube localkube[3277]: avoid queries with large range/delete range!
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.950832    3277 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:34 minikube localkube[3277]: E0514 11:44:34.951215    3277 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:35 minikube localkube[3277]: [restful] 2017/05/14 11:44:35 log.go:30: [restful/swagger] listing is available at https://10.0.2.15:8443/swaggerapi/
May 14 11:44:35 minikube localkube[3277]: [restful] 2017/05/14 11:44:35 log.go:30: [restful/swagger] https://10.0.2.15:8443/swaggerui/ is mapped to folder /swagger-ui/
May 14 11:44:35 minikube localkube[3277]: storage-provisioner: Exit with error: Error getting server version: Get http://localhost:8080/version: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:35 minikube localkube[3277]: fcf2ad36debdd5bb is starting a new election at term 1
May 14 11:44:35 minikube localkube[3277]: fcf2ad36debdd5bb became candidate at term 2
May 14 11:44:35 minikube localkube[3277]: fcf2ad36debdd5bb received vote from fcf2ad36debdd5bb at term 2
May 14 11:44:35 minikube localkube[3277]: fcf2ad36debdd5bb became leader at term 2
May 14 11:44:35 minikube localkube[3277]: raft.node: fcf2ad36debdd5bb elected leader fcf2ad36debdd5bb at term 2
May 14 11:44:35 minikube localkube[3277]: published {Name:kubeetcd ClientURLs:[http://0.0.0.0:2379]} to cluster 7f055ae3b0912328
May 14 11:44:35 minikube localkube[3277]: setting up the initial cluster version to 3.0
May 14 11:44:35 minikube localkube[3277]: set the initial cluster version to 3.0
May 14 11:44:35 minikube localkube[3277]: enabled capabilities for version 3.0
May 14 11:44:35 minikube localkube[3277]: storage-provisioner: Exit with error: Error getting server version: Get http://localhost:8080/version: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.279262    3277 serve.go:88] Serving securely on 0.0.0.0:8443
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.279636    3277 serve.go:102] Serving insecurely on 127.0.0.1:8080
May 14 11:44:35 minikube systemd[1]: Started Localkube.
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.298924    3277 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/cluster-admin
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.303029    3277 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:discovery
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.308001    3277 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:basic-user
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.316633    3277 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/admin
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.326773    3277 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/edit
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.335505    3277 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/view
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.343776    3277 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:node
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.348284    3277 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:node-proxier
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.355009    3277 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.363615    3277 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.369219    3277 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.373748    3277 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.377840    3277 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:node
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.382162    3277 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.386288    3277 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.449990    3277 controller.go:262] Starting provisioner controller b403e23a-389a-11e7-99b5-080027237788!
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.627638    3277 manager.go:143] cAdvisor running in container: "/system.slice/localkube.service"
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.637875    3277 fs.go:117] Filesystem partitions: map[/dev/sda1:{mountpoint:/mnt/sda1 major:8 minor:1 fsType:ext4 blockSize:0}]
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.640534    3277 manager.go:198] Machine: {NumCores:2 CpuFrequency:2294770 MemoryCapacity:2097786880 MachineID:ce5d48416f0a46dda01417d0c9c65371 SystemUUID:FC19CE28-5DCA-4FA8-8276-CC7D8F9581DB BootID:4a6b642c-032d-4b7d-8935-031fc0d8f569 Filesystems:[{Device:/dev/sda1 Capacity:19163156480 Type:vfs Inodes:2434064 HasInodes:true} {Device:rootfs Capacity:0 Type:vfs Inodes:0 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:20971520000 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:08:00:27:23:77:88 Speed:1000 Mtu:1500} {Name:eth1 MacAddress:08:00:27:6e:53:0f Speed:1000 Mtu:1500} {Name:sit0 MacAddress:00:00:00:00 Speed:0 Mtu:1480}] Topology:[{Id:0 Memory:2097786880 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:3145728 Type:Unified Level:3}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:3145728 Type:Unified Level:3}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.641436    3277 manager.go:204] Version: {KernelVersion:4.7.2 ContainerOsVersion:Buildroot 2016.08 DockerVersion:1.11.1 CadvisorVersion: CadvisorRevision:}
May 14 11:44:35 minikube localkube[3277]: W0514 11:44:35.648648    3277 container_manager_linux.go:205] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.649097    3277 kubelet.go:242] Adding manifest file: /etc/kubernetes/manifests
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.649129    3277 kubelet.go:252] Watching apiserver
May 14 11:44:35 minikube localkube[3277]: W0514 11:44:35.660307    3277 kubelet_network.go:62] Hairpin mode set to "promiscuous-bridge" but container runtime is "rkt", ignoring
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.660978    3277 kubelet.go:477] Hairpin mode set to "none"
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.685894    3277 kubelet_network.go:226] Setting Pod CIDR:  -> 10.180.1.0/24
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.686704    3277 server.go:770] Started kubelet v1.5.3
May 14 11:44:35 minikube localkube[3277]: E0514 11:44:35.688572    3277 kubelet.go:1145] Image garbage collection failed: unable to find data for container /
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.690092    3277 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.691287    3277 server.go:123] Starting to listen on 0.0.0.0:10250
May 14 11:44:35 minikube localkube[3277]: E0514 11:44:35.691746    3277 kubelet.go:1634] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
May 14 11:44:35 minikube localkube[3277]: E0514 11:44:35.691840    3277 kubelet.go:1642] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.692428    3277 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.692484    3277 status_manager.go:129] Starting to sync pod status with apiserver
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.692500    3277 kubelet.go:1714] Starting kubelet main sync loop.
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.692510    3277 kubelet.go:1725] skipping pod synchronization - [container runtime is down]
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.694101    3277 volume_manager.go:242] Starting Kubelet Volume Manager
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.727261    3277 factory.go:295] Registering Docker factory
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.734205    3277 factory.go:89] Registering Rkt factory
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.734281    3277 factory.go:54] Registering systemd factory
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.734675    3277 factory.go:86] Registering Raw factory
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.735064    3277 manager.go:1106] Started watching for new ooms in manager
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.744217    3277 oomparser.go:185] oomparser using systemd
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.746561    3277 manager.go:288] Starting recovery of all containers
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.797243    3277 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.818071    3277 kubelet_node_status.go:74] Attempting to register node minikube
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.896853    3277 manager.go:293] Recovery completed
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.908612    3277 kubelet_node_status.go:77] Successfully registered node minikube
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.910475    3277 rkt.go:56] starting detectRktContainers thread
May 14 11:44:35 minikube localkube[3277]: I0514 11:44:35.936851    3277 kubelet_network.go:226] Setting Pod CIDR: 10.180.1.0/24 ->
May 14 11:44:36 minikube localkube[3277]: I0514 11:44:36.087909    3277 trace.go:61] Trace "Create /api/v1/namespaces/default/services" (started 2017-05-14 11:44:35.332830751 +0000 UTC):
May 14 11:44:36 minikube localkube[3277]: [72.015µs] [72.015µs] About to convert to expected version
May 14 11:44:36 minikube localkube[3277]: [268.443µs] [196.428µs] Conversion done
May 14 11:44:36 minikube localkube[3277]: [747.806596ms] [747.538153ms] About to store object in database
May 14 11:44:36 minikube localkube[3277]: [754.947713ms] [7.141117ms] Object stored in database
May 14 11:44:36 minikube localkube[3277]: [754.951833ms] [4.12µs] Self-link added
May 14 11:44:36 minikube localkube[3277]: [755.040188ms] [88.355µs] END
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.598638    3277 leaderelection.go:188] sucessfully acquired lease kube-system/kube-controller-manager
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.598789    3277 event.go:217] Event(api.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"b54d9aa3-389a-11e7-99b5-080027237788", APIVersion:"v1", ResourceVersion:"36", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.612282    3277 plugins.go:94] No cloud provider specified.
May 14 11:44:37 minikube localkube[3277]: W0514 11:44:37.612383    3277 controllermanager.go:285] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
May 14 11:44:37 minikube localkube[3277]: W0514 11:44:37.612418    3277 controllermanager.go:289] Unsuccessful parsing of service CIDR : invalid CIDR address:
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.612600    3277 nodecontroller.go:189] Sending events to api server.
May 14 11:44:37 minikube localkube[3277]: E0514 11:44:37.613077    3277 controllermanager.go:305] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.613161    3277 controllermanager.go:322] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.613960    3277 replication_controller.go:219] Starting RC Manager
May 14 11:44:37 minikube localkube[3277]: E0514 11:44:37.613973    3277 util.go:45] Metric for replenishment_controller already registered
May 14 11:44:37 minikube localkube[3277]: E0514 11:44:37.615100    3277 util.go:45] Metric for replenishment_controller already registered
May 14 11:44:37 minikube localkube[3277]: E0514 11:44:37.615249    3277 util.go:45] Metric for replenishment_controller already registered
May 14 11:44:37 minikube localkube[3277]: E0514 11:44:37.615447    3277 util.go:45] Metric for replenishment_controller already registered
May 14 11:44:37 minikube localkube[3277]: E0514 11:44:37.615595    3277 util.go:45] Metric for replenishment_controller already registered
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.659147    3277 controllermanager.go:403] Starting extensions/v1beta1 apis
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.659276    3277 controllermanager.go:406] Starting daemon set controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.660043    3277 controllermanager.go:413] Starting job controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.660665    3277 controllermanager.go:420] Starting deployment controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.661239    3277 controllermanager.go:427] Starting ReplicaSet controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.661844    3277 controllermanager.go:436] Attempting to start horizontal pod autoscaler controller, full resource map map[storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentsta
May 14 11:44:37 minikube localkube[3277]: tuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} batch/v2alpha1:&APIResourceList{GroupVersion:batch/v2alpha1,APIResources:[{cronjobs true CronJob} {cronjobs/status true CronJob} {jobs true Job} {jobs/status true Job} {scheduledjobs true ScheduledJob} {scheduledjobs/status true ScheduledJob}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningReq
May 14 11:44:37 minikube localkube[3277]: uest}],}]
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.664133    3277 controllermanager.go:438] Starting autoscaling/v1 apis
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.664491    3277 daemoncontroller.go:192] Starting Daemon Sets controller manager
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.667524    3277 controllermanager.go:440] Starting horizontal pod controller.
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.667967    3277 controllermanager.go:458] Attempting to start disruption controller, full resource map map[autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} batch/v2alpha1:&APIResourceList{GroupVersion:batch/v2alpha1,APIResources:[{cronjobs true CronJob} {cronjobs/status true CronJob} {jobs true Job} {jobs/status true Job} {scheduledjobs true ScheduledJob} {scheduledjobs/status true ScheduledJob}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false Certifica
May 14 11:44:37 minikube localkube[3277]: teSigningRequest}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],}]
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.668856    3277 controllermanager.go:460] Starting policy/v1beta1 apis
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.668940    3277 controllermanager.go:462] Starting disruption controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.669567    3277 controllermanager.go:470] Attempting to start statefulset, full resource map map[authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} batch/v2alpha1:&APIResourceList{GroupVersion:batch/v2alpha1,APIResources:[{cronjobs true CronJob} {cronjobs/status true CronJob} {jobs true Job} {jobs/status true Job} {scheduledjobs true ScheduledJob} {scheduledjobs/status true ScheduledJob}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false Persist
May 14 11:44:37 minikube localkube[3277]: entVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],}]
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.669750    3277 controllermanager.go:472] Starting apps/v1beta1 apis
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.669767    3277 controllermanager.go:474] Starting StatefulSet controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.670190    3277 controllermanager.go:488] Starting batch/v2alpha1 apis
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.670211    3277 controllermanager.go:490] Starting cronjob controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.671086    3277 deployment_controller.go:132] Starting deployment controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.671146    3277 replica_set.go:162] Starting ReplicaSet controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.671168    3277 horizontal.go:132] Starting HPA Controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.671856    3277 disruption.go:317] Starting disruption controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.671868    3277 disruption.go:319] Sending events to api server.
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.672026    3277 pet_set.go:146] Starting statefulset controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.672061    3277 controller.go:91] Starting CronJob Manager
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.690853    3277 controllermanager.go:544] Attempting to start certificates, full resource map map[apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} batch/v2alpha1:&APIResourceList{GroupVersion:batch/v2alpha1,APIResources:[{cronjobs true CronJob} {cronjobs/status true CronJob} {jobs true Job} {jobs/status true Jo
May 14 11:44:37 minikube localkube[3277]: b} {scheduledjobs true ScheduledJob} {scheduledjobs/status true ScheduledJob}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],}]
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.693818    3277 controllermanager.go:546] Starting certificates.k8s.io/v1alpha1 apis
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.693870    3277 controllermanager.go:548] Starting certificate request controller
May 14 11:44:37 minikube localkube[3277]: E0514 11:44:37.694452    3277 controllermanager.go:558] Failed to start certificate controller: open /etc/kubernetes/ca/ca.pem: no such file or directory
May 14 11:44:37 minikube localkube[3277]: E0514 11:44:37.694866    3277 util.go:45] Metric for serviceaccount_controller already registered
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.696529    3277 attach_detach_controller.go:204] Starting Attach Detach Controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.696587    3277 serviceaccounts_controller.go:120] Starting ServiceAccount controller
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.716437    3277 garbagecollector.go:766] Garbage Collector: Initializing
May 14 11:44:37 minikube localkube[3277]: E0514 11:44:37.746594    3277 actual_state_of_world.go:462] Failed to set statusUpdateNeeded to needed true because nodeName="minikube"  does not exist
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.832938    3277 nodecontroller.go:429] Initializing eviction metric for zone:
May 14 11:44:37 minikube localkube[3277]: W0514 11:44:37.833002    3277 nodecontroller.go:678] Missing timestamp for Node minikube. Assuming now as a timestamp.
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.833089    3277 nodecontroller.go:569] NodeController detected that all Nodes are not-Ready. Entering master disruption mode.
May 14 11:44:37 minikube localkube[3277]: I0514 11:44:37.833361    3277 event.go:217] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"b43f6bb9-389a-11e7-99b5-080027237788", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in NodeController
May 14 11:44:38 minikube localkube[3277]: I0514 11:44:38.264659    3277 leaderelection.go:188] sucessfully acquired lease kube-system/kube-scheduler
May 14 11:44:38 minikube localkube[3277]: I0514 11:44:38.265055    3277 event.go:217] Event(api.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"b5b324e5-389a-11e7-99b5-080027237788", APIVersion:"v1", ResourceVersion:"45", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
May 14 11:44:40 minikube localkube[3277]: W0514 11:44:40.696958    3277 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist
May 14 11:44:40 minikube localkube[3277]: I0514 11:44:40.773608    3277 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/9b29121baa99a09de533b255cb6c9ea7-addons" (spec.Name: "addons") pod "9b29121baa99a09de533b255cb6c9ea7" (UID: "9b29121baa99a09de533b255cb6c9ea7")
May 14 11:44:40 minikube localkube[3277]: I0514 11:44:40.874185    3277 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/9b29121baa99a09de533b255cb6c9ea7-addons" (spec.Name: "addons") pod "9b29121baa99a09de533b255cb6c9ea7" (UID: "9b29121baa99a09de533b255cb6c9ea7").
May 14 11:44:47 minikube localkube[3277]: I0514 11:44:47.718178    3277 garbagecollector.go:780] Garbage Collector: All monitored resources synced. Proceeding to collect garbage
May 14 11:44:47 minikube localkube[3277]: I0514 11:44:47.836931    3277 nodecontroller.go:585] NodeController detected that some Nodes are Ready. Exiting master disruption mode.
May 14 11:44:51 minikube localkube[3277]: E0514 11:44:51.148076    3277 image.go:84] Failed to fetch: failed to run [fetch --no-store docker://gcr.io/google-containers/kube-addon-manager:v6.3]: exit status 254
May 14 11:44:51 minikube localkube[3277]: stdout:
May 14 11:44:51 minikube localkube[3277]: stderr: Flag --no-store has been deprecated, please use --pull-policy=update
May 14 11:44:51 minikube localkube[3277]: fetch: Get https://gcr.io/v2/: dial tcp [2404:6800:4003:c01::52]:443: connect: network is unreachable
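
The addon-manager fetch above fails because rkt resolved gcr.io to an IPv6 address the VM cannot reach. To separate a transient DNS/IPv6 problem from a broken runtime, the same fetch can be retried by hand, using the --pull-policy=update flag the deprecation warning suggests (--insecure-options=image is typically required for docker:// images, which carry no signatures):

$ minikube ssh
# Inside the VM: retry the exact fetch the kubelet attempted
$ sudo rkt fetch --pull-policy=update --insecure-options=image \
    docker://gcr.io/google-containers/kube-addon-manager:v6.3
# If this also fails with "network is unreachable", the issue is VM
# networking rather than the rkt runtime itself.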



$ kubectl describe nodes
Name:           minikube
Role:           
Labels:         beta.kubernetes.io/arch=amd64
            beta.kubernetes.io/os=linux
            kubernetes.io/hostname=minikube
Annotations:        volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:         <none>
CreationTimestamp:  Sun, 14 May 2017 17:14:35 +0530
Phase:          
Conditions:
  Type          Status  LastHeartbeatTime           LastTransitionTime          Reason              Message
  ----          ------  -----------------           ------------------          ------              -------
  OutOfDisk         False   Sun, 14 May 2017 17:16:16 +0530     Sun, 14 May 2017 17:14:35 +0530     KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure    False   Sun, 14 May 2017 17:16:16 +0530     Sun, 14 May 2017 17:14:35 +0530     KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure      False   Sun, 14 May 2017 17:16:16 +0530     Sun, 14 May 2017 17:14:35 +0530     KubeletHasNoDiskPressure    kubelet has no disk pressure
  Ready         True    Sun, 14 May 2017 17:16:16 +0530     Sun, 14 May 2017 17:14:45 +0530     KubeletReady            kubelet is posting ready status
Addresses:      192.168.99.100,192.168.99.100,minikube
Capacity:
 alpha.kubernetes.io/nvidia-gpu:    0
 cpu:                   2
 memory:                2048620Ki
 pods:                  110
Allocatable:
 alpha.kubernetes.io/nvidia-gpu:    0
 cpu:                   2
 memory:                2048620Ki
 pods:                  110
System Info:
 Machine ID:            ce5d48416f0a46dda01417d0c9c65371
 System UUID:           FC19CE28-5DCA-4FA8-8276-CC7D8F9581DB
 Boot ID:           4a6b642c-032d-4b7d-8935-031fc0d8f569
 Kernel Version:        4.7.2
 OS Image:          Buildroot 2016.08
 Operating System:      linux
 Architecture:          amd64
 Container Runtime Version: rkt://1.24.0
 Kubelet Version:       v1.5.3
 Kube-Proxy Version:        v1.5.3
ExternalID:         minikube
Non-terminated Pods:        (1 in total)
  Namespace         Name                    CPU Requests    CPU Limits  Memory Requests Memory Limits
  ---------         ----                    ------------    ----------  --------------- -------------
  kube-system           kube-addon-manager-minikube     5m (0%)     0 (0%)      50Mi (2%)   0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests Memory Limits
  ------------  ----------  --------------- -------------
  5m (0%)   0 (0%)      50Mi (2%)   0 (0%)
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason          Message
  --------- --------    -----   ----            -------------   --------    ------          -------
  1m        1m      1   kube-proxy, minikube            Normal      Starting        Starting kube-proxy.
  1m        1m      1   kubelet, minikube           Normal      Starting        Starting kubelet.
  1m        1m      1   kubelet, minikube           Warning     ImageGCFailed       unable to find data for container /
  1m        1m      2   kubelet, minikube           Normal      NodeHasSufficientDisk   Node minikube status is now: NodeHasSufficientDisk
  1m        1m      2   kubelet, minikube           Normal      NodeHasSufficientMemory Node minikube status is now: NodeHasSufficientMemory
  1m        1m      2   kubelet, minikube           Normal      NodeHasNoDiskPressure   Node minikube status is now: NodeHasNoDiskPressure
  1m        1m      1   kubelet, minikube           Normal      NodeReady       Node minikube status is now: NodeReady

@aaratn looks like something different, because your node gets to the Ready state. Maybe submit it as a separate issue?

Seems to be fixed with 0.19.1.

I struggled with this quite a bit myself. The fix for me was to downgrade to minikube v0.19.0, delete ~/.kube/config, and let minikube regenerate the kubectl config file.
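
A sketch of that recovery path (same release-URL assumption as above); minikube regenerates the kubectl config entry on the next start:

$ minikube delete                 # discard the broken VM
$ rm ~/.kube/config               # drop the stale kubectl config
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.19.0/minikube-linux-amd64
$ chmod +x minikube && sudo mv minikube /usr/local/bin/
$ minikube start                  # recreates ~/.kube/config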

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
