Minikube: Minikube does not create "minikube" context

Created on 28 Nov 2016 · 8 comments · Source: kubernetes/minikube

BUG REPORT

Minikube version (use minikube version):
minikube version: v0.12.2

Environment:

  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName):
    "DriverName": "virtualbox"
  • Docker version (e.g. docker -v):
    Docker version 1.12.1, build 23cf638
  • Install tools:
  • Others:
    minikube logs
==> /var/lib/localkube/localkube.err <==
I1128 20:18:12.652151    1806 server.go:203] Using iptables Proxier.
W1128 20:18:12.652592    1806 server.go:426] Failed to retrieve node info: Get http://127.0.0.1:8080/api/v1/nodes/minikube: dial tcp 127.0.0.1:8080: getsockopt: connection refused
W1128 20:18:12.652667    1806 proxier.go:226] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP
I1128 20:18:12.652688    1806 server.go:215] Tearing down userspace rules.
E1128 20:18:12.668831    1806 reflector.go:203] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:12.669196    1806 reflector.go:203] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
2016-11-28 20:18:12.721257 I | etcdserver: name = kubeetcd
2016-11-28 20:18:12.721273 I | etcdserver: data dir = /var/lib/localkube/etcd
2016-11-28 20:18:12.721278 I | etcdserver: member dir = /var/lib/localkube/etcd/member
2016-11-28 20:18:12.721282 I | etcdserver: heartbeat = 100ms
2016-11-28 20:18:12.721286 I | etcdserver: election = 1000ms
2016-11-28 20:18:12.721290 I | etcdserver: snapshot count = 10000
2016-11-28 20:18:12.721302 I | etcdserver: advertise client URLs = http://localhost:2379
2016-11-28 20:18:13.304027 I | etcdserver: restarting member 37807cb0bf7500f6 in cluster 2c833ae9c7555b5e at commit index 3622
2016-11-28 20:18:13.308543 I | raft: 37807cb0bf7500f6 became follower at term 4
2016-11-28 20:18:13.331138 I | raft: newRaft 37807cb0bf7500f6 [peers: [], term: 4, commit: 3622, applied: 0, lastindex: 3622, lastterm: 4]
2016-11-28 20:18:13.361574 I | etcdserver: starting server... [version: 3.0.6, cluster version: to_be_decided]
2016-11-28 20:18:13.362241 I | membership: added member 37807cb0bf7500f6 [http://localhost:2380] to cluster 2c833ae9c7555b5e
2016-11-28 20:18:13.362323 N | membership: set the initial cluster version to 3.0
2016-11-28 20:18:13.362350 I | api: enabled capabilities for version 3.0
I1128 20:18:13.372385    1806 genericapiserver.go:629] Will report 10.0.2.15 as public IP address.
W1128 20:18:13.378666    1806 cacher.go:469] Terminating all watchers from cacher *api.Endpoints
E1128 20:18:13.379371    1806 controllermanager.go:125] unable to register configz: register config "componentconfig" twice
E1128 20:18:13.380404    1806 server.go:75] unable to register configz: register config "componentconfig" twice
E1128 20:18:13.381866    1806 server.go:294] unable to register configz: register config "componentconfig" twice
I1128 20:18:13.382226    1806 conntrack.go:40] Setting nf_conntrack_max to 131072
I1128 20:18:13.382719    1806 conntrack.go:57] Setting conntrack hashsize to 32768
I1128 20:18:13.382866    1806 conntrack.go:62] Setting nf_conntrack_tcp_timeout_established to 86400
E1128 20:18:13.384538    1806 reflector.go:214] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119: Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:13.384633    1806 reflector.go:214] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103: Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:13.384705    1806 reflector.go:203] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154: Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:13.384756    1806 leaderelection.go:252] error retrieving endpoint: Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:13.385076    1806 leaderelection.go:252] error retrieving endpoint: Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 127.0.0.1:8080: getsockopt: connection refused
W1128 20:18:13.385403    1806 cacher.go:469] Terminating all watchers from cacher *api.PodTemplate
W1128 20:18:13.385746    1806 cacher.go:469] Terminating all watchers from cacher *api.LimitRange
W1128 20:18:13.385943    1806 cacher.go:469] Terminating all watchers from cacher *api.Node
W1128 20:18:13.386106    1806 cacher.go:469] Terminating all watchers from cacher *api.ResourceQuota
W1128 20:18:13.386247    1806 cacher.go:469] Terminating all watchers from cacher *api.Secret
W1128 20:18:13.386387    1806 cacher.go:469] Terminating all watchers from cacher *api.ServiceAccount
W1128 20:18:13.386518    1806 cacher.go:469] Terminating all watchers from cacher *api.PersistentVolume
W1128 20:18:13.386729    1806 cacher.go:469] Terminating all watchers from cacher *api.PersistentVolumeClaim
W1128 20:18:13.386984    1806 cacher.go:469] Terminating all watchers from cacher *api.ConfigMap
W1128 20:18:13.387129    1806 cacher.go:469] Terminating all watchers from cacher *api.Namespace
E1128 20:18:13.390048    1806 event.go:208] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: getsockopt: connection refused' (may retry after sleeping)
E1128 20:18:13.390091    1806 reflector.go:214] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:414: Failed to list *extensions.ReplicaSet: Get http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:13.390129    1806 reflector.go:214] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:409: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:13.390158    1806 reflector.go:214] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:404: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:13.390189    1806 reflector.go:214] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:399: Failed to list *api.PersistentVolumeClaim: Get http://127.0.0.1:8080/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:13.390219    1806 reflector.go:214] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:398: Failed to list *api.PersistentVolume: Get http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:13.390245    1806 reflector.go:214] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:394: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:13.390278    1806 reflector.go:214] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:391: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:13.390310    1806 reflector.go:214] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:388: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:13.390896    1806 reflector.go:214] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E1128 20:18:13.391009    1806 reflector.go:214] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:74: Failed to list *storage.StorageClass: Get http://127.0.0.1:8080/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
W1128 20:18:13.391466    1806 cacher.go:469] Terminating all watchers from cacher *rbac.ClusterRoleBinding
W1128 20:18:13.391730    1806 cacher.go:469] Terminating all watchers from cacher *api.Pod
W1128 20:18:13.391933    1806 cacher.go:469] Terminating all watchers from cacher *api.Service
W1128 20:18:13.392082    1806 cacher.go:469] Terminating all watchers from cacher *api.ReplicationController
W1128 20:18:13.392233    1806 cacher.go:469] Terminating all watchers from cacher *apps.PetSet
W1128 20:18:13.392362    1806 cacher.go:469] Terminating all watchers from cacher *autoscaling.HorizontalPodAutoscaler
W1128 20:18:13.392490    1806 cacher.go:469] Terminating all watchers from cacher *batch.Job
W1128 20:18:13.392623    1806 cacher.go:469] Terminating all watchers from cacher *batch.ScheduledJob
W1128 20:18:13.392760    1806 cacher.go:469] Terminating all watchers from cacher *batch.Job
W1128 20:18:13.392894    1806 cacher.go:469] Terminating all watchers from cacher *certificates.CertificateSigningRequest
W1128 20:18:13.393031    1806 cacher.go:469] Terminating all watchers from cacher *autoscaling.HorizontalPodAutoscaler
W1128 20:18:13.393163    1806 cacher.go:469] Terminating all watchers from cacher *api.ReplicationController
W1128 20:18:13.393431    1806 cacher.go:469] Terminating all watchers from cacher *extensions.DaemonSet
W1128 20:18:13.393560    1806 cacher.go:469] Terminating all watchers from cacher *extensions.Deployment
W1128 20:18:13.393695    1806 cacher.go:469] Terminating all watchers from cacher *batch.Job
W1128 20:18:13.393817    1806 cacher.go:469] Terminating all watchers from cacher *extensions.Ingress
W1128 20:18:13.393986    1806 cacher.go:469] Terminating all watchers from cacher *extensions.ReplicaSet
W1128 20:18:13.394116    1806 cacher.go:469] Terminating all watchers from cacher *extensions.NetworkPolicy
W1128 20:18:13.394257    1806 cacher.go:469] Terminating all watchers from cacher *policy.PodDisruptionBudget
W1128 20:18:13.394388    1806 cacher.go:469] Terminating all watchers from cacher *rbac.Role
W1128 20:18:13.394527    1806 cacher.go:469] Terminating all watchers from cacher *rbac.Role
W1128 20:18:13.394647    1806 cacher.go:469] Terminating all watchers from cacher *rbac.RoleBinding
W1128 20:18:13.394775    1806 cacher.go:469] Terminating all watchers from cacher *rbac.ClusterRole
W1128 20:18:13.394897    1806 cacher.go:469] Terminating all watchers from cacher *rbac.ClusterRoleBinding
W1128 20:18:13.395058    1806 cacher.go:469] Terminating all watchers from cacher *rbac.RoleBinding
W1128 20:18:13.395178    1806 cacher.go:469] Terminating all watchers from cacher *rbac.ClusterRole
W1128 20:18:13.398083    1806 server.go:549] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Using default client config instead.
W1128 20:18:13.402334    1806 cacher.go:469] Terminating all watchers from cacher *storage.StorageClass
[restful] 2016/11/28 20:18:13 log.go:30: [restful/swagger] listing is available at https://10.0.2.15:8443/swaggerapi/
[restful] 2016/11/28 20:18:13 log.go:30: [restful/swagger] https://10.0.2.15:8443/swaggerui/ is mapped to folder /swagger-ui/
E1128 20:18:13.520868    1806 reflector.go:214] pkg/controller/informers/factory.go:72: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
I1128 20:18:13.597432    1806 genericapiserver.go:716] Serving securely on 0.0.0.0:8443
I1128 20:18:13.597461    1806 genericapiserver.go:761] Serving insecurely on 127.0.0.1:8080
I1128 20:18:13.757774    1806 docker.go:375] Connecting to docker on unix:///var/run/docker.sock
I1128 20:18:13.757797    1806 docker.go:395] Start docker client with request timeout=2m0s
E1128 20:18:13.757893    1806 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
I1128 20:18:13.762337    1806 manager.go:140] cAdvisor running in container: "/"
W1128 20:18:13.883664    1806 manager.go:148] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I1128 20:18:13.887643    1806 fs.go:116] Filesystem partitions: map[/dev/sda1:{mountpoint:/mnt/sda1/var/lib/docker/aufs major:8 minor:1 fsType:ext4 blockSize:0}]
I1128 20:18:13.888530    1806 info.go:47] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I1128 20:18:13.888564    1806 manager.go:195] Machine: {NumCores:2 CpuFrequency:2394424 MemoryCapacity:2099482624 MachineID: SystemUUID:670AF1CF-4CD8-41A2-B4C0-32EECD7E5856 BootID:5f4e0387-a024-4294-9b05-2ee4c7aef006 Filesystems:[{Device:/dev/sda1 Capacity:19195224064 Type:vfs Inodes:2436448 HasInodes:true} {Device:tmpfs Capacity:1889538048 Type:vfs Inodes:256284 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:20971520000 Scheduler:deadline} 251:0:{Name:zram0 Major:251 Minor:0 Size:469385216 Scheduler:none}] NetworkDevices:[{Name:dummy0 MacAddress:e6:93:bd:2a:4f:c6 Speed:0 Mtu:1500} {Name:eth0 MacAddress:08:00:27:1c:2c:16 Speed:1000 Mtu:1500} {Name:eth1 MacAddress:08:00:27:41:82:10 Speed:1000 Mtu:1500}] Topology:[{Id:0 Memory:0 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:4194304 Type:Unified Level:3}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:4194304 Type:Unified Level:3}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I1128 20:18:13.889070    1806 manager.go:201] Version: {KernelVersion:4.4.14-boot2docker ContainerOsVersion:Boot2Docker 1.11.1 (TCL 7.1); master : 901340f - Fri Jul  1 22:52:19 UTC 2016 DockerVersion:1.11.1 CadvisorVersion: CadvisorRevision:}
I1128 20:18:13.889657    1806 kubelet.go:252] Adding manifest file: /etc/kubernetes/manifests
I1128 20:18:13.889674    1806 kubelet.go:262] Watching apiserver
I1128 20:18:13.892469    1806 kubelet.go:503] Using node IP: "192.168.99.100"
W1128 20:18:13.892511    1806 kubelet_network.go:71] Hairpin mode set to "promiscuous-bridge" but configureCBR0 is false, falling back to "hairpin-veth"
I1128 20:18:13.892533    1806 kubelet.go:513] Hairpin mode set to "hairpin-veth"
I1128 20:18:13.905255    1806 docker_manager.go:242] Setting dockerRoot to /mnt/sda1/var/lib/docker
I1128 20:18:13.906083    1806 server.go:714] Started kubelet v1.4.3
E1128 20:18:13.906141    1806 kubelet.go:1091] Image garbage collection failed: unable to find data for container /
I1128 20:18:13.907009    1806 server.go:118] Starting to listen on 0.0.0.0:10250
I1128 20:18:13.922093    1806 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I1128 20:18:13.922132    1806 status_manager.go:129] Starting to sync pod status with apiserver
I1128 20:18:13.922152    1806 kubelet.go:2226] Starting kubelet main sync loop.
I1128 20:18:13.922160    1806 kubelet.go:2237] skipping pod synchronization - [network state unknown container runtime is down]
E1128 20:18:13.923187    1806 container_manager_linux.go:567] error opening pid file /run/docker/libcontainerd/docker-containerd.pid: open /run/docker/libcontainerd/docker-containerd.pid: no such file or directory
I1128 20:18:13.924788    1806 volume_manager.go:234] Starting Kubelet Volume Manager
I1128 20:18:13.958850    1806 factory.go:295] Registering Docker factory
W1128 20:18:13.958979    1806 manager.go:244] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I1128 20:18:13.958997    1806 factory.go:54] Registering systemd factory
I1128 20:18:13.959363    1806 factory.go:86] Registering Raw factory
I1128 20:18:13.959705    1806 manager.go:1082] Started watching for new ooms in manager
W1128 20:18:13.959822    1806 manager.go:272] Could not configure a source for OOM detection, disabling OOM events: unable to find any kernel log file available from our set: [/var/log/kern.log /var/log/messages /var/log/syslog]
I1128 20:18:13.961817    1806 manager.go:285] Starting recovery of all containers
I1128 20:18:13.962608    1806 manager.go:290] Recovery completed
I1128 20:18:14.025176    1806 kubelet_node_status.go:203] Setting node annotation to enable volume controller attach/detach
I1128 20:18:14.029380    1806 kubelet_node_status.go:73] Attempting to register node minikube
E1128 20:18:14.123242    1806 docker_manager.go:2563] Unable to inspect container "1ca32e967cc0b54214770c97143cb875fab1433e26f18660db2d49c6fcbe7929": no such container: "1ca32e967cc0b54214770c97143cb875fab1433e26f18660db2d49c6fcbe7929"
2016-11-28 20:18:14.331586 I | raft: 37807cb0bf7500f6 is starting a new election at term 4
2016-11-28 20:18:14.331654 I | raft: 37807cb0bf7500f6 became candidate at term 5
2016-11-28 20:18:14.331699 I | raft: 37807cb0bf7500f6 received vote from 37807cb0bf7500f6 at term 5
2016-11-28 20:18:14.331731 I | raft: 37807cb0bf7500f6 became leader at term 5
2016-11-28 20:18:14.331743 I | raft: raft.node: 37807cb0bf7500f6 elected leader 37807cb0bf7500f6 at term 5
2016-11-28 20:18:14.332533 I | etcdserver: published {Name:kubeetcd ClientURLs:[http://localhost:2379]} to cluster 2c833ae9c7555b5e
I1128 20:18:14.332725    1806 trace.go:61] Trace "etcdHelper::Create *api.Node" (started 2016-11-28 20:18:14.030410421 +0000 UTC):
[133.167µs] [133.167µs] Object encoded
[134.775µs] [1.608µs] Version checked
[302.221338ms] [302.086563ms] Object created
[302.282716ms] [61.378µs] END
I1128 20:18:14.333492    1806 trace.go:61] Trace "Create /api/v1/nodes" (started 2016-11-28 20:18:14.030148229 +0000 UTC):
[22.681µs] [22.681µs] About to convert to expected version
[74.573µs] [51.892µs] Conversion done
[84.822µs] [10.249µs] About to store object in database
[303.3053ms] [303.220478ms] END
I1128 20:18:14.335740    1806 kubelet_node_status.go:112] Node minikube was previously registered
I1128 20:18:14.335759    1806 kubelet_node_status.go:76] Successfully registered node minikube
I1128 20:18:14.615210    1806 trace.go:61] Trace "Create /api/v1/namespaces/default/events" (started 2016-11-28 20:18:13.906940983 +0000 UTC):
[22.033µs] [22.033µs] About to convert to expected version
[43.385µs] [21.352µs] Conversion done
[706.288324ms] [706.244939ms] About to store object in database
[708.153301ms] [1.864977ms] Object stored in database
[708.162813ms] [9.512µs] Self-link added
[708.237304ms] [74.491µs] END
E1128 20:18:15.913146    1806 event.go:258] Could not construct reference to: '&api.Endpoints{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"kube-scheduler", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]api.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' '%v became leader' 'minikube'
I1128 20:18:15.921592    1806 leaderelection.go:214] sucessfully acquired lease kube-system/kube-scheduler
E1128 20:18:16.980364    1806 event.go:258] Could not construct reference to: '&api.Endpoints{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"kube-controller-manager", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]api.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' '%v became leader' 'minikube'
I1128 20:18:16.980404    1806 leaderelection.go:214] sucessfully acquired lease kube-system/kube-controller-manager
I1128 20:18:16.981094    1806 plugins.go:71] No cloud provider specified.
W1128 20:18:16.981118    1806 controllermanager.go:232] Unsuccessful parsing of cluster CIDR : invalid CIDR address: 
W1128 20:18:16.981139    1806 controllermanager.go:236] Unsuccessful parsing of service CIDR : invalid CIDR address: 
I1128 20:18:16.981226    1806 nodecontroller.go:193] Sending events to api server.
E1128 20:18:16.981441    1806 controllermanager.go:250] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
I1128 20:18:16.981462    1806 controllermanager.go:267] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
E1128 20:18:16.981777    1806 util.go:45] Metric for replenishment_controller already registered
E1128 20:18:16.981849    1806 util.go:45] Metric for replenishment_controller already registered
E1128 20:18:16.981860    1806 util.go:45] Metric for replenishment_controller already registered
E1128 20:18:16.981868    1806 util.go:45] Metric for replenishment_controller already registered
E1128 20:18:16.981874    1806 util.go:45] Metric for replenishment_controller already registered
I1128 20:18:16.981973    1806 replication_controller.go:219] Starting RC Manager
I1128 20:18:17.018774    1806 controllermanager.go:326] Starting extensions/v1beta1 apis
I1128 20:18:17.018807    1806 controllermanager.go:328] Starting horizontal pod controller.
I1128 20:18:17.019009    1806 controllermanager.go:343] Starting daemon set controller
I1128 20:18:17.019257    1806 controllermanager.go:350] Starting job controller
I1128 20:18:17.019475    1806 controllermanager.go:357] Starting deployment controller
I1128 20:18:17.021035    1806 controllermanager.go:364] Starting ReplicaSet controller
I1128 20:18:17.021236    1806 controllermanager.go:373] Attempting to start disruption controller, full resource map map[authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{subjectaccessreviews false SubjectAccessReview}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} batch/v2alpha1:&APIResourceList{GroupVersion:batch/v2alpha1,APIResources:[{jobs true Job} {jobs/status true Job} {scheduledjobs true ScheduledJob} {scheduledjobs/status true ScheduledJob}],} policy/v1alpha1:&APIResourceList{GroupVersion:policy/v1alpha1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} apps/v1alpha1:&APIResourceList{GroupVersion:apps/v1alpha1,APIResources:[{petsets true PetSet} {petsets/status true PetSet}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} 
storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],}]
I1128 20:18:17.021408    1806 controllermanager.go:375] Starting policy/v1alpha1 apis
I1128 20:18:17.021423    1806 controllermanager.go:377] Starting disruption controller
I1128 20:18:17.021495    1806 controllermanager.go:385] Attempting to start petset, full resource map map[storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} batch/v2alpha1:&APIResourceList{GroupVersion:batch/v2alpha1,APIResources:[{jobs true Job} {jobs/status true Job} {scheduledjobs true ScheduledJob} {scheduledjobs/status true ScheduledJob}],} policy/v1alpha1:&APIResourceList{GroupVersion:policy/v1alpha1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} apps/v1alpha1:&APIResourceList{GroupVersion:apps/v1alpha1,APIResources:[{petsets true 
PetSet} {petsets/status true PetSet}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{subjectaccessreviews false SubjectAccessReview}],}]
I1128 20:18:17.021625    1806 controllermanager.go:387] Starting apps/v1alpha1 apis
I1128 20:18:17.021638    1806 controllermanager.go:389] Starting PetSet controller
I1128 20:18:17.021774    1806 controllermanager.go:404] Starting batch/v2alpha1 apis
I1128 20:18:17.021829    1806 controllermanager.go:406] Starting scheduledjob controller
I1128 20:18:17.022334    1806 horizontal.go:126] Starting HPA Controller
I1128 20:18:17.027624    1806 daemoncontroller.go:235] Starting Daemon Sets controller manager
I1128 20:18:17.028135    1806 disruption.go:256] Starting disruption controller
I1128 20:18:17.028141    1806 disruption.go:258] Sending events to api server.
I1128 20:18:17.028228    1806 pet_set.go:144] Starting petset controller
I1128 20:18:17.028268    1806 controller.go:90] Starting ScheduledJob Manager
I1128 20:18:17.037468    1806 controllermanager.go:460] Attempting to start certificates, full resource map map[storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} batch/v2alpha1:&APIResourceList{GroupVersion:batch/v2alpha1,APIResources:[{jobs true Job} {jobs/status true Job} {scheduledjobs true ScheduledJob} {scheduledjobs/status true ScheduledJob}],} policy/v1alpha1:&APIResourceList{GroupVersion:policy/v1alpha1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} 
apps/v1alpha1:&APIResourceList{GroupVersion:apps/v1alpha1,APIResources:[{petsets true PetSet} {petsets/status true PetSet}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{subjectaccessreviews false SubjectAccessReview}],}]
I1128 20:18:17.037979    1806 controllermanager.go:462] Starting certificates.k8s.io/v1alpha1 apis
I1128 20:18:17.038026    1806 controllermanager.go:464] Starting certificate request controller
E1128 20:18:17.038342    1806 controllermanager.go:474] Failed to start certificate controller: open /etc/kubernetes/ca/ca.pem: no such file or directory
E1128 20:18:17.042316    1806 util.go:45] Metric for serviceaccount_controller already registered
I1128 20:18:17.043710    1806 attach_detach_controller.go:197] Starting Attach Detach Controller
I1128 20:18:17.046631    1806 controller.go:105] Found 0 scheduledjobs
I1128 20:18:17.057823    1806 controller.go:113] Found 0 jobs
I1128 20:18:17.057840    1806 controller.go:116] Found 0 groups
I1128 20:18:17.064124    1806 garbagecollector.go:755] Garbage Collector: Initializing
I1128 20:18:17.105378    1806 replication_controller.go:625] Waiting for pods controller to sync, requeuing rc kube-system/kubernetes-dashboard
I1128 20:18:17.105484    1806 endpoints_controller.go:326] Waiting for pods controller to sync, requeuing service default/kubernetes
I1128 20:18:17.105522    1806 endpoints_controller.go:326] Waiting for pods controller to sync, requeuing service kube-system/kube-dns
I1128 20:18:17.105772    1806 endpoints_controller.go:326] Waiting for pods controller to sync, requeuing service kube-system/kubernetes-dashboard
I1128 20:18:17.105912    1806 replication_controller.go:625] Waiting for pods controller to sync, requeuing rc kube-system/kube-dns-v20
I1128 20:18:17.184445    1806 nodecontroller.go:522] Initilizing eviction metric for zone: 
W1128 20:18:17.184484    1806 nodecontroller.go:782] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1128 20:18:17.184510    1806 nodecontroller.go:707] NodeController detected that zone  is now in state Normal.
I1128 20:18:17.184695    1806 event.go:217] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"b0334da1-b5a4-11e6-a585-f6e2e4d30476", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in NodeController
W1128 20:18:18.928345    1806 pod_container_deletor.go:77] Container "f198cc64b05b571f5206c4b016e8400945f4a0dc859e1249f1d32796d357ca29" not found in pod's containers
W1128 20:18:18.928380    1806 pod_container_deletor.go:77] Container "91ae70eafcf64b62891a6aed7b9c0067461b31a7e175c30c95859c413c6d2e22" not found in pod's containers
W1128 20:18:18.928412    1806 pod_container_deletor.go:77] Container "b4ea0c01e99f63047e972dc52a773e52657541d311d5393b0c86c21c863b4151" not found in pod's containers
W1128 20:18:18.928456    1806 pod_container_deletor.go:77] Container "c7481028f198165caa91d4d07cc34d41ad3fc348e8b5aa73f70b77a3ebe2f44f" not found in pod's containers
W1128 20:18:18.928481    1806 pod_container_deletor.go:77] Container "efbed5a0a3cec576cebb4c10df80c26109670294da3fe04b92c1beecc04ec331" not found in pod's containers
W1128 20:18:18.928502    1806 pod_container_deletor.go:77] Container "1ca32e967cc0b54214770c97143cb875fab1433e26f18660db2d49c6fcbe7929" not found in pod's containers
W1128 20:18:18.928520    1806 pod_container_deletor.go:77] Container "9ca0967843ea6758316829d425e7922a6b833a984b2dc36e94533c417b389719" not found in pod's containers
E1128 20:18:18.929267    1806 kubelet.go:1799] Failed creating a mirror pod for "kube-addon-manager-minikube_kube-system(46ae05e07c52d84167b077b142aa4a39)": pods "kube-addon-manager-minikube" already exists
I1128 20:18:18.996962    1806 replication_controller.go:322] Observed updated replication controller kube-dns-v20. Desired pod count change: 1->1
I1128 20:18:19.092729    1806 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/46ae05e07c52d84167b077b142aa4a39-addons" (spec.Name: "addons") pod "46ae05e07c52d84167b077b142aa4a39" (UID: "46ae05e07c52d84167b077b142aa4a39")
I1128 20:18:19.092795    1806 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/f7ffadc3-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") pod "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476")
I1128 20:18:19.092921    1806 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/f7ffc679-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") pod "f7ffc679-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffc679-b5a4-11e6-a585-f6e2e4d30476")
I1128 20:18:19.193232    1806 reconciler.go:299] MountVolume operation started for volume "kubernetes.io/host-path/46ae05e07c52d84167b077b142aa4a39-addons" (spec.Name: "addons") to pod "46ae05e07c52d84167b077b142aa4a39" (UID: "46ae05e07c52d84167b077b142aa4a39"). 
I1128 20:18:19.193288    1806 operation_executor.go:802] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/46ae05e07c52d84167b077b142aa4a39-addons" (spec.Name: "addons") pod "46ae05e07c52d84167b077b142aa4a39" (UID: "46ae05e07c52d84167b077b142aa4a39").
I1128 20:18:19.193328    1806 reconciler.go:299] MountVolume operation started for volume "kubernetes.io/secret/f7ffadc3-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") to pod "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476"). 
I1128 20:18:19.193389    1806 reconciler.go:299] MountVolume operation started for volume "kubernetes.io/secret/f7ffc679-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") to pod "f7ffc679-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffc679-b5a4-11e6-a585-f6e2e4d30476"). 
I1128 20:18:19.226202    1806 operation_executor.go:802] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/f7ffadc3-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") pod "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476").
I1128 20:18:19.226203    1806 operation_executor.go:802] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/f7ffc679-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") pod "f7ffc679-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffc679-b5a4-11e6-a585-f6e2e4d30476").
E1128 20:18:19.305009    1806 docker_manager.go:746] Logging security options: {key:seccomp value:unconfined msg:}
E1128 20:18:19.368398    1806 docker_manager.go:746] Logging security options: {key:seccomp value:unconfined msg:}
E1128 20:18:19.443841    1806 docker_manager.go:746] Logging security options: {key:seccomp value:unconfined msg:}
E1128 20:18:19.643573    1806 docker_manager.go:746] Logging security options: {key:seccomp value:unconfined msg:}
I1128 20:18:19.683561    1806 docker_manager.go:2162] Determined pod ip after infra change: "kube-dns-v20-8nezr_kube-system(f7ffc679-b5a4-11e6-a585-f6e2e4d30476)": "172.17.0.2"
I1128 20:18:19.731727    1806 docker_manager.go:2162] Determined pod ip after infra change: "kubernetes-dashboard-6y9oi_kube-system(f7ffadc3-b5a4-11e6-a585-f6e2e4d30476)": "172.17.0.3"
E1128 20:18:19.782153    1806 docker_manager.go:746] Logging security options: {key:seccomp value:unconfined msg:}
E1128 20:18:19.953803    1806 docker_manager.go:746] Logging security options: {key:seccomp value:unconfined msg:}
I1128 20:18:20.216389    1806 reconciler.go:299] MountVolume operation started for volume "kubernetes.io/secret/f7ffadc3-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") to pod "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476"). Volume is already mounted to pod, but remount was requested.
I1128 20:18:20.218006    1806 operation_executor.go:802] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/f7ffadc3-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") pod "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476").
I1128 20:18:20.355285    1806 replication_controller.go:322] Observed updated replication controller kubernetes-dashboard. Desired pod count change: 1->1
E1128 20:18:20.447411    1806 docker_manager.go:746] Logging security options: {key:seccomp value:unconfined msg:}
E1128 20:18:21.281601    1806 docker_manager.go:746] Logging security options: {key:seccomp value:unconfined msg:}
I1128 20:18:21.336534    1806 reconciler.go:299] MountVolume operation started for volume "kubernetes.io/secret/f7ffadc3-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") to pod "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476"). Volume is already mounted to pod, but remount was requested.
I1128 20:18:21.338430    1806 operation_executor.go:802] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/f7ffadc3-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") pod "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffadc3-b5a4-11e6-a585-f6e2e4d30476").
I1128 20:18:22.344768    1806 reconciler.go:299] MountVolume operation started for volume "kubernetes.io/secret/f7ffc679-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") to pod "f7ffc679-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffc679-b5a4-11e6-a585-f6e2e4d30476"). Volume is already mounted to pod, but remount was requested.
I1128 20:18:22.347995    1806 operation_executor.go:802] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/f7ffc679-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") pod "f7ffc679-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffc679-b5a4-11e6-a585-f6e2e4d30476").
I1128 20:18:23.358020    1806 reconciler.go:299] MountVolume operation started for volume "kubernetes.io/secret/f7ffc679-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") to pod "f7ffc679-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffc679-b5a4-11e6-a585-f6e2e4d30476"). Volume is already mounted to pod, but remount was requested.
I1128 20:18:23.359948    1806 operation_executor.go:802] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/f7ffc679-b5a4-11e6-a585-f6e2e4d30476-default-token-z0bm3" (spec.Name: "default-token-z0bm3") pod "f7ffc679-b5a4-11e6-a585-f6e2e4d30476" (UID: "f7ffc679-b5a4-11e6-a585-f6e2e4d30476").

==> /var/lib/localkube/localkube.out <==
Starting etcd...
Starting apiserver...
Starting controller-manager...
Starting scheduler...
Starting kubelet...
Starting proxy...
minikube status
minikubeVM: Running
localkube: Running






kubectl config use-context minikube
no context exists with the name: "minikube".

What happened:
Minikube did not create the "minikube" context.

What you expected to happen:
Minikube should have created the "minikube" context.

How to reproduce it (as minimally and precisely as possible):
I just ran minikube start and got this problem.

Anything else we need to know:
The file ~/.kube/config does not exist.
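
For anyone reproducing this, a quick sanity check (a sketch, assuming kubectl is on the PATH):

# confirm the default kubeconfig really is missing and see which contexts kubectl knows about
ls -l ~/.kube/config
kubectl config get-contexts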

kind/bug

All 8 comments

Would you mind attaching the logs of "minikube start --v=10"?

@dlorenc, here it is
minikube.log.tar.gz

@franciscocpg that still looks like the output of "minikube logs", which contains the logs from the running cluster. Could you attach the terminal output of the "minikube start" command itself?

minikube start --v=10
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.

Hmm, the "Kubectl is now configured to use the cluster." line indicates minikube thinks it modified your .kubecfg.

We default to $HOME/.kube/config, but if you have a $KUBECONFIG environment variable set we'll use that path instead. Do you happen to have that set?
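
(In shell terms, the path resolution described above is roughly the following sketch; note it assumes a single path, which is exactly where the list case below goes wrong:)

# effective kubeconfig path: $KUBECONFIG if set, otherwise the default
echo "${KUBECONFIG:-$HOME/.kube/config}"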

Yes, I do; it's a list:

/home/francisco/programas/vagrant/coreos-kubernetes/multi-node/vagrant/kubeconfig:/home/francisco/programas/vagrant/coreos-kubernetes/single-node/kubeconfig:/home/francisco/.kube/config

Anyway, the minikube context does not exist in any of these 3 files.
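
For anyone checking the same thing, one way to inspect each file in a colon-separated KUBECONFIG for the context (a bash sketch, assuming paths without spaces):

# split KUBECONFIG on ':' and list the contexts stored in each file
for f in ${KUBECONFIG//:/ }; do
  echo "== $f"
  kubectl config get-contexts --kubeconfig "$f"
done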

Ah, thanks! It looks like we're not properly handling the case where this is a list.

You're welcome @dlorenc.
Given that, I did the following workaround:

unset KUBECONFIG
minikube start

and it works.
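
An alternative sketch, if you want to keep the other KUBECONFIG entries around, is to point it at a single file just for the minikube run (not verified against this minikube version):

export KUBECONFIG=$HOME/.kube/config
minikube start
kubectl config use-context minikube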
