I've already tried minikube delete and minikube start several times. Every time, VirtualBox assigns minikube a new IP and kube-apiserver correctly advertises it.
In the logs below you can see that kubernetes-dashboard keeps failing during a fresh minikube start.
Failing start:
$ minikube start -v 3
minikube v1.0.0 on darwin (amd64)
Downloading Kubernetes v1.14.0 images in the background ...
Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
Downloading /Users/den/.minikube/cache/boot2docker.iso from file:///Users/den/.minikube/cache/iso/minikube-v1.0.0.iso...
Creating VirtualBox VM...
Creating SSH key...
Starting the VM...
Check network to re-create if needed...
Waiting for an IP...
Setting Docker configuration on the remote daemon...
"minikube" IP address is 192.168.99.104
Configuring Docker as the container runtime ...
Version of container runtime is 18.06.2-ce
Waiting for image downloads to complete ...
Preparing Kubernetes environment ...
Pulling images required by Kubernetes v1.14.0 ...
Launching Kubernetes v1.14.0 using kubeadm ...
Waiting for pods: apiserver
Error starting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition
Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
https://github.com/kubernetes/minikube/issues/new
Problems detected in "kube-addon-manager":
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
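Since minikube times out waiting on kube-apiserver, the apiserver health endpoint can also be probed directly inside the VM (a quick check I can run for reference; port 8443 is taken from the apiserver flags listed further down):

$ minikube ssh
# inside the VM: query the apiserver health check over the secure port
$ curl -k https://localhost:8443/healthz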
docker ps inside the minikube VM:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
50c2dab08e5d eb516548c180 "/coredns -conf /etcβ¦" 10 minutes ago Up 10 minutes k8s_coredns_coredns-fb8b8dccf-rrhpm_kube-system_44543194-5521-11e9-94e5-080027313ef5_1
3a93f124e2be eb516548c180 "/coredns -conf /etcβ¦" 10 minutes ago Up 10 minutes k8s_coredns_coredns-fb8b8dccf-qgcbj_kube-system_445ea33e-5521-11e9-94e5-080027313ef5_1
82972e5a695d 4689081edb10 "/storage-provisioner" 11 minutes ago Up 11 minutes k8s_storage-provisioner_storage-provisioner_kube-system_45b09a43-5521-11e9-94e5-080027313ef5_0
d70e0ff82580 k8s.gcr.io/pause:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_storage-provisioner_kube-system_45b09a43-5521-11e9-94e5-080027313ef5_0
9a2eb5e18248 k8s.gcr.io/pause:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kubernetes-dashboard-79dd6bfc48-7967t_kube-system_4502d900-5521-11e9-94e5-080027313ef5_0
e898502bdbc0 k8s.gcr.io/pause:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_coredns-fb8b8dccf-rrhpm_kube-system_44543194-5521-11e9-94e5-080027313ef5_0
b91066cdf2cd k8s.gcr.io/pause:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_coredns-fb8b8dccf-qgcbj_kube-system_445ea33e-5521-11e9-94e5-080027313ef5_0
7e4a022bf68a 5cd54e388aba "/usr/local/bin/kubeβ¦" 11 minutes ago Up 11 minutes k8s_kube-proxy_kube-proxy-49kww_kube-system_441553b5-5521-11e9-94e5-080027313ef5_0
f6d25777c791 k8s.gcr.io/pause:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kube-proxy-49kww_kube-system_441553b5-5521-11e9-94e5-080027313ef5_0
02a417334df5 00638a24688b "kube-scheduler --biβ¦" 11 minutes ago Up 11 minutes k8s_kube-scheduler_kube-scheduler-minikube_kube-system_58272442e226c838b193bbba4c44091e_0
d2f2087917ac b95b1efa0436 "kube-controller-manβ¦" 11 minutes ago Up 11 minutes k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_2899d819dcdb72186fb15d30a0cc5a71_0
e52c2118467c 119701e77cbc "/opt/kube-addons.sh" 11 minutes ago Up 11 minutes k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_0abcb7a1f0c9c0ebc9ec348ffdfb220c_0
48dc1c1eebf7 2c4adeb21b4f "etcd --advertise-clβ¦" 11 minutes ago Up 11 minutes k8s_etcd_etcd-minikube_kube-system_3652b681687e177385391746e1d321b1_0
9c0890f149f4 ecf910f40d6e "kube-apiserver --adβ¦" 11 minutes ago Up 11 minutes k8s_kube-apiserver_kube-apiserver-minikube_kube-system_63e0ec6b1149dd8f56bfa9f9a217e7ea_0
1bbb192cbd0c k8s.gcr.io/pause:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kube-controller-manager-minikube_kube-system_2899d819dcdb72186fb15d30a0cc5a71_0
d2f024251d63 k8s.gcr.io/pause:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kube-apiserver-minikube_kube-system_63e0ec6b1149dd8f56bfa9f9a217e7ea_0
51866f6d5a5d k8s.gcr.io/pause:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kube-scheduler-minikube_kube-system_58272442e226c838b193bbba4c44091e_0
9e84437c97c3 k8s.gcr.io/pause:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_etcd-minikube_kube-system_3652b681687e177385391746e1d321b1_0
50cb48d8eb7c k8s.gcr.io/pause:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kube-addon-manager-minikube_kube-system_0abcb7a1f0c9c0ebc9ec348ffdfb220c_0
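All control-plane containers show as Up, so the pods can be cross-checked from the host as well (assuming kubectl is using the minikube context that minikube start wrote):

$ kubectl --context minikube -n kube-system get pods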
Listening sockets inside the minikube VM:
$ ss -tuln
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 6912 0 0.0.0.0:5355 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:52556 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:59760 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:38719 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:910 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:2049 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:52274 0.0.0.0:*
udp UNCONN 0 0 127.0.0.53:53 0.0.0.0:*
udp UNCONN 0 0 192.168.99.104:68 0.0.0.0:*
udp UNCONN 0 0 10.0.2.15:68 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:111 0.0.0.0:*
udp UNCONN 13824 0 *:5355 *:*
udp UNCONN 0 0 *:910 *:*
udp UNCONN 0 0 *:55384 *:*
udp UNCONN 0 0 *:111 *:*
tcp LISTEN 0 0 127.0.0.1:10248 0.0.0.0:*
tcp LISTEN 0 0 127.0.0.1:10249 0.0.0.0:*
tcp LISTEN 0 0 192.168.99.104:2379 0.0.0.0:*
tcp LISTEN 0 0 127.0.0.1:2379 0.0.0.0:*
tcp LISTEN 0 0 0.0.0.0:5355 0.0.0.0:*
tcp LISTEN 0 0 192.168.99.104:2380 0.0.0.0:*
tcp LISTEN 0 0 127.0.0.1:44463 0.0.0.0:*
tcp LISTEN 0 0 0.0.0.0:111 0.0.0.0:*
tcp LISTEN 0 0 127.0.0.1:10257 0.0.0.0:*
tcp LISTEN 0 0 127.0.0.1:10259 0.0.0.0:*
tcp LISTEN 0 0 0.0.0.0:35061 0.0.0.0:*
tcp LISTEN 0 0 0.0.0.0:43893 0.0.0.0:*
tcp LISTEN 0 0 0.0.0.0:22 0.0.0.0:*
tcp LISTEN 0 0 0.0.0.0:47959 0.0.0.0:*
tcp LISTEN 0 0 0.0.0.0:42329 0.0.0.0:*
tcp LISTEN 0 0 0.0.0.0:2049 0.0.0.0:*
tcp LISTEN 0 0 *:2376 *:*
tcp LISTEN 0 0 *:10250 *:*
tcp LISTEN 0 0 *:10251 *:*
tcp LISTEN 0 0 *:5355 *:*
tcp LISTEN 0 0 *:10252 *:*
tcp LISTEN 0 0 *:10255 *:*
tcp LISTEN 0 0 *:111 *:*
tcp LISTEN 0 0 *:10256 *:*
tcp LISTEN 0 0 *:22 *:*
tcp LISTEN 0 0 *:8443 *:*
tcp LISTEN 0 0 *:44193 *:*
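The apiserver is bound on *:8443, so reachability from the host can be verified against the VM IP reported for this run:

$ curl -k https://192.168.99.104:8443/version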
kube-apiserver command (container Entrypoint):
"Entrypoint": [
"kube-apiserver",
"--advertise-address=192.168.99.104",
"--allow-privileged=true",
"--authorization-mode=Node,RBAC",
"--client-ca-file=/var/lib/minikube/certs/ca.crt",
"--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota",
"--enable-bootstrap-token-auth=true",
"--etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt",
"--etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt",
"--etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key",
"--etcd-servers=https://127.0.0.1:2379",
"--insecure-port=0",
"--kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt",
"--kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key",
"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
"--proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt",
"--proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key",
"--requestheader-allowed-names=front-proxy-client",
"--requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt",
"--requestheader-extra-headers-prefix=X-Remote-Extra-",
"--requestheader-group-headers=X-Remote-Group",
"--requestheader-username-headers=X-Remote-User",
"--secure-port=8443",
"--service-account-key-file=/var/lib/minikube/certs/sa.pub",
"--service-cluster-ip-range=10.96.0.0/12",
"--tls-cert-file=/var/lib/minikube/certs/apiserver.crt",
"--tls-private-key-file=/var/lib/minikube/certs/apiserver.key"
minikube logs
==> coredns <==
.:53
2019-04-02T08:28:50.571Z [INFO] CoreDNS-1.3.1
2019-04-02T08:28:50.571Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-04-02T08:28:50.571Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
==> dmesg <==
[ +5.001708] hpet1: lost 318 rtc interrupts
[ +5.001672] hpet1: lost 318 rtc interrupts
[ +5.000781] hpet1: lost 318 rtc interrupts
[ +5.002050] hpet1: lost 318 rtc interrupts
[ +5.002552] hpet1: lost 318 rtc interrupts
[ +5.000737] hpet1: lost 318 rtc interrupts
[ +5.001971] hpet1: lost 318 rtc interrupts
[ +5.002222] hpet1: lost 318 rtc interrupts
[ +5.001535] hpet1: lost 318 rtc interrupts
[ +5.001552] hpet1: lost 319 rtc interrupts
[ +5.001800] hpet1: lost 318 rtc interrupts
[Apr 2 08:41] hpet1: lost 318 rtc interrupts
[ +5.002090] hpet1: lost 318 rtc interrupts
[ +5.001285] hpet1: lost 318 rtc interrupts
[ +5.002381] hpet1: lost 318 rtc interrupts
[ +5.001423] hpet1: lost 318 rtc interrupts
[ +5.001680] hpet1: lost 318 rtc interrupts
[ +5.001226] hpet1: lost 319 rtc interrupts
[ +5.001610] hpet1: lost 318 rtc interrupts
[ +5.002007] hpet1: lost 318 rtc interrupts
[ +5.001114] hpet1: lost 318 rtc interrupts
[ +5.003403] hpet1: lost 318 rtc interrupts
[ +5.000661] hpet1: lost 319 rtc interrupts
[Apr 2 08:42] hpet1: lost 318 rtc interrupts
[ +4.999728] hpet1: lost 318 rtc interrupts
[ +5.001389] hpet1: lost 319 rtc interrupts
[ +5.000668] hpet1: lost 318 rtc interrupts
[ +5.001369] hpet1: lost 318 rtc interrupts
[ +5.001228] hpet1: lost 319 rtc interrupts
[ +5.001820] hpet1: lost 318 rtc interrupts
[ +5.001890] hpet1: lost 318 rtc interrupts
[ +5.002791] hpet1: lost 318 rtc interrupts
[ +5.002554] hpet1: lost 318 rtc interrupts
[ +5.001468] hpet1: lost 318 rtc interrupts
[ +5.001074] hpet1: lost 319 rtc interrupts
[Apr 2 08:43] hpet1: lost 318 rtc interrupts
[ +5.001318] hpet1: lost 318 rtc interrupts
[ +5.000832] hpet1: lost 318 rtc interrupts
[ +5.002044] hpet1: lost 319 rtc interrupts
[ +5.000449] hpet1: lost 318 rtc interrupts
[ +5.001798] hpet1: lost 318 rtc interrupts
[ +5.000245] hpet1: lost 318 rtc interrupts
[ +5.003683] hpet1: lost 318 rtc interrupts
[ +4.999577] hpet1: lost 318 rtc interrupts
[ +5.001581] hpet1: lost 318 rtc interrupts
[ +5.001233] hpet1: lost 318 rtc interrupts
[ +5.000875] hpet1: lost 318 rtc interrupts
[Apr 2 08:44] hpet1: lost 318 rtc interrupts
[ +5.001976] hpet1: lost 319 rtc interrupts
[ +5.001521] hpet1: lost 319 rtc interrupts
==> kernel <==
08:44:17 up 18 min, 1 user, load average: 0.94, 1.30, 0.90
Linux minikube 4.15.0 #1 SMP Tue Mar 26 02:53:14 UTC 2019 x86_64 GNU/Linux
==> kube-addon-manager <==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-02T08:38:19+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-02T08:39:17+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
error: no objects passed to apply
error: no objects passed to apply
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-02T08:39:19+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-02T08:40:17+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-02T08:40:19+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-02T08:41:17+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
error: no objects passed to apply
INFO: == Kubernetes addon reconcile completed at 2019-04-02T08:41:18+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-02T08:42:18+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-02T08:42:19+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-02T08:43:17+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-02T08:43:19+00:00 ==
==> kube-apiserver <==
I0402 08:43:52.766477 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:43:52.766609 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:43:53.766757 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:43:53.767108 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:43:54.767488 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:43:54.767979 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:43:55.769332 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:43:55.769836 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:43:56.770964 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:43:56.771447 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:43:57.771810 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:43:57.772037 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:43:58.772515 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:43:58.772611 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:43:59.772965 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:43:59.773411 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:00.773555 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:00.773677 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:01.774134 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:01.774340 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:02.774538 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:02.774710 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:03.775459 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:03.775579 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:04.776725 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:04.776905 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:05.777501 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:05.777765 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:06.778135 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:06.778415 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:07.779813 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:07.780290 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:08.780771 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:08.780879 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:09.781063 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:09.781435 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:10.781966 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:10.782148 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:11.782625 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:11.783390 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:12.783842 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:12.783957 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:13.784974 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:13.785301 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:14.785822 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:14.785978 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:15.786196 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:15.786504 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 08:44:16.786758 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 08:44:16.786913 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
==> kube-proxy <==
W0402 08:28:19.494988 1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
I0402 08:28:19.509397 1 server_others.go:148] Using iptables Proxier.
W0402 08:28:19.509516 1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0402 08:28:19.509641 1 server_others.go:178] Tearing down inactive rules.
I0402 08:28:19.661936 1 server.go:555] Version: v1.14.0
I0402 08:28:19.675904 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0402 08:28:19.675940 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0402 08:28:19.676463 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0402 08:28:19.680557 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0402 08:28:19.680598 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0402 08:28:19.681069 1 config.go:202] Starting service config controller
I0402 08:28:19.681129 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0402 08:28:19.681365 1 config.go:102] Starting endpoints config controller
I0402 08:28:19.681414 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0402 08:28:19.782028 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0402 08:28:19.782135 1 controller_utils.go:1034] Caches are synced for service config controller
==> kube-scheduler <==
I0402 08:28:03.126002 1 serving.go:319] Generated self-signed cert in-memory
W0402 08:28:03.929040 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0402 08:28:03.929192 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0402 08:28:03.929467 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0402 08:28:03.934392 1 server.go:142] Version: v1.14.0
I0402 08:28:03.934785 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0402 08:28:03.936775 1 authorization.go:47] Authorization is disabled
W0402 08:28:03.936929 1 authentication.go:55] Authentication is disabled
I0402 08:28:03.937039 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0402 08:28:03.937603 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0402 08:28:06.878750 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0402 08:28:06.887626 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0402 08:28:06.887702 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0402 08:28:06.888053 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0402 08:28:06.888299 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0402 08:28:06.888862 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0402 08:28:06.919797 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0402 08:28:06.920131 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0402 08:28:06.920421 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0402 08:28:06.930975 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0402 08:28:07.880304 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0402 08:28:07.889652 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0402 08:28:07.890945 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0402 08:28:07.896827 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0402 08:28:07.900875 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0402 08:28:07.901839 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0402 08:28:07.922752 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0402 08:28:07.923341 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0402 08:28:07.931203 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0402 08:28:07.933210 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0402 08:28:09.741296 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0402 08:28:09.841812 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0402 08:28:09.842173 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0402 08:28:09.853042 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler
==> kubelet <==
-- Logs begin at Tue 2019-04-02 08:26:32 UTC, end at Tue 2019-04-02 08:44:17 UTC. --
Apr 02 08:33:47 minikube kubelet[3236]: E0402 08:33:47.656868 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:34:02 minikube kubelet[3236]: E0402 08:34:02.040306 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:34:06 minikube kubelet[3236]: E0402 08:34:06.870151 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:34:20 minikube kubelet[3236]: E0402 08:34:20.658537 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:34:35 minikube kubelet[3236]: E0402 08:34:35.657642 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:34:50 minikube kubelet[3236]: E0402 08:34:50.657276 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:35:03 minikube kubelet[3236]: E0402 08:35:03.657692 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:35:16 minikube kubelet[3236]: E0402 08:35:16.657615 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:35:30 minikube kubelet[3236]: E0402 08:35:30.656668 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:35:44 minikube kubelet[3236]: E0402 08:35:44.657141 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:35:55 minikube kubelet[3236]: E0402 08:35:55.656768 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:36:08 minikube kubelet[3236]: E0402 08:36:08.657109 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:36:23 minikube kubelet[3236]: E0402 08:36:23.657305 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:36:36 minikube kubelet[3236]: E0402 08:36:36.656922 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:36:47 minikube kubelet[3236]: E0402 08:36:47.657208 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:36:59 minikube kubelet[3236]: E0402 08:36:59.657140 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:37:11 minikube kubelet[3236]: E0402 08:37:11.657380 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:37:26 minikube kubelet[3236]: E0402 08:37:26.657025 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:37:40 minikube kubelet[3236]: E0402 08:37:40.657328 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:37:52 minikube kubelet[3236]: E0402 08:37:52.656973 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:38:03 minikube kubelet[3236]: E0402 08:38:03.657909 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:38:17 minikube kubelet[3236]: E0402 08:38:17.656794 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:38:29 minikube kubelet[3236]: E0402 08:38:29.657290 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:38:42 minikube kubelet[3236]: E0402 08:38:42.657355 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:38:55 minikube kubelet[3236]: E0402 08:38:55.657375 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:39:07 minikube kubelet[3236]: E0402 08:39:07.620994 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:39:16 minikube kubelet[3236]: E0402 08:39:16.870707 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:39:29 minikube kubelet[3236]: E0402 08:39:29.656553 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:39:41 minikube kubelet[3236]: E0402 08:39:41.656972 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:39:53 minikube kubelet[3236]: E0402 08:39:53.657890 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:40:06 minikube kubelet[3236]: E0402 08:40:06.656875 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:40:19 minikube kubelet[3236]: E0402 08:40:19.656927 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:40:31 minikube kubelet[3236]: E0402 08:40:31.656651 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:40:43 minikube kubelet[3236]: E0402 08:40:43.656626 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:40:56 minikube kubelet[3236]: E0402 08:40:56.656907 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:41:09 minikube kubelet[3236]: E0402 08:41:09.657823 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:41:23 minikube kubelet[3236]: E0402 08:41:23.656895 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:41:37 minikube kubelet[3236]: E0402 08:41:37.657676 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:41:52 minikube kubelet[3236]: E0402 08:41:52.657223 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:42:04 minikube kubelet[3236]: E0402 08:42:04.656967 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:42:16 minikube kubelet[3236]: E0402 08:42:16.657449 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:42:31 minikube kubelet[3236]: E0402 08:42:31.657018 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:42:46 minikube kubelet[3236]: E0402 08:42:46.656736 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:42:59 minikube kubelet[3236]: E0402 08:42:59.658278 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:43:14 minikube kubelet[3236]: E0402 08:43:14.659780 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:43:26 minikube kubelet[3236]: E0402 08:43:26.656783 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:43:38 minikube kubelet[3236]: E0402 08:43:38.656937 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:43:49 minikube kubelet[3236]: E0402 08:43:49.657471 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:44:01 minikube kubelet[3236]: E0402 08:44:01.657781 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
Apr 02 08:44:17 minikube kubelet[3236]: E0402 08:44:17.235059 3236 pod_workers.go:190] Error syncing pod 4502d900-5521-11e9-94e5-080027313ef5 ("kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7967t_kube-system(4502d900-5521-11e9-94e5-080027313ef5)"
==> kubernetes-dashboard <==
2019/04/02 08:44:15 Starting overwatch
2019/04/02 08:44:15 Using in-cluster config to connect to apiserver
2019/04/02 08:44:15 Using service account token for csrf signing
2019/04/02 08:44:15 Successful initial request to the apiserver, version: v1.14.0
2019/04/02 08:44:15 Generating JWE encryption key
2019/04/02 08:44:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/04/02 08:44:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/04/02 08:44:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: unexpected object: &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string][]byte{},Type:,StringData:map[string]string{},}
2019/04/02 08:44:16 Storing encryption key in a secret
panic: secrets is forbidden: User "system:serviceaccount:kube-system:default" cannot create resource "secrets" in API group "" in the namespace "kube-system"
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.(*rsaKeyHolder).init(0xc420466dc0)
  /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:131 +0x35e
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.NewRSAKeyHolder(0x1367500, 0xc420447d40, 0xc420447d40, 0x1213a6e)
  /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:170 +0x64
main.initAuthManager(0x13663e0, 0xc4202a9b00, 0xc4202f5cd8, 0x1)
  /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:185 +0x12c
main.main()
  /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:103 +0x26b
==> storage-provisioner <==
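The dashboard panic ("secrets is forbidden" for system:serviceaccount:kube-system:default) looks like an RBAC problem rather than a networking one. Whether that service account is actually allowed to create secrets can be checked from the host (a diagnostic sketch, not output from the run above):

$ kubectl --context minikube -n kube-system auth can-i create secrets --as=system:serviceaccount:kube-system:default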
Full cleanup and retry (same failure):
$ minikube delete
$ rm -rf ~/.minikube
$ minikube start -v 4
minikube v1.0.0 on darwin (amd64)
Downloading Kubernetes v1.14.0 images in the background ...
Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
Downloading Minikube ISO ...
142.88 MB / 142.88 MB [============================================] 100.00% 0s
Creating CA: /Users/den/.minikube/certs/ca.pem
Creating client certificate: /Users/den/.minikube/certs/cert.pem
Downloading /Users/den/.minikube/cache/boot2docker.iso from file:///Users/den/.minikube/cache/iso/minikube-v1.0.0.iso...
Creating VirtualBox VM...
Creating SSH key...
Starting the VM...
Check network to re-create if needed...
Waiting for an IP...
Setting Docker configuration on the remote daemon...
"minikube" IP address is 192.168.99.106
Configuring Docker as the container runtime ...
Version of container runtime is 18.06.2-ce
Waiting for image downloads to complete ...
Preparing Kubernetes environment ...
Downloading kubeadm v1.14.0
Downloading kubelet v1.14.0
Pulling images required by Kubernetes v1.14.0 ...
Launching Kubernetes v1.14.0 using kubeadm ...
Waiting for pods: apiserver
Error starting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition
Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
https://github.com/kubernetes/minikube/issues/new
Problems detected in "kube-addon-manager":
error: unable to recognize "STDIN": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
error: no objects passed to apply
WRN: == Error getting default service account, retry in 0.5 second ==
error: no objects passed to apply
$ minikube logs
==> coredns <==
.:53
2019-04-02T10:47:38.078Z [INFO] CoreDNS-1.3.1
2019-04-02T10:47:38.078Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-04-02T10:47:38.078Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
==> dmesg <==
[ +4.999887] hpet1: lost 319 rtc interrupts
[ +5.003854] hpet1: lost 319 rtc interrupts
[ +5.004727] hpet1: lost 318 rtc interrupts
[ +5.004197] hpet1: lost 318 rtc interrupts
[Apr 2 11:00] hpet1: lost 319 rtc interrupts
[ +5.001397] hpet1: lost 318 rtc interrupts
[ +5.001925] hpet1: lost 318 rtc interrupts
[ +5.001171] hpet1: lost 318 rtc interrupts
[ +5.002074] hpet1: lost 318 rtc interrupts
[ +5.002186] hpet1: lost 318 rtc interrupts
[ +5.000750] hpet1: lost 318 rtc interrupts
[ +5.001428] hpet1: lost 318 rtc interrupts
[ +5.000637] hpet1: lost 319 rtc interrupts
[ +5.001019] hpet1: lost 319 rtc interrupts
[ +5.001477] hpet1: lost 318 rtc interrupts
[ +5.002776] hpet1: lost 318 rtc interrupts
[Apr 2 11:01] hpet1: lost 318 rtc interrupts
[ +5.002108] hpet1: lost 318 rtc interrupts
[ +5.001114] hpet1: lost 318 rtc interrupts
[ +5.002380] hpet1: lost 318 rtc interrupts
[ +5.000891] hpet1: lost 318 rtc interrupts
[ +5.000966] hpet1: lost 318 rtc interrupts
[ +5.000907] hpet1: lost 318 rtc interrupts
[ +5.001465] hpet1: lost 318 rtc interrupts
[ +5.001754] hpet1: lost 319 rtc interrupts
[ +5.000727] hpet1: lost 318 rtc interrupts
[ +5.002167] hpet1: lost 318 rtc interrupts
[ +5.002463] hpet1: lost 318 rtc interrupts
[Apr 2 11:02] hpet1: lost 318 rtc interrupts
[ +5.001157] hpet1: lost 318 rtc interrupts
[ +5.000870] hpet1: lost 318 rtc interrupts
[ +5.001684] hpet1: lost 318 rtc interrupts
[ +4.999842] hpet1: lost 318 rtc interrupts
[ +5.001127] hpet1: lost 318 rtc interrupts
[ +5.001938] hpet1: lost 318 rtc interrupts
[ +5.001154] hpet1: lost 318 rtc interrupts
[ +5.001670] hpet1: lost 319 rtc interrupts
[ +5.001314] hpet1: lost 318 rtc interrupts
[ +5.001120] hpet1: lost 318 rtc interrupts
[ +5.000714] hpet1: lost 319 rtc interrupts
[Apr 2 11:03] hpet1: lost 319 rtc interrupts
[ +5.003882] hpet1: lost 318 rtc interrupts
[ +5.001501] hpet1: lost 319 rtc interrupts
[ +5.000394] hpet1: lost 319 rtc interrupts
[ +5.004676] hpet1: lost 320 rtc interrupts
[ +5.003971] hpet1: lost 318 rtc interrupts
[ +5.004555] hpet1: lost 318 rtc interrupts
[ +5.003819] hpet1: lost 318 rtc interrupts
[ +5.004059] hpet1: lost 319 rtc interrupts
[ +5.001083] hpet1: lost 319 rtc interrupts
==> kernel <==
11:03:48 up 19 min, 0 users, load average: 0.12, 0.18, 0.21
Linux minikube 4.15.0 #1 SMP Tue Mar 26 02:53:14 UTC 2019 x86_64 GNU/Linux
==> kube-addon-manager <==
INFO: == Kubernetes addon reconcile completed at 2019-04-02T10:56:37+00:00 ==
error: no objects passed to apply
error: no objects passed to apply
INFO: Leader is minikube
error: no objects passed to apply
INFO: == Kubernetes addon ensure completed at 2019-04-02T10:57:36+00:00 ==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-02T10:57:37+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-02T10:58:36+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-02T10:58:37+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-02T10:59:36+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-02T10:59:37+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-02T11:00:35+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-02T11:00:37+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-02T11:01:35+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-02T11:01:37+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-02T11:02:35+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-02T11:02:37+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-02T11:03:35+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-02T11:03:36+00:00 ==
==> kube-apiserver <==
I0402 11:03:23.666541 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:23.666781 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:24.667263 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:24.667383 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:25.667803 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:25.667988 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:26.668269 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:26.668569 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:27.669139 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:27.669276 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:28.669867 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:28.670305 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:29.670792 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:29.670900 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:30.672035 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:30.672804 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:31.673752 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:31.674013 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:32.674206 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:32.674326 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:33.674667 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:33.674898 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:34.675628 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:34.675762 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:35.675995 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:35.676111 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:36.676276 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:36.676545 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:37.677381 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:37.677479 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:38.678865 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:38.679191 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:39.679728 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:39.680077 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:40.680389 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:40.680503 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:41.680911 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:41.681316 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:42.681550 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:42.682014 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:43.683211 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:43.683321 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:44.683874 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:44.684121 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:45.684291 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:45.684414 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:46.684954 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:46.685085 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0402 11:03:47.685494 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0402 11:03:47.685640 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
==> kube-proxy <==
W0402 10:47:36.628159 1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
I0402 10:47:36.640353 1 server_others.go:148] Using iptables Proxier.
W0402 10:47:36.640829 1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0402 10:47:36.641235 1 server_others.go:178] Tearing down inactive rules.
I0402 10:47:36.775557 1 server.go:555] Version: v1.14.0
I0402 10:47:36.784019 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0402 10:47:36.784061 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0402 10:47:36.784497 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0402 10:47:36.788146 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0402 10:47:36.788234 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0402 10:47:36.788523 1 config.go:102] Starting endpoints config controller
I0402 10:47:36.788543 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0402 10:47:36.788559 1 config.go:202] Starting service config controller
I0402 10:47:36.788568 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0402 10:47:36.889438 1 controller_utils.go:1034] Caches are synced for service config controller
I0402 10:47:36.889579 1 controller_utils.go:1034] Caches are synced for endpoints config controller
==> kube-scheduler <==
I0402 10:47:21.074314 1 serving.go:319] Generated self-signed cert in-memory
W0402 10:47:21.512309 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0402 10:47:21.512527 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0402 10:47:21.512651 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0402 10:47:21.519570 1 server.go:142] Version: v1.14.0
I0402 10:47:21.521952 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0402 10:47:21.523218 1 authorization.go:47] Authorization is disabled
W0402 10:47:21.523528 1 authentication.go:55] Authentication is disabled
I0402 10:47:21.523661 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0402 10:47:21.524003 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0402 10:47:24.944800 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0402 10:47:24.945298 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0402 10:47:24.945478 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0402 10:47:24.945823 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0402 10:47:24.945996 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0402 10:47:24.946154 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0402 10:47:24.946329 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0402 10:47:24.946562 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0402 10:47:24.946685 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0402 10:47:24.946821 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0402 10:47:25.946677 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0402 10:47:25.948550 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0402 10:47:25.950465 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0402 10:47:25.950505 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0402 10:47:25.952058 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0402 10:47:25.953254 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0402 10:47:25.955051 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0402 10:47:25.960021 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0402 10:47:25.961926 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0402 10:47:25.962891 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0402 10:47:27.828747 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0402 10:47:27.929532 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0402 10:47:27.929691 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0402 10:47:27.944561 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler
==> kubelet <==
-- Logs begin at Tue 2019-04-02 10:44:39 UTC, end at Tue 2019-04-02 11:03:48 UTC. --
Apr 02 10:47:21 minikube kubelet[3235]: I0402 10:47:21.857317 3235 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Apr 02 10:47:21 minikube kubelet[3235]: I0402 10:47:21.857363 3235 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Apr 02 10:47:21 minikube kubelet[3235]: E0402 10:47:21.944974 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:22 minikube kubelet[3235]: E0402 10:47:22.045199 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:22 minikube kubelet[3235]: E0402 10:47:22.145696 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:22 minikube kubelet[3235]: E0402 10:47:22.246074 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:22 minikube kubelet[3235]: E0402 10:47:22.346225 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:22 minikube kubelet[3235]: E0402 10:47:22.446421 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:22 minikube kubelet[3235]: E0402 10:47:22.547056 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:22 minikube kubelet[3235]: E0402 10:47:22.647730 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:22 minikube kubelet[3235]: E0402 10:47:22.748264 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:22 minikube kubelet[3235]: E0402 10:47:22.848602 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:22 minikube kubelet[3235]: E0402 10:47:22.949162 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:23 minikube kubelet[3235]: E0402 10:47:23.050026 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:23 minikube kubelet[3235]: E0402 10:47:23.150670 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:23 minikube kubelet[3235]: E0402 10:47:23.251210 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:23 minikube kubelet[3235]: E0402 10:47:23.351966 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:23 minikube kubelet[3235]: E0402 10:47:23.452353 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:23 minikube kubelet[3235]: E0402 10:47:23.553257 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:23 minikube kubelet[3235]: E0402 10:47:23.655241 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:23 minikube kubelet[3235]: E0402 10:47:23.755835 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:23 minikube kubelet[3235]: E0402 10:47:23.856229 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:23 minikube kubelet[3235]: E0402 10:47:23.956554 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:24 minikube kubelet[3235]: E0402 10:47:24.057228 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:24 minikube kubelet[3235]: E0402 10:47:24.157472 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:24 minikube kubelet[3235]: E0402 10:47:24.258294 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:24 minikube kubelet[3235]: E0402 10:47:24.359802 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:24 minikube kubelet[3235]: E0402 10:47:24.460029 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:24 minikube kubelet[3235]: E0402 10:47:24.560644 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:24 minikube kubelet[3235]: E0402 10:47:24.661015 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:24 minikube kubelet[3235]: E0402 10:47:24.761534 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:24 minikube kubelet[3235]: E0402 10:47:24.862137 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:24 minikube kubelet[3235]: E0402 10:47:24.951207 3235 controller.go:194] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found
Apr 02 10:47:24 minikube kubelet[3235]: E0402 10:47:24.962463 3235 kubelet.go:2244] node "minikube" not found
Apr 02 10:47:24 minikube kubelet[3235]: I0402 10:47:24.962546 3235 reconciler.go:154] Reconciler: start to sync state
Apr 02 10:47:24 minikube kubelet[3235]: I0402 10:47:24.994132 3235 kubelet_node_status.go:75] Successfully registered node minikube
Apr 02 10:47:25 minikube kubelet[3235]: E0402 10:47:25.013653 3235 controller.go:115] failed to ensure node lease exists, will retry in 3.2s, error: namespaces "kube-node-lease" not found
Apr 02 10:47:35 minikube kubelet[3235]: I0402 10:47:35.725570 3235 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b9d6585b-5534-11e9-9ba0-080027452775-kube-proxy") pod "kube-proxy-jc2cf" (UID: "b9d6585b-5534-11e9-9ba0-080027452775")
Apr 02 10:47:35 minikube kubelet[3235]: I0402 10:47:35.725735 3235 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b9d6585b-5534-11e9-9ba0-080027452775-xtables-lock") pod "kube-proxy-jc2cf" (UID: "b9d6585b-5534-11e9-9ba0-080027452775")
Apr 02 10:47:35 minikube kubelet[3235]: I0402 10:47:35.725760 3235 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b9d6585b-5534-11e9-9ba0-080027452775-lib-modules") pod "kube-proxy-jc2cf" (UID: "b9d6585b-5534-11e9-9ba0-080027452775")
Apr 02 10:47:35 minikube kubelet[3235]: I0402 10:47:35.725786 3235 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-54whq" (UniqueName: "kubernetes.io/secret/b9d6585b-5534-11e9-9ba0-080027452775-kube-proxy-token-54whq") pod "kube-proxy-jc2cf" (UID: "b9d6585b-5534-11e9-9ba0-080027452775")
Apr 02 10:47:35 minikube kubelet[3235]: E0402 10:47:35.738333 3235 reflector.go:126] object-"kube-system"/"coredns": Failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Apr 02 10:47:35 minikube kubelet[3235]: E0402 10:47:35.738905 3235 reflector.go:126] object-"kube-system"/"coredns-token-w6nth": Failed to list *v1.Secret: secrets "coredns-token-w6nth" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Apr 02 10:47:35 minikube kubelet[3235]: I0402 10:47:35.826141 3235 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b9e492a6-5534-11e9-9ba0-080027452775-config-volume") pod "coredns-fb8b8dccf-cjs22" (UID: "b9e492a6-5534-11e9-9ba0-080027452775")
Apr 02 10:47:35 minikube kubelet[3235]: I0402 10:47:35.826694 3235 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b9e612c9-5534-11e9-9ba0-080027452775-config-volume") pod "coredns-fb8b8dccf-c6lnf" (UID: "b9e612c9-5534-11e9-9ba0-080027452775")
Apr 02 10:47:35 minikube kubelet[3235]: I0402 10:47:35.826777 3235 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-w6nth" (UniqueName: "kubernetes.io/secret/b9e612c9-5534-11e9-9ba0-080027452775-coredns-token-w6nth") pod "coredns-fb8b8dccf-c6lnf" (UID: "b9e612c9-5534-11e9-9ba0-080027452775")
Apr 02 10:47:35 minikube kubelet[3235]: I0402 10:47:35.826966 3235 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-w6nth" (UniqueName: "kubernetes.io/secret/b9e492a6-5534-11e9-9ba0-080027452775-coredns-token-w6nth") pod "coredns-fb8b8dccf-cjs22" (UID: "b9e492a6-5534-11e9-9ba0-080027452775")
Apr 02 10:47:37 minikube kubelet[3235]: I0402 10:47:37.642532 3235 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/bafa3b90-5534-11e9-9ba0-080027452775-tmp") pod "storage-provisioner" (UID: "bafa3b90-5534-11e9-9ba0-080027452775")
Apr 02 10:47:37 minikube kubelet[3235]: I0402 10:47:37.642577 3235 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-vcvhm" (UniqueName: "kubernetes.io/secret/bafa3b90-5534-11e9-9ba0-080027452775-storage-provisioner-token-vcvhm") pod "storage-provisioner" (UID: "bafa3b90-5534-11e9-9ba0-080027452775")
Apr 02 10:47:37 minikube kubelet[3235]: W0402 10:47:37.856013 3235 container.go:409] Failed to create summary reader for "/system.slice/run-r045ca6419cae423e8aed4642d69b02cc.scope": none of the resources are being tracked.
==> storage-provisioner <==
Closing in favor of #4045.
Any chance there is a firewall or proxy in use on this machine?
@tstromberg nope, no firewall
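For anyone else checking this on macOS, here are a couple of quick ways to look for an active proxy (just a sketch, nothing minikube-specific):
# look for proxy environment variables in the current shell
env | grep -i proxy
# show the system-wide proxy configuration on macOS
scutil --proxy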
couple more details:
minikube installed via homebrew
brew cask install minikube
One strange thing: the Minikube ISO download step always hangs somewhere around the first 7-9 MB, and after about a minute it continues normally without any pauses.
UPD: somehow it just magically worked
(β |xxxxx:default)β― minikube start
π minikube v1.0.0 on darwin (amd64)
π€Ή Downloading Kubernetes v1.14.0 images in the background ...
π₯ Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
πΏ Downloading Minikube ISO ...
142.88 MB / 142.88 MB [============================================] 100.00% 0s
πΆ "minikube" IP address is 192.168.99.107
π³ Configuring Docker as the container runtime ...
π³ Version of container runtime is 18.06.2-ce
β Waiting for image downloads to complete ...
β¨ Preparing Kubernetes environment ...
πΎ Downloading kubeadm v1.14.0
πΎ Downloading kubelet v1.14.0
π Pulling images required by Kubernetes v1.14.0 ...
π Launching Kubernetes v1.14.0 using kubeadm ...
β Waiting for pods: apiserver proxy etcd scheduler controller dns
π Configuring cluster permissions ...
π€ Verifying component health .....
π kubectl is now configured to use "minikube"
π Done! Thank you for using minikube!
I noticed I had this problem after running
minikube start --cpus 2 --memory 4096 --kubernetes-version v1.14.0 --keep-context
with another context in ~/.kube/config set as the current one.
After switching the current context to minikube, the problem was gone.
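In case it helps, this is roughly how to check and switch the context (a sketch; "minikube" is the context name minikube creates by default):
# show which context kubectl is currently pointed at
kubectl config current-context
# switch to the context created by minikube
kubectl config use-context minikube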
Seems like the problem is with VirtualBox. It worked for me when using the hyperkit driver.
Installing the hyperkit driver worked for me after deleting the VirtualBox one.
Followed:
https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperkit-driver
Steps:
minikube delete
minikube start --vm-driver hyperkit
If the above works for you, then make hyperkit the default driver for minikube:
minikube config set vm-driver hyperkit
minikube delete
minikube start
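If it helps, here is a quick sanity check after the hyperkit start (a sketch, assuming kubectl is configured for the minikube context):
# confirm the VM and cluster components are up
minikube status
# confirm the node is Ready through the apiserver
kubectl get nodes -o wide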
It works for me! Thank you! :)
I believe this issue was resolved in the v1.1.0 release. Please try upgrading to the latest release of minikube and run minikube delete to remove the previous cluster state.
If the same issue occurs, please re-open this bug. Thank you for opening this bug report, and for your patience!
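For reference, one possible upgrade path if minikube was installed through Homebrew Cask, as mentioned earlier in this thread (a sketch; adjust to however you installed it):
# upgrade the minikube cask, then recreate the cluster from scratch
brew cask upgrade minikube
minikube delete
minikube start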
I had the same problem; adjusting my system proxy so that kubectl had a direct connection solved it.
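Something along these lines should let kubectl bypass the proxy for the VM (a sketch; the 192.168.99.0/24 subnet is an assumption based on the IPs earlier in this thread, check minikube ip for the actual address):
# find the VM's address
minikube ip
# exclude the minikube subnet and localhost from the proxy for this shell
export NO_PROXY=$NO_PROXY,192.168.99.0/24,localhost,127.0.0.1
minikube start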