minikube keeps crashing many times during the day

Created on 25 Jun 2019 · 4 comments · Source: kubernetes/minikube

Many times a day, `minikube status` returns:

```
$ minikube status
host: Running
kubelet: Running
apiserver: Error
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
```

Output of `minikube logs`:

```
==> coredns <==
.:53
2019-06-25T15:56:05.180Z [INFO] CoreDNS-1.3.1
2019-06-25T15:56:05.181Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-06-25T15:56:05.181Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669

==> dmesg <==
[Jun25 15:59] hpet1: lost 1376 rtc interrupts
[ +2.000521] hpet1: lost 1375 rtc interrupts
[ +2.305712] hpet1: lost 1375 rtc interrupts
[ +1.204645] hpet1: lost 1375 rtc interrupts
[ +1.824341] hpet1: lost 1375 rtc interrupts
[ +0.845610] hpet1: lost 1375 rtc interrupts
[ +0.044092] hpet1: lost 1 rtc interrupts
[ +0.814401] hpet1: lost 1375 rtc interrupts
[ +2.184487] hpet1: lost 1378 rtc interrupts
[ +1.319253] hpet1: lost 1377 rtc interrupts
[ +1.918794] hpet1: lost 1375 rtc interrupts
[ +0.861470] hpet1: lost 1375 rtc interrupts
[ +1.453364] hpet1: lost 1375 rtc interrupts
[ +0.996533] hpet1: lost 1375 rtc interrupts
[ +1.864348] hpet1: lost 1375 rtc interrupts
[ +0.728241] hpet1: lost 1375 rtc interrupts
[ +0.862602] hpet1: lost 1375 rtc interrupts
[ +1.983386] hpet_rtc_timer_reinit: 2 callbacks suppressed
[ +0.000002] hpet1: lost 1375 rtc interrupts
[ +0.781398] hpet1: lost 1375 rtc interrupts
[ +0.823582] hpet1: lost 1376 rtc interrupts
[ +0.708844] hpet1: lost 1376 rtc interrupts
[ +0.833291] hpet1: lost 1375 rtc interrupts
[ +0.822263] hpet1: lost 1375 rtc interrupts
[ +0.603210] hpet1: lost 1376 rtc interrupts
[ +0.592253] hpet1: lost 1375 rtc interrupts
[ +0.954986] hpet1: lost 1375 rtc interrupts
[ +0.787187] hpet1: lost 1375 rtc interrupts
[ +0.821440] hpet1: lost 1376 rtc interrupts
[ +0.594183] hpet1: lost 1376 rtc interrupts
[ +0.558452] hpet1: lost 1375 rtc interrupts
[ +0.614596] hpet1: lost 1375 rtc interrupts
[ +1.098651] hpet1: lost 1375 rtc interrupts
[ +0.635684] hpet1: lost 1376 rtc interrupts
[ +0.663571] hpet1: lost 1375 rtc interrupts
[ +0.660055] hpet1: lost 1375 rtc interrupts
[ +0.539808] hpet1: lost 1375 rtc interrupts
[ +0.705585] hpet1: lost 1375 rtc interrupts
[ +0.699599] hpet1: lost 1375 rtc interrupts
[ +0.849374] hpet1: lost 1375 rtc interrupts
[ +0.703022] hpet1: lost 1376 rtc interrupts
[ +0.868156] hpet1: lost 1375 rtc interrupts
[ +0.699218] hpet1: lost 1375 rtc interrupts
[ +0.686269] hpet1: lost 1375 rtc interrupts
[ +0.833120] hpet1: lost 1376 rtc interrupts
[ +0.636173] hpet1: lost 1376 rtc interrupts
[ +0.768881] hpet1: lost 1376 rtc interrupts
[ +0.721899] hpet1: lost 1375 rtc interrupts
[ +0.832816] hpet1: lost 1375 rtc interrupts
[ +0.655291] hpet1: lost 1376 rtc interrupts

==> kernel <==
15:59:47 up 3:05, 0 users, load average: 11.93, 17.08, 14.45
Linux minikube 4.15.0 #1 SMP Tue May 21 00:14:40 UTC 2019 x86_64 GNU/Linux

==> kube-addon-manager <==
INFO: == Kubernetes addon reconcile completed at 2019-06-25T15:55:46+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-25T15:56:19+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.extensions/default-http-backend unchanged
deployment.extensions/nginx-ingress-controller unchanged
serviceaccount/nginx-ingress unchanged
clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
service/default-http-backend unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-25T15:56:21+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-25T15:57:19+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.extensions/default-http-backend unchanged
deployment.extensions/nginx-ingress-controller unchanged
serviceaccount/nginx-ingress unchanged
clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
service/default-http-backend unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-25T15:57:20+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-25T15:58:19+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.extensions/default-http-backend unchanged
deployment.extensions/nginx-ingress-controller unchanged
serviceaccount/nginx-ingress unchanged
clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
service/default-http-backend unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-25T15:58:21+00:00 ==
INFO: Leader is minikube
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "storage-provisioner", Namespace: "kube-system"
Object: &{map["kind":"ServiceAccount" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "apiVersion":"v1"]}
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "storage-provisioner", Namespace: "kube-system"
Object: &{map["apiVersion":"v1" "kind":"Pod" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "integration-test":"storage-provisioner"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "spec":map["containers":[map["name":"storage-provisioner" "volumeMounts":[map["mountPath":"/tmp" "name":"tmp"]] "command":["/storage-provisioner"] "image":"gcr.io/k8s-minikube/storage-provisioner:v1.8.1" "imagePullPolicy":"IfNotPresent"]] "hostNetwork":%!!(MISSING)q(bool=true) "serviceAccountName":"storage-provisioner" "volumes":[map["hostPath":map["path":"/tmp" "type":"Directory"] "name":"tmp"]]]]}
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply

==> kube-apiserver <==
Trace[1697218957]: [702.006319ms] [537.163127ms] Transaction prepared
I0625 15:59:40.650288 1 trace.go:81] Trace[948114601]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-06-25 15:59:38.027919451 +0000 UTC m=+230.466627361) (total time: 2.535740862s):
Trace[948114601]: [2.493302966s] [2.467438834s] Transaction committed
I0625 15:59:40.650707 1 trace.go:81] Trace[1099150483]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-06-25 15:59:37.98585011 +0000 UTC m=+230.424558064) (total time: 2.604022726s):
Trace[1099150483]: [2.603973778s] [2.535827758s] Transaction committed
I0625 15:59:40.729989 1 trace.go:81] Trace[363280048]: "Update /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-06-25 15:59:37.963268268 +0000 UTC m=+230.401976192) (total time: 2.766675172s):
Trace[363280048]: [2.69013651s] [2.672941096s] Object stored in database
I0625 15:59:40.731474 1 trace.go:81] Trace[557985559]: "Update /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-06-25 15:59:37.928497908 +0000 UTC m=+230.367205849) (total time: 2.802937939s):
Trace[557985559]: [2.723020352s] [2.623668467s] Object stored in database
I0625 15:59:40.894170 1 trace.go:81] Trace[1812983047]: "Get /api/v1/persistentvolumes/pvc-e501ed30-9347-11e9-a43c-0800272bdd47" (started: 2019-06-25 15:59:40.221543399 +0000 UTC m=+232.660251401) (total time: 672.57448ms):
Trace[1812983047]: [654.451485ms] [654.429482ms] About to write a response
I0625 15:59:40.974664 1 trace.go:81] Trace[622208187]: "Get /api/v1/persistentvolumes/pvc-ffad0792-9722-11e9-80eb-0800272bdd47" (started: 2019-06-25 15:59:40.244930151 +0000 UTC m=+232.683638063) (total time: 729.697612ms):
Trace[622208187]: [622.598961ms] [622.582299ms] About to write a response
I0625 15:59:40.978426 1 trace.go:81] Trace[66841559]: "Get /api/v1/namespaces/monitoring/services/prometheus-operated" (started: 2019-06-25 15:59:38.564423464 +0000 UTC m=+231.003131395) (total time: 2.413960514s):
Trace[66841559]: [2.208251782s] [2.20823239s] About to write a response
Trace[66841559]: [2.413957973s] [205.706191ms] Transformed response object
I0625 15:59:41.860758 1 trace.go:81] Trace[774622823]: "List etcd3: key=/jobs, resourceVersion=, limit: 500, continue: " (started: 2019-06-25 15:59:40.164489442 +0000 UTC m=+232.603197406) (total time: 1.647294041s):
Trace[774622823]: [1.647294041s] [1.647294041s] END
I0625 15:59:41.966720 1 trace.go:81] Trace[2074760567]: "List /apis/batch/v1/jobs" (started: 2019-06-25 15:59:40.16243726 +0000 UTC m=+232.601145249) (total time: 1.8032364s):
Trace[2074760567]: [1.699781689s] [1.699021552s] Listing from storage done
I0625 15:59:49.140733 1 trace.go:81] Trace[1647032560]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-06-25 15:59:44.569474901 +0000 UTC m=+237.008182826) (total time: 4.569707764s):
Trace[1647032560]: [3.963762201s] [3.890347319s] About to write a response
Trace[1647032560]: [4.569705129s] [605.942928ms] Transformed response object
I0625 15:59:49.140927 1 trace.go:81] Trace[1365491802]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-06-25 15:59:46.453940048 +0000 UTC m=+238.892648007) (total time: 2.685597935s):
Trace[1365491802]: [155.596749ms] [155.596749ms] About to Get from storage
Trace[1365491802]: [2.138012581s] [1.982415832s] About to write a response
Trace[1365491802]: [2.68559633s] [547.583749ms] Transformed response object
I0625 15:59:49.178389 1 trace.go:81] Trace[1341247346]: "Get /api/v1/namespaces/kube-system/pods/nginx-ingress-controller-586cdc477c-fxsch" (started: 2019-06-25 15:59:44.648675763 +0000 UTC m=+237.087383677) (total time: 4.529668353s):
Trace[1341247346]: [3.930970673s] [3.930951591s] About to write a response
Trace[1341247346]: [4.52966602s] [598.695347ms] Transformed response object
I0625 15:59:49.179454 1 trace.go:81] Trace[1358559519]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/minikube" (started: 2019-06-25 15:59:46.455725627 +0000 UTC m=+238.894433591) (total time: 2.535818967s):
Trace[1358559519]: [154.19051ms] [154.19051ms] About to Get from storage
Trace[1358559519]: [2.121689331s] [1.967498821s] About to write a response
Trace[1358559519]: [2.535816084s] [414.126753ms] Transformed response object
I0625 15:59:49.303697 1 trace.go:81] Trace[1176343069]: "Get /api/v1/persistentvolumes/pvc-e501ed30-9347-11e9-a43c-0800272bdd47" (started: 2019-06-25 15:59:44.501440619 +0000 UTC m=+236.940148568) (total time: 4.42518821s):
Trace[1176343069]: [4.425086927s] [4.424225659s] About to write a response
I0625 15:59:49.327388 1 trace.go:81] Trace[1560008504]: "Get /api/v1/persistentvolumes/pvc-ffad0792-9722-11e9-80eb-0800272bdd47" (started: 2019-06-25 15:59:44.503156895 +0000 UTC m=+236.941864804) (total time: 4.464597813s):
Trace[1560008504]: [4.246970218s] [4.246950739s] About to write a response
Trace[1560008504]: [4.464589257s] [217.619039ms] Transformed response object
I0625 15:59:49.880643 1 trace.go:81] Trace[884467717]: "Get /api/v1/namespaces/default" (started: 2019-06-25 15:59:43.949658181 +0000 UTC m=+236.388366138) (total time: 4.932011231s):
Trace[884467717]: [4.753964303s] [4.699914459s] About to write a response
Trace[884467717]: [4.932007132s] [178.042829ms] Transformed response object
I0625 15:59:50.690354 1 trace.go:81] Trace[509606916]: "List etcd3: key=/cronjobs, resourceVersion=, limit: 500, continue: " (started: 2019-06-25 15:59:45.293999358 +0000 UTC m=+237.732707281) (total time: 5.380139275s):
Trace[509606916]: [5.380139275s] [5.380139275s] END
I0625 15:59:50.911513 1 trace.go:81] Trace[893883711]: "Get /api/v1/nodes/minikube" (started: 2019-06-25 15:59:49.886340102 +0000 UTC m=+242.325048059) (total time: 1.025130094s):
Trace[893883711]: [709.850965ms] [709.850965ms] About to Get from storage
Trace[893883711]: [1.025126853s] [314.659585ms] Transformed response object
I0625 15:59:50.976184 1 trace.go:81] Trace[218787438]: "List /apis/batch/v1beta1/cronjobs" (started: 2019-06-25 15:59:45.21219696 +0000 UTC m=+237.650904918) (total time: 5.763941025s):
Trace[218787438]: [5.478307356s] [5.461431147s] Listing from storage done
Trace[218787438]: [5.763937544s] [285.630188ms] Writing http response done (1 items)

==> kube-proxy <==
I0625 13:27:53.519383 1 trace.go:81] Trace[928891287]: "iptables save" (started: 2019-06-25 13:27:51.099854149 +0000 UTC m=+1919.733333830) (total time: 2.397597343s):
Trace[928891287]: [2.397597343s] [2.397597343s] END
I0625 13:27:56.292526 1 trace.go:81] Trace[1239399020]: "iptables save" (started: 2019-06-25 13:27:53.809467354 +0000 UTC m=+1922.442947033) (total time: 2.253136253s):
Trace[1239399020]: [2.253136253s] [2.253136253s] END
I0625 13:28:03.924703 1 trace.go:81] Trace[615836568]: "iptables restore" (started: 2019-06-25 13:27:56.572401519 +0000 UTC m=+1925.205881200) (total time: 6.531082023s):
Trace[615836568]: [6.531082023s] [6.526053269s] END
I0625 14:17:30.382399 1 trace.go:81] Trace[1967713]: "iptables restore" (started: 2019-06-25 14:17:28.3317055 +0000 UTC m=+4896.965185201) (total time: 2.019944268s):
Trace[1967713]: [2.019944268s] [1.817025321s] END
I0625 14:39:08.770648 1 trace.go:81] Trace[1267354265]: "iptables restore" (started: 2019-06-25 14:39:06.384804393 +0000 UTC m=+6195.018284047) (total time: 2.024610989s):
Trace[1267354265]: [2.024610989s] [2.02282627s] END
I0625 14:55:55.813607 1 trace.go:81] Trace[2005377802]: "iptables restore" (started: 2019-06-25 14:55:53.262516409 +0000 UTC m=+7201.895996185) (total time: 2.429156315s):
Trace[2005377802]: [2.429156315s] [2.424147629s] END
I0625 14:56:54.750974 1 trace.go:81] Trace[133167841]: "iptables restore" (started: 2019-06-25 14:56:52.641694844 +0000 UTC m=+7261.275174507) (total time: 2.048988444s):
Trace[133167841]: [2.048988444s] [1.994950287s] END
I0625 15:04:05.706352 1 trace.go:81] Trace[421411560]: "iptables restore" (started: 2019-06-25 15:04:02.256054873 +0000 UTC m=+7690.889534554) (total time: 3.443047055s):
Trace[421411560]: [3.443047055s] [3.263031497s] END
I0625 15:05:12.337803 1 trace.go:81] Trace[1446457944]: "iptables save" (started: 2019-06-25 15:05:02.003443133 +0000 UTC m=+7750.636922809) (total time: 10.247392704s):
Trace[1446457944]: [10.247392704s] [10.247392704s] END
I0625 15:17:49.056182 1 trace.go:81] Trace[1447157092]: "iptables restore" (started: 2019-06-25 15:17:44.627984948 +0000 UTC m=+8513.261464672) (total time: 3.958633709s):
Trace[1447157092]: [3.958633709s] [3.909984007s] END
I0625 15:46:40.525393 1 trace.go:81] Trace[1773511752]: "iptables save" (started: 2019-06-25 15:46:37.577017727 +0000 UTC m=+10246.210497435) (total time: 2.313826187s):
Trace[1773511752]: [2.313826187s] [2.313826187s] END
I0625 15:49:58.549400 1 trace.go:81] Trace[415317482]: "iptables save" (started: 2019-06-25 15:49:54.712706679 +0000 UTC m=+10443.346186371) (total time: 3.743758365s):
Trace[415317482]: [3.743758365s] [3.743758365s] END
I0625 15:50:02.672985 1 trace.go:81] Trace[2017613691]: "iptables restore" (started: 2019-06-25 15:49:59.913501765 +0000 UTC m=+10448.546981412) (total time: 2.470432683s):
Trace[2017613691]: [2.470432683s] [2.466858512s] END
I0625 15:50:44.770472 1 trace.go:81] Trace[1412863635]: "iptables save" (started: 2019-06-25 15:50:38.958478906 +0000 UTC m=+10487.591958613) (total time: 5.451268632s):
Trace[1412863635]: [5.451268632s] [5.451268632s] END
I0625 15:50:53.561023 1 trace.go:81] Trace[1099424808]: "iptables save" (started: 2019-06-25 15:50:51.038104899 +0000 UTC m=+10499.671584568) (total time: 2.187016857s):
Trace[1099424808]: [2.187016857s] [2.187016857s] END
I0625 15:52:19.703782 1 trace.go:81] Trace[487861202]: "iptables save" (started: 2019-06-25 15:52:11.822065108 +0000 UTC m=+10580.455544820) (total time: 6.473670716s):
Trace[487861202]: [6.473670716s] [6.473670716s] END
I0625 15:52:22.411666 1 trace.go:81] Trace[1653311949]: "iptables save" (started: 2019-06-25 15:52:20.045978247 +0000 UTC m=+10588.679457977) (total time: 2.353493093s):
Trace[1653311949]: [2.353493093s] [2.353493093s] END
I0625 15:52:29.381792 1 trace.go:81] Trace[199931803]: "iptables restore" (started: 2019-06-25 15:52:22.667632919 +0000 UTC m=+10591.301112595) (total time: 5.745993896s):
Trace[199931803]: [5.745993896s] [5.487996542s] END
E0625 15:54:17.405678 1 reflector.go:283] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1407381&timeout=9m22s&timeoutSeconds=562&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0625 15:54:17.405919 1 reflector.go:283] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1407151&timeout=5m37s&timeoutSeconds=337&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0625 15:54:19.774713 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0625 15:54:19.957120 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0625 15:54:20.834483 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0625 15:54:20.967563 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0625 15:54:21.835205 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0625 15:54:21.969877 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0625 15:55:45.416567 1 reflector.go:283] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1407517&timeout=7m38s&timeoutSeconds=458&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0625 15:55:45.416841 1 reflector.go:283] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1407433&timeout=8m12s&timeoutSeconds=492&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0625 15:55:46.438618 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0625 15:55:46.438933 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0625 15:55:47.439357 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0625 15:55:47.443534 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-scheduler <==
I0625 15:56:26.028850 1 serving.go:319] Generated self-signed cert in-memory
W0625 15:56:26.537148 1 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0625 15:56:26.537266 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0625 15:56:26.537273 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0625 15:56:26.537290 1 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0625 15:56:26.537308 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0625 15:56:26.547426 1 server.go:142] Version: v1.15.0
I0625 15:56:26.547471 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0625 15:56:26.548238 1 authorization.go:47] Authorization is disabled
W0625 15:56:26.548411 1 authentication.go:55] Authentication is disabled
I0625 15:56:26.548582 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0625 15:56:26.549714 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
I0625 15:56:27.465774 1 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-scheduler...
I0625 15:56:43.273680 1 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
I0625 15:59:54.767353 1 leaderelection.go:281] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded
E0625 15:59:55.406997 1 server.go:254] lost master
lost lease

==> kubelet <==
-- Logs begin at Tue 2019-06-25 12:54:27 UTC, end at Tue 2019-06-25 15:59:57 UTC. --
Jun 25 15:55:52 minikube kubelet[3344]: E0625 15:55:52.053109 3344 reflector.go:125] object-"monitoring"/"default-token-tdtgt": Failed to list *v1.Secret: secrets "default-token-tdtgt" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "monitoring": no relationship found between node "minikube" and this object
Jun 25 15:55:52 minikube kubelet[3344]: E0625 15:55:52.053279 3344 reflector.go:125] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Jun 25 15:55:52 minikube kubelet[3344]: E0625 15:55:52.450780 3344 pod_workers.go:190] Error syncing pod a2fb3394-8c67-11e9-a6ce-0800272bdd47 ("ingest-0_mission-library-dev-main(a2fb3394-8c67-11e9-a6ce-0800272bdd47)"), skipping: failed to "StartContainer" for "ingest" with CrashLoopBackOff: "Back-off 20s restarting failed container=ingest pod=ingest-0_mission-library-dev-main(a2fb3394-8c67-11e9-a6ce-0800272bdd47)"
Jun 25 15:55:54 minikube kubelet[3344]: E0625 15:55:54.005704 3344 pod_workers.go:190] Error syncing pod ac391bf5-7528-4dde-9218-22c0bc69a544 ("coredns-5c98db65d4-lm4hm_kube-system(ac391bf5-7528-4dde-9218-22c0bc69a544)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-5c98db65d4-lm4hm_kube-system(ac391bf5-7528-4dde-9218-22c0bc69a544)"
Jun 25 15:55:56 minikube kubelet[3344]: E0625 15:55:56.837079 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:55:58 minikube kubelet[3344]: E0625 15:55:58.836716 3344 pod_workers.go:190] Error syncing pod 31d9ee8b7fb12e797dc981a8686f6b2b ("kube-scheduler-minikube_kube-system(31d9ee8b7fb12e797dc981a8686f6b2b)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(31d9ee8b7fb12e797dc981a8686f6b2b)"
Jun 25 15:55:59 minikube kubelet[3344]: E0625 15:55:59.104558 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:55:59 minikube kubelet[3344]: E0625 15:55:59.599835 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:56:00 minikube kubelet[3344]: E0625 15:56:00.655718 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:56:04 minikube kubelet[3344]: E0625 15:56:04.837598 3344 pod_workers.go:190] Error syncing pod 676a8a1e3e146d0c0f7c4f6e1e96b578 ("kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"
Jun 25 15:56:05 minikube kubelet[3344]: E0625 15:56:05.838341 3344 pod_workers.go:190] Error syncing pod a2fb3394-8c67-11e9-a6ce-0800272bdd47 ("ingest-0_mission-library-dev-main(a2fb3394-8c67-11e9-a6ce-0800272bdd47)"), skipping: failed to "StartContainer" for "ingest" with CrashLoopBackOff: "Back-off 20s restarting failed container=ingest pod=ingest-0_mission-library-dev-main(a2fb3394-8c67-11e9-a6ce-0800272bdd47)"
Jun 25 15:56:09 minikube kubelet[3344]: E0625 15:56:09.837710 3344 pod_workers.go:190] Error syncing pod 31d9ee8b7fb12e797dc981a8686f6b2b ("kube-scheduler-minikube_kube-system(31d9ee8b7fb12e797dc981a8686f6b2b)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(31d9ee8b7fb12e797dc981a8686f6b2b)"
Jun 25 15:56:09 minikube kubelet[3344]: E0625 15:56:09.838319 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:56:15 minikube kubelet[3344]: E0625 15:56:15.839324 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:56:19 minikube kubelet[3344]: E0625 15:56:19.837205 3344 pod_workers.go:190] Error syncing pod 676a8a1e3e146d0c0f7c4f6e1e96b578 ("kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"
Jun 25 15:56:21 minikube kubelet[3344]: E0625 15:56:21.839885 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:56:30 minikube kubelet[3344]: E0625 15:56:30.837524 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:56:31 minikube kubelet[3344]: E0625 15:56:31.838556 3344 pod_workers.go:190] Error syncing pod 676a8a1e3e146d0c0f7c4f6e1e96b578 ("kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"
Jun 25 15:56:32 minikube kubelet[3344]: E0625 15:56:32.836844 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:56:43 minikube kubelet[3344]: E0625 15:56:43.836976 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:56:43 minikube kubelet[3344]: E0625 15:56:43.836979 3344 pod_workers.go:190] Error syncing pod 676a8a1e3e146d0c0f7c4f6e1e96b578 ("kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"
Jun 25 15:56:47 minikube kubelet[3344]: E0625 15:56:47.837691 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:56:54 minikube kubelet[3344]: E0625 15:56:54.839411 3344 pod_workers.go:190] Error syncing pod 676a8a1e3e146d0c0f7c4f6e1e96b578 ("kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"
Jun 25 15:56:55 minikube kubelet[3344]: E0625 15:56:55.838627 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:56:58 minikube kubelet[3344]: E0625 15:56:58.840018 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:57:08 minikube kubelet[3344]: E0625 15:57:08.844468 3344 pod_workers.go:190] Error syncing pod 676a8a1e3e146d0c0f7c4f6e1e96b578 ("kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"
Jun 25 15:57:08 minikube kubelet[3344]: E0625 15:57:08.845832 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:57:12 minikube kubelet[3344]: E0625 15:57:12.838262 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:57:19 minikube kubelet[3344]: E0625 15:57:19.838927 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:57:22 minikube kubelet[3344]: E0625 15:57:22.839775 3344 pod_workers.go:190] Error syncing pod 676a8a1e3e146d0c0f7c4f6e1e96b578 ("kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"
Jun 25 15:57:26 minikube kubelet[3344]: E0625 15:57:26.837357 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:57:33 minikube kubelet[3344]: E0625 15:57:33.838571 3344 pod_workers.go:190] Error syncing pod 676a8a1e3e146d0c0f7c4f6e1e96b578 ("kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"
Jun 25 15:57:34 minikube kubelet[3344]: E0625 15:57:34.838812 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:57:37 minikube kubelet[3344]: E0625 15:57:37.843984 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:57:45 minikube kubelet[3344]: E0625 15:57:45.841480 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:57:48 minikube kubelet[3344]: E0625 15:57:48.837972 3344 pod_workers.go:190] Error syncing pod 676a8a1e3e146d0c0f7c4f6e1e96b578 ("kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(676a8a1e3e146d0c0f7c4f6e1e96b578)"
Jun 25 15:57:52 minikube kubelet[3344]: E0625 15:57:52.842837 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:57:59 minikube kubelet[3344]: E0625 15:57:59.839163 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:58:05 minikube kubelet[3344]: E0625 15:58:05.975568 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:58:07 minikube kubelet[3344]: E0625 15:58:07.032842 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:58:13 minikube kubelet[3344]: E0625 15:58:13.838637 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:58:21 minikube kubelet[3344]: E0625 15:58:21.838728 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:58:26 minikube kubelet[3344]: E0625 15:58:26.840791 3344 pod_workers.go:190] Error syncing pod 43d0c55c-8949-11e9-ae4b-0800272bdd47 ("nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-586cdc477c-fxsch_kube-system(43d0c55c-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:58:32 minikube kubelet[3344]: E0625 15:58:32.841827 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:58:44 minikube kubelet[3344]: E0625 15:58:44.837751 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:58:58 minikube kubelet[3344]: E0625 15:58:58.841963 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:59:12 minikube kubelet[3344]: E0625 15:59:12.873861 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:59:25 minikube kubelet[3344]: E0625 15:59:25.839850 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:59:39 minikube kubelet[3344]: E0625 15:59:39.848457 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"
Jun 25 15:59:54 minikube kubelet[3344]: E0625 15:59:54.486173 3344 pod_workers.go:190] Error syncing pod 4a35bf90-8949-11e9-ae4b-0800272bdd47 ("prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"), skipping: failed to "StartContainer" for "prometheus" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=prometheus pod=prometheus-main-0_monitoring(4a35bf90-8949-11e9-ae4b-0800272bdd47)"

==> storage-provisioner <==
E0625 14:55:19.507211 1 controller.go:682] Error watching for provisioning success, can't provision for claim "pigeon-dev/postgres-data": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list resource "events" in API group "" in the namespace "pigeon-dev"
E0625 15:11:00.250184 1 controller.go:682] Error watching for provisioning success, can't provision for claim "pigeon-dev/postgres-data": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list resource "events" in API group "" in the namespace "pigeon-dev"
E0625 15:43:58.268179 1 controller.go:682] Error watching for provisioning success, can't provision for claim "pigeon-dev/postgres-data": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list resource "events" in API group "" in the namespace "pigeon-dev"
E0625 15:48:08.815877 1 controller.go:682] Error watching for provisioning success, can't provision for claim "pigeon-dev/postgres-data": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list resource "events" in API group "" in the namespace "pigeon-dev"
E0625 15:49:27.948353 1 controller.go:682] Error watching for provisioning success, can't provision for claim "pigeon-dev/postgres-data": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list resource "events" in API group "" in the namespace "pigeon-dev"
E0625 15:54:16.742506 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=2337, ErrCode=NO_ERROR, debug=""
E0625 15:54:16.742584 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=2337, ErrCode=NO_ERROR, debug=""
E0625 15:54:17.078079 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=2337, ErrCode=NO_ERROR, debug=""
E0625 15:54:17.368157 1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:412: Failed to watch *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?resourceVersion=1407156&timeoutSeconds=375&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:54:17.369988 1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to watch *v1.PersistentVolumeClaim: Get https://10.96.0.1:443/api/v1/persistentvolumeclaims?resourceVersion=1407276&timeoutSeconds=493&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:54:17.448183 1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:379: Failed to watch *v1.StorageClass: Get https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=1392833&timeoutSeconds=529&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:54:19.198562 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:379: Failed to list *v1.StorageClass: Get https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:54:19.238653 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to list *v1.PersistentVolumeClaim: Get https://10.96.0.1:443/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:54:19.238735 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:412: Failed to list *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:54:20.709376 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:379: Failed to list *v1.StorageClass: Get https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:54:20.709475 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to list *v1.PersistentVolumeClaim: Get https://10.96.0.1:443/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:54:20.709518 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:412: Failed to list *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:54:21.715420 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:379: Failed to list *v1.StorageClass: Get https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:54:21.718876 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to list *v1.PersistentVolumeClaim: Get https://10.96.0.1:443/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:54:21.722914 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:412: Failed to list *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:55:45.360033 1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to watch *v1.PersistentVolumeClaim: Get https://10.96.0.1:443/api/v1/persistentvolumeclaims?resourceVersion=1407433&timeoutSeconds=402&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:55:45.360345 1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:379: Failed to watch *v1.StorageClass: Get https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=1407433&timeoutSeconds=500&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:55:45.360445 1 reflector.go:315] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:412: Failed to watch *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?resourceVersion=1407433&timeoutSeconds=535&watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:55:46.362367 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to list *v1.PersistentVolumeClaim: Get https://10.96.0.1:443/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:55:46.371195 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:412: Failed to list *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:55:46.394911 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:379: Failed to list *v1.StorageClass: Get https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:55:47.363439 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to list *v1.PersistentVolumeClaim: Get https://10.96.0.1:443/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:55:47.371924 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:412: Failed to list *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0625 15:55:47.396278 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:379: Failed to list *v1.StorageClass: Get https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
```

OS version: Ubuntu 16.04

minikube version: v1.2.0

Labels: triage/needs-information


All 4 comments

Do you mind sharing how much RAM you have, and which driver and options you used to start minikube?

My laptop has 16 GB of RAM.

I start minikube with `minikube start --vm-driver virtualbox --cpus 8`.

I also get this:

```
$ minikube status
host: Running
kubelet: Running
apiserver: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
```
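When the apiserver shows Error or Stopped, it can help to see which control-plane pods are actually restarting inside the VM. A minimal diagnostic sketch using standard minikube and kubectl commands (assumes the default "minikube" profile and that kubectl points at it, as the status output above shows; the exact pod name in the last command is an example and may differ):

```
# Overall component state reported by minikube
minikube status

# Recent cluster logs (same source as the dump above)
minikube logs | tail -n 200

# Look for CrashLoopBackOff among control-plane pods
kubectl get pods -n kube-system

# Inspect a failing pod's last termination reason (e.g. OOMKilled);
# replace the pod name with one from the previous command's output
kubectl describe pod -n kube-system kube-apiserver-minikube
```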

I deleted my minikube image and started over with `minikube start --vm-driver --memory 4096`, and that seemed to solve the issue. I would recommend making this easier to debug.

Also, after starting minikube with a given memory value, if you later try to set the memory parameter on the command line, minikube prints a message saying it won't be able to change it. That is fair enough, but the message is very easy to miss, so you can assume the configuration was applied when it wasn't. Something more robust might help, such as pausing initialization until the user acknowledges the warning.
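For reference, a sketch of the delete-and-recreate workflow described above, with the memory size made explicit. These are standard minikube commands; the 4096 MB value and the virtualbox driver are taken from this thread, and the CPU count is just an example to adjust for your machine:

```
# Remove the existing minikube VM and its state
minikube delete

# Recreate it with an explicit memory allocation (MB) and the virtualbox driver
minikube start --vm-driver virtualbox --memory 4096 --cpus 2

# Alternatively, persist the settings so future starts pick them up
minikube config set memory 4096
minikube config set cpus 2
```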
