Minikube 0.19.1: The connection to the server 192.168.99.118:8443 was refused - did you specify the right host or port?

Created on 1 Jun 2017 · 14 Comments · Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Minikube version (use minikube version):

0.19.1

Environment:

  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="17.04 (Zesty Zapus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 17.04"
VERSION_ID="17.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=zesty
UBUNTU_CODENAME=zesty
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName):

virtualbox

  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION):

v0.18.0

  • Install tools:
  • Others:

What happened:

Downloaded minikube 0.19.1:

$ minikube start
Starting local Kubernetes v1.6.4 cluster...
Starting VM...
Downloading Minikube ISO
 89.51 MB / 89.51 MB [==============================================] 100.00% 0s
Moving files into cluster...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.

Then:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 192.168.99.118:8443 was refused - did you specify the right host or port?
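A quick way to check which endpoint kubectl is targeting, and whether it matches the VM, is roughly the following (a sketch, assuming the default minikube profile and context names):

$ minikube ip                      # IP address of the minikube VM
$ kubectl config view --minify     # shows the server: entry for the current context
$ minikube status                  # reports whether the VM and localkube are running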

What you expected to happen:

No error above.

How to reproduce it (as minimally and precisely as possible):

As above.

Anything else we need to know:

Also reported earlier in #1498

kind/bug lifecycle/rotten

Most helpful comment

Hi, you may have an outdated kubectl config. Try running kubectl config delete-context minikube. Optionally run rm -Rf ~/.minikube to start from a clean slate.

All 14 comments

I have the same issue on macOS, using the brew cask install method.

(Closed by mistake)

Hi, you may have an outdated kubectl config. Try running kubectl config delete-context minikube. Optionally run rm -Rf ~/.minikube to start from a clean slate.
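For reference, a full reset along those lines might look roughly like this (a sketch; it assumes the default "minikube" context/cluster names and the default ~/.minikube location):

$ kubectl config delete-context minikube   # drop the stale context from ~/.kube/config
$ kubectl config delete-cluster minikube   # drop the matching cluster entry too
$ minikube delete                          # remove the existing VM
$ rm -rf ~/.minikube                       # optional: wipe all cached minikube state
$ minikube start                           # recreate the cluster and regenerate kubeconfig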

Can you provide minikube logs?

Ping, the output of "minikube logs" would help.
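For anyone collecting the same diagnostics, roughly (a sketch; localkube is the systemd unit these minikube releases run inside the VM):

$ minikube status                  # VM / localkube / kubectl config state
$ minikube logs                    # localkube journal, fetched from the VM
$ minikube ssh                     # open a shell inside the VM, then:
$ sudo systemctl status localkube  # confirm the localkube service is active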

(Sorry about the delay)

@woutor Thanks. Tried that, didn't help.

@dlorenc @r2d4 Sure here are the logs:

-- Logs begin at Tue 2017-06-06 23:52:48 UTC, end at Tue 2017-06-06 23:55:45 UTC. --
Jun 06 23:53:05 minikube systemd[1]: Starting Localkube...
Jun 06 23:53:05 minikube localkube[3289]: name = kubeetcd
Jun 06 23:53:05 minikube localkube[3289]: data dir = /var/lib/localkube/etcd
Jun 06 23:53:05 minikube localkube[3289]: member dir = /var/lib/localkube/etcd/member
Jun 06 23:53:05 minikube localkube[3289]: heartbeat = 100ms
Jun 06 23:53:05 minikube localkube[3289]: election = 1000ms
Jun 06 23:53:05 minikube localkube[3289]: snapshot count = 10000
Jun 06 23:53:05 minikube localkube[3289]: advertise client URLs = http://0.0.0.0:2379
Jun 06 23:53:05 minikube localkube[3289]: initial advertise peer URLs = http://0.0.0.0:2380
Jun 06 23:53:05 minikube localkube[3289]: initial cluster = kubeetcd=http://0.0.0.0:2380
Jun 06 23:53:05 minikube localkube[3289]: starting member fcf2ad36debdd5bb in cluster 7f055ae3b0912328
Jun 06 23:53:05 minikube localkube[3289]: fcf2ad36debdd5bb became follower at term 0
Jun 06 23:53:05 minikube localkube[3289]: newRaft fcf2ad36debdd5bb [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
Jun 06 23:53:05 minikube localkube[3289]: fcf2ad36debdd5bb became follower at term 1
Jun 06 23:53:05 minikube localkube[3289]: starting server... [version: 3.0.17, cluster version: to_be_decided]
Jun 06 23:53:05 minikube localkube[3289]: localkube host ip address: 10.0.2.15
Jun 06 23:53:05 minikube localkube[3289]: added member fcf2ad36debdd5bb [http://0.0.0.0:2380] to cluster 7f055ae3b0912328
Jun 06 23:53:05 minikube localkube[3289]: Starting apiserver...
Jun 06 23:53:05 minikube localkube[3289]: Waiting for apiserver to be healthy...
Jun 06 23:53:05 minikube localkube[3289]: W0606 23:53:05.825829    3289 authentication.go:362] AnonymousAuth is not allowed with the AllowAll authorizer.  Resetting AnonymousAuth to false. You should use a different authorizer
Jun 06 23:53:06 minikube localkube[3289]: fcf2ad36debdd5bb is starting a new election at term 1
Jun 06 23:53:06 minikube localkube[3289]: fcf2ad36debdd5bb became candidate at term 2
Jun 06 23:53:06 minikube localkube[3289]: fcf2ad36debdd5bb received vote from fcf2ad36debdd5bb at term 2
Jun 06 23:53:06 minikube localkube[3289]: fcf2ad36debdd5bb became leader at term 2
Jun 06 23:53:06 minikube localkube[3289]: raft.node: fcf2ad36debdd5bb elected leader fcf2ad36debdd5bb at term 2
Jun 06 23:53:06 minikube localkube[3289]: published {Name:kubeetcd ClientURLs:[http://0.0.0.0:2379]} to cluster 7f055ae3b0912328
Jun 06 23:53:06 minikube localkube[3289]: setting up the initial cluster version to 3.0
Jun 06 23:53:06 minikube localkube[3289]: set the initial cluster version to 3.0
Jun 06 23:53:06 minikube localkube[3289]: enabled capabilities for version 3.0
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.518603    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.522492    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.523087    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.523310    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.523442    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.523557    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.523673    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.523844    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.523977    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.524125    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.524238    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.524421    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.524860    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.525590    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.525813    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.525912    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.536471    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.536524    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.536804    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.537257    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.559471    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.560320    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.561361    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.561653    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.561820    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.562580    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.564067    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.564427    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.564768    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.565126    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.565332    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.565543    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.565717    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.565875    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.565998    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.566205    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.566283    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.566331    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.566385    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.566444    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.567690    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.567812    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.567883    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: W0606 23:53:06.567961    3289 storage_codec.go:52] storage type "etcd2" does not support media type "application/vnd.kubernetes.protobuf", using "application/json"
Jun 06 23:53:06 minikube localkube[3289]: E0606 23:53:06.628826    3289 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.LimitRange: Get https://localhost:8443/api/v1/limitranges?resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Jun 06 23:53:06 minikube localkube[3289]: E0606 23:53:06.628904    3289 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Namespace: Get https://localhost:8443/api/v1/namespaces?resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Jun 06 23:53:06 minikube localkube[3289]: E0606 23:53:06.628943    3289 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ServiceAccount: Get https://localhost:8443/api/v1/serviceaccounts?resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Jun 06 23:53:06 minikube localkube[3289]: E0606 23:53:06.629314    3289 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *storage.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Jun 06 23:53:06 minikube localkube[3289]: E0606 23:53:06.629807    3289 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ResourceQuota: Get https://localhost:8443/api/v1/resourcequotas?resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Jun 06 23:53:06 minikube localkube[3289]: E0606 23:53:06.630342    3289 reflector.go:201] k8s.io/minikube/vendor/k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Secret: Get https://localhost:8443/api/v1/secrets?resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Jun 06 23:53:06 minikube localkube[3289]: [restful] 2017/06/06 23:53:06 log.go:30: [restful/swagger] listing is available at https://10.0.2.15:8443/swaggerapi/
Jun 06 23:53:06 minikube localkube[3289]: [restful] 2017/06/06 23:53:06 log.go:30: [restful/swagger] https://10.0.2.15:8443/swaggerui/ is mapped to folder /swagger-ui/
Jun 06 23:53:06 minikube localkube[3289]: I0606 23:53:06.795283    3289 serve.go:79] Serving securely on 0.0.0.0:8443
Jun 06 23:53:06 minikube localkube[3289]: I0606 23:53:06.795530    3289 serve.go:94] Serving insecurely on 127.0.0.1:8080
Jun 06 23:53:06 minikube systemd[1]: Started Localkube.
Jun 06 23:53:06 minikube localkube[3289]: I0606 23:53:06.825926    3289 ready.go:30] Performing healthcheck on http://127.0.0.1:8080/healthz
Jun 06 23:53:06 minikube localkube[3289]: I0606 23:53:06.826437    3289 ready.go:42] Got healthcheck response: [+]ping ok
Jun 06 23:53:06 minikube localkube[3289]: [-]poststarthook/bootstrap-controller failed: reason withheld
Jun 06 23:53:06 minikube localkube[3289]: [+]poststarthook/extensions/third-party-resources ok
Jun 06 23:53:06 minikube localkube[3289]: [-]poststarthook/ca-registration failed: reason withheld
Jun 06 23:53:06 minikube localkube[3289]: healthz check failed
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.666369    3289 trace.go:61] Trace "Create /api/v1/namespaces/kube-system/configmaps" (started 2017-06-06 23:53:06.825611262 +0000 UTC):
Jun 06 23:53:07 minikube localkube[3289]: [17.389µs] [17.389µs] About to convert to expected version
Jun 06 23:53:07 minikube localkube[3289]: [65.828µs] [48.439µs] Conversion done
Jun 06 23:53:07 minikube localkube[3289]: [837.542234ms] [837.476406ms] About to store object in database
Jun 06 23:53:07 minikube localkube[3289]: [840.579055ms] [3.036821ms] Object stored in database
Jun 06 23:53:07 minikube localkube[3289]: [840.596963ms] [17.908µs] Self-link added
Jun 06 23:53:07 minikube localkube[3289]: "Create /api/v1/namespaces/kube-system/configmaps" [840.674421ms] [77.458µs] END
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.672209    3289 trace.go:61] Trace "Create /api/v1/namespaces/default/services" (started 2017-06-06 23:53:06.82994091 +0000 UTC):
Jun 06 23:53:07 minikube localkube[3289]: [13.69µs] [13.69µs] About to convert to expected version
Jun 06 23:53:07 minikube localkube[3289]: [65.9µs] [52.21µs] Conversion done
Jun 06 23:53:07 minikube localkube[3289]: [833.908578ms] [833.842678ms] About to store object in database
Jun 06 23:53:07 minikube localkube[3289]: [842.122766ms] [8.214188ms] Object stored in database
Jun 06 23:53:07 minikube localkube[3289]: [842.13627ms] [13.504µs] Self-link added
Jun 06 23:53:07 minikube localkube[3289]: "Create /api/v1/namespaces/default/services" [842.208549ms] [72.279µs] END
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.826354    3289 ready.go:30] Performing healthcheck on http://127.0.0.1:8080/healthz
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.830087    3289 ready.go:42] Got healthcheck response: ok
Jun 06 23:53:07 minikube localkube[3289]: apiserver is ready!
Jun 06 23:53:07 minikube localkube[3289]: Starting controller-manager...
Jun 06 23:53:07 minikube localkube[3289]: Waiting for controller-manager to be healthy...
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.832016    3289 leaderelection.go:179] attempting to acquire leader lease...
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.838839    3289 leaderelection.go:189] successfully acquired lease kube-system/kube-controller-manager
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.845124    3289 event.go:217] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"4a2317c0-4b13-11e7-90b7-080027e49652",APIVersion:"v1", ResourceVersion:"13", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.857692    3289 controllermanager.go:437] Started "podgc"
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.867462    3289 controllermanager.go:437] Started "garbagecollector"
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.867950    3289 controllermanager.go:437] Started "daemonset"
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.868429    3289 controllermanager.go:437] Started "job"
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.868792    3289 controllermanager.go:437] Started "disruption"
Jun 06 23:53:07 minikube localkube[3289]: W0606 23:53:07.868890    3289 controllermanager.go:421] "bootstrapsigner" is disabled
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.869205    3289 controllermanager.go:437] Started "endpoint"
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.869611    3289 controllermanager.go:437] Started "resourcequota"
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.870351    3289 controllermanager.go:437] Started "replicaset"
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.870914    3289 controllermanager.go:437] Started "statefuleset"
Jun 06 23:53:07 minikube localkube[3289]: E0606 23:53:07.871151    3289 certificates.go:38] Failed to start certificate controller: open /etc/kubernetes/ca/ca.pem: no such file or directory
Jun 06 23:53:07 minikube localkube[3289]: W0606 23:53:07.871293    3289 controllermanager.go:434] Skipping "certificatesigningrequests"
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.871605    3289 controllermanager.go:437] Started "replicationcontroller"
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.871852    3289 controllermanager.go:437] Started "ttl"
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.870629    3289 disruption.go:269] Starting disruption controller
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.871014    3289 replica_set.go:155] Starting ReplicaSet controller
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.870459    3289 daemoncontroller.go:199] Starting Daemon Sets controller manager
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.871261    3289 stateful_set.go:144] Starting statefulset controller
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.870565    3289 garbagecollector.go:111] Garbage Collector: Initializing
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.870670    3289 resource_quota_controller.go:240] Starting resource quota controller
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.872866    3289 replication_controller.go:150] Starting RC Manager
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.872982    3289 ttlcontroller.go:117] Starting TTL controller
Jun 06 23:53:07 minikube localkube[3289]: I0606 23:53:07.985084    3289 garbagecollector.go:116] Garbage Collector: All resource monitors have synced. Proceeding to collect garbage
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.381205    3289 controllermanager.go:437] Started "namespace"
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.381932    3289 namespace_controller.go:189] Starting the NamespaceController
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.384943    3289 controllermanager.go:437] Started "deployment"
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.386345    3289 controllermanager.go:437] Started "horizontalpodautoscaling"
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.386629    3289 horizontal.go:139] Starting HPA Controller
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.385309    3289 deployment_controller.go:151] Starting deployment controller
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.387842    3289 controllermanager.go:437] Started "cronjob"
Jun 06 23:53:08 minikube localkube[3289]: W0606 23:53:08.388191    3289 controllermanager.go:421] "tokencleaner" is disabled
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.388138    3289 cronjob_controller.go:95] Starting CronJobManager
Jun 06 23:53:08 minikube localkube[3289]: E0606 23:53:08.389303    3289 util.go:45] Metric for serviceaccount_controller already registered
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.389489    3289 controllermanager.go:437] Started "serviceaccount"
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.389584    3289 plugins.go:101] No cloud provider specified.
Jun 06 23:53:08 minikube localkube[3289]: W0606 23:53:08.389658    3289 controllermanager.go:449] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
Jun 06 23:53:08 minikube localkube[3289]: W0606 23:53:08.389737    3289 controllermanager.go:453] Unsuccessful parsing of service CIDR : invalid CIDR address:
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.389906    3289 nodecontroller.go:219] Sending events to api server.
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.390029    3289 taint_controller.go:157] Sending events to api server.
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.390256    3289 serviceaccounts_controller.go:122] Starting ServiceAccount controller
Jun 06 23:53:08 minikube localkube[3289]: E0606 23:53:08.390417    3289 controllermanager.go:494] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
Jun 06 23:53:08 minikube localkube[3289]: W0606 23:53:08.390536    3289 controllermanager.go:506] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.390631    3289 controllermanager.go:519] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.391430    3289 attach_detach_controller.go:223] Starting Attach Detach Controller
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.484165    3289 disruption.go:277] Sending events to api server.
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.491617    3289 taint_controller.go:180] Starting NoExecuteTaintManager
Jun 06 23:53:08 minikube localkube[3289]: controller-manager is ready!
Jun 06 23:53:08 minikube localkube[3289]: Starting scheduler...
Jun 06 23:53:08 minikube localkube[3289]: Waiting for scheduler to be healthy...
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.837347    3289 leaderelection.go:179] attempting to acquire leader lease...
Jun 06 23:53:08 minikube localkube[3289]: E0606 23:53:08.838891    3289 server.go:157] unable to register configz: register config "componentconfig" twice
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.865764    3289 leaderelection.go:189] successfully acquired lease kube-system/kube-scheduler
Jun 06 23:53:08 minikube localkube[3289]: I0606 23:53:08.866051    3289 event.go:217] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"4abf18d3-4b13-11e7-90b7-080027e49652", APIVersion:"v1", ResourceVersion:"24", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
Jun 06 23:53:09 minikube localkube[3289]: scheduler is ready!
Jun 06 23:53:09 minikube localkube[3289]: Starting kubelet...
Jun 06 23:53:09 minikube localkube[3289]: Waiting for kubelet to be healthy...
Jun 06 23:53:09 minikube localkube[3289]: I0606 23:53:09.834906    3289 feature_gate.go:144] feature gates: map[]
Jun 06 23:53:09 minikube localkube[3289]: E0606 23:53:09.835333    3289 server.go:312] unable to register configz: register config "componentconfig" twice
Jun 06 23:53:09 minikube localkube[3289]: W0606 23:53:09.835713    3289 server.go:715] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Using default client config instead.
Jun 06 23:53:09 minikube localkube[3289]: I0606 23:53:09.995663    3289 docker.go:364] Connecting to docker on unix:///var/run/docker.sock
Jun 06 23:53:09 minikube localkube[3289]: I0606 23:53:09.996086    3289 docker.go:384] Start docker client with request timeout=2m0s
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.006666    3289 manager.go:143] cAdvisor running in container: "/system.slice/localkube.service"
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.011822    3289 fs.go:117] Filesystem partitions: map[/dev/sda1:{mountpoint:/mnt/sda1 major:8 minor:1 fsType:ext4 blockSize:0}]
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.012654    3289 manager.go:198] Machine: {NumCores:2 CpuFrequency:2711940 MemoryCapacity:2097647616 MachineID:02b51a9198734859a80fdb14c10008d2 SystemUUID:5AAB2A2E-B655-4649-B70E-CFA9A52B5A62 BootID:45ef9ba5-fb76-430c-9853-5db4bad3314a Filesystems:[{Device:/dev/sda1 Capacity:19163156480 Type:vfs Inodes:2434064 HasInodes:true} {Device:rootfs Capacity:0 Type:vfs Inodes:0 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:20971520000 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:08:00:27:e4:96:52Speed:1000 Mtu:1500} {Name:eth1 MacAddress:08:00:27:db:e2:51 Speed:1000 Mtu:1500} {Name:sit0 MacAddress:00:00:00:00 Speed:0 Mtu:1480}] Topology:[{Id:0 Memory:2097647616 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:6291456 Type:Unified Level:3}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:6291456 Type:Unified Level:3}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.014018    3289 manager.go:204] Version: {KernelVersion:4.7.2 ContainerOsVersion:Buildroot 2016.08 DockerVersion:1.11.1 CadvisorVersion: CadvisorRevision:}
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.014413    3289 server.go:509] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jun 06 23:53:10 minikube localkube[3289]: W0606 23:53:10.016745    3289 container_manager_linux.go:218] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.017520    3289 container_manager_linux.go:245] container manager verified user specified cgroup-root exists: /
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.017552    3289 container_manager_linux.go:250] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false EnableCRI:true NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[]}
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.018316    3289 kubelet.go:255] Adding manifest file: /etc/kubernetes/manifests
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.018338    3289 kubelet.go:265] Watching apiserver
Jun 06 23:53:10 minikube localkube[3289]: W0606 23:53:10.024184    3289 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.024348    3289 kubelet.go:494] Hairpin mode set to "hairpin-veth"
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.037586    3289 docker_service.go:187] Docker cri networking managed by kubernetes.io/no-op
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.038925    3289 docker_service.go:204] Setting cgroupDriver to cgroupfs
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.040944    3289 remote_runtime.go:41] Connecting to runtime service /var/run/dockershim.sock
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.041817    3289 kuberuntime_manager.go:171] Container runtime docker initialized, version: 1.11.1, apiVersion: 1.23.0
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.042059    3289 kuberuntime_manager.go:902] updating runtime config through cri with podcidr 10.180.1.0/24
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.042275    3289 docker_service.go:277] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.180.1.0/24,},}
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.042593    3289 kubelet_network.go:326] Setting Pod CIDR:  -> 10.180.1.0/24
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.043637    3289 server.go:869] Started kubelet v1.6.4
Jun 06 23:53:10 minikube localkube[3289]: E0606 23:53:10.044021    3289 kubelet.go:1165] Image garbage collection failed: unable to find data for container /
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.045637    3289 server.go:127] Starting to listen on 0.0.0.0:10250
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.046599    3289 server.go:294] Adding debug handlers to kubelet server.
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.046814    3289 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 06 23:53:10 minikube localkube[3289]: E0606 23:53:10.062172    3289 kubelet.go:1661] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
Jun 06 23:53:10 minikube localkube[3289]: E0606 23:53:10.062205    3289 kubelet.go:1669] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
Jun 06 23:53:10 minikube localkube[3289]: E0606 23:53:10.074191    3289 event.go:259] Could not construct reference to: '&v1.Node{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"beta.kubernetes.io/os":"linux", "beta.kubernetes.io/arch":"amd64", "kubernetes.io/hostname":"minikube"}, Annotations:map[string]string{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.NodeSpec{PodCIDR:"", ExternalID:"minikube", ProviderID:"", Unschedulable:false, Taints:[]v1.Taint(nil)}, Status:v1.NodeStatus{Capacity:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:2000, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"DecimalSI"}, "pods":resource.Quantity{i:resource.int64Amount{value:110, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:2097647616, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Allocatable:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:1992790016, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:2000,scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"DecimalSI"}, "pods":resource.Quantity{i:resource.int64Amount{value:110, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"DecimalSI"}}, Phase:"", Conditions:[]v1.NodeCondition{v1.NodeCondition{Type:"OutOfDisk", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63632389990, nsec:62161334, loc:(*time.Location)(0x6b23540)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63632389
Jun 06 23:53:10 minikube localkube[3289]: 990, nsec:62161334, loc:(*time.Location)(0x6b23540)}}, Reason:"KubeletHasSufficientDisk", Message:"kubelet has sufficient disk space available"}, v1.NodeCondition{Type:"MemoryPressure", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63632389990, nsec:62215365, loc:(*time.Location)(0x6b23540)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63632389990, nsec:62215365, loc:(*time.Location)(0x6b23540)}}, Reason:"KubeletHasSufficientMemory", Message:"kubelet has sufficient memory available"}, v1.NodeCondition{Type:"DiskPressure", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63632389990, nsec:62223040, loc:(*time.Location)(0x6b23540)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63632389990, nsec:62223040, loc:(*time.Location)(0x6b23540)}}, Reason:"KubeletHasNoDiskPressure", Message:"kubelet has no disk pressure"}, v1.NodeCondition{Type:"Ready", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63632389990, nsec:62225868, loc:(*time.Location)(0x6b23540)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63632389990, nsec:62225868, loc:(*time.Location)(0x6b23540)}}, Reason:"KubeletNotReady", Message:"container runtime is down,PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s,network state unknown"}}, Addresses:[]v1.NodeAddress{v1.NodeAddress{Type:"LegacyHostIP", Address:"192.168.99.100"}, v1.NodeAddress{Type:"InternalIP", Address:"192.168.99.100"}, v1.NodeAddress{Type:"Hostname", Address:"minikube"}}, DaemonEndpoints:v1.NodeDaemonEndpoints{KubeletEndpoint:v1.DaemonEndpoint{Port:10250}}, NodeInfo:v1.NodeSystemInfo{MachineID:"02b51a9198734859a80fdb14c10008d2", SystemUUID:"5AAB2A2E-B655-4649-B70E-CFA9A52B5A62", BootID:"45ef9ba5-fb76-430c-9853-5db4bad3314a", KernelVersion:"4.7.2", OSImage:"Buildroot 2016.08", ContainerRuntimeVersion:"docker://1.11.1", KubeletVersion:"v1.6.4", KubeProxyVersion:"v1.6.4", OperatingSystem:"linux", Architecture:"amd64"}, Images:[]v1.ContainerImage(nil), VolumesInUse:[]v1.UniqueVolumeName(nil), VolumesAttached
Jun 06 23:53:10 minikube localkube[3289]: :[]v1.AttachedVolume(nil)}}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'NodeAllocatableEnforced' 'Updated Node Allocatable limit across pods'
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.076093    3289 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.076191    3289 volume_manager.go:248] Starting Kubelet Volume Manager
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.076194    3289 status_manager.go:140] Starting to sync pod status with apiserver
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.076239    3289 kubelet.go:1741] Starting kubelet main sync loop.
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.076250    3289 kubelet.go:1752] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.079341    3289 factory.go:309] Registering Docker factory
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.079973    3289 factory.go:89] Registering Rkt factory
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.080074    3289 factory.go:54] Registering systemd factory
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.080584    3289 factory.go:86] Registering Raw factory
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.080757    3289 manager.go:1106] Started watching for new ooms in manager
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.086751    3289 oomparser.go:185] oomparser using systemd
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.087381    3289 manager.go:288] Starting recovery of all containers
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.147607    3289 manager.go:293] Recovery completed
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.148801    3289 rkt.go:56] starting detectRktContainers thread
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.199221    3289 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.200449    3289 kubelet_node_status.go:77] Attempting to register node minikube
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.202969    3289 kubelet_node_status.go:80] Successfully registered node minikube
Jun 06 23:53:10 minikube localkube[3289]: E0606 23:53:10.203511    3289 actual_state_of_world.go:461] Failed to set statusUpdateNeeded to needed true because nodeName="minikube"  does not exist
Jun 06 23:53:10 minikube localkube[3289]: E0606 23:53:10.205167    3289 actual_state_of_world.go:475] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true because nodeName="minikube"  does not exist
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.209483    3289 kuberuntime_manager.go:902] updating runtime config through cri with podcidr
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.213357    3289 docker_service.go:277] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.215292    3289 kubelet_network.go:326] Setting Pod CIDR: 10.180.1.0/24 ->
Jun 06 23:53:10 minikube localkube[3289]: kubelet is ready!
Jun 06 23:53:10 minikube localkube[3289]: Starting proxy...
Jun 06 23:53:10 minikube localkube[3289]: Waiting for proxy to be healthy...
Jun 06 23:53:10 minikube localkube[3289]: E0606 23:53:10.837352    3289 server.go:139] unable to register configz: register config "componentconfig" twice
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.859779    3289 server.go:225] Using iptables Proxier.
Jun 06 23:53:10 minikube localkube[3289]: W0606 23:53:10.862501    3289 proxier.go:298] clusterCIDR not specified, unable to distinguish between internal and external traffic
Jun 06 23:53:10 minikube localkube[3289]: I0606 23:53:10.862555    3289 server.go:249] Tearing down userspace rules.
Jun 06 23:53:11 minikube localkube[3289]: I0606 23:53:10.999511    3289 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Jun 06 23:53:11 minikube localkube[3289]: I0606 23:53:11.004534    3289 conntrack.go:66] Setting conntrack hashsize to 32768
Jun 06 23:53:11 minikube localkube[3289]: I0606 23:53:11.009759    3289 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Jun 06 23:53:11 minikube localkube[3289]: I0606 23:53:11.010235    3289 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Jun 06 23:53:11 minikube localkube[3289]: proxy is ready!
Jun 06 23:53:11 minikube localkube[3289]: Starting storage-provisioner...
Jun 06 23:53:11 minikube localkube[3289]: Waiting for storage-provisioner to be healthy...
Jun 06 23:53:11 minikube localkube[3289]: I0606 23:53:11.859135    3289 controller.go:249] Starting provisioner controller 4c86f1d2-4b13-11e7-90b7-080027e49652!
Jun 06 23:53:12 minikube localkube[3289]: storage-provisioner is ready!
Jun 06 23:53:13 minikube localkube[3289]: I0606 23:53:13.491769    3289 nodecontroller.go:612] Initializing eviction metric for zone:
Jun 06 23:53:13 minikube localkube[3289]: W0606 23:53:13.492016    3289 nodecontroller.go:947] Missing timestamp for Node minikube. Assuming now as a timestamp.
Jun 06 23:53:13 minikube localkube[3289]: I0606 23:53:13.492123    3289 nodecontroller.go:863] NodeController detected that zone  is now in state Normal.
Jun 06 23:53:13 minikube localkube[3289]: I0606 23:53:13.492574    3289 event.go:217] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"4b8bfd86-4b13-11e7-90b7-080027e49652", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in NodeController
Jun 06 23:53:15 minikube localkube[3289]: I0606 23:53:15.248090    3289 reconciler.go:231] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/8538d869917f857f9d157e66b059d05b-addons" (spec.Name: "addons") pod "8538d869917f857f9d157e66b059d05b" (UID: "8538d869917f857f9d157e66b059d05b")
Jun 06 23:53:15 minikube localkube[3289]: I0606 23:53:15.350935    3289 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/8538d869917f857f9d157e66b059d05b-addons" (spec.Name: "addons") pod "8538d869917f857f9d157e66b059d05b" (UID: "8538d869917f857f9d157e66b059d05b").
Jun 06 23:53:15 minikube localkube[3289]: I0606 23:53:15.409332    3289 kuberuntime_manager.go:458] Container {Name:kube-addon-manager Image:gcr.io/google-containers/kube-addon-manager:v6.4-beta.1 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:5 scale:-3} d:{Dec:<nil>} s:5m Format:DecimalSI} memory:{i:{value:52428800 scale:0} d:{Dec:<nil>} s:50Mi Format:BinarySI}]} VolumeMounts:[{Name:addons ReadOnly:true MountPath:/etc/kubernetes/ SubPath:}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jun 06 23:53:30 minikube localkube[3289]: W0606 23:53:30.305689    3289 conversion.go:110] Could not get instant cpu stats: different number of cpus
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.299754    3289 event.go:217] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"kubernetes-dashboard", UID:"58b72c1d-4b13-11e7-90b7-080027e49652", APIVersion:"v1", ResourceVersion:"74", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-cqchx
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.312943    3289 replication_controller.go:206] Observed updated replication controller kubernetes-dashboard. Desired pod count change: 1->1
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.318663    3289 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kubernetes-dashboard-cqchx", UID:"58b79ee3-4b13-11e7-90b7-080027e49652", APIVersion:"v1", ResourceVersion:"75", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kubernetes-dashboard-cqchx to minikube
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.328134    3289 replication_controller.go:206] Observed updated replication controller kubernetes-dashboard. Desired pod count change: 1->1
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.376646    3289 reconciler.go:231] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/58b79ee3-4b13-11e7-90b7-080027e49652-default-token-tcf2l" (spec.Name: "default-token-tcf2l") pod "58b79ee3-4b13-11e7-90b7-080027e49652" (UID: "58b79ee3-4b13-11e7-90b7-080027e49652")
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.447723    3289 event.go:217] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-dns", UID:"58cc2a8c-4b13-11e7-90b7-080027e49652", APIVersion:"extensions", ResourceVersion:"86", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-dns-196007617 to 1
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.461375    3289 event.go:217] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-dns-196007617", UID:"58ccd127-4b13-11e7-90b7-080027e49652", APIVersion:"extensions", ResourceVersion:"87", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-dns-196007617-txzkr
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.494152    3289 event.go:217] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-196007617-txzkr", UID:"58ced53e-4b13-11e7-90b7-080027e49652", APIVersion:"v1", ResourceVersion:"89", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-dns-196007617-txzkr to minikube
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.498781    3289 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/58b79ee3-4b13-11e7-90b7-080027e49652-default-token-tcf2l" (spec.Name: "default-token-tcf2l") pod "58b79ee3-4b13-11e7-90b7-080027e49652" (UID: "58b79ee3-4b13-11e7-90b7-080027e49652").
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.626193    3289 kuberuntime_manager.go:458] Container {Name:kubernetes-dashboard Image:gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-tcf2l ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.678681    3289 reconciler.go:231] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/configmap/58ced53e-4b13-11e7-90b7-080027e49652-kube-dns-config"(spec.Name: "kube-dns-config") pod "58ced53e-4b13-11e7-90b7-080027e49652" (UID: "58ced53e-4b13-11e7-90b7-080027e49652")
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.678755    3289 reconciler.go:231] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/58ced53e-4b13-11e7-90b7-080027e49652-default-token-tcf2l" (spec.Name: "default-token-tcf2l") pod "58ced53e-4b13-11e7-90b7-080027e49652" (UID: "58ced53e-4b13-11e7-90b7-080027e49652")
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.784950    3289 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/configmap/58ced53e-4b13-11e7-90b7-080027e49652-kube-dns-config" (spec.Name:"kube-dns-config") pod "58ced53e-4b13-11e7-90b7-080027e49652" (UID: "58ced53e-4b13-11e7-90b7-080027e49652").
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.790099    3289 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/58ced53e-4b13-11e7-90b7-080027e49652-default-token-tcf2l" (spec.Name: "default-token-tcf2l") pod "58ced53e-4b13-11e7-90b7-080027e49652" (UID: "58ced53e-4b13-11e7-90b7-080027e49652").
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.797952    3289 kuberuntime_manager.go:458] Container {Name:kubedns Image:gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.2 Command:[] Args:[--domain=cluster.local. --dns-port=10053 --config-map=kube-dns --v=2] WorkingDir: Ports:[{Name:dns-local HostPort:0 ContainerPort:10053 Protocol:UDP HostIP:} {Name:dns-tcp-local HostPort:0 ContainerPort:10053 Protocol:TCP HostIP:} {Name:metrics HostPort:0 ContainerPort:10055 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:PROMETHEUS_PORT Value:10055 ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI}]} VolumeMounts:[{Name:kube-dns-config ReadOnly:false MountPath:/kube-dns-config SubPath:} {Name:default-token-tcf2l ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/kubedns,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readiness,Port:8081,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says thatwe should restart it.
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.798014    3289 kuberuntime_manager.go:458] Container {Name:dnsmasq Image:gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2 Command:[] Args:[-v=2 -logtostderr -configDir=/etc/k8s/dns/dnsmasq-nanny -restartDnsmasq=true -- -k --cache-size=1000 --log-facility=- --server=127.0.0.1#10053] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:150 scale:-3} d:{Dec:<nil>} s:150m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:kube-dns-config ReadOnly:false MountPath:/etc/k8s/dns/dnsmasq-nanny SubPath:} {Name:default-token-tcf2l ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck/dnsmasq,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jun 06 23:53:32 minikube localkube[3289]: I0606 23:53:32.798089    3289 kuberuntime_manager.go:458] Container {Name:sidecar Image:gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.2 Command:[] Args:[--v=2 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,A --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,A] WorkingDir: Ports:[{Name:metrics HostPort:0 ContainerPort:10054 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:default-token-tcf2l ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:10054,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jun 06 23:53:32 minikube localkube[3289]: W0606 23:53:32.817528    3289 docker_sandbox.go:263] Couldn't find network status for kube-system/kubernetes-dashboard-cqchx through plugin: invalid network status for
Jun 06 23:53:32 minikube localkube[3289]: W0606 23:53:32.913571    3289 docker_sandbox.go:263] Couldn't find network status for kube-system/kube-dns-196007617-txzkr through plugin: invalid network status for
Jun 06 23:53:33 minikube localkube[3289]: W0606 23:53:33.248337    3289 docker_sandbox.go:263] Couldn't find network status for kube-system/kube-dns-196007617-txzkr through plugin: invalid network status for
Jun 06 23:53:33 minikube localkube[3289]: W0606 23:53:33.257453    3289 docker_sandbox.go:263] Couldn't find network status for kube-system/kubernetes-dashboard-cqchx through plugin: invalid network status for
Jun 06 23:53:40 minikube localkube[3289]: W0606 23:53:40.356039    3289 conversion.go:110] Could not get instant cpu stats: different number of cpus
Jun 06 23:53:41 minikube localkube[3289]: W0606 23:53:41.293648    3289 docker_sandbox.go:263] Couldn't find network status for kube-system/kubernetes-dashboard-cqchx through plugin: invalid network status for
Jun 06 23:53:41 minikube localkube[3289]: I0606 23:53:41.306411    3289 replication_controller.go:206] Observed updated replication controller kubernetes-dashboard. Desired pod count change: 1->1
Jun 06 23:53:41 minikube localkube[3289]: I0606 23:53:41.384119    3289 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/58b79ee3-4b13-11e7-90b7-080027e49652-default-token-tcf2l" (spec.Name: "default-token-tcf2l") pod "58b79ee3-4b13-11e7-90b7-080027e49652" (UID: "58b79ee3-4b13-11e7-90b7-080027e49652").
Jun 06 23:53:42 minikube localkube[3289]: I0606 23:53:42.393914    3289 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/58b79ee3-4b13-11e7-90b7-080027e49652-default-token-tcf2l" (spec.Name: "default-token-tcf2l") pod "58b79ee3-4b13-11e7-90b7-080027e49652" (UID: "58b79ee3-4b13-11e7-90b7-080027e49652").
Jun 06 23:53:47 minikube localkube[3289]: W0606 23:53:47.353926    3289 docker_sandbox.go:263] Couldn't find network status for kube-system/kube-dns-196007617-txzkr through plugin: invalid network status for
Jun 06 23:53:54 minikube localkube[3289]: W0606 23:53:54.410689    3289 docker_sandbox.go:263] Couldn't find network status for kube-system/kube-dns-196007617-txzkr through plugin: invalid network status for
Jun 06 23:53:59 minikube localkube[3289]: W0606 23:53:59.561725    3289 kuberuntime_container.go:150] Non-root verification doesn't support non-numeric user (nobody)
Jun 06 23:54:00 minikube localkube[3289]: W0606 23:54:00.456794    3289 docker_sandbox.go:263] Couldn't find network status for kube-system/kube-dns-196007617-txzkr through plugin: invalid network status for
Jun 06 23:54:00 minikube localkube[3289]: I0606 23:54:00.499366    3289 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/configmap/58ced53e-4b13-11e7-90b7-080027e49652-kube-dns-config" (spec.Name:"kube-dns-config") pod "58ced53e-4b13-11e7-90b7-080027e49652" (UID: "58ced53e-4b13-11e7-90b7-080027e49652").
Jun 06 23:54:00 minikube localkube[3289]: I0606 23:54:00.499622    3289 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/58ced53e-4b13-11e7-90b7-080027e49652-default-token-tcf2l" (spec.Name: "default-token-tcf2l") pod "58ced53e-4b13-11e7-90b7-080027e49652" (UID: "58ced53e-4b13-11e7-90b7-080027e49652").
Jun 06 23:54:01 minikube localkube[3289]: I0606 23:54:01.505478    3289 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/58ced53e-4b13-11e7-90b7-080027e49652-default-token-tcf2l" (spec.Name: "default-token-tcf2l") pod "58ced53e-4b13-11e7-90b7-080027e49652" (UID: "58ced53e-4b13-11e7-90b7-080027e49652").
Jun 06 23:54:01 minikube localkube[3289]: I0606 23:54:01.506606    3289 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/configmap/58ced53e-4b13-11e7-90b7-080027e49652-kube-dns-config" (spec.Name:"kube-dns-config") pod "58ced53e-4b13-11e7-90b7-080027e49652" (UID: "58ced53e-4b13-11e7-90b7-080027e49652").
Jun 06 23:55:02 minikube localkube[3289]: E0606 23:55:02.503291    3289 event.go:259] Could not construct reference to: '&v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-196007617-txzkr", UID:"58ced53e-4b13-11e7-90b7-080027e49652", APIVersion:"v1", ResourceVersion:"94", FieldPath:"spec.containers{dnsmasq}"}' due to: 'object does not implement the List interfaces'. Will not report event: 'Warning' 'Unhealthy' 'Liveness probe failed: HTTP probe failed with statuscode: 503'
Jun 06 23:55:08 minikube localkube[3289]: I0606 23:55:08.106228    3289 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/58b79ee3-4b13-11e7-90b7-080027e49652-default-token-tcf2l" (spec.Name: "default-token-tcf2l") pod "58b79ee3-4b13-11e7-90b7-080027e49652" (UID: "58b79ee3-4b13-11e7-90b7-080027e49652").
Jun 06 23:55:12 minikube localkube[3289]: E0606 23:55:12.513372    3289 event.go:259] Could not construct reference to: '&v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-196007617-txzkr", UID:"58ced53e-4b13-11e7-90b7-080027e49652", APIVersion:"v1", ResourceVersion:"94", FieldPath:"spec.containers{dnsmasq}"}' due to: 'object does not implement the List interfaces'. Will not report event: 'Warning' 'Unhealthy' 'Liveness probe failed: HTTP probe failed with statuscode: 503'
Jun 06 23:55:27 minikube localkube[3289]: I0606 23:55:27.109408    3289 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/58ced53e-4b13-11e7-90b7-080027e49652-default-token-tcf2l" (spec.Name: "default-token-tcf2l") pod "58ced53e-4b13-11e7-90b7-080027e49652" (UID: "58ced53e-4b13-11e7-90b7-080027e49652").
Jun 06 23:55:27 minikube localkube[3289]: I0606 23:55:27.110020    3289 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/configmap/58ced53e-4b13-11e7-90b7-080027e49652-kube-dns-config" (spec.Name:"kube-dns-config") pod "58ced53e-4b13-11e7-90b7-080027e49652" (UID: "58ced53e-4b13-11e7-90b7-080027e49652").


For what it's worth, I managed to get minikube working by using --vm-driver=kvm. Of course that doesn't help on macOS...
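A minimal sketch of that driver switch, assuming the KVM driver and libvirt are already installed on the host (flags as in minikube 0.19.x):

$ minikube delete                  # remove the existing VirtualBox-based VM
$ minikube start --vm-driver=kvm   # recreate the cluster using the KVM driver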

Anything I can try in order to provide further information?

@amitsaha could you attach "minikube ip" and the contents of "kubectl config view"?

Sure, @dlorenc here it is:

$ minikube ip
192.168.99.100

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/asaha/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/asaha/.minikube/apiserver.crt
    client-key: /home/asaha/.minikube/apiserver.key

$ minikube version
minikube version: v0.19.1

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
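One way to narrow this down (a rough sketch, assuming the VirtualBox VM is up and `minikube ssh` works) is to check whether the API server inside the VM is actually listening on 8443:

$ minikube status
$ minikube ssh
# inside the VM:
$ curl -k https://localhost:8443/healthz   # should return "ok" if the apiserver is reachable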

I get: Error performing healthcheck: Get https://localhost:8443/healthz: dial tcp 127.0.0.1:8443: getsockopt: connection refused
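Since localkube runs as a systemd unit inside the VM (see the log above), it may also be worth checking the unit directly; a rough sketch, assuming the unit is named localkube:

$ minikube ssh
# inside the VM:
$ sudo systemctl status localkube
$ sudo journalctl -u localkube --no-pager | tail -n 50   # recent localkube log lines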

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
