Minikube: certificate signed by unknown authority

Created on 17 May 2016 · 29 comments · Source: kubernetes/minikube

I was using minikube, then I started using an AWS cluster, then switched back to minikube, and now I get this :(

➜  minikube git:(master) kubectl cluster-info
error: couldn't read version from server: Get https://192.168.99.100:443/api: x509: certificate signed by unknown authority

I don't really understand what's going on here.

kind/bug

All 29 comments

Is it possible your VM stopped and started again with a new IP? If you run "minikube start", it should re-copy the certs from the VM.
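For context, the x509 error means the CA that kubectl trusts no longer matches the CA that signed the apiserver's serving certificate, which is what happens when the cluster's certs are regenerated but the client keeps the old ones. The same class of failure can be reproduced with throwaway certs; all file names below are made up for the demo:

```shell
# Two throwaway CAs, standing in for the "old" and "new" cluster CAs.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca_old.key -out ca_old.crt \
  -subj "/CN=minikube-old-ca" -days 1
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca_new.key -out ca_new.crt \
  -subj "/CN=minikube-new-ca" -days 1

# A server key/CSR for the apiserver's IP, signed by the *new* CA.
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=192.168.99.100"
openssl x509 -req -in srv.csr -CA ca_new.crt -CAkey ca_new.key \
  -CAcreateserial -out srv.crt -days 1

# Verifying against the CA that signed the cert succeeds...
openssl verify -CAfile ca_new.crt srv.crt
# ...but verifying against the stale CA fails, which is the same
# condition kubectl reports as "certificate signed by unknown authority".
openssl verify -CAfile ca_old.crt srv.crt || true
```

Re-running `minikube start` refreshes the local copies of the cluster's certs, which is why it usually clears this error after a cluster switch.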

I just ran into this error as well. I was using minikube with no problems, then stopped the VM. Restarting the VM resulted in this:

$ ./out/minikube start
Starting local Kubernetes cluster...
2016/05/17 16:17:47 Machine exists!
(minikubeVM) Check network to re-create if needed...
(minikubeVM) Waiting for an IP...
2016/05/17 16:18:16
Kubernetes is available at https://192.168.99.103:443.
2016/05/17 16:18:16 Error configuring authentication:  Something went wrong running an SSH command!
command : sudo cat /var/lib/localkube/certs/apiserver.crt
err     : exit status 1
output  : cat: can't open '/var/lib/localkube/certs/apiserver.crt': No such file or directory

The IP of the VM did not change.

Running kubectl commands gives me:

Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikube")

My certs don't seem to exist in /var/lib/localkube/certs/apiserver.crt but rather in $HOME/.minikube/, which is probably why things are breaking.

Deleting the VM and recreating it doesn't help:

./out/minikube delete
make out/minikube
./out/minikube start

I'm on HEAD.

Looking at this now.

The path /var/lib/localkube/certs/apiserver.crt refers to the files inside the VM, not on your laptop.

One other idea, could you try:

make clean
make out/minikube

It's possible you have a stale build of localkube that is putting the certs in a different path in the VM.

That fixed it for me, thanks!

Awesome. Looks like we need to fix the Makefile a bit so that it triggers builds when the localkube code changes.

After cleaning and pulling, I am unable to build:

➜  minikube git:(master) make out/minikube
mkdir -p /Users/davidsmith/style/minikube/.gopath/src/k8s.io
ln -s -f /Users/davidsmith/style/minikube /Users/davidsmith/style/minikube/.gopath/src/k8s.io/minikube
docker run -w /go/src/k8s.io/minikube -e IN_DOCKER=1 -v /Users/davidsmith/style/minikube:/go/src/k8s.io/minikube golang:1.6 make out/localkube
make: *** No rule to make target 'out/localkube'.  Stop.
make: *** [out/localkube] Error 2

What am I doing wrong?

How are you running docker? Could you attach the output of go env?

Seems like a bug in the Makefile target or something.


My guess is that the volume mount into the docker container isn't working for some reason.

The output of "docker version" and "env | grep DOCKER" might be useful.

One change we could make to help here would be to switch the docker-based cross compilation to use a "docker build"/cat strategy instead of using volume mounts. Then remote hosts would work too.

➜ env | grep DOCKER
DOCKER_TLS_VERIFY=1
DOCKER_HOST=tcp://192.168.99.101:2376
DOCKER_CERT_PATH=/Users/davidsmith/.docker/machine/machines/dinghy
DOCKER_MACHINE_NAME=dinghy
➜ docker version
Client:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   f4bf5c7
 Built:        Mon Oct 12 18:01:15 UTC 2015
 OS/Arch:      darwin/amd64

Server:
 Version:      1.10.1
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   9e83765
 Built:        2016-02-11T20:39:58.688092588+00:00
 OS/Arch:      linux/amd64
➜  go env
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.6/libexec"
GOTOOLDIR="/usr/local/Cellar/go/1.6/libexec/pkg/tool/darwin_amd64"
GO15VENDOREXPERIMENT="1"
CC="clang"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fno-common"
CXX="clang++"
CGO_ENABLED="1"

Can you run these commands (from the minikube folder):

docker run -w /go/src/k8s.io/minikube -e IN_DOCKER=1 -v /Users/dlorenc/go/src/k8s.io/minikube:/go/src/k8s.io/minikube golang:1.6 ls -a

docker run -w /go/src/k8s.io/minikube -e IN_DOCKER=1 -v /Users/dlorenc/go/src/k8s.io/minikube:/go/src/k8s.io/minikube golang:1.6 cat Makefile

Below are the commands I ran, based on what @dlorenc asked and what the Makefile showed:

➜  minikube git:(master) pwd
/Users/davidsmith/style/minikube
➜  minikube git:(master) docker run -w /go/src/k8s.io/minikube -e IN_DOCKER=1 -v /Users/davidsmith/style/minikube:/go/src/k8s.io/minikube golang:1.6 ls -a
.
..
➜  minikube git:(master) docker run -w /go/src/k8s.io/minikube -e IN_DOCKER=1 -v /Users/davidsmith/style/minikube:/go/src/k8s.io/minikube golang:1.6 cat Makefile
cat: Makefile: No such file or directory
cat: Makefile: No such file or directory

I don't really understand this; it seems that the minikube directory is being mounted as a volume, but there's nothing in it?

Yeah, so the way Docker works on Macs is a little weird. I think there's a problem with your docker-machine setup; deleting and recreating it might help. Basically, the volume parameters to docker run specify how to mount things from the _docker_ host into the docker container.

When you're on OS X, the _docker_ host is actually the VirtualBox VM, not your laptop. To work around this, docker-machine/VirtualBox set up automatic mounting of directories from the laptop into the VirtualBox VM, so the files can make it all the way into a container.

That's the part that doesn't seem to be working for you. Here's an old issue that explains a little bit about how this works: https://github.com/docker/machine/issues/1826

TL;DR: if your minikube repo is under /Users (which it looks like it is), this should just work :(

Indeed, if I ssh into my docker machine, my Users directory and all its folder structure are there, but for some reason there are no files! I don't get that at all; how can there be directories but no files? At least I know where the issue is now, and it doesn't seem to be with minikube. Thanks.

OK, it seems Homebrew was stuck on an old version of Docker, and dinghy was old as well. I upgraded dinghy, then forced brew to switch to the latest version of Docker, and now it all seems to be working. Just running make out/minikube now; it seems to be taking a while. I presume that's normal?

Woohoo, finally working, although I had to up my docker-machine RAM to 4 GB for the initial build, and after that I also had to set my GOPATH:

➜  minikube git:(master) make out/minikube
go get github.com/jteeuwen/go-bindata/...
package github.com/jteeuwen/go-bindata/...: cannot download, $GOPATH not set. For more details see: go help gopath
make: *** [/Users/davidsmith/style/minikube/.gopath/bin/go-bindata] Error 1
export GOPATH=/Users/davidsmith/style/minikube/.gopath

Now I have minikube running, thanks for your help!
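For anyone else hitting the `$GOPATH not set` error above, here is a minimal sketch of that workaround: point GOPATH at the repo's private .gopath directory before building. The checkout location below is a placeholder; substitute your own clone path.

```shell
# Placeholder checkout location; use the path of your own minikube clone.
MINIKUBE_SRC="$HOME/minikube"

# The Makefile symlinks the repo into .gopath/src/k8s.io, so pointing
# GOPATH there lets `go get github.com/jteeuwen/go-bindata/...` succeed.
mkdir -p "$MINIKUBE_SRC/.gopath/bin"
export GOPATH="$MINIKUBE_SRC/.gopath"
export PATH="$GOPATH/bin:$PATH"   # so the go-bindata binary is found

echo "$GOPATH"
```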

Oh dear, when I stop minikube and then start it again, I get the original error:

➜  minikube git:(master) ./out/minikube stop
Stopping local Kubernetes cluster...
Stopping "minikubeVM"...
Machine "minikubeVM" was stopped.
Machine stopped.
➜  minikube git:(master) ./out/minikube start
Starting local Kubernetes cluster...
2016/05/18 23:15:33 Machine exists!
(minikubeVM) Check network to re-create if needed...
(minikubeVM) Waiting for an IP...
2016/05/18 23:16:03
Kubernetes is available at https://192.168.99.100:443.
2016/05/18 23:16:03 Error configuring authentication:  Something went wrong running an SSH command!
command : sudo cat /var/lib/localkube/certs/apiserver.crt
err     : exit status 1
output  : cat: can't open '/var/lib/localkube/certs/apiserver.crt': No such file or directory

Sorry for the trouble! Could you try SSHing into the minikube VM and getting the output of /var/log/localkube.err and /var/log/localkube.out?

You can SSH into the VM by finding its IP (from kubectl config view) and using username "docker" and password "tcuser":

ssh docker@<VM-IP>

Hmm, I tried stopping minikube, running make clean, then building and starting again, and it still has the issue. Below is the output you asked for. I also tried to curl localhost:8080 with no errors, after I had seen this line in localkube.out (not sure what that's about?):

docker@minikubeVM:~$ cat /var/log/localkube.out
Regenerating certs because the files aren't readable
Creating cert with IPs:  [10.0.0.0 127.0.0.1 10.0.2.15 192.168.99.105 172.17.0.1 ::1 fe80::a00:27ff:fe51:d1cf fe80::a00:27ff:fe56:e97f]
Starting etcd...
Starting apiserver...
Starting controller-manager...
Starting scheduler...
Starting kubelet...
Starting proxy...
Starting dns...
Failed to check for kube-system namespace existence: Get http://localhost:8080/api/v1/namespaces/kube-system: dial tcp 127.0.0.1:8080: getsockopt: connection refused
docker@minikubeVM:~$ cat /var/log/localkube.err
I0518 22:32:20.140211    1471 server.go:217] Using userspace Proxier.
I0518 22:32:20.157502    1471 server.go:237] Tearing down pure-iptables proxy rules.
E0518 22:32:20.189988    1471 reflector.go:205] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.190042    1471 reflector.go:205] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
I0518 22:32:20.203829    1471 conntrack.go:36] Setting nf_conntrack_max to 262144
I0518 22:32:20.203872    1471 conntrack.go:41] Setting conntrack hashsize to 65536
I0518 22:32:20.203978    1471 conntrack.go:46] Setting nf_conntrack_tcp_timeout_established to 86400
I0518 22:32:20.216206    1471 genericapiserver.go:606] Will report 10.0.2.15 as public IP address.
I0518 22:32:20.217364    1471 genericapiserver.go:288] Node port range unspecified. Defaulting to 30000-32767.
E0518 22:32:20.241636    1471 controllermanager.go:121] unable to register configz: register config "componentconfig" twice
I0518 22:32:20.242213    1471 plugins.go:71] No cloud provider specified.
I0518 22:32:20.242319    1471 nodecontroller.go:144] Sending events to api server.
E0518 22:32:20.242468    1471 controllermanager.go:239] Failed to start service controller: ServiceController should not be run without a cloudprovider.
I0518 22:32:20.242478    1471 controllermanager.go:254] allocate-node-cidrs set to false, node controller not creating routes
E0518 22:32:20.242690    1471 util.go:45] Metric for replenishment_controller already registered
E0518 22:32:20.242700    1471 util.go:45] Metric for replenishment_controller already registered
E0518 22:32:20.242704    1471 util.go:45] Metric for replenishment_controller already registered
E0518 22:32:20.242718    1471 util.go:45] Metric for replenishment_controller already registered
E0518 22:32:20.242722    1471 util.go:45] Metric for replenishment_controller already registered
E0518 22:32:20.258345    1471 server.go:75] unable to register configz: register config "componentconfig" twice
I0518 22:32:20.260244    1471 replication_controller.go:236] Starting RC Manager
E0518 22:32:20.261329    1471 server.go:301] unable to register configz: register config "componentconfig" twice
E0518 22:32:20.314027    1471 controllermanager.go:285] Failed to get api versions from server: Get http://127.0.0.1:8080/api: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.352607    1471 reflector.go:216] pkg/controller/resourcequota/resource_quota_controller.go:193: Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.352645    1471 reflector.go:205] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/controller.go:112: Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.352667    1471 reflector.go:216] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:118: Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.352690    1471 reflector.go:216] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:102: Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.361641    1471 reflector.go:205] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154: Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364069    1471 reflector.go:205] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/admission/namespace/lifecycle/admission.go:116: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364101    1471 event.go:207] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: getsockopt: connection refused' (may retry after sleeping)
E0518 22:32:20.364123    1471 nodecontroller.go:239] Error monitoring node status: Get http://127.0.0.1:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364403    1471 reflector.go:216] pkg/controller/node/nodecontroller.go:234: Failed to list *extensions.DaemonSet: Get http://127.0.0.1:8080/apis/extensions/v1beta1/daemonsets?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364468    1471 reflector.go:216] pkg/controller/node/nodecontroller.go:233: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364524    1471 reflector.go:216] pkg/controller/node/nodecontroller.go:232: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364553    1471 reflector.go:216] pkg/controller/resourcequota/resource_quota_controller.go:193: Failed to list *api.ConfigMap: Get http://127.0.0.1:8080/api/v1/configmaps?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364639    1471 reflector.go:216] pkg/controller/resourcequota/resource_quota_controller.go:193: Failed to list *api.PersistentVolumeClaim: Get http://127.0.0.1:8080/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364687    1471 reflector.go:216] pkg/controller/resourcequota/resource_quota_controller.go:193: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364706    1471 reflector.go:216] pkg/controller/resourcequota/resource_quota_controller.go:193: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364722    1471 reflector.go:216] pkg/controller/resourcequota/resource_quota_controller.go:190: Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364776    1471 reflector.go:216] pkg/controller/gc/gc_controller.go:89: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=status.phase%21%3DPending%2Cstatus.phase%21%3DRunning%2Cstatus.phase%21%3DUnknown&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364794    1471 reflector.go:216] pkg/controller/replication/replication_controller.go:237: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364833    1471 reflector.go:216] pkg/controller/endpoint/endpoints_controller.go:157: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364890    1471 reflector.go:216] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:365: Failed to list *extensions.ReplicaSet: Get http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364911    1471 reflector.go:216] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:360: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364931    1471 reflector.go:216] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:355: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.364947    1471 reflector.go:216] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:350: Failed to list *api.PersistentVolumeClaim: Get http://127.0.0.1:8080/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.365005    1471 reflector.go:216] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:349: Failed to list *api.PersistentVolume: Get http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.365047    1471 reflector.go:216] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:345: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.365068    1471 reflector.go:216] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:342: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:20.365087    1471 reflector.go:216] k8s.io/minikube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:339: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
I0518 22:32:20.458390    1471 kube2sky.go:484] Etcd server found: http://localhost:9090
W0518 22:32:20.925316    1471 server.go:461] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead.
W0518 22:32:20.925332    1471 server.go:422] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults.
I0518 22:32:20.925422    1471 plugins.go:71] No cloud provider specified.
I0518 22:32:20.925481    1471 manager.go:133] cAdvisor running in container: "/"
W0518 22:32:20.933246    1471 manager.go:141] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I0518 22:32:20.933462    1471 fs.go:116] Filesystem partitions: map[tmpfs:{mountpoint:/ major:0 minor:15 fsType:tmpfs blockSize:0} /dev/sda1:{mountpoint:/mnt/sda1 major:8 minor:1 fsType: blockSize:0}]
E0518 22:32:21.190481    1471 reflector.go:205] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0518 22:32:21.190518    1471 reflector.go:205] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
W0518 22:32:21.198906    1471 controller.go:277] Resetting endpoints for master service "kubernetes" to kind:"" apiVersion:""
[restful] 2016/05/18 22:32:21 log.go:30: [restful/swagger] listing is available at https://10.0.2.15:443/swaggerapi/
[restful] 2016/05/18 22:32:21 log.go:30: [restful/swagger] https://10.0.2.15:443/swaggerui/ is mapped to folder /swagger-ui/
I0518 22:32:21.213646    1471 genericapiserver.go:688] Serving securely on 0.0.0.0:443
I0518 22:32:21.213654    1471 genericapiserver.go:732] Serving insecurely on 127.0.0.1:8080
I0518 22:32:21.323606    1471 controllermanager.go:313] Starting extensions/v1beta1 apis
I0518 22:32:21.323623    1471 controllermanager.go:315] Starting horizontal pod controller.
I0518 22:32:21.323747    1471 controllermanager.go:330] Starting daemon set controller
I0518 22:32:21.323841    1471 controllermanager.go:337] Starting job controller
I0518 22:32:21.323916    1471 controllermanager.go:344] Starting deployment controller
I0518 22:32:21.323989    1471 controllermanager.go:351] Starting ReplicaSet controller
proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]
I0518 22:32:21.324067    1471 controllermanager.go:360] Attempting to start petset, full resource map map[batch/v1:TypeMeta:<kind:"APIResourceList" apiVersion:"v1" > groupVersion:"batch/v1" resources:<name:"jobs" namespaced:true kind:"Job" > resources:<name:"jobs/status" namespaced:true kind:"Job" >  batch/v2alpha1:TypeMeta:<kind:"APIResourceList" apiVersion:"v1" > groupVersion:"batch/v2alpha1"  extensions/v1beta1:TypeMeta:<kind:"APIResourceList" apiVersion:"" > groupVersion:"extensions/v1beta1" resources:<name:"daemonsets" namespaced:true kind:"DaemonSet" > resources:<name:"daemonsets/status" namespaced:true kind:"DaemonSet" > resources:<name:"deployments" namespaced:true kind:"Deployment" > resources:<name:"deployments/rollback" namespaced:true kind:"DeploymentRollback" > resources:<name:"deployments/scale" namespaced:true kind:"Scale" > resources:<name:"deployments/status" namespaced:true kind:"Deployment" > resources:<name:"horizontalpodautoscalers" namespaced:true kind:"HorizontalPodAutoscaler" > resources:<name:"horizontalpodautoscalers/status" namespaced:true kind:"HorizontalPodAutoscaler" > resources:<name:"ingresses" namespaced:true kind:"Ingress" > resources:<name:"ingresses/status" namespaced:true kind:"Ingress" > resources:<name:"jobs" namespaced:true kind:"Job" > resources:<name:"jobs/status" namespaced:true kind:"Job" > resources:<name:"replicasets" namespaced:true kind:"ReplicaSet" > resources:<name:"replicasets/scale" namespaced:true kind:"Scale" > resources:<name:"replicasets/status" namespaced:true kind:"ReplicaSet" > resources:<name:"replicationcontrollers" namespaced:true kind:"ReplicationControllerDummy" > resources:<name:"replicationcontrollers/scale" namespaced:true kind:"Scale" > resources:<name:"thirdpartyresources" namespaced:false kind:"ThirdPartyResource" >  v1:TypeMeta:<kind:"APIResourceList" apiVersion:"" > groupVersion:"v1" resources:<name:"bindings" namespaced:true kind:"Binding" > resources:<name:"componentstatuses" namespaced:false 
kind:"ComponentStatus" > resources:<name:"configmaps" namespaced:true kind:"ConfigMap" > resources:<name:"endpoints" namespaced:true kind:"Endpoints" > resources:<name:"events" namespaced:true kind:"Event" > resources:<name:"limitranges" namespaced:true kind:"LimitRange" > resources:<name:"namespaces" namespaced:false kind:"Namespace" > resources:<name:"namespaces/finalize" namespaced:false kind:"Namespace" > resources:<name:"namespaces/status" namespaced:false kind:"Namespace" > resources:<name:"nodes" namespaced:false kind:"Node" > resources:<name:"nodes/proxy" namespaced:false kind:"Node" > resources:<name:"nodes/status" namespaced:false kind:"Node" > resources:<name:"persistentvolumeclaims" namespaced:true kind:"PersistentVolumeClaim" > resources:<name:"persistentvolumeclaims/status" namespaced:true kind:"PersistentVolumeClaim" > resources:<name:"persistentvolumes" namespaced:false kind:"PersistentVolume" > resources:<name:"persistentvolumes/status" namespaced:false kind:"PersistentVolume" > resources:<name:"pods" namespaced:true kind:"Pod" > resources:<name:"pods/attach" namespaced:true kind:"Pod" > resources:<name:"pods/binding" namespaced:true kind:"Binding" > resources:<name:"pods/exec" namespaced:true kind:"Pod" > resources:<name:"pods/log" namespaced:true kind:"Pod" > resources:<name:"pods/portforward" namespaced:true kind:"Pod" > resources:<name:"pods/proxy" namespaced:true kind:"Pod" > resources:<name:"pods/status" namespaced:true kind:"Pod" > resources:<name:"podtemplates" namespaced:true kind:"PodTemplate" > resources:<name:"replicationcontrollers" namespaced:true kind:"ReplicationController" > resources:<name:"replicationcontrollers/scale" namespaced:true kind:"Scale" > resources:<name:"replicationcontrollers/status" namespaced:true kind:"ReplicationController" > resources:<name:"resourcequotas" namespaced:true kind:"ResourceQuota" > resources:<name:"resourcequotas/status" namespaced:true kind:"ResourceQuota" > resources:<name:"secrets" 
namespaced:true kind:"Secret" > resources:<name:"serviceaccounts" namespaced:true kind:"ServiceAccount" > resources:<name:"services" namespaced:true kind:"Service" > resources:<name:"services/proxy" namespaced:true kind:"Service" > resources:<name:"services/status" namespaced:true kind:"Service" >  apps/v1alpha1:TypeMeta:<kind:"APIResourceList" apiVersion:"v1" > groupVersion:"apps/v1alpha1" resources:<name:"petsets" namespaced:true kind:"PetSet" > resources:<name:"petsets/status" namespaced:true kind:"PetSet" >  autoscaling/v1:TypeMeta:<kind:"APIResourceList" apiVersion:"v1" > groupVersion:"autoscaling/v1" resources:<name:"horizontalpodautoscalers" namespaced:true kind:"HorizontalPodAutoscaler" > resources:<name:"horizontalpodautoscalers/status" namespaced:true kind:"HorizontalPodAutoscaler" > ]
I0518 22:32:21.324263    1471 controllermanager.go:362] Starting apps/v1alpha1 apis
I0518 22:32:21.324268    1471 controllermanager.go:364] Starting PetSet controller
E0518 22:32:21.325167    1471 util.go:45] Metric for serviceaccount_controller already registered
I0518 22:32:21.325443    1471 horizontal.go:127] Starting HPA Controller
I0518 22:32:21.325679    1471 controller.go:223] Starting Daemon Sets controller manager
I0518 22:32:21.325760    1471 pet_set.go:144] Starting petset controller
I0518 22:32:21.326358    1471 attach_detach_controller.go:98] Starting Attach Detach Controller
W0518 22:32:21.326933    1471 request.go:347] Field selector: v1 - serviceaccounts - metadata.name - default: need to check if this is versioned correctly.
W0518 22:32:21.356930    1471 request.go:347] Field selector: v1 - serviceaccounts - metadata.name - default: need to check if this is versioned correctly.
I0518 22:32:21.459790    1471 kube2sky.go:551] Using http://127.0.0.1:8080 for kubernetes master
I0518 22:32:21.459801    1471 kube2sky.go:552] Using kubernetes API v1
I0518 22:32:21.459875    1471 kube2sky.go:620] Waiting for service: default/kubernetes
I0518 22:32:21.461111    1471 kube2sky.go:665] Successfully added DNS record for Kubernetes service.
I0518 22:32:22.939048    1471 machine.go:50] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I0518 22:32:22.939100    1471 manager.go:182] Machine: {NumCores:1 CpuFrequency:2494226 MemoryCapacity:1044254720 MachineID: SystemUUID:8FE98337-2C0E-479B-BB76-68E68FD64803 BootID:a8806ab8-ecf6-4d24-8348-459fd162e645 Filesystems:[{Device:tmpfs Capacity:939831296 Type:vfs Inodes:127472} {Device:/dev/sda1 Capacity:19195224064 Type:vfs Inodes:2436448}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:20971520000 Scheduler:deadline} 251:0:{Name:zram0 Major:251 Minor:0 Size:217726976 Scheduler:none}] NetworkDevices:[{Name:dummy0 MacAddress:d6:ee:d2:7a:fd:c9 Speed:0 Mtu:1500} {Name:eth0 MacAddress:08:00:27:51:d1:cf Speed:1000 Mtu:1500} {Name:eth1 MacAddress:08:00:27:56:e9:7f Speed:1000 Mtu:1500}] Topology:[{Id:0 Memory:0 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:6291456 Type:Unified Level:3} {Size:134217728 Type:Unified Level:4}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I0518 22:32:22.939769    1471 manager.go:188] Version: {KernelVersion:4.4.8-boot2docker ContainerOsVersion:Boot2Docker 1.9.1 (TCL 7.0); master : 7954f54 - Wed Apr 27 17:59:58 UTC 2016 DockerVersion:1.9.1 CadvisorVersion: CadvisorRevision:}
I0518 22:32:22.940511    1471 server.go:704] Watching apiserver
W0518 22:32:22.942289    1471 kubelet.go:524] Hairpin mode set to "promiscuous-bridge" but configureCBR0 is false, falling back to "hairpin-veth"
I0518 22:32:22.942315    1471 kubelet.go:369] Hairpin mode set to "hairpin-veth"
I0518 22:32:22.946259    1471 manager.go:228] Setting dockerRoot to /mnt/sda1/var/lib/docker
I0518 22:32:22.958707    1471 server.go:666] Started kubelet v0.0.0-master+$Format:%h$
E0518 22:32:22.959659    1471 kubelet.go:882] Image garbage collection failed: unable to find data for container /
I0518 22:32:22.960022    1471 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0518 22:32:22.960030    1471 manager.go:123] Starting to sync pod status with apiserver
I0518 22:32:22.960041    1471 kubelet.go:2451] Starting kubelet main sync loop.
I0518 22:32:22.960046    1471 kubelet.go:2460] skipping pod synchronization - [network state unknown container runtime is down]
I0518 22:32:22.960234    1471 server.go:117] Starting to listen on 0.0.0.0:10250
I0518 22:32:22.968053    1471 factory.go:208] Registering Docker factory
E0518 22:32:22.968069    1471 manager.go:229] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I0518 22:32:22.968072    1471 factory.go:53] Registering systemd factory
I0518 22:32:22.968269    1471 factory.go:85] Registering Raw factory
I0518 22:32:22.977048    1471 manager.go:1024] Started watching for new ooms in manager
W0518 22:32:22.977124    1471 manager.go:264] Could not configure a source for OOM detection, disabling OOM events: exec: "journalctl": executable file not found in $PATH
I0518 22:32:22.978834    1471 manager.go:277] Starting recovery of all containers
I0518 22:32:22.979079    1471 manager.go:282] Recovery completed
I0518 22:32:23.074978    1471 kubelet.go:1095] Successfully registered node 127.0.0.1
W0518 22:32:25.367407    1471 nodecontroller.go:680] Missing timestamp for Node 127.0.0.1. Assuming now as a timestamp.
I0518 22:32:25.367573    1471 event.go:216] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"127.0.0.1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in NodeController

It looks like the certificates got created... Do those timestamps line up roughly with when you tried to start it?

Do you see anything in /var/lib/localkube/certs, in the VM?

Looks to be the correct time, yes

docker@minikubeVM:~$ ls /var/lib/localkube/certs
apiserver.crt  apiserver.key
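A quick way to double-check which IPs a cert actually covers is `openssl x509 -text`; the same inspection works on /var/lib/localkube/certs/apiserver.crt inside the VM. A sketch against a throwaway cert, with made-up file names (the `-addext` flag needs OpenSSL 1.1.1 or newer):

```shell
# Generate a throwaway self-signed cert with IP SANs like the ones
# localkube logged ("Creating cert with IPs: ..."). File names are made up.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -subj "/CN=minikubeVM" \
  -addext "subjectAltName=IP:192.168.99.105,IP:127.0.0.1,IP:10.0.2.15" -days 1

# Print the SAN block; the VM's IP must appear here, or TLS verification
# of https://<VM-IP>:443 fails even with the right CA.
openssl x509 -in demo.crt -noout -text | grep -A1 "Subject Alternative Name"
```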

Huh, so that error:

2016/05/18 23:16:03 Error configuring authentication:  Something went wrong running an SSH command!
command : sudo cat /var/lib/localkube/certs/apiserver.crt
err     : exit status 1
output  : cat: can't open '/var/lib/localkube/certs/apiserver.crt': No such file or directory

is saying that there's no file at /var/lib/localkube/certs/apiserver.crt, but there clearly is :(

I'll have to think about this a little more. Does minikube delete/start work? Is this only a problem after a stop?

Unfortunately not:

➜  minikube git:(master) ./out/minikube delete
Deleting local Kubernetes cluster...
Machine deleted.
➜  minikube git:(master) ./out/minikube start
Starting local Kubernetes cluster...
Running pre-create checks...
Creating machine...
(minikubeVM) Downloading /Users/davidsmith/.minikube/cache/boot2docker.iso from https://storage.googleapis.com/tinykube/minikube.iso...
(minikubeVM) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
(minikubeVM) Creating VirtualBox VM...
(minikubeVM) Creating SSH key...
(minikubeVM) Starting the VM...
(minikubeVM) Check network to re-create if needed...
(minikubeVM) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
2016/05/18 23:49:18
Kubernetes is available at https://192.168.99.106:443.
2016/05/18 23:49:18 Error configuring authentication:  Something went wrong running an SSH command!
command : sudo cat /var/lib/localkube/certs/apiserver.crt
err     : exit status 1
output  : cat: can't open '/var/lib/localkube/certs/apiserver.crt': No such file or directory


➜  minikube git:(master)

Hmm, maybe a race condition? Could you try running minikube start again, without stopping anything?

Just to double-check, what commit are you at?

Hmm, it worked. Must be some sort of race condition, I guess. I'm at commit 2d459dfecd9a7e275edab6f9751dc74a4a893046.

Ah, nice. I think I see it. I'll send a patch to fix it.

Hello

I have a very similar issue to the one yissachar had: the apiserver.crt is missing in the minikube VM. I don't have the localkube folder either.

My setup: Windows 10 Home host, VirtualBox, minikube.exe, kubectl.exe, and Docker Toolbox, which also creates a "default" VM in VirtualBox.

I need this setup for learning. I want to do this: https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/#create-a-minikube-cluster

I can't manage to get Docker Toolbox + Minikube + kubectl working together with VirtualBox on Windows 10 Home.

Thank you very much for any help.
