Minikube: BUG REPORT: bootstrapper kubeadm fails on Windows 10

Created on 10 Apr 2018 · 9 comments · Source: kubernetes/minikube

Environment:

Minikube version
minikube version: v0.25.2
OS
Windows 10
VM Driver
"DriverName": "virtualbox"
ISO version
"Boot2DockerURL": "file://C:/Users/XXXX/.minikube/cache/iso/minikube-v0.25.1.iso",

What happened:

Ran

minikube config set bootstrapper kubeadm
minikube config set vm-driver virtualbox
minikube start --memory 20480  -v 10

Expected minikube to start, but it fails.

What you expected to happen:

Minikube to start

How to reproduce it (as minimally and precisely as possible):

As above

Anything else we need to know:

It looks like https://github.com/kubernetes/minikube/blob/master/pkg/minikube/bootstrapper/kubeadm/versions.go uses filepath to build the CA cert path. I'm guessing that on Windows this uses a backslash instead of a forward slash as the path separator.

This seems to result in the following:

```
# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --allow-privileged=true --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --cadvisor-port=0 --cgroup-driver=cgroupfs --fail-swap-on=false --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --authorization-mode=Webhook --client-ca-file=varliblocalkubecertsca.crt --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --hostname-override=minikube

[Install]
Wants=docker.socket
```
Note the mangled --client-ca-file=varliblocalkubecertsca.crt.

https://github.com/kubernetes/minikube/blob/master/pkg/localkube/localkube.go uses path, which, per its documentation, always uses forward slashes.
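To make the suspected root cause concrete, here is a minimal, standalone Go sketch contrasting the two packages (this is not minikube's actual code; certDir is just a stand-in for its cert directory):

```
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// Hypothetical stand-in for minikube's cert directory.
	const certDir = "/var/lib/localkube/certs"

	// filepath.Join uses the host OS separator, so a kubelet flag built
	// on a Windows host comes out backslash-separated:
	//   \var\lib\localkube\certs\ca.crt  (on Windows)
	//   /var/lib/localkube/certs/ca.crt  (on Linux/macOS)
	fmt.Println(filepath.Join(certDir, "ca.crt"))

	// path.Join always uses forward slashes, which is what the Linux VM
	// expects regardless of the host OS:
	//   /var/lib/localkube/certs/ca.crt
	fmt.Println(path.Join(certDir, "ca.crt"))

	// filepath.ToSlash normalizes an already-built path the same way.
	fmt.Println(filepath.ToSlash(filepath.Join(certDir, "ca.crt")))
}
```

If a backslash-separated path is then written into 10-kubeadm.conf, systemd and the shell appear to consume the unquoted backslashes as escape characters, which would explain why the kubelet ends up looking for varliblocalkubecertsca.crt.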

I can get the kubelet service to come up by editing the service file and restarting it, but only by SSHing into the VM.

Most helpful comment

Until the next version of minikube (post 0.26.0) is released, Windows users can work around the issue with:
minikube start --bootstrapper localkube

All 9 comments

Expected minikube to start, but it fails.

How does it fail? Is there an error message?

It's failing for me too, but since you didn't describe the symptoms I don't know whether mine is the same issue as yours.

It hangs for a good while at

Starting local Kubernetes v1.9.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...

Then eventually I get

E0411 07:45:55.587552    6348 start.go:276] Error starting cluster:  kubeadm init error running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --skip-preflight-checks: Process exited with status 1
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
        minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:

If I SSH into the VM and run that command directly, I get:

$ sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --skip-preflight-checks
Flag --skip-preflight-checks has been deprecated, it is now equivalent to --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.9.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 17.03
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING FileExisting-crictl]: crictl not found in system path
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/var/lib/localkube/certs/"
[kubeconfig] Using existing up-to-date KubeConfig file: "admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
        - There is no internet connection, so the kubelet cannot pull the following control plane images:
                - gcr.io/google_containers/kube-apiserver-amd64:v1.9.4
                - gcr.io/google_containers/kube-controller-manager-amd64:v1.9.4
                - gcr.io/google_containers/kube-scheduler-amd64:v1.9.4

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster

journalctl shows

Apr 11 06:55:35 minikube systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Apr 11 06:55:35 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
Apr 11 06:55:35 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Apr 11 06:55:35 minikube kubelet[6761]: Flag --require-kubeconfig has been deprecated, You no longer need to use --require-kubeconfig. This will be removed in a future version. Providing --kubeconfig enables API server mode, omitting --kubeconfig enables standalone mode unless --require-kubeconfig=true is also set. In the latter case, the legacy default kubeconfig path will be used until --require-kubeconfig is removed.
Apr 11 06:55:35 minikube kubelet[6761]: I0411 06:55:35.654032    6761 feature_gate.go:226] feature gates: &{{} map[]}
Apr 11 06:55:35 minikube kubelet[6761]: I0411 06:55:35.654089    6761 controller.go:114] kubelet config controller: starting controller
Apr 11 06:55:35 minikube kubelet[6761]: I0411 06:55:35.654096    6761 controller.go:118] kubelet config controller: validating combination of defaults and flags
Apr 11 06:55:35 minikube kubelet[6761]: [131B blob data]
Apr 11 06:55:35 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 11 06:55:35 minikube systemd[1]: kubelet.service: Unit entered failed state.
Apr 11 06:55:35 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.

systemctl status shows the bad CA cert path I mentioned:

# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Wed 2018-04-11 06:57:38 UTC; 2s ago
     Docs: http://kubernetes.io/docs/
  Process: 7347 ExecStart=/usr/bin/kubelet --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --fail-swap-on=false --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=\var\lib\localkube\certs\ca.crt --cadvisor-port=0 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --hostname-override=minikube (code=exited, status=1/FAILURE)
 Main PID: 7347 (code=exited, status=1/FAILURE)
      CPU: 94ms

Apr 11 06:57:38 minikube systemd[1]: kubelet.service: Unit entered failed state.
Apr 11 06:57:38 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.

If you run the kubelet command directly like this

/usr/bin/kubelet --allow-privileged=true --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --cadvisor-port=0 --cgroup-driver=cgroupfs --fail-swap-on=false --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --authorization-mode=Webhook --client-ca-file=\var\lib\localkube\certs\ca.crt --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --hostname-override=minikube

you get

# /usr/bin/kubelet --allow-privileged=true --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --cadvisor-port=0 --cgroup-driver=cgroupfs --fail-swap-on=false --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --authorization-mode=Webhook --client-ca-file=\var\lib\localkube\certs\ca.crt --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --hostname-override=minikube
Flag --require-kubeconfig has been deprecated, You no longer need to use --require-kubeconfig. This will be removed in a future version. Providing --kubeconfig enables API server mode, omitting --kubeconfig enables standalone mode unless --require-kubeconfig=true is also set. In the latter case, the legacy default kubeconfig path will be used until --require-kubeconfig is removed.
I0411 06:59:31.939816    8049 feature_gate.go:226] feature gates: &{{} map[]}
I0411 06:59:31.939906    8049 controller.go:114] kubelet config controller: starting controller
I0411 06:59:31.939917    8049 controller.go:118] kubelet config controller: validating combination of defaults and flags
error: unable to load client CA file varliblocalkubecertsca.crt: open varliblocalkubecertsca.crt: no such file or directory

Hope this helps...

Thank you for that. Mine is also about certs but the error is different. I might file mine separately then.

👍 Also experiencing this after upgrading to 0.26.0, when trying to create a fresh minikube cluster.

While https://github.com/kubernetes/minikube/pull/2702 fixes the path issue, I still ultimately failed to get a cluster up and running with the kubeadm bootstrapper at 0.26.0. I made the changes against the v0.25.2 tag here, https://github.com/awalker125/minikube/tree/BUG2926_0_25_2, and built a binary from that. This got further, but depending on the Kubernetes version it hit various other issues. I've run out of time to look at this any further, so I've reverted to localkube on 0.25.2 and have managed to get RBAC working (which was the reason I switched to kubeadm in the first place).

In case anyone else has a similar requirement to me (RBAC enabled), here's what I did:

minikube start --memory 20480 --extra-config=apiserver.Authorization.Mode=RBAC --kubernetes-version v1.9.4

# in another shell
export no_proxy="127.0.0.1,$(minikube ip)"
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default

Fixed with #2702

Until the next version of minikube (post 0.26.0) is released, Windows users can work around the issue with:
minikube start --bootstrapper localkube

As a note, this resolves the same issue on Windows 7 as well

Resolved for me as well using minikube start --bootstrapper localkube --vm-driver="hyperv" --hyperv-virtual-switch="Default Switch" on Windows 10 1803.
