Unable to start MultiUser Che with Helm Chart.
helm upgrade --install che --namespace che --set global.multiuser=true --set global.cheDomain=192.168.99.100.nip.io ${PROJECTS}/che/deploy/kubernetes/helm/che
Expected: MultiUser Che is deployed.
Actual:

OS and version:
Diagnostics:
@guydaichs @perspectivus1 WDYT guys?
Did you follow the updated minikube startup instructions?
https://github.com/eclipse/che/tree/master/deploy/kubernetes/helm/che#prerequisites
Specifically:
minikube start --cpus 2 --memory 4096 --extra-config=apiserver.Authorization.Mode=RBAC
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
Also, global.cheDomain was replaced with global.ingressDomain. And, there is a bootstrap issue in the nightly image right now: https://github.com/eclipse/che/issues/9428.
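For example, adapting the command from the issue description to the renamed value (just a sketch; the chart path and IP are taken from the command above):
helm upgrade --install che --namespace che --set global.multiuser=true --set global.ingressDomain=192.168.99.100.nip.io ${PROJECTS}/che/deploy/kubernetes/helm/che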
I'm afraid not. Thanks for pointing that out.
I'll check with the updated instructions and close the issue.
P.S. Starting a Kubernetes test environment for Che and keeping up with the instruction changes is becoming quite an exercise.
@sleshchenko
I agree. This also relates to one of the issues in https://github.com/eclipse/che/issues/5908
The values files should make it a bit easier to configure everything.
I think that a script that configures minikube and then calls helm (passing the minikube IP as a parameter) would greatly simplify things (see the sketch below).
WDYT?
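Roughly what I have in mind — only a sketch, with the RBAC flag, chart path, and global.ingressDomain value name taken from the commands mentioned above, so they may need adjusting:

#!/bin/bash
# Sketch: configure minikube for RBAC, then deploy multi-user Che via helm,
# deriving the ingress domain from the minikube IP.
set -e

minikube start --cpus 2 --memory 4096 --extra-config=apiserver.Authorization.Mode=RBAC
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default

# Use the minikube IP with nip.io for the ingress domain.
MINIKUBE_IP=$(minikube ip)

helm upgrade --install che --namespace che \
  --set global.multiuser=true \
  --set global.ingressDomain=${MINIKUBE_IP}.nip.io \
  ${PROJECTS}/che/deploy/kubernetes/helm/che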
Minikube fails to start if I specify --extra-config=apiserver.Authorization.Mode=RBAC. See the console output below.
Console output
[serg@sleschenko kubernetes]$ minikube version
minikube version: v0.26.0
[serg@sleschenko kubernetes]$ minikube start --cpus 2 --memory 4096 --extra-config=apiserver.Authorization.Mode=RBAC
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
150.53 MB / 150.53 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0416 12:27:10.470749 8447 start.go:276] Error starting cluster: kubeadm init error sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap running command: : running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap
output: [init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[certificates] Using the existing ca certificate and key.
[WARNING Swap]: running with swap on is not supported. Please disable swap
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[WARNING ExtraArgs]: kube-apiserver: failed to parse extra argument --Authorization.Mode=RBAC
[certificates] Using the existing apiserver certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [minikube] and IPs [192.168.99.100]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/var/lib/localkube/certs/"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- Either there is no internet connection, or imagePullPolicy is set to "Never",
so the kubelet cannot pull or find the following control plane images:
- k8s.gcr.io/kube-apiserver-amd64:v1.10.0
- k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
- k8s.gcr.io/kube-scheduler-amd64:v1.10.0
- k8s.gcr.io/etcd-amd64:3.1.12 (only if no external etcd endpoints are configured)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap
.: Process exited with status 1
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:
@guydaichs Maybe you have a suggestion for what I could try to fix this issue?
I tested this on minikube v0.25.0. Can you try that?
@guydaichs Thanks, it works on v0.25.0. So we should document the minikube version in this case.
If it doesn't work on minikube 0.26.0, it is definitely a bug that should be addressed.
It should also work with minikube v0.26.*
There was a change in the configuration parameter name: https://github.com/kubernetes/minikube/issues/2712
from:
--extra-config=apiserver.Authorization.Mode=RBAC
to:
--extra-config=apiserver.authorization-mode=RBAC
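So on 0.26.0+ the start command from the prerequisites would presumably become:
minikube start --cpus 2 --memory 4096 --extra-config=apiserver.authorization-mode=RBAC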
@garagatyi @sleshchenko
Oh, I see. Thank you for the explanation. Then we should add a comment to the readme that explains that users should not use 0.26.0.
Are we going to wait until they rename the property back?
Then we should add a comment to the readme that explains that users should not use 0.26.0.
I would say that we should add a comment that users should use:
--extra-config=apiserver.authorization-mode=RBAC for 0.26.0 and greater
--extra-config=apiserver.Authorization.Mode=RBAC for earlier versions
Something along the lines of the sketch below.
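A possible wording for the readme note (only a sketch, not the final text; the other flags are taken from the existing prerequisites):

For minikube 0.26.0 and greater:
minikube start --cpus 2 --memory 4096 --extra-config=apiserver.authorization-mode=RBAC

For minikube 0.25.x and earlier:
minikube start --cpus 2 --memory 4096 --extra-config=apiserver.Authorization.Mode=RBAC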
I added different instructions for minikube 0.26.0+ and 0.25.2. So I think the issue can be closed.