chectl fails to start the server on a k8s (CoreOS Tectonic) cluster with authentication enabled.
chectl --version
chectl/0.0.20191121-next.89a1444 darwin-x64 node-v10.17.0
chectl server:start
→ Failed to connect to Kubernetes API. Unauthorized
👀 Looking for an already existing Che instance
› Error: Failed to connect to Kubernetes API. Unauthorized
kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-14T04:24:34Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6+coreos.2", GitCommit:"0c227501efd8f0c62e5f75049ad7abb5a1d801ac", GitTreeState:"clean", BuildDate:"2019-02-02T03:18:42Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
kubectl configuration file in default location: ~/.kube/config
kubectl get nodes
NAME AGE
k-master01.domain.local 594d
k-node01.domain.local 594d
k-node02.domain.local 594d
k-node03.domain.local 594d
k-node04.domain.local 594d
k-node05.domain.local 594d
k-node06.domain.local 594d
@rjbaucells
Try os login before starting the Che server.
What is os login?
I mean, you might first need to log in to your k8s cluster.
Same here. kubectl works fine, chectl fails (unauthorized).
Kubernetes Version: v1.16.3
tried with following versions:
chectl/7.4.0
chectl/0.0.20191127-next.97b31fb
$ chectl server:start --platform=k8s --installer=helm --multiuser
[00:04:55] Verify Kubernetes API [started]
[00:04:56] Verify Kubernetes API [failed]
[00:04:56] → Failed to connect to Kubernetes API. E_K8S_API_UNAUTHORIZED - Message: must authenticate
» Error: Failed to connect to Kubernetes API. E_K8S_API_UNAUTHORIZED -
» Message: must authenticate
While digging a little deeper, I found this might be caused by the checkKubeApi function.
I was using a Rancher-created k8s cluster.
apiVersion: v1
kind: Config
clusters:
- name: "dev"
  cluster:
    server: "https://rancher.k8s.local/k8s/clusters/c-9fvlj"
    certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM3akNDQ\..."
- name: "dev-production-1"
  cluster:
    server: "https://10.49.70.175:6443"
    certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN3akNDQ\ ..."
users:
- name: "dev"
  user:
    token: "kubeconfig-user-rxjjf.c-9fvlj:2ccg6s2jgjs...mw"
contexts:
- name: "dev"
  context:
    user: "dev"
    cluster: "dev"
- name: "dev-production-1"
  context:
    user: "dev"
    cluster: "dev-production-1"
current-context: "dev"
The endpoint seems to be fine, but getDefaultServiceAccountToken() is returning the default Kubernetes service account's token. I guess in my case that might be caused by Rancher's authentication handling.
The endpoint being probed is https://rancher.k8s.local/k8s/clusters/c-9fvlj/healthz, with this bearer token:
» eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2...
Base64-decoding the header and payload gives:
{"alg":"RS256","kid":""}
{"iss":"kubernetes/serviceaccount","kubernetes.io/serviceaccount/namespace":"default","kubernetes.io/serviceaccount/secret.name":"default-token-q7wz4","kubernetes.io/serviceaccount/service-account.name":"default","kubernetes.io/serviceaccount/service-account.uid":"95e84a44-1af1-43b2-85c8-eb4fd6c0bf93","sub":"system:serviceaccount:default:default"}
(the binary signature segment is omitted)
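For reference, splitting the token on '.' and base64-decoding only the first two segments avoids the binary signature noise (a minimal Node.js sketch; decode-jwt.ts is just an illustrative file name):

// decode-jwt.ts: print a JWT's header and payload; the signature segment is skipped
const token = process.argv[2]
if (!token) {
  throw new Error('usage: ts-node decode-jwt.ts <jwt>')
}
const [header, payload] = token.split('.')
console.log(Buffer.from(header, 'base64').toString('utf8'))   // {"alg":"RS256","kid":""}
console.log(Buffer.from(payload, 'base64').toString('utf8'))  // {"iss":"kubernetes/serviceaccount", ...}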
In order to get it running I changed the token to the one in my kubeconfig file and everything deploys fine: token = "kubeconfig-user-rxjjf.c-9fvlj:2ccg6s2jgjs...mw" instead of getDefaultServiceAccountToken().
Would it be an option to fall back to the token from the kubeconfig file if service account authentication fails (401)?
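A rough sketch of that fallback, assuming the kc field from the checkKubeApi snippet quoted further down is a KubeConfig from @kubernetes/client-node (pickToken is a hypothetical helper name, and this simplified variant prefers the kubeconfig token whenever one is present rather than retrying only after a 401):

import { KubeConfig } from '@kubernetes/client-node'

// Hypothetical helper: prefer the bearer token embedded in the kubeconfig's
// current user entry (as in Rancher-generated kubeconfigs); otherwise fall
// back to the default service account token lookup chectl uses today.
async function pickToken(
  kc: KubeConfig,
  getDefaultServiceAccountToken: () => Promise<string>
): Promise<string> {
  const user = kc.getCurrentUser()
  if (user && user.token) {
    return user.token
  }
  return getDefaultServiceAccountToken()
}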
@nils-mosbach how did you manage to change it to your own token? I've searched the code for getDefaultServiceAccountToken, but couldn't find anything.
In the current chectl master branch it's in /src/api/kube.ts, around line 1071.
For testing purposes I hard-coded my token instead of the function call.
async checkKubeApi() {
  const currentCluster = this.kc.getCurrentCluster()
  if (!currentCluster) {
    throw new Error('Failed to get current Kubernetes cluster: returned null')
  }

  // For testing I changed the following line to something like:
  //   const token = 'MY_SERVICE_ACCOUNT_TOKEN'
  const token = await this.getDefaultServiceAccountToken()

  const agent = new https.Agent({
    rejectUnauthorized: false
  })
  let endpoint = ''
  try {
    endpoint = `${currentCluster.server}/healthz`
    // ... rest of the method omitted
  }
}
Replicating the steps of getDefaultServiceAccountToken() ...
> kubectl get serviceaccounts
NAME SECRETS AGE
default 1 59d
> kubectl describe serviceaccounts default
Name: default
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: default-token-q7wz4
Tokens: default-token-q7wz4
Events: <none>
> kubectl describe secret default-token-q7wz4
Name: default-token-q7wz4
Namespace: default
Labels: <none>
Annotations: field.cattle.io/projectId: c-9fvlj:p-d648t
kubernetes.io/service-account.name: default
kubernetes.io/service-account.uid: 95e84a44-1af1-43b2-85c8-eb4fd6c0bf93
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1017 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9....
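For reference, those kubectl steps map roughly onto the following lookup (a sketch written against the older positional-argument API of @kubernetes/client-node; defaultServiceAccountToken is an illustrative name and chectl's actual implementation may differ, especially after the refactoring mentioned below):

import { CoreV1Api, KubeConfig } from '@kubernetes/client-node'

// Sketch: read the token of the 'default' service account in the 'default'
// namespace, i.e. the same data the kubectl commands above print.
async function defaultServiceAccountToken(): Promise<string | undefined> {
  const kc = new KubeConfig()
  kc.loadFromDefault()
  const api = kc.makeApiClient(CoreV1Api)

  // kubectl describe serviceaccounts default  ->  Tokens: default-token-q7wz4
  const sa = await api.readNamespacedServiceAccount('default', 'default')
  const secretName = sa.body.secrets && sa.body.secrets[0] && sa.body.secrets[0].name
  if (!secretName) {
    return undefined
  }

  // kubectl describe secret default-token-q7wz4  ->  token: eyJhbGciOi...
  const secret = await api.readNamespacedSecret(secretName, 'default')
  const tokenBase64 = secret.body.data && secret.body.data['token']
  // the raw secret data is base64-encoded; kubectl describe shows it decoded
  return tokenBase64 ? Buffer.from(tokenBase64, 'base64').toString('utf8') : undefined
}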
While I was writing this, getDefaultServiceAccountToken was refactored in the current master, so I pulled the latest release, 0.0.20200211-next.1744186, but got the same result.
> chectl server:start --platform k8s --multiuser --self-signed-cert --domain k8s.local --chenamespace dev
× Verify Kubernetes API
→ Failed to connect to Kubernetes API. E_K8S_API_UNAUTHORIZED - Message: must authenticate
👀 Looking for an already existing Eclipse Che instance
» Error: Failed to connect to Kubernetes API. E_K8S_API_UNAUTHORIZED - Message: must authenticate
Is there a way to have this utilize the RBAC construct, or at least inherit the token from the kubeconfig file? I'm encountering this issue too.
Hi,
I am using Rancher too and I have the same problem.
I have tried to modify the file kube.js (I suppose it is transpiled from kube.ts), but I still get the "must authenticate" error.
Can you help me?
Do I need to encode the secret with base64?
Can someone kindly help me?
It is a month that I have built a k8s/rancher cluster just to install Eclipse/CHE and I have not reached that goal.
me too
I'm running into the same issue, installing che on Rancher.
As far as I understand, the main issue is how Rancher proxies the kube-api server. @nils-mosbach pointed to the chectl code, which queries the token of the default service account in the default namespace. This token is used by chectl to authenticate to the kube-api server. Unfortunately this token is only valid for "internal" requests to the API server.
Rancher adds additional authentication and authorization mechanisms in front of the Kube API server, so tokens are validated by Rancher (not by the Kube API server of the cluster). Since the service account token is not known to Rancher, chectl's request is not forwarded to the internal Kube API server.
If possible in your environment, try to access the internal Kube API server directly (e.g. by adding an additional NodePort Service for the Kubernetes API), at least for installing Che.
We've added a --skip-kubernetes-health-check flag to skip that kind of pre-flight check.
So, please update to the latest version:
chectl update next
and try again.
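For example, with the kind of invocation used earlier in this thread, the new flag is simply appended to whatever options you already pass:

chectl server:start --platform k8s --multiuser --skip-kubernetes-health-check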
@tolusha Thanks! That solved my issue.
I am also running into this issue. However, in my case, I am connecting to the k8s cluster using client certificates with a named user instead of the default accounts.
I am able to access and work with the cluster using kubectl from the command line without any problems. But with chectl, any commands that check namespaces, check cluster health, or do other things using the default access are failing.
Is there a way I can ensure that chectl uses the correct context to get around this?
Right now I am unable to install Che on the customer cluster I am working on.
The customer cluster is using Kubernetes 1.15, and that is not the issue, based on my testing on my own 1.15-based cluster.
I have tried skipping the health check, but it seems other cluster access commands are failing (like checking whether the Che namespace exists, which it does, as I need it available with a certificate created before I start the installation).
I also have the same issue; I use PKS to connect to the Kubernetes cluster.
@asavin-cl
Which version of chectl do you use?
Have you tried the --skip-kubernetes-health-check flag?
Hi, yes, I tried that. But the problem is in getting an access token from the PKS authentication service.
Here is how I did the deployment:
I connected to the k8s cluster using PKS authentication. Then I created a kubectl proxy connection, used an internal k8s service account, and deployed Che with chectl through that connection.
For example:
kubectl create sa deployer
kubectl create clusterrolebinding deployer --clusterrole cluster-admin --serviceaccount default:deployer
KUBE_DEPLOY_SECRET_NAME=`kubectl get sa deployer -o jsonpath='{.secrets[0].name}'`
KUBE_API_TOKEN=`kubectl get secret $KUBE_DEPLOY_SECRET_NAME -o jsonpath='{.data.token}'|base64 --decode`
KUBE_API_CERT=`kubectl get secret $KUBE_DEPLOY_SECRET_NAME -o jsonpath='{.data.ca\.crt}'|base64 --decode`
kubectl proxy &
export KUBECONFIG=~/.kube/config-deployer
echo "$KUBE_API_CERT" > deploy.crt
kubectl config set-cluster k8s --server=http://127.0.0.1:8001 --certificate-authority=deploy.crt --embed-certs=true
kubectl config set-credentials k8s-deployer --token=$KUBE_API_TOKEN
kubectl config set-context k8s --cluster k8s --user k8s-deployer
kubectl config use-context k8s
kubectl get all # it actually works!
Hi, I installed it on Rancher with the following steps:
1. Enable Rancher's "Authorized Cluster Endpoint".
2. Connect to Rancher via kubectl using the k8s master context config.
3. Run kubectl proxy (the service starts on 127.0.0.1:8001).
4. Add a context to the kubectl config file like this:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: http://localhost:8001
5. Open a new cmd window.
6. Switch the kubectl context to the one added in step 4.
7. Execute the chectl command to install Che.