Failed to connect to kubernetes: invalid configuration: no configuration has been provided
The output of jx version is:
ubuntu@master:~$ jx version
Failed to connect to kubernetes: invalid configuration: no configuration has been provided
NAME VERSION
jx 1.3.123
kubectl v1.10.3
helm client v2.9.1+g20adb27
helm server v2.9.1+g20adb27
git git version 2.7.4
What kind of Kubernetes cluster are you using & how did you create it?
The api-server etc. are run via systemd.
export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config
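One subtle pitfall with that export: if `$KUBECONFIG` was previously unset, the expansion leaves a leading colon in the result, exactly like the `:/root/.kube/config` value that shows up later in this thread. A minimal sketch of the pitfall and a safer form (whether the empty leading entry actually breaks a given client depends on the tool, but removing it rules the problem out):

```shell
# If KUBECONFIG is unset, appending with a colon leaves a leading ":"
# (an empty path entry) at the front of the variable.
unset KUBECONFIG
export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config
echo "$KUBECONFIG"    # e.g. :<home>/.kube/config -- note the leading colon

# Safer: only add the separator when KUBECONFIG already has a value.
unset KUBECONFIG
export KUBECONFIG=${KUBECONFIG:+$KUBECONFIG:}$HOME/.kube/config
echo "$KUBECONFIG"    # e.g. <home>/.kube/config -- no stray colon
```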
ubuntu@master:~$ cat .kube/config
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/k8s-root-ca.csr
    server: 10.1.129.8:6443
  name: kubernetes
contexts:
- context:
    cluster: default
    user: kubectl
  name: default
- context:
    cluster: default
    namespace: jx
    user: jx
  name: jx
current-context: default
preferences: {}
users:
- name: kubectl
  user:
    client-certificate: /etc/kubernetes/ssl/kubelet-client.crt
    client-key: /etc/kubernetes/ssl/kubelet-client.key
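Several things in that file look inconsistent with the errors above: both contexts reference a cluster named `default`, but the only cluster defined is `kubernetes`; the `jx` context references a user `jx` that never appears under `users:`; the `server:` line has no `https://` scheme; and `certificate-authority` points at a `.csr` (a signing request) rather than a certificate. Purely as an illustrative sketch of a consistent version (the `.crt` path is an assumption, use whatever your real CA certificate file is, and a `jx` user entry would still be needed for the `jx` context):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    # a CA certificate (.crt/.pem), not a signing request (.csr) -- assumed path
    certificate-authority: /etc/kubernetes/ssl/k8s-root-ca.crt
    # include the scheme so the client doesn't fall back to localhost:8080
    server: https://10.1.129.8:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes   # must match a name under `clusters:`
    user: kubectl
  name: default
current-context: default
users:
- name: kubectl
  user:
    client-certificate: /etc/kubernetes/ssl/kubelet-client.crt
    client-key: /etc/kubernetes/ssl/kubelet-client.key
```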
Ubuntu 16.04
CLI:
ubuntu@master:~$ jx version
Failed to connect to kubernetes: invalid configuration: no configuration has been provided
NAME VERSION
jx 1.3.123
kubectl v1.10.3
helm client v2.9.1+g20adb27
helm server v2.9.1+g20adb27
git git version 2.7.4
ubuntu@master:~$ jx compliance run
error: could not create the compliance client: compliance client failed to load the Kubernetes configuration: invalid configuration: no configuration has been provided
ubuntu@master:~$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Elasticsearch is running at http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Heapster is running at http://localhost:8080/api/v1/namespaces/kube-system/services/heapster/proxy
Kibana is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kibana-logging/proxy
CoreDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
ubuntu@master:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/k8s-root-ca.csr
    server: 10.1.129.8:6443
  name: kubernetes
contexts:
- context:
    cluster: default
    user: kubectl
  name: default
- context:
    cluster: default
    namespace: jx
    user: jx
  name: jx
current-context: default
kind: Config
preferences: {}
users:
- name: kubectl
  user:
    client-certificate: /etc/kubernetes/ssl/kubelet-client.crt
    client-key: /etc/kubernetes/ssl/kubelet-client.key
same for me
jx get git
WARNING: The current user cannot query secrets in the namespace default: Failed to create a kubernetes client invalid configuration: no configuration has been provided
what is the output of these commands:
echo $KUBECONFIG
kubectl get ns
@jstrachan
echo $KUBECONFIG
nothing happens (empty output)
kubectl get ns
the server doesn't have a resource type "ns"
@enkicoma it sounds like you don't have a valid kubernetes cluster setup to me
@jstrachan yep, I don't, because... I want to create a git user and token for Bitbucket. jx create cluster gke --skip-login --default-admin-password=mySecretPassWord123 -n myclustername will require a git username and token. If I haven't configured them before running this command, jx will not prompt to create a new token and I will end up with no repositories created.
you should be able to run the jx create git server ... and jx create git token ... CLIs and ignore the errors ;) - they should still update the local ~/.jx/gitAuth.yaml file, so you can use the git server/token when creating a cluster.
ideally we'd have a CLI option like --no-cluster or something for jx create git server|user that doesn't try to talk to the cluster, to avoid the errors
@jstrachan ok, will try
jx create git user -n BitBucket someUserName -p somepassword -e [email protected]
WARNING: The current user cannot query secrets in the namespace default: Failed to create a kubernetes client invalid configuration: no configuration has been provided
error: invalid configuration: no configuration has been provided
@enkicoma did it update your ~/.jx/gitAuth.yaml file? If so you can quietly ignore that warning + error for now ;)
I'm hoping we can make this a little less confusing - I think the --no-cluster flag should help a lot
@jstrachan I will try today after work.. many thanks for the support!
@jstrachan
In my case
echo $KUBECONFIG
nothing happens (empty output)
kubectl get ns
NAME STATUS AGE
default Active 45d
kube-public Active 45d
kube-system Active 45d
Please check on this.
@nagarajui7 that looks like that kubernetes cluster does not have Jenkins X installed at all? There's no jx namespace.
You probably need to install Jenkins X first: https://jenkins-x.io/getting-started/install-on-cluster/
Inconsistent context/cluster generated? It seems that the cluster context is not properly configured.
echo $KUBECONFIG
/etc/kubernetes/admin.conf
kubectl get ns
W0426 09:51:32.538172 1775 loader.go:223] Config not found: /etc/kubernetes/admin.conf
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
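That warning means the loader looked for the file named in KUBECONFIG and it doesn't exist (or isn't readable) on that machine. A quick sanity check before anything else (admin.conf is the path from the comment above; substitute your own):

```shell
# "Config not found: /etc/kubernetes/admin.conf" means exactly that:
# the path in KUBECONFIG does not exist or is not readable here.
if test -r /etc/kubernetes/admin.conf; then
  echo "config present"
else
  echo "config missing or unreadable"
fi
```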
I too am having this issue.
cmd = echo $KUBECONFIG
results = :/root/.kube/config
cmd = kubectl get ns
results = W0504 02:12:47.037118 187212 loader.go:223] Config not found: /root/.kube/config
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Any help is appreciated,
thanks in advance
I had a similar issue, where my KUBECONFIG was in fact pointing to my contexts, but the mapping from context to cluster was wrong.
My env-a context pointed to cluster: default instead of cluster: cluster-a, as mine looked like this:
clusters:
- cluster:
    certificate-authority-data: xx
    server: https://x.x.x.x:6443
  name: cluster-a
- cluster:
    certificate-authority-data: y
    server: https://y.y.y.y:6443
  name: cluster-b
contexts:
- context:
    cluster: default
    namespace: default
    user: user-a
  name: env-a
- context:
    cluster: cluster-b
    namespace: default
    user: user-b
  name: env-b
After changing this bit, it worked:
contexts:
- context:
    cluster: cluster-a
    namespace: default
    user: user-a
  name: env-a
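To catch this kind of mismatch without eyeballing the file, here is a rough shell sketch: every `cluster:` referenced under `contexts:` should match a `name:` defined under `clusters:`. The awk is deliberately naive (it assumes the simple two-space layout shown above), and `$HOME/.kube/config` is just an example path:

```shell
# List cluster names defined under clusters:, and cluster names
# referenced by contexts:, then report any reference with no definition.
cfg=$HOME/.kube/config
clusters=$(awk '/^clusters:/{c=1} /^contexts:/{c=0} c && /name:/{print $2}' "$cfg")
refs=$(awk '/^contexts:/{c=1} /^(current-context|users):/{c=0} c && /cluster:/{print $2}' "$cfg")
for r in $refs; do
  echo "$clusters" | grep -qx -- "$r" || echo "context references undefined cluster: $r"
done
```

kubectl can also make the fix itself without hand-editing the file, e.g. `kubectl config set-context env-a --cluster=cluster-a`.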