Version:
k3s version v1.0.0 (18bd921c)
Describe the bug
I want to use Helm version 3 with k3s, but when I type helm install stable/postgresql --generate-name, for example, I get:
Error: Kubernetes cluster unreachable
To Reproduce
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo update
helm install stable/postgresql --generate-name
Expected behavior
Installation should work.
Actual behavior
Error: Kubernetes cluster unreachable
Additional context
Same issue here on k3s version v1.0.0 (18bd921c).
Try setting the KUBECONFIG environment variable.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
That worked for me, but I only tried on a fresh CentOS/k3s/Helm install. Thanks @grawin
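A quick way to check that the variable actually points helm at the k3s cluster (paths assume a default k3s install; adjust if yours differs):

```shell
# Assumes the default k3s kubeconfig location.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
echo "using: $KUBECONFIG"
# kubectl get nodes   # should list the node once the cluster is reachable
# helm ls             # should no longer report "Kubernetes cluster unreachable"
```

Note that export only affects the current shell; add the line to ~/.bashrc (or equivalent) to make it persistent.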
Same issue. The fix @grawin posted doesn't solve it for me though.
EDIT: I also tried the steps in https://github.com/ibrokethecloud/rancher-helm3 but to no avail.
The fix @grawin posted didn't work for me either; I'm using an Ubuntu 18.04 system.
If you add "-v 20" to your helm command line it will show it's connecting to port 8080.
Running this seems to fix it:
kubectl config view --raw >~/.kube/config
This lets helm use the same config kubectl is using I think.
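A slightly safer way to apply the same idea is to write the new config to a temporary file and then move it into place, so the target file is never truncated while another command might still be reading it. Sketched below with a stand-in file so it can be followed without a cluster:

```shell
# Stand-in demo of the write-then-move pattern; in real use, replace the
# printf line with: kubectl config view --raw > "$tmp"
tmp=$(mktemp)
printf 'apiVersion: v1\nkind: Config\n' > "$tmp"
mkdir -p /tmp/demo-kube
mv "$tmp" /tmp/demo-kube/config   # atomic replace; old config never half-written
wc -l < /tmp/demo-kube/config     # prints 2
```

In real use the destination would be ~/.kube/config instead of the demo path.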
@rubiktubik it looks like helm can't reach the k3s cluster. Can you try using --kubeconfig with the helm command, or ~/.kube/config as @sixcorners suggested? Please reopen the issue if the problem persists.
If you add "-v 20" to your helm command line it will show it's connecting to port 8080.
Running this seems to fix it:
kubectl config view --raw >~/.kube/config
This lets helm use the same config kubectl is using I think.
can confirm this solution works for me as well
This resolved the error message for me.
sudo helm install harbor/harbor --version 1.3.0 --generate-name --kubeconfig /etc/rancher/k3s/k3s.yaml
With k3s I had the same problem; the system told me the file
/etc/rancher/k3s/k3s.yaml
is not readable.
The file had rw for root only; I changed it to 744 and it works. Please tell me if this is correct.
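Rather than changing the mode by hand (k3s may rewrite the file on restart), k3s can be asked to write the kubeconfig with a given mode. The flag and variable names below are taken from the k3s docs; verify them against your k3s version:

```shell
# Either pass the flag when starting the server:
#   k3s server --write-kubeconfig-mode 644
# or set the equivalent environment variable for the install script:
#   K3S_KUBECONFIG_MODE=644
```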
If you are using sudo, be aware that this command doesn't preserve environment variables (such as KUBECONFIG) by default when switching to a different context.
If you wish to preserve specific environment variables when using sudo then:
cat << EOF > /etc/sudoers.d/env
Defaults env_keep += "http_proxy https_proxy no_proxy"
Defaults env_keep += "HTTP_PROXY HTTPS_PROXY NO_PROXY"
Defaults env_keep += "KUBECONFIG"
EOF
Just use sudo -E, which will preserve the environment variables.
@Vesnica thanks that worked for me.
I'm using helm 3.2.4 on Windows and have the same issue. Setting the KUBECONFIG environment variable didn't help either. Without --kubeconfig it doesn't fail, but helm ls returns no results.
What worked for me was setting KUBECONFIG to an absolute path after changing directory to the chart directory.
If you add "-v 20" to your helm command line it will show it's connecting to port 8080.
Running this seems to fix it:
kubectl config view --raw >~/.kube/config
This lets helm use the same config kubectl is using I think.
This kubectl config view --raw >~/.kube/config works for me, thanks.
I tried this command,
kubectl config view --raw >~/.kube/config
but after running it, my config file became empty.
Can anyone suggest how to recover my config file with all its values?
@poojabolla It's gone; you must use >> instead of > when appending to an existing file.
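The empty file can be explained by redirection order: the shell truncates the target of > before the command runs, so a command that reads the same file it redirects into sees an empty file. A minimal stand-alone demonstration with a throwaway file (not a real kubeconfig):

```shell
# The shell truncates /tmp/demo.cfg *before* tr reads it,
# so tr sees an empty input and writes nothing back.
printf 'some: config\n' > /tmp/demo.cfg
tr 'a-z' 'A-Z' < /tmp/demo.cfg > /tmp/demo.cfg
wc -c < /tmp/demo.cfg   # prints 0
```

The same thing happens when kubectl config view reads ~/.kube/config while the shell has already truncated it for the redirect.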
Most helpful comment
Try setting the KUBECONFIG environment variable.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml