Describe the bug
Can't run web UI following official setup
To Reproduce
Using the docker-compose.yml provided in the repo:
docker-compose up
kubectl --kubeconfig kubeconfig.yaml create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
kubectl --kubeconfig kubeconfig.yaml proxy
Then open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Expected behavior
Should be able to see the infamously slow K8S web panel
Actual behavior
steve@steve-pc /mnt/Projects/Docker/k3s> kubectl --kubeconfig kubeconfig.yaml proxy
Starting to serve on 127.0.0.1:8001
I0228 18:14:03.283083 4675 log.go:172] http: proxy error: context canceled
And then the browser connection suddenly dropped.
As a workaround you can port-forward the port of the pod.
@unixfox well, there is no pod:
steve@steve-pc /mnt/Projects/Docker/k3s> kubectl --kubeconfig kubeconfig.yaml proxy &
Starting to serve on 127.0.0.1:8001
steve@steve-pc /mnt/Projects/Docker/k3s> kubectl --kubeconfig kubeconfig.yaml get pod
No resources found.
@stevefan1999-personal try again with kubectl --kubeconfig kubeconfig.yaml get pod --all-namespaces=true.
@unixfox oh yes, now I see them. BTW, how do you port-forward the dashboard? I tried to forward 443 but to no avail; it just tells me connection refused on the proxy port.
Replace {id-pod} with the ID of the pod.
kubectl --kubeconfig kubeconfig.yaml --namespace=kube-system port-forward {id-pod} 8443
I found that command here (Google is your friend :smiley:): https://github.com/kubernetes/dashboard/issues/3038#issuecomment-410743994
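Putting the two steps above together, the workaround looks roughly like this (a sketch only; the pod name `{id-pod}` is a placeholder for whatever `get pod` prints on your cluster):

```shell
# Find the dashboard pod's name (it lives in kube-system, so list all namespaces)
kubectl --kubeconfig kubeconfig.yaml get pod --all-namespaces=true

# Forward local port 8443 to the pod; replace {id-pod} with the name from above
kubectl --kubeconfig kubeconfig.yaml --namespace=kube-system port-forward {id-pod} 8443

# The dashboard should then be reachable at https://localhost:8443/
```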
Don't close the issue if your problem is solved because the bug is still relevant.
@unixfox I got almost the same command, except I didn't know that a single port argument is enough rather than 8443:443 😂
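For reference, `kubectl port-forward` takes `LOCAL_PORT:REMOTE_PORT`, and a bare `LOCAL_PORT` means the remote port is the same number. Assuming the dashboard container listens on 8443 (as in the recommended deployment manifest), the forms below are a sketch of the difference; `{id-pod}` is the pod name from `get pod`:

```shell
# Equivalent: both forward local 8443 -> pod 8443
kubectl --kubeconfig kubeconfig.yaml --namespace=kube-system port-forward {id-pod} 8443
kubectl --kubeconfig kubeconfig.yaml --namespace=kube-system port-forward {id-pod} 8443:8443

# Explicit mapping: local 9000 -> pod 8443
kubectl --kubeconfig kubeconfig.yaml --namespace=kube-system port-forward {id-pod} 9000:8443
```

So forwarding `8443:443` fails simply because nothing in the pod listens on 443.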
This is the same issue as
https://github.com/rancher/k3s/issues/31
What I have found is that when you run the server node with --disable-agent, which is the case in docker-compose, no virtual ethernet interfaces or iptables rules get created on the server node. As a consequence, the API server cannot reach the services (here, the dashboard) and cannot proxy them. As a workaround, I have removed the --disable-agent flag in the docker-compose file (you also need to add the tmpfs and privileged sections from the worker).
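As a sketch of that workaround (the service name, image tag, and other keys are assumptions; adapt them to the compose file actually shipped in the repo), the server service drops --disable-agent and gains the same tmpfs and privileged settings as the worker:

```yaml
# Hypothetical docker-compose fragment illustrating the workaround;
# exact keys and values in the repo's compose file may differ.
services:
  server:
    image: rancher/k3s
    command: server        # note: no --disable-agent, so the server also runs an agent
    tmpfs:                 # copied from the worker/agent service
      - /run
      - /var/run
    privileged: true       # copied from the worker/agent service
    ports:
      - 6443:6443
```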
This also happens with Admission controller webhooks.
k3s in Docker with --disable-agent hits the same situation when using metrics-server; see #442.
This seems half-fixed out of the box with the latest release of K3s, though I was running a K3OS cluster. Consider this a half-closed issue, since we haven't retried the original approach yet.