Dashboard version: Latest from URL below
Kubernetes version: v1.9.3
Operating system: Ubuntu 16.04.2 LTS
I have seen many similar error reports and tried to follow the advice there, but it does not work for me. The guide at https://github.com/kubernetes/dashboard seems overly simplistic. There is plenty of documentation on authentication, but it remains unclear how much of it is needed simply to get access to the GUI in a lab environment.
Install cluster:
sudo kubeadm init --kubernetes-version=$KUBEVERSION --apiserver-advertise-address=$HOSTIP
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-
Install Dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Run proxy:
kubectl proxy
Then from same host:
curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Error: 'Forbidden'
Trying to reach: 'https://192.168.216.47:8443/'
(followed by some HTML)
I tried creating cluster role bindings as described here https://github.com/kubernetes/dashboard/wiki/Access-control, but with the same result.
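For the record, the binding described on that wiki page amounts to roughly the following manifest (this grants full cluster-admin to the Dashboard's service account, so it is only appropriate in a lab; apply with kubectl apply -f):

```yaml
# ClusterRoleBinding per the Dashboard access-control docs (lab use only):
# gives the kubernetes-dashboard service account unrestricted cluster access
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```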
$ kubectl get services -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
calico-etcd            ClusterIP   10.96.232.136    <none>        6666/TCP        24m
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   24m
kubernetes-dashboard   ClusterIP   10.110.228.155   <none>        443/TCP         18m
I'm assuming that this Forbidden error comes from the API server and is not related to the Dashboard at all. Try to access some other application through the service proxy, e.g. http://localhost:8001/api/v1/namespaces/kube-system/services/grafana/proxy/. If you see the same error, then you have to configure your cluster and cluster user properly before accessing any applications.
I can access other APIs without getting "Forbidden"; the one you mentioned:
$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/grafana/proxy/
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "services \"grafana\" not found",
"reason": "NotFound",
"details": {
"name": "grafana",
"kind": "services"
},
"code": 404
}
That was only an example. I don't know what applications you have installed in your cluster. The grafana service clearly does not exist, which is why a different error is thrown. Try to access an application through a service that actually exists...
Well, maybe I'm completely off here, but you suggested the error comes from the API server, and I am trying to demonstrate that I can interact with the API server through the proxy without getting a "Forbidden". Here is another example:
$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/kube-dns
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "kube-dns",
...
}
}
I do not have any "applications" in that cluster yet. The fault is reproducible with the exact steps I listed, so I really am working with a fresh cluster.
Privileges for services and services/xxx/proxy are granted by different rules, so this doesn't prove you have access.
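To illustrate the distinction (the names here are my own, not from the thread): granting get on services does not cover the proxy subresource, which needs its own rule, something like:

```yaml
# Illustrative Role: the services/proxy subresource must be granted explicitly;
# a plain "get services" rule is not enough to reach .../services/.../proxy/
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dashboard-proxy-access   # hypothetical name
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["kubernetes-dashboard", "https:kubernetes-dashboard:"]
  verbs: ["get"]
```

A RoleBinding would still be needed to attach a rule like this to the user or service account in question.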
I see.
$ curl http://localhost:8001/api/v1/namespaces/monitoring/services/grafana/proxy/
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html><head>
(...)
Works too.
Interesting. I'll check this scenario tomorrow.
I have tried this scenario and could not reproduce it.
# Kubeadm
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
# Kubectl
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
$ sudo kubeadm init --apiserver-advertise-address=192.168.30.160
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [floreks kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.30.160]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 27.501421 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node floreks as master by adding a label and a taint
[markmaster] Master floreks tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: c1e23f.92d5ac0827b15697
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token c1e23f.92d5ac0827b15697 192.168.30.160:6443 --discovery-token-ca-cert-hash sha256:99cc2ff84df5371afccdc365cd00e7643d225ba58c29a074f4450203df43e097
$ sudo cp /etc/kubernetes/admin.conf /home/floreks/.kube/config
$ kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
configmap "calico-config" created
daemonset "calico-etcd" created
service "calico-etcd" created
daemonset "calico-node" created
deployment "calico-kube-controllers" created
deployment "calico-policy-controller" created
clusterrolebinding "calico-cni-plugin" created
clusterrole "calico-cni-plugin" created
serviceaccount "calico-cni-plugin" created
clusterrolebinding "calico-kube-controllers" created
clusterrole "calico-kube-controllers" created
serviceaccount "calico-kube-controllers" created
$ kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-
node "floreks" untainted
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
# In a different shell
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
<!doctype html> <html ng-app="kubernetesDashboard"> <head> <meta charset="utf-8"> <title ng-controller="kdTitle as $ctrl" ng-bind="$ctrl.title()"></title> <link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png"> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="static/vendor.93db0a0d.css"> <link rel="stylesheet" href="static/app.93e259f7.css"> </head> <body ng-controller="kdMain as $ctrl"> <!--[if lt IE 10]>
<p class="browsehappy">You are using an <strong>outdated</strong> browser.
Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your
experience.</p>
<![endif]--> <kd-login layout="column" layout-fill="" ng-if="$ctrl.isLoginState()"> </kd-login> <kd-chrome layout="column" layout-fill="" ng-if="!$ctrl.isLoginState()"> </kd-chrome> <script src="static/vendor.bd425c26.js"></script> <script src="api/appConfig.json"></script> <script src="static/app.b5ad51ac.js"></script> </body> </html>
Very strange. When I try the same commands I get different results. What could it be? Some pre-existing software on the host? Proxy settings? Here is my sequence of commands with full printouts.
$ sudo kubeadm reset
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers.
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
$ sudo kubeadm init --apiserver-advertise-address=10.78.34.12
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubetest kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.78.34.12]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 31.003582 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kubetest as master by adding a label and a taint
[markmaster] Master kubetest tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 9970ef.e55b29f5e5781795
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 9970ef.e55b29f5e5781795 10.78.34.12:6443 --discovery-token-ca-cert-hash sha256:941f3eede5bda024af8ab382e4dc8d753da828129ad503dbb3cde59810efc094
$ sudo cp /etc/kubernetes/admin.conf ~/.kube/config
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
configmap "calico-config" created
daemonset "calico-etcd" created
service "calico-etcd" created
daemonset "calico-node" created
deployment "calico-kube-controllers" created
deployment "calico-policy-controller" created
clusterrolebinding "calico-cni-plugin" created
clusterrole "calico-cni-plugin" created
serviceaccount "calico-cni-plugin" created
clusterrolebinding "calico-kube-controllers" created
clusterrole "calico-kube-controllers" created
serviceaccount "calico-kube-controllers" created
$ kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-
node "kubetest" untainted
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
# In a different shell
$ kubectl proxy
$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Error: 'Forbidden'
Trying to reach: 'https://192.168.216.61:8443/'
It definitely looks like something is blocking the traffic, and I think the request does not even reach the Dashboard. It might be some pre-existing configuration, or perhaps an IP range mismatch, since the error shows a 192.168.x.x address while the advertise address is in the 10.x.x.x range.
192.168.216.61 is the Dashboard's endpoint IP address. When I curl the pod directly, I get a sequence of characters that Notepad++ renders as NAK,ETX,SOH,STX,STX; see below.
I also checked the logs of the dashboard pod, but there is nothing but an endless sequence of "Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds."
Lastly I tried to follow the troubleshooting guide here https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/ but could not find anything suspicious in my cluster.
$ kubectl describe svc/kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":...
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.96.60.94
Port: <unset> 443/TCP
TargetPort: 8443/TCP
Endpoints: 192.168.216.61:8443
Session Affinity: None
Events: <none>
$ kubectl describe po/kubernetes-dashboard-5bd6f767c7-4l6p7 -n kube-system
Name: kubernetes-dashboard-5bd6f767c7-4l6p7
Namespace: kube-system
Node: kubetest/10.78.34.12
Start Time: Tue, 27 Feb 2018 11:15:11 +0100
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=1682932373
Annotations: <none>
Status: Running
IP: 192.168.216.61
Controlled By: ReplicaSet/kubernetes-dashboard-5bd6f767c7
Containers:
kubernetes-dashboard:
Container ID: docker://e7fe24c1f31beb22a6efa9bbde101d4b2d175c253ed1c07e9b5dc35666bb5b09
Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
Image ID: docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:dc4026c1b595435ef5527ca598e1e9c4343076926d7d62b365c44831395adbd0
Port: 8443/TCP
Args:
--auto-generate-certificates
State: Running
Started: Tue, 27 Feb 2018 11:15:13 +0100
Ready: True
Restart Count: 0
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-5968f (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
kubernetes-dashboard-token-5968f:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-5968f
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
$ curl --noproxy 192.168.216.61 http://192.168.216.61:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Here, you are trying to connect to an HTTPS endpoint over plain HTTP. That is why it prints this weird series of characters. The URL should start with https://....
So what's the URL I should be using? This does not seem to work either.
$ curl -k --noproxy 192.168.216.61 https://192.168.216.61:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
404: Page Not Found
This message is correct. The Dashboard pod IP is 192.168.216.61, and that points you directly at the Dashboard. The api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ suffix is only required when you access an app through the API server proxy. Here you are connecting directly to the app, so you just need <podIP>:<applicationPort>.
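Spelled out with the values from the kubectl describe output above, the direct-to-pod URL is assembled like this:

```shell
# Build the direct-to-pod URL from the Service's Endpoints and TargetPort
# fields (values taken from the 'kubectl describe svc' output in this thread)
POD_IP=192.168.216.61   # Endpoints field
PORT=8443               # TargetPort field
URL="https://${POD_IP}:${PORT}/"
echo "$URL"
# → https://192.168.216.61:8443/
# then fetch it; -k because the Dashboard auto-generates a self-signed cert:
#   curl -k "$URL"
```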
I tried the same steps in a VM that is not behind a proxy, and everything works fine. Note, though, that I did have localhost, the host IP, and 10.96.0.0/12 included in the no_proxy environment variable, and "kubeadm init" did not give a proxy warning.
Proposing to close the issue.
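One thing worth noting about that no_proxy entry: a CIDR like 10.96.0.0/12 is only honored by tools that implement CIDR matching at all, and even then it covers only service ClusterIPs, not pod IPs such as 192.168.216.61. A throwaway sketch (not from the thread) to check membership:

```shell
#!/bin/sh
# Throwaway helper: check whether an IPv4 address falls inside a CIDR range,
# to show that the service CIDR in no_proxy does not cover pod IPs.

ip_to_int() {
  # Convert dotted-quad IPv4 to a 32-bit integer
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

in_cidr() {
  # Usage: in_cidr IP NET/BITS  -> exit 0 if IP is inside NET/BITS
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

in_cidr 10.96.60.94 10.96.0.0/12     && echo "10.96.60.94 (ClusterIP) is inside 10.96.0.0/12"
in_cidr 192.168.216.61 10.96.0.0/12  || echo "192.168.216.61 (pod IP) is NOT inside 10.96.0.0/12"
```

So even with a proxy client that understands CIDRs, a no_proxy of localhost,host_IP,10.96.0.0/12 would not exempt traffic to the Dashboard pod's 192.168.x.x address.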
@danielcra - so what was the fix? By proxy, do you mean a corporate proxy you are behind? Did you add the pod's cluster IP to the no_proxy environment variable on your client? It doesn't make sense to me.
I was also suffering from this problem, and was able to solve it.
The root cause is that the file /etc/kubernetes/manifests/kube-apiserver.yaml had an env: section where proxy environment variables were set. kubectl proxy goes via the API server, so subsequent requests made by the API server's reverse proxy were sent through the configured proxy server (and possibly ignored the no_proxy setting).
Given that the API server doesn't usually need an external proxy for anything, the solution was to simply delete the env: section from kube-apiserver.yaml and then run sudo systemctl restart kubelet to launch a fresh, non-proxy-using API server.
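A sketch of that edit, run against a throwaway copy rather than the live manifest (the real file is /etc/kubernetes/manifests/kube-apiserver.yaml; the env: contents and proxy.example.com below are made up for illustration):

```shell
# Demo manifest with an injected env: section, as described above
demo=/tmp/kube-apiserver-demo.yaml
cat > "$demo" <<'EOF'
spec:
  containers:
  - command:
    - kube-apiserver
    env:
    - name: http_proxy
      value: http://proxy.example.com:8080
    - name: no_proxy
      value: localhost,10.96.0.0/12
    image: k8s.gcr.io/kube-apiserver-amd64:v1.9.3
EOF
# Delete everything from the 'env:' line up to (but not including) the next
# key at the same indentation level ('image:' in this sample):
sed '/^    env:/,/^    image:/{/^    image:/!d;}' "$demo" > "$demo.fixed"
grep -q proxy "$demo.fixed" || echo "proxy settings removed"
# On the real host, after editing the manifest itself:
#   sudo systemctl restart kubelet
```

The kubelet watches the static pod manifest directory, so restarting it (or just saving the edited manifest) causes the API server pod to be recreated without the proxy variables.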
@dhague
Hi, I followed your recommendation, but I get the error below:
http: proxy error: dial tcp xxx.xxx.xxx.xxx:6443: getsockopt: connection refused
Hi folks, my issue is a little different and yet may still have some relevance to what has been discussed here so far, so let me share it with you all.
My cluster runs on three VMware VMs; the control plane is on ubun1811 (192.168.42.161/24).
In order to reach dashboard from a remote host, I did the following:
1. Edited /etc/kubernetes/manifests/kube-apiserver.yaml to add these two entries under spec -> containers -> command -> kube-apiserver:
- --advertise-address=192.168.42.161
- --etcd-servers=https://192.168.42.161:2379
2. Restarted kubelet after the change, then started the proxy:
kubectl proxy --address='192.168.42.161' \
--accept-hosts='^localhost$,^127.0.0.1$,^192.168.42.157$,^192.168.0.11$,^[::1]$' \
--disable-filter=true \
--port=8011
As a result, I am able to:
The remaining problem that I am still having is this:
Using Chrome, when I open http://ubun1811:8011 I get a curl-style output dump and no web page is displayed. Using IE, when I open http://ubun1811:8011, the "cannot reach this page" error remains.
I will continue troubleshooting to see what's missing in my dashboard. If anyone can shed light on the remaining issue, it would be greatly appreciated!
The same thing happened to me, but I fixed it by starting the proxy with request filtering disabled: kubectl proxy --disable-filter=true --address='192.168.0.27', also specifying the IP address so as not to use localhost.