I followed the steps to install and set up the proxy as detailed in the README.md.
When I point my web browser at the proxy address, I just get back an "unauthorized" response.
Dashboard version: latest as of 5/28/16
Kubernetes version: 1.2.2
Operating system: core
All other kubectl commands work correctly.
kubectl cluster-info
Kubernetes master is running at https://kub2.drewoconnor.com
Heapster is running at https://kub2.drewoconnor.com/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://kub2.drewoconnor.com/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://kub2.drewoconnor.com/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Dashboard
Can you kubectl get pods --namespace=kube-system and kubectl logs <pod-of-dashboard> --namespace=kube-system?
Hi bryk, I found the issue here.
I was using the kubectl proxy command as noted above on an Ubuntu server, specifying the IP address in the command, and pointing a browser on an OS X machine at the Ubuntu server's IP. The connection worked, but I got the unauthorized response. I found that if I used "kubectl proxy --port=9090" and then did a wget to localhost, the request worked as it should.
This appears to be an issue with the proxy command in kubectl, or perhaps connections are intentionally limited to localhost.
I don't believe this is an issue with the web ui.
Thanks,
Drew
@ScubaDrew Yeah, that's what I expected. I'm closing this issue. Please reopen if needed.
I have the same issue.
$kubectl proxy --port=9090
Starting to serve on 127.0.0.1:9090
curl 127.0.0.1:9090
<h3>Unauthorized</h3>
curl 127.0.0.1:9090/ui
<a href="/ui/">Moved Permanently</a>.
What can I do? Thanks.
Yeah, that's correct that it is moved permanently. Can you open the URL with a browser?
@bryk
$kubectl proxy --address="10.2.0.10" --port=9090
Starting to serve on 10.2.0.10:9090
or
$kubectl proxy --address="0.0.0.0" --port=9090
Starting to serve on 0.0.0.0:9090
in browser window
http://10.2.0.10:9090/ui
@EamonZhang it only works when the url is localhost. Accessing it via IP address is... Unauthorized by design.
@ScubaDrew
The server has no browser installed.
Does an nginx proxy work, or are there other measures?
Thanks
If your master is publicly accessible you can see the UI at https://master/ui, or your clients can use kubectl proxy on their machines. Finally, you can expose the UI as an external service and access it from the outside world.
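One hedged sketch of that last option, assuming the dashboard was installed under the default name in kube-system, is to switch the Service type to NodePort:

```shell
# Sketch only: service name/namespace are the defaults from the dashboard YAML;
# adjust them if your install differs.
kubectl patch service kubernetes-dashboard --namespace=kube-system \
  --patch '{"spec": {"type": "NodePort"}}'

# Look up the allocated port, then browse http://<node-ip>:<node-port>/
kubectl describe services kubernetes-dashboard --namespace=kube-system
```

Note that this exposes the dashboard without any authentication, so it should only be done on a trusted network.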
I have the master running on a public IP but get unauthorized on http://ip/ui.
The dashboard pod itself is not running on the master node, but on another one.
kubectl cluster-info does not show the kubernetes-dashboard, but the service is listed, and the pod as well.
How do I expose the UI? any hints?
I have the same issue as hwinkel above. I just installed Kubernetes and the dashboard per https://github.com/kubernetes/dashboard#kubernetes-dashboard
I get the "unauthorized" message when accessing https://
I am using a Mozilla browser from a Windows client, so the kubectl proxy approach doesn't seem appropriate. What am I missing here?
Same here, running 1.4, installed following the guide at http://kubernetes.io/docs/getting-started-guides/kubeadm/.
However, when installing the dashboard/UI it seems to be running, but I get an "Unauthorized".
I followed the guide at http://kubernetes.io/docs/user-guide/ui/ to install the UI, basically just running "kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml"
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-kube-master.net.loc 1/1 Running 1 19m
kube-system kube-apiserver-kube-master.net.loc 1/1 Running 1 20m
kube-system kube-controller-manager-kube-master.net.loc 1/1 Running 1 20m
kube-system kube-discovery-982812725-s79yq 1/1 Running 1 20m
kube-system kube-dns-2247936740-b9a2o 3/3 Running 3 20m
kube-system kube-proxy-amd64-4p9vg 1/1 Running 1 20m
kube-system kube-proxy-amd64-cbbrm 1/1 Running 0 20m
kube-system kube-proxy-amd64-ela05 1/1 Running 0 20m
kube-system kube-scheduler-kube-master.net.loc 1/1 Running 1 19m
kube-system kubernetes-dashboard-1655269645-arfpw 1/1 Running 0 15m
kube-system weave-net-cvcgd 2/2 Running 0 19m
kube-system weave-net-hxkwf 2/2 Running 2 19m
kube-system weave-net-pwuto 2/2 Running 0 19m
@natejoebott are you running 1.4?
Right, I too installed it via the beta version of kubeadm with 1.4. Upon further investigation this may be expected behavior based on limitation number 4: there is not yet an easy way to generate a kubeconfig file which can be used to authenticate to the cluster remotely with kubectl.
Are people expecting the dashboard to be publicly available without auth? If this were really the case, everyone would be exposing write access for their cluster to the anonymous world. Unless I'm missing a detail here?
No, but there doesn't seem to be a simple way to auth. The documentation for the dashboard provides the following guidance after installation:
And then navigate to https://
If it asks for a password, use $ kubectl config view to find it.
I was not prompted for a password, nor does kubectl config view provide any indication of a password. Next up is the alternative proxy method; however, only localhost can be used (http://localhost:8001/ui), so it only works from the Mac or Linux host where kubectl proxy was invoked.
Presumably you're using client-cert auth then if there's no password/token in kubeconfig. You can configure your browser to send the relevant client certificate, but it's usually not-straightforward in my experience.
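If you do go the client-certificate route, one way to get a browser to send the certificate (a sketch; the file names are assumptions, substitute the client-certificate and client-key paths referenced by your kubeconfig) is to bundle the PEM pair into a PKCS#12 file, which browsers can import:

```shell
# Sketch: client.crt/client.key are placeholders for the paths named by your
# kubeconfig's client-certificate and client-key fields. openssl will prompt
# for an export password, which the browser asks for on import.
openssl pkcs12 -export \
  -in client.crt -inkey client.key \
  -out kube-client.p12 -name "kubernetes-admin"
# Import kube-client.p12 via the browser's certificate manager (e.g. Firefox:
# Preferences -> Certificates -> Your Certificates -> Import); the browser can
# then present it when the apiserver requests a client certificate.
```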
kubectl proxy is certainly easier. There is a build of kubectl for Windows and if you want, you can run it with flags that enable it to listen on all interfaces and for connections from all hosts so that you can run it on a server machine and allow anyone to hit it (this is obviously insecure).
(I'm working on an example of how you can run a reverse-proxy with auth in front of the dashboard that should make things like this easier.)
(I'm working on an example of how you can run a reverse-proxy with auth in front of the dashboard that should make things like this easier.)
Can you share any details here? I'd love us to incorporate something like this to default install, so that folks can expose the UI to external world.
This is the idea: https://github.com/kubernetes/contrib/issues/1492. I still haven't had time to put the oauth2_proxy configuration together. Doesn't really solve for out-of-the-box though, as oauth2_proxy will require a configmap/secret with oauth2 secrets to work the way I'm imagining.
All right. Share anything you make work :) We need to explore all possible solutions, because, eventually, we need to bake a solution to this into Dashboard.
Heya, so we are all actively commenting on an issue that has been closed since May... which begs the question of whether this issue really is closed? (Maybe a scope change...)
Here's where we are at, total newbs:
1) Install Kubernetes with fancy new kubeadm tool: http://kubernetes.io/docs/getting-started-guides/kubeadm/
2) Per above "Explore other add-ons" ... http://kubernetes.io/docs/admin/addons/
3) "Dashboard is a dashboard web interface for Kubernetes." -- wow this sounds useful for us newbs!
4) https://github.com/kubernetes/dashboard#kubernetes-dashboard
5) kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
6) https://
7)
root@kub-test0:~# kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
8) Find this issue and:
root@kub-test0:~# wget --no-check-certificate https://localhost/ui
--2016-10-04 15:28:58-- https://localhost/ui
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:443... connected.
WARNING: cannot verify localhost's certificate, issued by ‘CN=kubernetes’:
Unable to locally verify the issuer's authority.
WARNING: no certificate subject alternative name matches
requested host name ‘localhost’.
HTTP request sent, awaiting response... 401 Unauthorized
Username/Password Authentication Failed.
So since this is my first time on a test cluster I can do this:
kubectl proxy --address 0.0.0.0
Nope: <h3>Unauthorized</h3>
A helpful addition would be a pointer on how to add a username/password in the Usage section at https://github.com/kubernetes/dashboard#kubernetes-dashboard
What URL did you try to hit after running kubectl proxy?
From my workstation, to external IP of kubernetes cluster Master:
http://10.10.1.188:8001/ --> <h3>Unauthorized</h3>
From localhost on kubernetes cluster Master:
https://localhost/ui --> 401 Unauthorized Username/Password Authentication Failed.
And what happens if you try, after running kubectl proxy of course:
Workstation -> http://10.10.1.188:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=_all -> Unauthorized
Kubernetes Master: wget http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=_all gets me an index.html that informs me that I am using an outdated browser.
@dannyman I got the same problem, so I tried using NodePort and it works. I can use the dashboard in the browser right now.
My env.
kubectl describe services kubernetes-dashboard --namespace=kube-system
Using the NodePort works for me on Linux.
@anutech2001 @bbalzola Can you contribute updates to our installation guide or troubleshooting section? I'd love us to have fewer issues when installing on home-made clusters.
Edit: even easier, as @bbalzola points out two comments above, you can bypass all the weird hinky "security" "Unauthorized" restrictions by hitting the NodePort directly:
kubectl describe services kubernetes-dashboard --namespace=kube-system
You can stop reading about the days of toil we have spent figuring out how to get to the web admin... now we have different problems, in that we have exposed our web admin interface to all comers, and really the challenge is to figure out secure access to NodePorts.
Question: is direct, unauthenticated access to the app via the NodePort a best practice in Kubernetes apps, or is this feature actually a bug?
Yes I have two work-arounds figured out:
kubectl proxy
Our verdict is that the web admin appears to be very nice not only for interpreting cluster state, but for deploying services, etc. Thus, we should try to support it for our cluster users, so we have a research spike to figure out Kubernetes cluster user authentication and access control. Unfortunately, that is a big hot nasty mess: http://kubernetes.io/docs/admin/authentication/
There is a bit of a chicken-and-egg problem here: the web admin is a user-friendly tool for new users, but you cannot really use it unless you are clever enough to configure one of the trickier, murkier parts of Kubernetes. If you are new, then you probably want to "get kubernetes running on your workstation and copy over the certificates used on the cluster"; then you can get the proxy back to the cluster working. (I have not done that part, but I reckon there's a good link that covers copying the certificates over in a basic setup, and that the web admin docs could offer that link to help people past this issue.)
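A rough sketch of that certificate-copying idea (paths and names are assumptions: kubeadm-style clusters keep their PKI under /etc/kubernetes/pki, and client.crt/client.key stand in for whatever admin credentials your cluster uses):

```shell
# Sketch only: <master-ip> and the cert/key file names are placeholders.
# Copy the cluster CA from the master to the workstation.
scp root@<master-ip>:/etc/kubernetes/pki/ca.crt .

# Point a local kubectl at the API server using those certificates.
kubectl config set-cluster mycluster \
  --server=https://<master-ip>:443 \
  --certificate-authority=ca.crt --embed-certs=true
kubectl config set-credentials myadmin \
  --client-certificate=client.crt --client-key=client.key --embed-certs=true
kubectl config set-context mycluster --cluster=mycluster --user=myadmin
kubectl config use-context mycluster

# With that in place, running `kubectl proxy` on the workstation serves the
# UI at http://localhost:8001/ui
kubectl proxy
```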
Question: is direct, unauthenticated access to the app via the NodePort a best practice in Kubernetes apps, or is this feature actually a bug?
This is more like a known limitation. We want to solve this problem but so far haven't had resources to do so. @colemickens Had done some research on putting an oauth proxy next to the UI.
I am sorry, I got lost here.
Is there any way or any workaround to work with the ui dashboard?
root@kubserver1:~# _kubectl proxy_
Starting to serve on 127.0.0.1:8001
root@kubserver1:~$ _curl http://localhost:8001/_
Unauthorized
I see that people here complain about this issue since April. If your documentation is wrong then why don't you update it?
@groyee quoting from above:
kubectl describe services kubernetes-dashboard --namespace=kube-system
Then just connect to the NodePort given in the output and it works. (This also means that anyone can potentially use your dashboard if they happen to find the right port, because it doesn't do auth.)
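If you just need the port number, a small convenience (assuming the Service has already been switched to type NodePort) is to pull it out with jsonpath instead of reading the describe output:

```shell
# Prints only the allocated node port of the dashboard service.
kubectl get service kubernetes-dashboard --namespace=kube-system \
  -o jsonpath='{.spec.ports[0].nodePort}'
```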
Doesn't work:

I tried:
@groyee Ok. Option 2 definitely works for me, using kube 1.4, installed using kubeadm. Being far from an expert on k8s, my wisdom here stops at: Check, in some way, that both the cluster as well as the UI services are completely up and running? "curl just waits for response" sounds like you're on the right track, but the service listening isn't operational or something.
@groyee please check the wiki of my ansible/terraform deployment https://github.com/mcapuccini/KubeNow/
I find it quite convenient to use SSH tunnelling.
It looks like the tutorial is only for OpenStack. I am using Azure.
How did you bring it up on Azure? If you used azkube or kubernetes-anywhere then I'd recommend using the kubeconfig along with kubectl proxy. Should just work.
@groyee we will add GCE and AWS soon, but not Azure I am afraid (though if you'd like to contribute I'll be happy to merge PRs; it's quite simple to make a terraform module). That said, you can take a look at these Ansible playbooks and run the commands manually in your deployment:
On master:
https://github.com/mcapuccini/KubeNow/blob/master/playbooks/roles/proxy-daemon/tasks/main.yml
On your local machine:
https://github.com/mcapuccini/KubeNow/blob/master/playbooks/ui-tunnels-add.yml
then you just browse: http://localhost:8001/ui
Thank you guys.
I uninstalled 1.4.1, installed a clean 1.4 and after that the NodePort works for me.
Life is not so easy with Kubernetes :-)
newb here. I followed the comments and instructions, and I was able to get my kubernetes dashboard to show up..yay...thank you all!!
I installed the monitoring and logging via http://kubernetes.io/docs/user-guide/monitoring/, specifically Grafana and InfluxDB (https://github.com/kubernetes/heapster/blob/master/docs/storage-schema.md#metrics). I can't seem to get this to pull up on the NodePort no matter what combination of URL manipulations I try. I finally installed a desktop GUI on my Linux server and was able to see Grafana, but only via "localhost". Any way to have Grafana show up on the real IP address so it is accessible without having to log into the master? Thanks all!
In my case, I was messing around with the getting-started kubernetes-on-vagrant-single guide.
I went the ssh port forwarding way. Here is the gist to automate it.
https://gist.github.com/iamsortiz/9b802caf7d37f678e1be18a232c3cc08
#!/bin/bash
# Usage: Assuming a vagrant based kubernetes (as in https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html), run this script in the same folder of the Vagrantfile (where you would normally do "vagrant up")
# * Then insert the password (by default: kubernetes)
# * Browse localhost:9090
USERNAME='kubernetes'
PASSWORD='kubernetes'
function main() {
Create_user_on_kubernetes_machine
SSH_port_forwarding
# Enjoy (at localhost:9090)
}
function Create_user_on_kubernetes_machine() {
vagrant ssh -c "if [ ! -d /home/$USERNAME ]; then sudo useradd $USERNAME -m -s /bin/bash && echo '$USERNAME:$PASSWORD' | sudo chpasswd; fi"
}
function SSH_port_forwarding() {
KUBERNETES_HOST=$(kubectl cluster-info | head -n 1 | grep -o -E '([0-9]+\.){3}[0-9]+')
TARGET=$(kubectl describe services kubernetes-dashboard --namespace=kube-system | grep Endpoints | awk '{ print $2 }')
ssh -L 9090:$TARGET $USERNAME@$KUBERNETES_HOST
}
main


Same trouble here. I'm going to try the solutions described. I was using kubeadm to set up the cluster like the other users reporting the issue.
Is anyone able to tell me what exactly the issue is here? I realize there's probably already agreement on this, but I'm just not too sure what it is yet. I will try using Caddy to reverse-proxy the dashboard.
It didn't work for me as well.
What @seeekr suggested worked for me.
Same trouble here. I'm going to try the solutions described. I was using kubeadm to set up the cluster like the other users reporting the issue.
It seems that kubeadm tool and its networking are doing something wrong. Can you report an issue there and link to this one?
If you want to try out the UI, use minikube. It works and has dashboard out-of-the-box.
Having similar problem on a cluster set up by kubeadm:
$ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-1655269645-ltgvv 0/1 CrashLoopBackOff 13 47m
$ kubectl logs kubernetes-dashboard-1655269645-ltgvv -n kube-system
Starting HTTP server on port 9090
Creating API server client for https://10.0.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.0.0.1:443/version: dial tcp 10.0.0.1:443: getsockopt: no route to host
$ curl https://10.0.0.1:443/version
curl: (60) Peer's Certificate issuer is not recognized.
$ curl -k https://10.0.0.1:443/version
Unauthorized
I suspect kubeadm is not configuring any unauthenticated kube-apiserver service endpoints 😕
$ kubectl describe svc kubernetes
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Selector: <none>
Type: ClusterIP
IP: 10.0.0.1
Port: https 443/TCP
Endpoints: 172.18.0.134:6443
Session Affinity: ClientIP
I have not yet tried kubeadm myself. Maybe your problem is described in the troubleshooting guide: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
Could you please check?
Checking. Thanks!
+1. Using NodePort results in a 404 not found for me.
My update is that I've tried what I know to try and what the docs suggest, to no avail. I don't think it's broken exactly, because I can see the dashboard running there. That said, I cannot connect to it.
Did the location of the CA certs move? Are they now directly in /etc/kubernetes? I can't find CA certs in /etc/kubernetes/certs, where I recall reading they'd be.
@faddat the output of the setup usually dumps;
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
I must say this is quite a dead end for me. http://<ip>:<NodePort>/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=_all fails to load on my browser and instead tries to download files that fail to decompress.
Same as everyone.
@Hirayuki Were you using kubeadm? We're now looking at ways to fix it.
To everyone who has problems running Dashboard in a kubeadm-based cluster: kubeadm is still in early stages of development, so it has some problems with DNS, networking and thus Dashboard UI. If all you want is to try our Dashboard, use https://github.com/kubernetes/minikube :)
I tried kubeadm. Everything works as expected. Not sure if everything is obvious for a beginner, but it works.
Are you still stuck? Or could you give more details so we can improve documentation/code?
One comment: it seems most people have a running environment but cannot connect to the dashboard from outside. The reason is that the api-server is configured to allow access only with a client certificate, not with a username & password, so the browser cannot get in. If you want to access the dashboard from outside you must create a proxy with kubectl:
kubectl --kubeconfig ./admin.conf proxy
and then access localhost:8001/ui with a browser. (Or change the configuration.)
I tried nearly all the ways raised on Google (NodePort etc.), and none of them work.
I tried kubectl --kubeconfig ./admin.conf proxy, but unfortunately it failed again.
I cannot connect from an external web browser (timeout).


Could anyone share please an example of admin.conf?
admin.conf is just under the /etc/kubernetes/
Hmm, I tried kubectl --kubeconfig ./admin.conf proxy but I got unauthorized anyway. Does anyone know a way to fix this problem?
P.S. I tried running the proxy on the server with the master node.
It sounds like many of you are starting the proxy and then not actually using it...
Start the proxy and then go to http://127.0.0.1:8001/ui in your browser.
@Dm3Ch You should be running the proxy from the computer where you're trying to browse the dashboard.
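Concretely, for a kubeadm-style cluster that workflow might look like this (a sketch; it assumes the admin kubeconfig is at /etc/kubernetes/admin.conf on the master, where kubeadm writes it, and that you have SSH access to copy it):

```shell
# Run these on the workstation with the browser, not on the master.
# <master-ip> is a placeholder for your master's address.
scp root@<master-ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy
# Then browse http://127.0.0.1:8001/ui on this same machine.
```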
@colemickens
I tried running the proxy on the kube master server and opening localhost in a browser there,
but I got this problem after redirecting:

Your dashboard is probably not healthy then, if there are no endpoints listed for the dashboard.
(It also means you got past the proxy/auth problem though, so might file a new Issue if you're unable to resolve it)
@colemickens Thanks for your comments.
Is there any workaround to visit the dashboard from outside?
I don't understand the question. You are accessing it from outside. If you want to make it publicly available, then you will need to deploy a load balancer for it... the same way you would any other Service in your cluster. But you probably don't want to do that without putting a reverse proxy with authentication in front of it. Again, all out of scope for this Issue though.
@colemickens Sorry, I mean: is it possible to visit the master server's dashboard from another server?
I have the master running on a public IP but get unauthorized on https://ip/ui

I tried nearly all of the ways raised on GitHub and Stack Overflow but still failed to log in from an outside browser.
I am only able to log in to localhost from the master server's browser (but it fails at the endpoints).
Here is my pod status, in case anything is wrong:
[root@cloud kubernetes]# kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
etcd-cloud 1/1 Running 0 16h
kube-apiserver-cloud 1/1 Running 0 16h
kube-controller-manager-cloud 1/1 Running 0 16h
kube-discovery-982812725-z4b0v 1/1 Running 0 16h
kube-dns-2247936740-j6pq1 0/3 ContainerCreating 0 16h
kube-proxy-amd64-d2nr2 1/1 Running 0 16h
kube-scheduler-cloud 1/1 Running 0 16h
kubernetes-dashboard-1655269645-km0gq 0/1 Pending 0 15h
Like I said, external access for the dashboard is unrelated to this Issue, should be done the same as you'd expose any Kubernetes service, and should only be done with extreme caution or with a reverse proxy in front doing authentication.
Also, your dashboard not running is unrelated to this Issue. Please start a new one. You can tag me in it and I will try to help you find out why the dashboard is not running (it's definitely not running, you can see it right there in the output you pasted...).
@colemickens
I just raised another issue and it is appreciated if you may help to have a look
https://github.com/kubernetes/dashboard/issues/1382
I'll re-deploy on my systems with the latest kubeadm and let you know the
situation.
Accessing via the NodePort works for me on v1.4.4:
$ kubectl describe services kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: app=kubernetes-dashboard
Selector: app=kubernetes-dashboard
Type: NodePort
IP: 10.109.227.200
Port: <unset> 80/TCP
NodePort: <unset> 32619/TCP
Endpoints: 10.44.0.3:9090
Session Affinity: None
browse from another host to url: http://k8s-master.local:32619
@ensalty It seems most responses want you to access the dashboard from the same machine that runs the proxy. You want to access it from outside. There are two solutions.
$ kubectl proxy --address 192.168.1.50 --port=9090 --accept-hosts='^*$'
The key here is that by default the proxy only accepts hosts that contain localhost or 127.0.0.1. The regular expression above accepts anything, so you can get in this way too.
kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='^*$' works for me.
Would it be too "crazy" to suggest that the dashboard be exposed by
default?
No, not crazy but perhaps a bit insecure ;-)
Maybe for test/dev deployments a password prompt by default would be better.
I don't think this is insecure at all. The dashboard is already off by default. If we're turning it on, leave it up to us, the installers, to worry about security via authentication, firewall restrictions, etc. Having access controls in place limiting it to localhost, when it's typically installed on a master server with no head/GUI, makes the default behaviour almost entirely useless. It's almost never going to be used that way, so why bother making that the default? You're just adding one additional hoop we have to go through because you assume that we (the community) don't know how to secure our own cluster.
How is it possibly a good or even okay idea to default to publicly exposing a completely unauthenticated, writable access to a cluster?
What is the "extra hoop" you're even referring to?
The extra hoop is that we have to add additional parameters to make the dashboard usable. By default it's not accessible from anywhere but localhost, which is useless in any deployment environment given that we're deploying to SERVERS where no head/GUI exists.
The dashboard is off by default. I am not advocating turn it on with no authentication. I am saying that having the binding to 127.0.0.1 only as default behaviour when it's already off by default is overkill. This is analogous to a prompt that says "Are you sure?" followed by another saying "Are you really sure?"
As was stated above, a default username/password (or perhaps prompting for one for first time setup) is far more useful than binding to localhost for a server environment.
The dashboard doesn't bind to 127.0.0.1... do you understand what kubectl proxy does?
kubectl proxy is meant to be run locally from the box where you want to see the dashboard, so it defaulting to binding locally makes plenty of sense.
Defaulting kubectl proxy to binding to all interfaces and accepting requests from all hosts would expose your entire API server, with no auth, to the entire Internet.
If you want it to be exposed with a LoadBalancer and some sort of dashboard-specific auth in front, that's a different beast with its own problems (privilege escalation because regular users can take actions via the service account and whatever permissions it has)
I would rather have authentication in place by default with configuration on the master server as config files (controlled via SSH) and then lock down access to the dashboard via firewall rules.
You realize that one has to take action to start the server regardless, correct? You cannot protect people from shooting themselves in the foot. I understand the approach, but it is restrictive by simply assuming everyone can't manage their own security.
What I'm finding difficult is kubeadm created the master endpoint using the private IP of my server and now I cannot access that server remotely using admin.conf as I don't have a VPN connection into my AWS VPC. So how am I supposed to remotely run kubectl proxy against my cluster?
You realize that one has to take action to start the server regardless, correct?
I don't know what you're referring to, no.
What I'm finding difficult is kubeadm created the master endpoint using the private IP of my server and now I cannot access that server remotely using admin.conf as I don't have a VPN connection into my AWS VPC. So how am I supposed to remotely run kubectl proxy against my cluster?
That sounds like a kubeadm bug/issue. Opening Dashboard to the world, publicly available, insecure by default, and running with Service Account credentials, is a very poor way of addressing a nearly unrelated issue.
One has to go in and start the dashboard (that's what I meant by server). It's not on by default. It's not like I'm asking for it to be started with the Kubernetes install.
There are better ways to handle this than making it a pain in the arse to access remotely. And I'm not asking for the other end of the spectrum as you keep describing. You're putting words in my mouth. If that's all you're going to continue to do, please stop replying to this thread.
One has to go in and start the dashboard (that's what I meant by server). It's not on by default. It's not like I'm asking for it to be started with the Kubernetes install.
That's not true of most bring-ups, including kubeadm as soon as the addon story is figured out.
There are better ways to handle this than making it a pain in the arse to access remotely. And I'm not asking for the other end of the spectrum as you keep describing. You're putting words in my mouth. If that's all you're going to continue to do, please stop replying to this thread.
You asked for the server to bind to something other than 127.0.0.1. Since Dashboard doesn't bind exclusively to 127.0.0.1, I assumed you were asking for kubectl proxy to accept requests from all hosts on all interfaces by default. I'm sorry if I misinterpreted, my goal is certainly not to put words in anyone's mouth. That having been said, I think it's extremely unfair to act as if this was some intentional act whose only purpose was to be a "pain". The dashboard doesn't have auth today, so it would be irresponsible to ship it publicly exposed.
I don't think anyone here has expressed opposition to the idea of supporting/shipping auth in front of the dashboard, but that's almost completely unrelated to how kubectl proxy works, has its own warts, and probably ought to be discussed in a new Issue.
Same issue in my 1.4.5 env.
/ # curl -v --tlsv1.2 --cert /srv/kubernetes/server.cert --key /srv/kubernetes/server.key --cacert /srv/kubernetes/ca.crt https://10.0.0.1/api/v1/services
* Trying 10.0.0.1...
* TCP_NODELAY set
* Connected to 10.0.0.1 (10.0.0.1) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /srv/kubernetes/ca.crt
CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=kubernetes-master
* start date: Nov 9 04:24:15 2016 GMT
* expire date: Nov 7 04:24:15 2026 GMT
* subjectAltName: host "10.0.0.1" matched cert's IP address!
* issuer: CN=<my external ip>@xxxxxxxxxx
* SSL certificate verify ok.
> GET /api/v1/services HTTP/1.1
> Host: 10.0.0.1
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Mon, 12 Dec 2016 05:46:39 GMT
< Content-Length: 13
<
Unauthorized
* Curl_http_done: called premature == 0
* Connection #0 to host 10.0.0.1 left intact
/ #
My API server is configured as below:
--bind-address 0.0.0.0 \
--service-cluster-ip-range=10.0.0.0/24 \
--kubelet-certificate-authority /srv/kubernetes/ca.crt \
--kubelet-client-certificate /srv/kubernetes/server.cert \
--kubelet-client-key /srv/kubernetes/server.key \
--kubelet-https \
--tls-cert-file=/srv/kubernetes/server.cert \
--tls-private-key-file=/srv/kubernetes/server.key \
--secure-port=443 \
--token-auth-file=/srv/kubernetes/token.csv
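The 401 above is expected: with --token-auth-file the apiserver accepts bearer tokens, and the server cert/key pair passed to curl is not a client credential it recognizes. A hedged sketch (the token value below is invented) of pulling the first field out of a token.csv line and sending it as a bearer token instead:

```shell
# token.csv lines have the form: token,user,uid[,"group1,group2"]
# (the static token file format used by --token-auth-file).
line='s3cretT0ken,admin,1,"system:masters"'
TOKEN=$(echo "$line" | cut -d, -f1)   # first field is the token itself
echo "$TOKEN"

# Against a real cluster you would then authenticate with a Bearer header:
# curl --cacert /srv/kubernetes/ca.crt \
#      -H "Authorization: Bearer $TOKEN" \
#      https://10.0.0.1/api/v1/services
```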
One of the main points of confusion about kubectl proxy in the guide is that it is not meant to be run on the master node; run it on your local machine instead. That way you don't need an insecure master node and can still access the dashboard without any issue.
Steps to get it working:
1) Run kubectl proxy on your local development machine.
2) Open http://localhost:8001/ui/ (note the /ui/ at the end).
I had the same problem ... resolved it by adding "/ui" at the end ... 127.0.0.1:8001/ui
Here is my way to resolve the problem.
Taking hints from the comment by @natejoebott on Oct 4, 2016, I realised the problem is a limitation in the kubernetes dashboard setup (only localhost can be used), so I worked around it with an SSH tunnel.
I didn't need to install kubectl on my local PC.
Run the command below to build an SSH tunnel from your local PC (or use PuTTY on Windows with similar settings). Replace 10.0.0.1 with the host that runs kubectl proxy:
ssh [email protected] -L 8001:127.0.0.1:8001 -N
Now you can access http://localhost:8001/ui or http://localhost:8001/api/v1 from your local PC. The Unauthorized error is gone.
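The tunnel trick can be captured in a tiny helper (the `tunnel_cmd` name, user, and host are hypothetical): everything sent to the local port travels over SSH and is delivered to 127.0.0.1 on the remote host, so kubectl proxy sees a localhost client and its filter passes the request.

```shell
# Build the SSH local-forward command for a given user/host/port.
# -L binds the local port; -N means "no remote command, forwarding only".
tunnel_cmd() {
  local user=$1 host=$2 port=${3:-8001}
  echo "ssh ${user}@${host} -L ${port}:127.0.0.1:${port} -N"
}

tunnel_cmd core 10.0.0.1
```

Run the printed command, leave it open, and browse http://localhost:8001/ui as if the proxy were local.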
Thanks @bbalzola .
Use ipaddress:NodePort (192.168.3.48:30342) to access the dashboard.
root@k8s-master:/home/ubuntu/k8s_install_images# kubectl describe services kubernetes-dashboard --namespace=kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: app=kubernetes-dashboard
Selector: app=kubernetes-dashboard
Type: NodePort
IP: 10.101.141.25
Port: <unset> 80/TCP
NodePort: <unset> 30342/TCP
Endpoints: 10.244.0.7:9090
Session Affinity: None
No events.
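Given the service description above, the dashboard URL is any node's IP plus the NodePort. A small sketch using the values from that output (on a live cluster the port could be read with kubectl's jsonpath output, shown in the comment):

```shell
# With a live cluster the NodePort can be extracted with, e.g.:
#   kubectl -n kube-system get svc kubernetes-dashboard \
#     -o jsonpath='{.spec.ports[0].nodePort}'
NODE_IP=192.168.3.48    # any node's address works for a NodePort service
NODE_PORT=30342
echo "http://${NODE_IP}:${NODE_PORT}/"
```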
@bbalzola using the NodePort works, however it is http. Is there a way to enable a self-signed certificate to use https when connecting to the dashboard?
@naisanza I think it's possible but I haven't tested yet sorry
The solution https://github.com/kubernetes/dashboard/issues/692#issuecomment-260547456 posted above worked for me. Thanks @waynebrantley!
I installed k8s using kubeadm in https://kubernetes.io/docs/getting-started-guides/kubeadm/. I wanted to run kubectl proxy on the master, and access the k8s API from a different host (not the master) using the IP address (not localhost) of the master.
172.18.7.245 is the IP address of my master, and after doing kubectl proxy --port=8181 --address=172.18.7.245 --accept-hosts='^*$' & on the master, I was able to successfully access the k8s API from a different host (with IP address 172.18.7.246 and that pings the master 172.18.7.245) using
curl http://172.18.7.245:8181/api/v1
curl http://172.18.7.245:8181/api/v1/services
curl http://172.18.7.245:8181/api/v1/pods
curl http://172.18.7.245:8181/api/v1/secrets
I was also able to access the k8s API from another machine that can SSH into 172.18.7.246 after setting up SSH port-forwarding ssh -L 8080:172.18.7.245:8181 172.18.7.245 on the machine and using the following commands.
curl http://localhost:8080/api/v1
curl http://localhost:8080/api/v1/services
curl http://localhost:8080/api/v1/pods
curl http://localhost:8080/api/v1/secrets
@Klae why do you expect it to work? You're submitting zero authentication; I wouldn't expect that to work at all unless you're running the cluster very insecurely.
@Klae as documented numerously in this thread:
$ kubectl proxy
# open in your browser: http://localhost:8001/ui
The same machine you're running kubectl proxy on yes. It opens an already-authenticated proxy to the cluster.
The only way you're going to be able to use the Web UI without the proxy is if you get your browser to send a token, or use a client cert, depending on how your apiserver is setup. Using the proxy is much, much, much easier!
something seems weird here...
like others, I want to use (mac) laptop and (chrome) browser to hit the (great) dashboard gui.
If I port forward from mac over ssh
ssh -A [MASTER-HOSTNAME] -L 8001:127.0.0.1:8001
the browser loads gui but 403s all data.
however, I'm 100% free to "raid" secrets from cmd-line/browser:
curl http://localhost:8001/api/v1/secrets
It would seem that if we (the clients/users/admins of the k8s cluster) firewall off the proxy port from the outside world but port forward over SSH, allowing full GUI dashboard access would be pretty reasonable, security-wise?
That only makes sense with RBAC disabled. I'm not sure why the dashboard 403s when not authenticating even though authentication is not required.
If RBAC is enabled (which it is with e.g. kubeadm 1.6), you'll need a way to authenticate as a client, and that is what kubectl proxy does nicely. In that case an SSH tunnel doesn't really make sense.
I have gone through this very long thread and still haven't found a solution for my case. In my scenario, I have a kube cluster with 3 nodes, deployed on VMs that have no monitors attached, and I don't want to use X11 forwarding either. This is what I need:
But now when I do kubectl config set-credentials userA --username=userA --password=pwd, I get Unauthorized when hitting https://{IP}/ui. Any solution? I'm not that familiar with other approaches like openssl; I tried following the guide but still had no luck.
thanks @praseodym for the info.
Do you happen to have any pointers or places to look for more info/help on that?
I just spent 3 days in a black hole trying to sort out a kubectl v1.6.1 client issue compared to a working v1.6.0 on Sun/Mon, bleah! (My ops team has an over-aggressive firewall, which made sorting out issues a super challenge.)
(I can get my admin.conf onto my laptop and run kubectl proxy with it, and it seems to connect -- but I hit the same kind of "instantly hides access to the details" behaviour, the same denied stuff as when trying to port forward a proxy from the server. So I'm probably close!)
The solution #692 (comment) posted above worked for me. Thanks @waynebrantley!
this works for my case
Finally, both approaches below work for me, but you need to ensure the master can ping pods on the worker nodes. I found that somehow --iface was not being picked up when starting the kube-flannel pod because I am using Vagrant; use kubectl replace -f kube-flannel.yml --force to delete and recreate the resource.
My issue: I installed a few times and spent two days, but no clue.
I want to access cluster from outside host.
1) with kubectl proxy --address PubIP --port=9090 --accept-hosts='^*$'
I can access almost anything from an outside browser, but not /ui/; everything else like /api/ is no problem.
2) With NodePort: not working for me.
I did see the service and NodePort for the dashboard, but PubIP:NodePort does not work; it just shows ERR_EMPTY_RESPONSE after a long wait.
@zhuroy Thank you so much for the time you gave me. I tried it and it works:
kubectl proxy --address xxx.xx.xx.xx --port=2087 --accept-hosts='^*$'
Accessing http://xxx.xx.xx.xx:2087/ui in the browser automatically redirects to
http://xxx.xx.xx.xx:2087/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/workload?namespace=default
Closing as stale.
EASY ANSWER
You can get around this by not forwarding the host header... in Apache Virtualhost Config:
ProxyPreserveHost Off
I've not tried configuring an Apache (or Nginx) web server. Instead, the following works for me:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
kubectl proxy
Running the above command from Windows worked for me. You have to supply the admin.conf file using the --kubeconfig flag:
kubectl --kubeconfig=admin.conf proxy
The "Unauthorized" error can be fixed using the disable-filter flag:
$ kubectl proxy --disable-filter=true --address=0.0.0.0
@colemickens the documentation doesn't say to run kubectl proxy from the machine you're physically on. I kept thinking it meant to run it from the master. So does that mean you need to install all of the Kubernetes packages on your client machine to use Dashboard via kubectl proxy?
And would that also mean you need a copy of /etc/kubernetes/admin.conf on your local client as well?
From docs:
kubectl proxy creates a proxy server between your machine and the Kubernetes API server. By default it is only accessible locally (from the machine that started it).
@naisanza it is not stated because it can be run from any machine; we don't want to suggest anything, or make users think that some particular machine is the only one they can run it from.
Only requirements are kubectl and valid kubeconfig file.
PS. The master node does not require kubectl in any way. It's the user's choice to install it there.
@naisanza The reason I mention it is:
Many users in this thread seem to want to access the dashboard from machines that are not part of the cluster.
If you run kubectl proxy on the master, you either have to:
a) tunnel traffic to the master through that proxy
b) open the proxy to accept traffic from any host and put it on a publicly accessible port
Option (b) is suggested throughout this thread, but it means that anyone in the world could then start writing to your cluster. The more secure option is to run kubectl proxy on the same node you're running the browser on.
I think there are additional options now that kube-dashboard offers some alternative authentication options, but that was my reason for recommendation in this thread.
hi @miguelcastilho , kubectl proxy --disable-filter=true --address=0.0.0.0 works for me, thanks a lot
I was able to get this working, so I'm sharing it for anyone interested.
BACKGROUND: K8s master on ubuntu VM. Couldn't access dashboard from my local machine (Windows 10).
Solution:
Taking pointers from @seeekr and @groyee, I installed kubectl on my machine and configured it to map to the cluster using https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/.
Note: I had to use WinSCP to copy over the cert files from ~/kube/certs/ on the Ubuntu VM.
Thereafter,
kubectl config set-cluster <custom-cluster-name> --server=https://{ip-of-ubuntu-master} --certificate-authority=<path/to/apiserver.pem>
kubectl config set-credentials <custom-user-name> --client-certificate=<path/to/ca.pem> --client-key=<path/to/ca-key.pem>
kubectl config set-context <custom-context-name> --cluster=<custom-cluster-name-from-above> --user=<custom-user-name-from-above>
kubectl config use-context <custom-context-name>
kubectl proxy
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Enjoy!!!
I tried the following way to get this working:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
port-forward also works:
sudo kubectl --namespace kube-system port-forward svc/kubernetes-dashboard 443
then access https://localhost
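The port-forward above needs sudo only because it binds privileged local port 443. kubectl port-forward accepts LOCAL:REMOTE pairs, so mapping to a high local port avoids root entirely (an untested sketch; the 8443 choice is arbitrary):

```shell
# Unprivileged variant of the command above (sketch):
#   kubectl -n kube-system port-forward svc/kubernetes-dashboard 8443:443
# then browse to https://localhost:8443
mapping="8443:443"
echo "local port ${mapping%%:*} -> service port ${mapping##*:}"
```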
@soolaugust got me working with kubectl proxy --address 0.0.0.0 --accept-hosts '.*' as well.
There really should be something more obvious like --accept-all-hosts.
Great!
Hello,
I am a newbie with Kubernetes. I am running Kubernetes and minikube on Ubuntu Server 16.04.
I ran kubectl proxy and I got this
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
E0907 13:25:13.009215 1817 proxy_server.go:144] Error while proxying request: dial tcp 192.168.99.100:8443: i/o timeout
E0907 13:25:43.287623 1817 proxy_server.go:144] Error while proxying request: dial tcp 192.168.99.100:8443: i/o timeout
E0907 13:27:57.516225 1817 proxy_server.go:144] Error while proxying request: dial tcp 192.168.99.100:8443: i/o timeout
E0907 13:28:27.599589 1817 proxy_server.go:144] Error while proxying request: dial tcp 192.168.99.100:8443: i/o timeout
I don't understand that error.
Here are my IP addresses:
$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp7s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether 00:26:6c:28:ce:5f brd ff:ff:ff:ff:ff:ff
3: wlp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 24:ec:99:48:97:8b brd ff:ff:ff:ff:ff:ff
inet 192.168.100.19/24 brd 192.168.100.255 scope global dynamic wlp8s0
valid_lft 170760sec preferred_lft 170760sec
inet6 fe80::bbd3:3733:b634:a9a0/64 scope link
valid_lft forever preferred_lft forever
4: vmnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether 00:50:56:c0:00:01 brd ff:ff:ff:ff:ff:ff
inet 192.168.5.1/24 brd 192.168.5.255 scope global vmnet1
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fec0:1/64 scope link
valid_lft forever preferred_lft forever
5: vmnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether 00:50:56:c0:00:08 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global vmnet8
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fec0:8/64 scope link
valid_lft forever preferred_lft forever
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:37:fb:dc:65 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:37ff:fefb:dc65/64 scope link
valid_lft forever preferred_lft forever
10: vboxnet1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 0a:00:27:00:00:01 brd ff:ff:ff:ff:ff:ff
661: br-b03b4b67d036: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e6:30:0f:1f brd ff:ff:ff:ff:ff:ff
inet 172.28.0.1/16 brd 172.28.255.255 scope global br-b03b4b67d036
valid_lft forever preferred_lft forever
662: br-54dc24dbbd29: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:af:a2:bb:cb brd ff:ff:ff:ff:ff:ff
inet 172.29.0.1/16 brd 172.29.255.255 scope global br-54dc24dbbd29
valid_lft forever preferred_lft forever
668: vboxnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 192.168.56.1/24 brd 192.168.56.255 scope global vboxnet0
valid_lft forever preferred_lft forever
inet6 fe80::800:27ff:fe00:0/64 scope link
valid_lft forever preferred_lft forever
$ minikube start
😄 minikube v1.13.0 on Ubuntu 16.04
✨ Using the virtualbox driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing virtualbox VM for "minikube" ...
🐳 Preparing Kubernetes v1.19.0 on Docker 19.03.12 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube" by default
How can I solve it?
@dannyman I got the same problem, so I tried using NodePort and it works. I can use the dashboard in the browser now.
My env.
- windows 7, vagrant + centos 7, kubernetes 1.4
Hi,
Every time I log in to the k8s dashboard, it asks for the token. How did you fix it?
kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='^*$' works for me.
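A footnote on the --accept-hosts values used throughout this thread: '.*' is the plain match-anything regex, while '^*$' happens to work in practice but relies on odd regex semantics. Either way, a wide-open pattern means every Host header passes the filter (grep -E is used below as a stand-in for the proxy's matcher, so this is an illustration, not the proxy's actual code):

```shell
# '.*' matches any Host header, which is exactly why it opens the proxy
# to every caller -- only do this behind a firewall or an SSH tunnel.
for host in localhost 10.2.0.10 kub2.drewoconnor.com; do
  echo "$host" | grep -Eq '.*' && echo "$host: allowed"
done
```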