Hello,
I'm trying to test the dashboard, but I get the following errors and I can't access the UI:
2016/02/14 15:46:15 Getting list of all replication controllers in the cluster
2016/02/14 15:46:15 the server has asked for the client to provide credentials (get replicationControllers)
2016/02/14 15:46:15 Outcoming response to 62.210.220.xx:56978 with 500 status code
Could you please guide me to solve this issue?
Regards,
Smana
How did you start the UI and Kubernetes cluster? What is your Kubernetes version?
Hi @bryk
I've just run kubectl create -f src/deploy/kubernetes-dashboard.yaml
I'm running Kubernetes version 1.1.4.
Thank you,
Looks like a problem with credentials. Is your apiserver protected by some security mechanisms?
cc @floreks @maciaszczykm @cheld Have you ever seen this problem?
The apiserver just listens on HTTPS with basic authentication.
Hello guys, I've got the same problem here.
I use an Azure Kubernetes cluster deployed with the getting-started guide (Azure, CoreOS, Kube, Weave).
I use Kubernetes version 1.1.7.
I ran the same command as @Smana: kubectl create -f src/deploy/kubernetes-dashboard.yaml
But Heapster needs the ca.crt that should be inside the serviceaccount folder.
The problem is that in the Azure cloud config the ca.crt is not set up in the kube-controller service.

So do you have some instructions for setting up the ca.crt in the CoreOS cloud config?
Or something else?
Thanks. Let me know if you need further info :)
@theobolo @Smana I've just pushed a new testing image of the Dashboard UI. It includes certificate setup:
RUN apk --update add ca-certificates
RUN for cert in `ls -1 /etc/ssl/certs/*.crt | grep -v /etc/ssl/certs/ca-certificates.crt`; do cat "$cert" >> /etc/ssl/certs/ca-certificates.crt; done
Can you delete the old Dashboard replication controller and recreate it? Please tell me whether this helps.
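In case you need the exact commands, it is roughly the following (a sketch; the rc name and namespace may differ in your setup):
$ kubectl delete rc kubernetes-dashboard --namespace=kube-system
$ kubectl create -f src/deploy/kubernetes-dashboard.yaml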
@bryk Thank you, unfortunately I still get the same error:
2016/02/15 13:27:59 Incoming HTTP/1.1 GET /api/v1/replicationcontrollers request from 62.210.220.66:47908
2016/02/15 13:27:59 Getting list of all replication controllers in the cluster
2016/02/15 13:27:59 the server has asked for the client to provide credentials (get replicationControllers)
2016/02/15 13:27:59 Outcoming response to 62.210.220.xx:47908 with 500 status code
My apiserver is reachable with the following command; I don't know if that helps:
curl --cacert /etc/kubernetes/ssl/ca.pem -u kube:xxxxxxxxxxx https://62.210.220.xx:8443
@Smana
Maybe our backend is connecting to the master at a different port, where only basic authentication is available.
This looks like a similar issue:
https://github.com/kubernetes/kubernetes/issues/7622#issuecomment-98389696
@bryk what do you think? Is this even possible for in-cluster configuration?
The Dashboard relies on the service account for authentication to the API server.
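This is roughly what the backend does with the mounted service account credentials; you can reproduce it by hand from inside any pod (a sketch; the service IP 10.16.0.1 is just an example):
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $TOKEN" https://10.16.0.1:443/version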
@bryk Even with the new image I've got an error:
Get https://10.16.0.1:443/api/v1/replicationcontrollers: read tcp 172.17.0.3:45116->10.16.0.1:443: read: connection reset by peer
In my Kubernetes cluster the Kube API is available at 172.18.0.12:8080 or 172.18.0.12:8443, but if I try a curl at
http://10.16.0.1:80 or https://10.16.0.1:443, nothing happens.
And I still have the ca.crt error :/
To be clear: I'm running on CoreOS, and in my cloud config the root-ca-cert option is not defined, so the ca.crt is not created during the deployment. That should be the problem, no?
Hello, any news on this please?
@luxas Can you help?
@Smana We've just done a new canary and versioned release. The client library was updated. Can you check once more with src/deploy/kubernetes-dashboard-canary.yaml?
@bryk Unfortunately I'm still getting the same error:
2016/02/23 16:20:52 Starting HTTP server on port 9090
2016/02/23 16:20:52 Creating API server client for https://10.233.0.1:443
2016/02/23 16:20:52 Creating in-cluster Heapster client
2016/02/23 16:23:10 Incoming HTTP/1.1 GET /api/v1/replicationcontrollers request from 62.210.220.66:58166
2016/02/23 16:23:10 Getting list of all replication controllers in the cluster
2016/02/23 16:23:10 the server has asked for the client to provide credentials (get replicationControllers)
2016/02/23 16:23:10 Outcoming response to 62.210.220.xx:58166 with 500 status code
Sure!
@theobolo Try this:
Append
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
to kube-apiserver.service
Append
--service-account-private-key-file=/var/run/kubernetes/apiserver.key --root-ca-file=/var/run/kubernetes/apiserver.crt
to kube-controller-manager.service
This will probably fix your issue. (BTW, I haven't used k8s on Azure, but I read the source just now, and this will probably help.)
The two issues are different. @theobolo's is that service accounts aren't created for the default namespace for connecting to the apiserver. The controller-manager flags above fix that. Then there's a ServiceAccountController too, and you have to enable that one as well. It takes a normal pod and injects the ca.crt and token files into /var/run/secrets/kubernetes.io/serviceaccount/
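You can check whether the injection worked by listing that mount in any freshly created pod (the pod name is a placeholder):
$ kubectl exec {some_pod} -- ls /var/run/secrets/kubernetes.io/serviceaccount
# expect at least ca.crt and token to be listed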
@Smana Are you running on bare-metal, some cloud provider or a custom config?
We have to know that to be able to help.
I'm running on a virtual machine (OS: Fedora); this VM is not running on a cloud provider.
The only difference I can see from a "standard" installation is the TCP port for HTTPS.
In my case the apiserver is listening on 8443.
Fedora 23
Kubernetes 1.1.7
network plugin: Calico
deployed with http://kubespray.io
@luxas Let me know if you need further info
What does kubectl get secrets and kubectl get {some_pod} -o yaml output?
kubectl get secrets --all-namespaces
NAMESPACE NAME TYPE DATA AGE
ci default-token-zqvfu kubernetes.io/service-account-token 2 10d
default default-token-2zk2l kubernetes.io/service-account-token 2 11d
kube-system default-token-i5t58 kubernetes.io/service-account-token 2 11d
web default-token-2l89f kubernetes.io/service-account-token 2 6d
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubernetes-dashboard-canary-sr219",
    "generateName": "kubernetes-dashboard-canary-",
    "namespace": "kube-system",
    "selfLink": "/api/v1/namespaces/kube-system/pods/kubernetes-dashboard-canary-sr219",
    "uid": "6472ffee-da49-11e5-af9b-0cc47a0db68e",
    "resourceVersion": "598820",
    "creationTimestamp": "2016-02-23T16:20:45Z",
    "labels": {
      "app": "kubernetes-dashboard-canary",
      "version": "canary"
    },
    "annotations": {
      "kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"kube-system\",\"name\":\"kubernetes-dashboard-canary\",\"uid\":\"647173a1-da49-11e5-af9b-0cc47a0db68e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"598793\"}}\n"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-i5t58",
        "secret": {
          "secretName": "default-token-i5t58"
        }
      }
    ],
    "containers": [
      {
        "name": "kubernetes-dashboard-canary",
        "image": "gcr.io/google_containers/kubernetes-dashboard-amd64:canary",
        "ports": [
          {
            "containerPort": 9090,
            "protocol": "TCP"
          }
        ],
        "resources": {},
        "volumeMounts": [
          {
            "name": "default-token-i5t58",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/",
            "port": 9090,
            "scheme": "HTTP"
          },
          "initialDelaySeconds": 30,
          "timeoutSeconds": 30
        },
        "terminationMessagePath": "/dev/termination-log",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "node1"
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      }
    ],
    "hostIP": "62.210.220.xx",
    "podIP": "10.233.64.28",
    "startTime": "2016-02-23T16:20:45Z",
    "containerStatuses": [
      {
        "name": "kubernetes-dashboard-canary",
        "state": {
          "running": {
            "startedAt": "2016-02-23T16:20:52Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "gcr.io/google_containers/kubernetes-dashboard-amd64:canary",
        "imageID": "docker://e63249efb9e297f63187ab8534051391aefe3dcf1116be0c493b8bcdb5b419a5",
        "containerID": "docker://c0d402a9e5b62e748bd1fb642817b43be9734f800e5f3c66b0c8d063a912677c"
      }
    ]
  }
}
Can you run:
$ kubectl get svc
NAME         CLUSTER_IP     EXTERNAL_IP     PORT
kubernetes   {CLUSTER_IP}   not important   {PORT}
$ kubectl exec -it {some_other_pod_than_dashboard} -- /bin/bash
> curl -k https://{CLUSTER_IP}:{PORT}
> curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://{CLUSTER_IP}:{PORT}
@luxas That works as expected:
kubectl exec -ti test-tiorn -- /bin/bash
root@test-tiorn:/# curl -k https://10.233.0.1
Unauthorized
root@test-tiorn:/# curl -k -u kube:changeme --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://10.233.0.1
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/healthz",
    "/healthz/ping",
    "/logs/",
    "/metrics",
    "/resetMetrics",
    "/swagger-ui/",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}
I forgot to mention that I'm using the NodePort.
@luxas Nice, it worked! So now I have the same problem as above.
I get this error:
2016/02/24 00:36:31 Starting HTTP server on port 9090
2016/02/24 00:36:31 Creating API server client for https://10.16.0.1:443
2016/02/24 00:36:31 Creating in-cluster Heapster client
2016/02/24 00:39:12 Incoming HTTP/1.1 GET /api/v1/replicationcontrollers request from 10.32.0.1:55580
2016/02/24 00:39:12 Getting list of all replication controllers in the cluster
2016/02/24 00:39:16 Get https://10.16.0.1:443/api/v1/replicationcontrollers: read tcp 172.17.0.5:46236->10.16.0.1:443: read: connection reset by peer
2016/02/24 00:39:16 Outcoming response to 10.32.0.1:55580 with 500 status code
With NodePort also.
Can you test without NodePort and access the dashboard via
http://[master-ip]:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard?
http://62.210.220.xx:8080/api/v1/proxy/namespaces/kube-system/dashboard-canary
kubectl get svc --namespace=kube-system
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
dashboard-canary 10.233.48.248 nodes 80/TCP app=kubernetes-dashboard-canary 17h
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "the server could not find the requested resource",
  "reason": "NotFound",
  "details": {},
  "code": 404
}
kubectl get endpoints --namespace=kube-system
NAME ENDPOINTS AGE
dashboard-canary 10.233.64.28:9090 17h
Strange... still digging.
@luxas @Smana When you go through the LoadBalancer and not the NodePort, the endpoint is:
/api/v1/proxy/namespaces/kube-system/services/dashboard-canary/
and not
/api/v1/proxy/namespaces/kube-system/dashboard-canary
as you wrote it, @Smana; it's missing the /services/ segment.
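So the full URL would look like this (master address and service name taken from the examples above):
http://62.210.220.xx:8080/api/v1/proxy/namespaces/kube-system/services/dashboard-canary/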
But @luxas I still get the same error: connection reset by peer :/
My mistake, thank you @theobolo.
I still get the same 500 error:
the server has asked for the client to provide credentials (get replicationControllers)
Hmm... Can you maybe provide instructions for how to mimic your setup locally so that we can debug it?
@bryk For me it's very easy.
I use the Azure Kubernetes cluster that is provided here:
https://github.com/kubernetes/kubernetes/tree/master/docs/getting-started-guides/coreos/azure
I can give you guys access to my Kube master via SSH.
And you will have the full environment to see what's going on; it's just a testing cluster.
BTW, my kube master is reachable by my nodes on 172.18.0.12:8080, or on 172.18.0.12:8443 with a cert,
but I can't reach it via the Kubernetes service address, which is 10.16.0.1.
Is there an env variable that I can use in the replication controller to change the request URL?
I want to target https://172.18.0.12:8443/, which is reachable on my cluster from any node and any pod.
> Is there an env variable that I can use in the replication controller to change the request URL?
> I want to target https://172.18.0.12:8443/, which is reachable on my cluster from any node and any pod.
Use the --apiserver-host argument for the container in your YAML file. This will fix the problem.
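In the Dashboard RC spec it would look roughly like this (host, port, and image tag taken from the examples in this thread; adjust to your own apiserver):
containers:
- name: kubernetes-dashboard
  image: gcr.io/google_containers/kubernetes-dashboard-amd64:canary
  args: ["--apiserver-host", "http://172.18.0.12:8080"]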
> I can give you guys access to my Kube master via SSH.
> And you will have the full environment to see what's going on; it's just a testing cluster.
Can you set up a test cluster for a day or two so that I can connect to it with kubectl?
@bryk Alright! I've added this line: args: ["--apiserver-host", "http://172.18.0.12:8080"]
Now everything works in the Dashboard as I expected.
For the testing cluster, @bryk, I will do that in one hour.
I'll give you a server.key and an address to connect to the Kube master via SSH.
How can I send you the credentials securely?
Well, it seems the problem was located on my own box; I just deployed 2 clusters (Fedora and Debian) and everything's working like a charm.
I'll deploy my machine again, but as far as I'm concerned the ticket can be closed.
Leaving it open for @theobolo's issue, but maybe the issue should be renamed.
Thank you all
@theobolo Send me an email.
I'm seeing the same error message with v1.0.0 (gcr.io/google_containers/kubernetes-dashboard-amd64:v1.0.0) on Kubernetes 1.1.8. The dashboard pod runs inside my cluster and discovers the API cluster IP just fine (10.200.0.1:443), but that's as far as it gets.
Request on https://<apiserver-public-ip>:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/
2016/03/16 18:05:58 Incoming HTTP/1.1 GET /api/v1/replicationcontrollers request from 10.15.164.13:43472
2016/03/16 18:05:58 Getting list of all replication controllers in the cluster
2016/03/16 18:05:58 the server has asked for the client to provide credentials (get replicationControllers)
2016/03/16 18:05:58 Outcoming response to 10.15.164.13:43472 with 500 status code
Attempt using a NodePort on http://<node-public-ip>:31763/
2016/03/16 18:30:03 Incoming HTTP/1.1 GET /api/v1/replicationcontrollers request from 10.100.72.1:41776
2016/03/16 18:30:03 Getting list of all replication controllers in the cluster
2016/03/16 18:30:03 the server has asked for the client to provide credentials (get replicationControllers)
2016/03/16 18:30:03 Outcoming response to 10.100.72.1:41776 with 500 status code
Another attempt, this time with --apiserver-host set to https://<apiserver-public-ip>:6443
2016/03/16 18:39:25 Incoming HTTP/1.1 GET /api/v1/replicationcontrollers request from 10.15.164.13:47374
2016/03/16 18:39:25 Getting list of all replication controllers in the cluster
2016/03/16 18:39:42 Get https://<apiserver-public-ip>:6443/api/v1/replicationcontrollers: x509: failed to load system roots and no roots provided
2016/03/16 18:39:42 Outcoming response to 10.15.164.13:46842 with 500 status code
Notice the different error message and the long delay (~20 s) before failing.
Edit 1: I mounted /usr/share/ca-certificates (host) to /etc/ssl/certs (pod) and the error message became:
Get https://<apiserver-public-ip>:6443/api/v1/replicationcontrollers: x509: certificate signed by unknown authority
Edit 2: The token used by the dashboard pod was invalid. I deleted the default-token to force its recreation, then deleted the dashboard pod to pick up the changes, and it works without even setting --apiserver-host. Nice backend! :+1:
For me it was the same issue; here's how I fixed it (as above):
First, get the secrets from the right namespace:
$ kubectl get secrets --namespace=kube-system
NAME TYPE DATA AGE
default-token-7ovhb kubernetes.io/service-account-token 3 7m
Delete the default-token from the kube-system namespace:
$ kubectl delete secret default-token-7ovhb --namespace=kube-system
Delete the replication controller of the dashboard:
$ kubectl delete rc kubernetes-dashboard-v0.1.0 --namespace=kube-system
Recreate the dashboard:
$ kubectl create -f dashboard-controller.yaml
Enjoy!
Thanks for sharing this solution! We should add this to our documentation, as this seems to be a common problem and the solution is simple.
One should always check that the default tokens generated by Kubernetes are actually valid, especially after a new cluster installation or Kubernetes update.
One easy way to do so:
$ DEFAULT_TOKEN=$(kubectl --namespace=kube-system get serviceaccount default -o jsonpath="{.secrets[0].name}")
$ TOKEN_VALUE=$(kubectl --namespace=kube-system get secret "$DEFAULT_TOKEN" -o go-template="{{.data.token}}" | base64 -d)
$ curl -k -H "Authorization: Bearer $TOKEN_VALUE" https://my-api-server:6443/version
# should return API server version
I say we should add this to our installation guide. It seems that many people are encountering this.
I run my cluster locally via docker run.
I noticed that when I reboot the machine and docker run starts again, the dashboard is recreated because it was already running (state stored in etcd), and the token needs to be recreated along with the dashboard.
@antoineco, thanks for this snippet.
I can access the API using the --apiserver-host flag with the insecure port, but it fails with the secure port because of the certificates. How do I build a new Docker image so I can add the certs to the dashboard pod?
It is not necessary to add your client certificate. If your controller-manager was properly configured for signing tokens (with --root-ca-file and
--service-account-private-key-file), your pod was already started with a token and CA cert, which the dashboard will use.
If it doesn't work out of the box, it means the token is invalid or absent; see my previous comment for how to check.
@antoineco thanks man! I don't have the --service-account-private-key-file flag set; I think that's why kubectl --namespace=kube-system get serviceaccount default -o jsonpath="{.secrets[0].name}" returns empty. Gonna try this now.
As soon as I add --admission-control=ServiceAccount, I cannot create any more pods.
Neither kubectl run nginx --image=nginx nor kubectl create -f dashboard.yaml creates any pods.
The default service account seems to be created:
$ kubectl get serviceaccount
NAME SECRETS AGE
default 0 7m
Do I need to create a secret for the service account before it can run pods? Do I need any other step to make service accounts work?
Yeah, kube-controller-manager is complaining about the missing token:
Mar 25 17:22:08 master kube-controller-manager[1387]: I0325 17:22:08.418282 1387 event.go:211] Event(api.ObjectReference{Kind:"ReplicationController", Namespace:"default", Name:"kubernetes-dashboard-canary", UID:"84d2be17-f2a5-11e5-b788-0401bd38da01", APIVersion:"v1", ResourceVersion:"72", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-canary-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
Mar 25 17:22:22 master kube-controller-manager[1387]: W0325 17:22:22.930001 1387 request.go:344] Field selector: v1 - serviceaccounts - metadata.name - default: need to check if this is versioned correctly.
I am provisioning my cluster on DigitalOcean with Terraform and systemd. Guess I need to create the secrets somehow.
Try to delete your default-token-xxxx secret first.
kubectl get secrets does not return anything.
cesco@desktop: ~/code/go/src/bitbucket.org/cescoferraro/cluster/terraform on master [+!?]
$ kubectl get secrets
cesco@desktop: ~/code/go/src/bitbucket.org/cescoferraro/cluster/terraform on master [+!?]
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.100.0.1 <none> 443/TCP 18m
Have you tried using the recommended admission control plug-ins?
That's what I am using. I am starting the api-server with this:
ExecStart=/opt/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
--logtostderr=true \
--insecure-bind-address=${MASTER_PRIVATE} \
--insecure-port=8080 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--runtime-config=api/v1 \
--allow-privileged=true \
--service-cluster-ip-range=10.100.0.0/16 \
--advertise-address=${MASTER_PUBLIC} \
--token-auth-file=/data/kubernetes/token.csv \
--etcd-cafile=/home/core/ssl/ca.pem \
--etcd-certfile=/home/core/ssl/etcd1.pem \
--etcd-keyfile=/home/core/ssl/etcd1-key.pem \
--etcd-servers=https://${MASTER_PRIVATE}:2379,https://${DATABASE_PRIVATE}:2379 \
--cert-dir=/home/core/ssl \
--client-ca-file=/home/core/ssl/ca.pem \
--tls-cert-file=/home/core/ssl/kubelet.pem \
--tls-private-key-file=/home/core/ssl/kubelet-key.pem \
--kubelet-certificate-authority=/home/core/ssl/ca.pem \
--kubelet-client-certificate=/home/core/ssl/kubelet.pem \
--kubelet-client-key=/home/core/ssl/kubelet-key.pem \
--kubelet-https=true
> --tls-cert-file=/home/core/ssl/kubelet.pem
You're using the kubelet cert for your API server, but is it a server certificate with the correct SAN for your server's common name and IP?
If yes, with the following controller-manager flags you should be good to go:
--root-ca-file=/home/core/ssl/ca.pem
--service-account-private-key-file=/home/core/ssl/kubelet-key.pem
That's what I am doing. I am using the same self-signed certificate I created for etcd2 following the recommended etcd way, and I made sure to add all private and public IPs to its configuration. I am not sure what you mean by server common names.
ExecStart=/opt/bin/kube-controller-manager \
--address=0.0.0.0 \
--master=https://${COREOS_PRIVATE_IPV4}:6443 \
--logtostderr=true \
--kubeconfig=/home/core/.kube/config \
--cluster-cidr=10.132.0.0/16 \
--register-retry-count 100 \
--root-ca-file=/home/core/ssl/ca.pem \
--service-account-private-key-file=/home/core/ssl/kubelet-key.pem
I was having trouble with my certificates. All solved now. Now I do not need to pass the --apiserver-host flag, but it does not ask for my API simple authentication, which is what I was expecting.
If I explicitly provide the flag, the connection to the API fails with the "no roots provided" error, which is weird because all my pods have /var/run/secrets/kubernetes.io mounted as expected.
I'm closing this bug. Please continue discussion if needed.
@bryk I encountered the same issue in 1.2, and based on the discussion above I deleted the secrets related to the namespace "kube-system", but now I cannot create the dashboard again. The error is as follows. Do you know where I can find a guide to figure this out? Thanks.
FailedCreate Error creating: Pod "kubernetes-dashboard-" is forbidden: no API token found for service account kube-system/default, retry after the token is automatically created and added to the service account
> and based on the discussion above I deleted the secrets related to the namespace "kube-system", but now I cannot create the dashboard again
You need to recreate the secrets now (and possibly the service accounts). Refer to the documentation to learn how.
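One way to do it (a sketch; once the service account exists again, the token controller should generate a fresh token for it):
$ kubectl delete serviceaccount default --namespace=kube-system
$ kubectl get secrets --namespace=kube-system
# a new default-token-xxxxx secret should appear after a moment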
I read all the comments, but I still don't understand: is it possible to use SSL key/cert auth for the dashboard to access the apiserver?
The apiserver runs with the following:
- --secure-port=8443
- --insecure-bind-address=127.0.0.1
- --insecure-port=8080
- --admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota
- --runtime-config=extensions/v1beta1=true,extensions/v1beta1/thirdpartyresources=true
- --tls_cert_file=/etc/kubernetes/ssl/apiserver.pem
- --tls_private_key_file=/etc/kubernetes/ssl/apiserver-key.pem
- --client_ca_file=/etc/kubernetes/ssl/ca.pem
- --service_account_key_file=/etc/kubernetes/ssl/apiserver-key.pem
Other components like controller-manager use key/cert auth:
- --kubeconfig=/etc/kubernetes/kubeconfig.yaml
The kubeconfig contains the following:
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://10.83.8.197:8443
  name: bots
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/node.pem
    client-key: /etc/kubernetes/ssl/node-key.pem
...
Is there any way to achieve this for dashboard?
You can use kubeconfig files with version 1.1.0-beta2 or 1.1 (to be released in 2 weeks). All you need to do is specify the KUBECONFIG env var and point it to the file.
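For a quick local test with docker run, something like this should work (a sketch, assuming the kubeconfig and the certs it references live under /etc/kubernetes on the host):
$ docker run --rm \
    -e KUBECONFIG=/etc/kubernetes/kubeconfig.yaml \
    -v /etc/kubernetes:/etc/kubernetes \
    gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0-beta2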
@bryk thank you!
Works like a charm.
@Bregor We actually should have a command-line option for this. Can you check whether the --kubeconfig option works? If not, it is super easy to add.
@bryk
$ docker run -it --rm gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0-beta3 --kubeconfig=/blabla
unknown flag: --kubeconfig
Usage of /dashboard:
...
@Bregor docker run? Not kubectl run?
@theobolo is there any difference? Kubelet will use this very container anyway.
Yep, but I'm not sure about that :/ @bryk can you confirm?
There should be no difference. You should use kubelet/docker directly only for testing. In a real environment, deploy this as a pod in your cluster.
Exactly.
In the current manifest (I use kind: Deployment) there is the following:
...
spec:
  containers:
  - name: kubernetes-dashboard
    command:
    - /dashboard
    - --apiserver-host=https://kubernetes.default.svc.kubernetes.local:8443
    image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0-beta3
    imagePullPolicy: Always
    ...
    env:
    - name: KUBECONFIG
      value: "/etc/kubernetes/kubeconfig.yaml"
...