Kubeadm: Clusters built with kubeadm don't support basic auth

Created on 22 Nov 2016 · 35 comments · Source: kubernetes/kubeadm

_From @kelseyhightower on October 25, 2016 18:15_

Some people are having trouble authenticating to the Kubernetes Dashboard when the cluster is built with kubeadm. It seems the deployed API server does not support basic auth, which makes it impossible to reach the Dashboard UI without using kubectl proxy.

_Copied from original issue: kubernetes/kubernetes#35536_

area/security · help wanted · priority/backlog

Most helpful comment

dex v2 is ready to go and I am sure @ericchiang and @rithujohn191 would be excited to support this happening. They can't hack directly on this at the moment as they are super busy with Dex itself but the entire system is designed to run trivially as a deployment on top of Kubernetes and this is what we do for Tectonic.

All 35 comments

_From @Hirayuki on October 25, 2016 18:17_

If needed, I can reproduce the issue and provide outputs tomorrow.

I hit this issue while deploying k8s with the official guide at:
http://kubernetes.io/docs/getting-started-guides/kubeadm/

and installing the dashboard through:
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

@kubernetes/sig-cluster-lifecycle

_From @jbeda on October 25, 2016 19:09_

Opinions on what the experience should be here?

We could create an admin user and output the password as part of kubeadm join.

But, to be honest, basic auth is a bit of a hack: to change those accounts you need to edit a file and restart the apiserver. This'll also be a problem as we move to HA.

Options:

  • Improve password auth so that accounts are stored in etcd and can be managed.
  • Enhance kubeadm to set up other authenticators. This'll be a more complicated setup process, for sure.
  • Move to self-hosted and put the password file in a secret. Make the API server reload as necessary. Create tools to edit that file.
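
For context, the mechanism behind these options is the apiserver's static basic-auth file. A minimal sketch, with a made-up path and credentials (the CSV format is password,username,uid, optionally followed by a quoted group list):

# hypothetical credentials file on the master
cat <<'EOF' > /etc/kubernetes/pki/basic_auth.csv
s3cret,admin,1000,"system:masters"
EOF
# the apiserver must then be started with:
#   --basic-auth-file=/etc/kubernetes/pki/basic_auth.csv
# and restarted whenever the file changes -- which is exactly the pain point above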

@kubernetes/sig-auth

_From @kelseyhightower on October 25, 2016 19:11_

@jbeda It could be as simple as updating the docs to highlight that you must use kubectl proxy to "authenticate" to the Kubernetes Dashboard and other add-ons.

_From @roberthbailey on October 25, 2016 20:42_

> But, to be honest, basic auth is a bit of a hack: to change those accounts you need to edit a file and restart the apiserver. This'll also be a problem as we move to HA.

This is the same problem as with the bearer token auth. They were both built without taking HA requirements into account.

Until we have another solution that works for a web browser (oauth anyone?), having basic auth is an important part of the user experience for folks that want to access services through the apiserver proxy (e.g. the dashboard).

I think the best option would be to replace basic auth with something better, but in absence of that improving it so that passwords are stored in etcd rather than in a csv file seems like it wouldn't be too difficult and wouldn't add extra complexity to kubeadm.

_From @jbeda on October 25, 2016 21:58_

I like the idea of documenting the kubectl proxy method. That is no code changes and provides immediate relief.

I'd say we need to document the following:

  • How to copy/extract the admin.conf kubeconfig from the master VM
  • How to import/merge that into your ~/.kube directory.
  • How to rewrite the target for the cluster if there is an inner/outer IP issue
  • How to use the proxy command to connect to the cluster to get the dashboard
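
A rough shell sketch of those four steps (host addresses are placeholders; the admin.conf path and port 6443 match kubeadm defaults of this era):

# 1. copy/extract the admin kubeconfig from the master
scp root@<master-ip>:/etc/kubernetes/admin.conf .
# 2. merge it into your local kubeconfig (--flatten writes one self-contained file)
KUBECONFIG=~/.kube/config:./admin.conf kubectl config view --flatten > /tmp/config
mv /tmp/config ~/.kube/config
# 3. rewrite the cluster target if the master advertises an internal IP
kubectl config set-cluster kubernetes --server=https://<external-ip>:6443
# 4. proxy locally, then browse to http://localhost:8001/ui
kubectl proxy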

Anyone want to take this on?

_From @broady on November 20, 2016 22:10_

This page also needs updating:
http://kubernetes.io/docs/user-guide/ui/#dashboard-access

> Improve password auth so that accounts are stored in etcd and can be managed.

I don't think we should store usernames and passwords in etcd for any authenticator.

@deads2k

Why not? This is a deeper issue, but, at the end of the day, if you have direct unfettered access to etcd you own the cluster. This only gets harder with HA.

Choices:

  • Keep passwords on disk and force users to sync between API servers in HA.

    • [optional] Provide for re-reading the file at a timed interval or on a signal

    • [optional] Make the file read/write with an API for updating. Find some way to sync across HA systems.

    • If we don't provide a way to add/delete/change/expire passwords then we have a security risk.

  • Keep passwords in distributed etcd

    • [optional] provide toolset for adding/deleting/expiring passwords

    • [optional] Perhaps reuse secrets in kube-system namespace? This is similar to what we are doing with bootstrap tokens.

  • Deprecate basic auth

    • What do we replace it with? Some sort of OAuth/OpenID flow would probably be best, but the setup story there isn't great as it adds steps for getting started.

IMO, storing passwords in etcd seems reasonable.
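
To make the secrets option above concrete, a hypothetical sketch of what "reuse secrets in kube-system" could look like (the secret name is invented, and the reload machinery is the part that doesn't exist today):

# store the basic-auth CSV in a secret instead of on the master's disk
kubectl -n kube-system create secret generic basic-auth --from-file=basic_auth.csv
# the apiserver (or a sidecar) would then have to mount/watch this secret and
# reload credentials on change -- that is the missing piece being debated here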

> Why not? This is a deeper issue, but, at the end of the day, if you have direct unfettered access to etcd you own the cluster. This only gets harder with HA.

Being a password-managing product moves us into the territory of an authentication provider, and that's not the kind of product I want us to be. I'd prefer to use an authentication provider to get identity information. I don't think kube's mission should expand to providing authentication and identity management solutions as part of administering container engines.

Making it easy to use an existing authentication provider: definitely. Maybe even choosing one as a "simple" default: possibly. Providing it as an in-tree option backed in the same persistent storage as our container orchestration: I don't think this is reasonable.

I'm really torn here. I totally agree that we don't want to be a user management service. This is something that should be external. But we've already crossed the bridge of bundling a CA into the product for similar reasons. Why a CA and not some simple username/password that we encourage users to outgrow and migrate away from?

But -- what I'm hearing is that you'd like to deprecate basic auth. What do you think we should tell users that are looking to "kick the tires"?

> But -- what I'm hearing is that you'd like to deprecate basic auth. What do you think we should tell users that are looking to "kick the tires"?

I'd like to see an integration with an OAuth server instead. Given a working genericapiserver library (pull requests open for 1.6), agnostic kubectl (CRUD complete in 1.5), and a working single-cluster API federator (POC here: https://github.com/openshift/kube-aggregator), I think we can integrate an OAuth server which is kubectl-compatible and provides room to grow without having to reconfigure your cluster. The OAuth server could use external IdPs and a separate datastore to allow an "htpasswd" (or even "anypassword") kick-the-tires integration.

If you want a private "kick the tires" installation today, the --insecure-allow-any-token flag gives a near-zero-friction way to start with tokens, and you can transition to something like OIDC later.
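
A sketch of that flow, for the record (the flag was later removed; my recollection is that it accepted any bearer token and parsed username/groups out of it, so treat the token format as an assumption, and never use this outside a throwaway cluster):

# apiserver started with --insecure-allow-any-token (dev only!)
kubectl --server=https://<master-ip>:6443 --insecure-skip-tls-verify \
  --token="alice/system:masters" get nodes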

Our take with kubeadm so far is that there is no clean line between "kick the tires" and "production usage". As such, we want to make sure that things are reasonable from a security PoV out the gate. With that, the --insecure-allow-any-token isn't the way to go.

I'd love to have a default option here that is usable out the gate for kubeadm. But using a public identity provider is going to be complicated -- most will require users to register to get a client id and secret. And you usually need to register the callback that is reachable from the web. Dex v2 looks to offer a bare bones solution that can grow. It can also store secrets in third party resources. I haven't used it yet though.

> I'd love to have a default option here that is usable out the gate for kubeadm. But using a public identity provider is going to be complicated -- most will require users to register to get a client id and secret. And you usually need to register the callback that is reachable from the web. Dex v2 looks to offer a bare bones solution that can grow. It can also store secrets in third party resources. I haven't used it yet though.

We are not far off from being able to carve something like the OpenShift user and group management (which includes an OAuth server with external IdP integration, along with support for "htpasswd" bootstrapping) out of the main OpenShift repo, and provide a containerized authentication provider designed to integrate with the kube authentication APIs.

If you're looking for a direction that provides both easy bootstrapping (oc cluster up or openshift start as examples) and production ready external IdP integration, we could start looking at it in earnest.

Well -- either we go in this direction or I push to put the basic auth stuff in etcd :)

dex v2 is ready to go and I am sure @ericchiang and @rithujohn191 would be excited to support this happening. They can't hack directly on this at the moment as they are super busy with Dex itself but the entire system is designed to run trivially as a deployment on top of Kubernetes and this is what we do for Tectonic.

I think having a couple of explorations to go down here makes sense. Dex is decoupled from Kube; the OpenShift OAuth server is partially decoupled but also has some integrations I think will be important in the long run. Being able to credibly integrate auth doesn't necessarily mean locking into one solution.

Couple of "paths of exploration", rather. Also there are Keystone, Keycloak, and IAM integrations to consider.

One really useful point is ensuring some authentication and authorization can be local to a cluster (to isolate failure domains), while still being federated meaningfully.

Can we have a minimalist IAM component that uses etcd as storage and integrates with external IAM through OAuth 2.0 (OIDC, UMA profile, etc.)? This would mean we can satisfy both small/dev and big/production clusters.

@smarterclayton I don't think anyone is talking about lock-in here; Dex is just built on top of the bog-standard OIDC integration in the API server and kubectl that went into 1.3.

@pires Dex is backed by Kubernetes TPRs, and we could add an etcd backend, but I feel like using TPRs is more correct.

Can I assign this to someone?

I'll take this one. Having a longer term solution here is necessary and I'm happy to coordinate.

Any news on this? Thank you.

While I think that this is important long term, this isn't a priority right now as we try to drive kubeadm to Beta and GA. If someone wants to do the work we won't say no, but I don't think that anyone is actively working on it this second.

@nancykyo If you try to contact the apiserver using https://<master-ip>:6443/ui, does it work?
The basic-auth file is for the apiserver only, not specifically the dashboard.
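
(On the secured port you authenticate with the client certificate embedded in admin.conf rather than a password -- a sketch, with made-up file names for the extracted cert material:)

# base64-decode the client cert/key and CA out of admin.conf first
curl --cacert ca.crt --cert admin.crt --key admin.key https://<master-ip>:6443/ui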

@luxas Using https://<master-ip>:6443/ui worked, but should I close off the NodePort access?

I'd like to enable basic auth. I tried editing /etc/kubernetes/manifests/kube-apiserver.json to add the flag, but then I run into #25. @nancykyo got it working by adding the flag in the kubeadm source. Is there another way to enable the flag on a cluster already set up by kubeadm?
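
For reference, the edit being attempted looks like this (a sketch; the flag value's path is hypothetical and the file must exist on the master, and whether this then trips #25 is exactly the open question):

# on the master; kubeadm of this era writes a JSON static pod manifest
sudo vi /etc/kubernetes/manifests/kube-apiserver.json
# add "--basic-auth-file=/etc/kubernetes/pki/basic_auth.csv" to the command array
# the kubelet watches /etc/kubernetes/manifests and recreates the pod on change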

I see this issue too with kubernetes 1.5.4 and kubernetes-dashboard image version gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0.

I installed kubeadm following https://kubernetes.io/docs/getting-started-guides/kubeadm/, and then installed kubernetes-dashboard by doing

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.0/src/deploy/kubernetes-dashboard.yaml

I see the kubernetes-dashboard pod in CrashLoopBackOff status, and the k8s_kubernetes-dashboard.* container on the worker is in the Exited state.

Below are the errors. Has anyone successfully installed kubernetes-dashboard on kubeadm?

# kubectl --namespace=kube-system get all
NAME                                                          READY     STATUS             RESTARTS   AGE
po/calico-policy-controller-mqsmh                             1/1       Running            0          4h
po/canal-etcd-tm2rv                                           1/1       Running            0          4h
po/canal-node-3nv2t                                           3/3       Running            0          4h
po/canal-node-5fckh                                           3/3       Running            1          4h
po/canal-node-6zgq8                                           3/3       Running            0          4h
po/canal-node-rtjl8                                           3/3       Running            0          4h
po/dummy-2088944543-09w8n                                     1/1       Running            0          4h
po/etcd-vhosakot-kolla-kube1.localdomain                      1/1       Running            0          4h
po/kube-apiserver-vhosakot-kolla-kube1.localdomain            1/1       Running            2          4h
po/kube-controller-manager-vhosakot-kolla-kube1.localdomain   1/1       Running            0          4h
po/kube-discovery-1769846148-pftx5                            1/1       Running            0          4h
po/kube-dns-2924299975-9m2cp                                  4/4       Running            0          4h
po/kube-proxy-0ndsb                                           1/1       Running            0          4h
po/kube-proxy-h7qrd                                           1/1       Running            1          4h
po/kube-proxy-k6168                                           1/1       Running            0          4h
po/kube-proxy-lhn0k                                           1/1       Running            0          4h
po/kube-scheduler-vhosakot-kolla-kube1.localdomain            1/1       Running            0          4h
po/kubernetes-dashboard-3203962772-mw26t                      0/1       CrashLoopBackOff   11         41m

NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
svc/canal-etcd             10.96.232.136    <none>        6666/TCP        4h
svc/kube-dns               10.96.0.10       <none>        53/UDP,53/TCP   4h
svc/kubernetes-dashboard   10.100.254.77    <nodes>       80:30085/TCP    41m

NAME                   DESIRED   SUCCESSFUL   AGE
jobs/configure-canal   1         1            4h

NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kube-discovery         1         1         1            1           4h
deploy/kube-dns               1         1         1            1           4h
deploy/kubernetes-dashboard   1         1         1            0           41m

NAME                                 DESIRED   CURRENT   READY     AGE
rs/calico-policy-controller          1         1         1         4h
rs/dummy-2088944543                  1         1         1         4h
rs/kube-discovery-1769846148         1         1         1         4h
rs/kube-dns-2924299975               1         1         1         4h
rs/kubernetes-dashboard-3203962772   1         1         0         41m

# kubectl --namespace=kube-system describe pod kubernetes-dashboard-3203962772-mw26t
  20m    5s    89    {kubelet vhosakot-kolla-kube2.localdomain}                        Warning    FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203962772-mw26t_kube-system(67b0d69b-0b47-11e7-8c97-7a2ed4192438)"

# kubectl --namespace=kube-system logs kubernetes-dashboard-3203962772-mw26t
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

# docker ps -a | grep -i dash
3c33cf43d5e4        gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0   "/dashboard --port=90"   54 seconds ago      Exited (1) 22 seconds ago                       k8s_kubernetes-dashboard.9eb4d80e_kubernetes-dashboard-3203962772-mw26t_kube-system_67b0d69b-0b47-11e7-8c97-7a2ed4192438_93520bd4

# docker logs k8s_kubernetes-dashboard.9eb4d80e_kubernetes-dashboard-3203962772-mw26t_kube-system_67b0d69b-0b47-11e7-8c97-7a2ed4192438_93520bd4
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

@vhosakot It looks like that was likely due to missing RBAC roles for the dashboard. It looks to be updated now. Try https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml.
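
If the updated manifest still doesn't help, the common manual fix of that era was to grant the dashboard's ServiceAccount explicit RBAC permissions -- a sketch (binding to cluster-admin is the blunt, insecure shortcut; scope it down for anything real):

# skip the first command if the manifest already created the ServiceAccount
kubectl -n kube-system create serviceaccount kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard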

@jbeda do you plan to work on this in time for v1.8? What do you envision us doing?

I consider browser-based basic auth against the API a non-goal, due to the security implications (cannot log out, exposes the API to CSRF attacks, etc.).

I'd recommend either using kubectl proxy to proxy a specific subpath and accessing http://localhost:8001/..., or surfacing the app via ingress (and securing the app per-user).
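
Concretely, the proxy route looks like this (the service-proxy URL shape is standard; the service name assumes the stock dashboard manifest):

kubectl proxy
# then browse to the dashboard through the apiserver's service proxy:
#   http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/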

@liggitt Totally agree with you. I asked since I don't know if there really is something we should do here. Of course it is possible to tweak the API Server's args to include a basic auth file for it, but I wouldn't recommend it either.

@kelseyhightower did you have something in mind for this when opening the issue? I think the Ingress route is the best one for sure.

@jbeda @luxas So is this just a doc update, as defined here? If so, I can take this on. Integrating an auth solution into k8s seems a bit orthogonal here, but we should probably mention how folks get started down that path.

@jamiehannaford That's partially documented (we have docs on how to proxy), but feel free to go and improve things...

Did anyone get the Dashboard working with basic authentication? Basic auth just doesn't come up.

