Cluster-api: Provide a better experience for adding kubeconfig

Created on 5 Mar 2020 · 16 comments · Source: kubernetes-sigs/cluster-api

User Story

As an end user, I connect to the management cluster to create a new workload cluster, and then access that cluster through kubectl.

Detailed Description

I need to get the KUBECONFIG for the new workload cluster.

I currently do that through one of two ways:

$ kubectl get secrets workload-kubeconfig -o jsonpath='{.data.value}' | base64 -d > ~/.kube/config.workload
$ KUBECONFIG=~/.kube/config.workload:~/.kube/config kubectl config view --flatten > ~/.kube/config.new
$ mv ~/.kube/config.new ~/.kube/config

Alternatively,

$ kubectl get secret my-first-cluster-kubeconfig -o=jsonpath='{.data.value}' | base64 -D > my-first-cluster.kubeconfig
$ kubectl apply --kubeconfig=my-first-cluster.kubeconfig ...

The first approach lets me use tools like kubectx to switch between clusters; the second requires me to always pass --kubeconfig or change the KUBECONFIG environment variable.
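Both approaches hinge on the same decode step: the secret's `.data.value` is base64-encoded. A self-contained illustration of just that step, with the `kubectl get secret` call replaced by a dummy payload so it runs without a cluster:

```shell
#!/bin/sh
# Dummy stand-in for:
#   kubectl get secret my-first-cluster-kubeconfig -o jsonpath='{.data.value}'
encoded=$(printf 'apiVersion: v1\nkind: Config\n' | base64)

# The decode step shared by both approaches (use `base64 -D` on older macOS).
printf '%s\n' "$encoded" | base64 -d > /tmp/my-first-cluster.kubeconfig

grep -q 'kind: Config' /tmp/my-first-cluster.kubeconfig && echo decoded
```

The first approach then merges the decoded file into ~/.kube/config via `kubectl config view --flatten`, as shown in its second and third lines.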

I would like an easier way to get this into my kubeconfig: either merging it into an existing kubeconfig file, or optionally writing it somewhere else as output.

/kind feature

area/clusterctl kind/feature lifecycle/active


All 16 comments

/milestone Next

@vincepri Would it make sense to output the kubeconfig to a file as soon as the workload cluster is created? It would be a lot easier for the end user to access.

/assign

/lifecycle active

Would it make sense to output the kubeconfig to a file as soon as the workload cluster is created? It would be a lot easier for the end user to access.

Creating a workload cluster is an async operation triggered by kubectl apply, not by clusterctl, so I don't see this as a viable option.

@prankul88 if I can give a suggestion, don't underestimate the UX around the two options - creating a separate kubeconfig file vs. merging into the current one - because in my experience this is controversial, and users like commands to adapt to their habits.
Discussing some examples of the target UX might help avoid iterations on the PR.

@fabriziopandini Thanks for the input.

Discussing some examples of the target UX might help avoid iterations on the PR.

I will post on the channel to discuss this.

/area clusterctl

/milestone v0.3.x

I think this is a nice feature to make things easier for users. It would be cool if it could be used in a shell substitution too, e.g.:

export KUBECONFIG=$(clusterctl get kubeconfig <cluster name>)

or

kubectl --kubeconfig=$(clusterctl get kubeconfig <cluster name>) ...

Obviously this is just a made-up example, but I hope it's clear. It also implies that each cluster's kubeconfig would get written _somewhere_; maybe this functionality could be a flagged option?
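One way the substitution pattern could work: the command writes each cluster's kubeconfig to a well-known per-cluster path and prints that path, so `$(...)` can feed KUBECONFIG directly. A rough sketch under that assumption (the function name, path layout, and the stubbed secret fetch are all hypothetical):

```shell
#!/bin/sh
# Sketch of substitution-friendly behavior: write each cluster's
# kubeconfig under a per-cluster path and print the path. The real
# secret lookup (`kubectl get secret <cluster>-kubeconfig ...`) is
# stubbed with a dummy payload.

get_kubeconfig() {
    cluster="$1"
    out="${HOME}/.kube/clusters/${cluster}.kubeconfig"
    mkdir -p "$(dirname "$out")"
    # Stub: real code would fetch and base64-decode the secret here.
    printf 'apiVersion: v1\nkind: Config\n' > "$out"
    printf '%s\n' "$out"
}

# Usage, mirroring the substitution example above:
export KUBECONFIG=$(get_kubeconfig my-first-cluster)
echo "$KUBECONFIG"
```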

@elmiko I was thinking of something like clusterctl get kubeconfig <workload cluster name> so that each cluster's config can be identified.

How about something like

# When --out is specified, we won't merge into the user's kubeconfig or set a context, but rather write it out to a file.
# If --out is `-`, we should print the kubeconfig to stdout.
$ clusterctl get kubeconfig --name=<cluster-name> [--out=<path>]
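The `--out` dispatch described above (merge by default, file path when given, stdout on `-`) is a common CLI pattern; a minimal shell illustration of just that branching, with the merge behavior and secret fetch stubbed and all names hypothetical:

```shell
#!/bin/sh
# Sketch of the proposed --out semantics:
#   (no --out)    merge into the user's kubeconfig and set a context (stubbed)
#   --out=<path>  write the kubeconfig to <path>
#   --out=-       print the kubeconfig to stdout

emit_kubeconfig() {
    # Stands in for fetching + decoding the <cluster-name>-kubeconfig secret.
    printf 'apiVersion: v1\nkind: Config\n'
}

get_kubeconfig() {
    out="$1"
    case "$out" in
        "") echo "stub: merge into ~/.kube/config and set context" ;;
        -)  emit_kubeconfig ;;            # --out=-      : stdout
        *)  emit_kubeconfig > "$out" ;;   # --out=<path> : file
    esac
}

get_kubeconfig -                    # prints the kubeconfig
get_kubeconfig /tmp/wl.kubeconfig   # writes it to a file
```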

Kind, when creating a new cluster, sets the context to kind-<cluster-name>; we might be able to do something similar, e.g. capi-<namespace>-<cluster-name>?

Do we need to worry about maintaining consistency with the way that we use stdout for clusterctl config cluster?

That said, I do like the idea of being able to insert config into a specified (or default) kubeconfig, and potentially having the option to modify the current context for that config (if it exists).

Do we need to worry about maintaining consistency with the way that we use stdout for clusterctl config cluster?

Yeah that crossed my mind, they do cover different purposes, 🤷 — what do you think?

@detiber I was more inclined towards the way we use stdout in config cluster to maintain consistency. But happy with what @vincepri suggested too

Let's go with that for now, we can revisit in v1alpha4.

So just wanted to clarify the UX here.

The command clusterctl get kubeconfig <workload-cluster-name> is going to print out the kubeconfig to stdout.
I'm guessing the user can then redirect (`>`) that into a kubeconfig file and use kubectl --kubeconfig=<workload-cluster-kubeconfig-file-path> to access the cluster, or update the KUBECONFIG environment variable.
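Sketching that agreed flow end to end, with `clusterctl` stubbed as a shell function purely so the pipeline runs without a management cluster (the real command would fetch and decode the cluster's kubeconfig secret):

```shell
#!/bin/sh
# The agreed UX: `clusterctl get kubeconfig <name>` prints the
# kubeconfig to stdout, and the user redirects it into a file.
# clusterctl is stubbed here only to make the sketch standalone.
clusterctl() {
    printf 'apiVersion: v1\nkind: Config\n'
}

clusterctl get kubeconfig my-first-cluster > /tmp/my-first-cluster.kubeconfig

# Then either pass it explicitly:
#   kubectl --kubeconfig=/tmp/my-first-cluster.kubeconfig get nodes
# ...or export it for the rest of the shell session:
export KUBECONFIG=/tmp/my-first-cluster.kubeconfig
```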

/milestone v0.3.9
