When creating multiple clusters with kind create cluster --name <name>, the kubeconfig for each cluster uses the same user name, kubernetes-admin. This becomes a problem when listing multiple kubeconfigs in KUBECONFIG: because the user names collide, the merge keeps only the first kubernetes-admin entry, so the other clusters' contexts end up paired with the wrong credentials. For example:
$ export KUBECONFIG="$(kind get kubeconfig-path):$(kind get kubeconfig-path --name 2)"
$ kubectl --context=kubernetes-admin@kind-2 get all --all-namespaces
error: the server doesn't have a resource type "all"
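A quick way to see the collision (a rough diagnostic, not from the original report): with both files in KUBECONFIG, the merged view ends up with a single kubernetes-admin user entry, so the kind-2 context resolves to the first cluster's client certificate and the second API server rejects it during discovery, which surfaces as the confusing "all" error above.
$ kubectl config view -o jsonpath='{.users[*].name}'
kubernetes-admin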
Instead, each auth user name should include the cluster name passed via --name to kind create cluster.
/assign
/kind bug
/priority important-soon
this seems to be https://github.com/kubernetes/kubeadm/issues/416, looking into options...
I asked in #sig-cluster-lifecycle; so far I don't think there's an answer for this. Following up...
@neolit123 @fabriziopandini WDYT?
Afaict we'd have to provision our own user and matching kubeconfig.
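A rough sketch of what provisioning a per-cluster user could look like (purely illustrative, assuming kubeadm's additional-user kubeconfig generation is available in the node image; the container name and the exact command spelling vary by kind/kubeadm version):
# run inside the control-plane container of a cluster created with --name mycluster;
# "mycluster-control-plane" and the client name are illustrative
docker exec mycluster-control-plane \
  kubeadm alpha kubeconfig user --client-name kubernetes-admin-mycluster \
  > mycluster-admin.kubeconfig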
I've hit this problem previously while using multiple k8s clusters. It turns out you can keep the same username in all of them. What needs to be tweaked in the kubeconfig file is the name field of the user entry, which has to be unique, plus adding a username field under users.user set to kubernetes-admin.
I'll try my hand at writing a PR for it.
Ah, I see that it is kubeadm that generates the kubeconfig file, and it may not allow what kind needs here. I'll have a look at kubeadm some more, but modifying the kubeconfig file after it is generated would be an option.
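For anyone wanting to try the post-processing route by hand, something along these lines should work (a sketch only, GNU sed syntax; the patterns assume the names kubeadm writes for a cluster created with --name mycluster, and adding the username: kubernetes-admin field described above would still be a manual step):
# back up and rewrite the generated kubeconfig so the user entry is unique per cluster
KCFG="$(kind get kubeconfig-path --name mycluster)"
cp "$KCFG" "$KCFG.bak"
sed -i \
  -e 's/^- name: kubernetes-admin$/- name: kubernetes-admin-mycluster/' \
  -e 's/user: kubernetes-admin$/user: kubernetes-admin-mycluster/' \
  "$KCFG"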
Ok, I've got things working. Here is what the new kubeconfig file looks like for a cluster named mycluster in case people want to manually make the changes for themselves:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://localhost:58039
  name: mycluster
contexts:
- context:
    cluster: mycluster
    user: kubernetes-admin-mycluster
  name: kubernetes-admin@mycluster
current-context: kubernetes-admin@mycluster
kind: Config
preferences: {}
users:
- name: kubernetes-admin-mycluster
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    username: kubernetes-admin
I just need to get official approval from my employer to do the PR. Should be a couple of days.
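With that file in place (and the same edit applied to a second cluster, named othercluster here purely for illustration), the merged setup from the top of the issue should behave:
$ export KUBECONFIG="$(kind get kubeconfig-path --name mycluster):$(kind get kubeconfig-path --name othercluster)"
$ kubectl --context=kubernetes-admin@mycluster get nodes
$ kubectl --context=kubernetes-admin@othercluster get nodes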
It seems that when I add username to accounts that use a token, I get a multiple-auth error, e.g.:
users:
- name: foo
  user:
    token: bar
    username: baz
But when I add username directly to the users entry it seems to work _until I change my context_, at which point it is removed from my kubeconfig (presumably because kubectl rewrites the file and drops fields it doesn't recognize), e.g.:
users:
- name: foo
  username: baz
  user:
    token: bar
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
This pops up from time to time in the channel
see: https://github.com/kubernetes-sigs/kind/issues/850, proposing to solve this along with other changes
Fixing in #850; kind clusters will have unique entries. I have this part implemented pretty cleanly already.
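For reference, with the unique-entry scheme each cluster gets its own context, cluster, and user names, along the lines of (output abridged and illustrative, exact names depend on the kind version):
$ kind create cluster --name mycluster
$ kubectl config get-contexts
CURRENT   NAME             CLUSTER          AUTHINFO         NAMESPACE
*         kind-mycluster   kind-mycluster   kind-mycluster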
/lifecycle active
/close
fixed by #1029
thanks for the great feedback
@aojea: Closing this issue.
In response to this:
/close
fixed by #1029
thanks for the great feedback
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.