What steps did you take and what happened:
clusterctl ignores later kubeconfig files defined in the KUBECONFIG environment variable. This seems to be part of the changes in 0.3.4 to enable specifying the kubeconfig context.
What did you expect to happen:
clusterctl should parse all the files and then look for the context it wants. You can "fix" it by manually ensuring that the config file with the management cluster is the first file in the list.
Environment:
- kubectl version: 1.18.2 client / 1.17.4 mgmt cluster
- /etc/os-release: Ubuntu 19.10 client / 18.04 k8s clusters

/kind bug
Apologies if this is an intentional decision to behave differently from how kubectl handles multiple files in the KUBECONFIG variable. I did review what I thought were the relevant PRs, but they did not mention this specifically; they were focused on API breakages, and I could not tell how that would affect end users invoking clusterctl directly.
/area clusterctl
/priority important-soon
/milestone v0.3.6
/cc @wfernandes
@fabriziopandini: You must be a member of the kubernetes-sigs/cluster-api-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your Cluster API Maintainers and have them propose you as an additional delegate for this responsibility.
In response to this:
/area clusterctl
/priority important-soon
/milestone v0.3.6
/cc @wfernandes
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/cc @vuil @Anuj2512
Thanks for this issue. We can take a look at how kubectl chooses the context with its --kubeconfig and --context flags.
TBH I wasn't aware that KUBECONFIG allowed multiple files to be specified. I assumed it would behave the same way as the flag, which takes a single file path.
# From `kubectl options`
--kubeconfig='': Path to the kubeconfig file to use for CLI requests.
Just to confirm your use case. Is this your current use?
# Assuming Linux/Mac
export KUBECONFIG="/path/to/kubeconfig1:/path/to/kubeconfig2"
clusterctl init --kubeconfig-context=some-context-in-kubeconfig2
After quick inspection, this is happening because we currently do this: https://github.com/kubernetes-sigs/cluster-api/blob/a799265e225b8f0e47b4642c8211a9213d0aea9a/cmd/clusterctl/client/cluster/proxy.go#L219
And GetDefaultFileName returns the first valid/existing kubeconfig in KUBECONFIG.
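The two behaviors can be sketched in plain shell. These helper names (`first_existing_kubeconfig`, `all_existing_kubeconfigs`) are purely illustrative, not actual clusterctl or client-go code; they just contrast "take the first existing file and ignore the rest" (roughly what the GetDefaultFileName call does today) with "keep every existing file, in order" (the list kubectl merges before resolving the requested context):

```shell
#!/bin/sh
# Hypothetical illustration only, not actual clusterctl/client-go code.

# Roughly what clusterctl does today: pick the first existing file
# from the colon-separated KUBECONFIG value and ignore the rest.
first_existing_kubeconfig() {
  old_ifs=$IFS; IFS=:
  for f in $1; do
    if [ -f "$f" ]; then IFS=$old_ifs; echo "$f"; return 0; fi
  done
  IFS=$old_ifs; return 1
}

# Roughly what kubectl does: keep every existing file, in order,
# and merge their contents before looking up the requested context.
all_existing_kubeconfigs() {
  old_ifs=$IFS; IFS=:
  for f in $1; do
    [ -f "$f" ] && printf '%s\n' "$f"
  done
  IFS=$old_ifs
}
```

So with `KUBECONFIG=/missing:/config-b:/config-c` (where only the last two exist), the first helper yields only `/config-b`, while the second yields both files, which is why a context defined only in the later file is never found.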
/milestone v0.3.x
/assign
Going to take a look at how kubectl does this but I'm open to suggestions on what we want the clusterctl UX to be.
So I've found out that in the following scenario kubectl works as intended.
export KUBECONFIG=/no-config-exists-here:/config-here-but-no-context:/real-config-with-myContext
kubectl get pods -A --context=myContext -v9
# This outputs
I0506 14:17:10.097105 51327 loader.go:375] Config loaded from file: /config-here-but-no-context
I0506 14:17:10.099011 51327 loader.go:375] Config loaded from file: /real-config-with-myContext
...
I'll work on this to make clusterctl behave like kubectl.
/lifecycle active
Just to confirm your use case. Is this your current use?
# Assuming Linux/Mac
export KUBECONFIG="/path/to/kubeconfig1:/path/to/kubeconfig2"
clusterctl init --kubeconfig-context=some-context-in-kubeconfig2
Yes, that's more or less my use case.
Technically, I'm doing this trickery:
# Kubeconfig loading
export KUBECONFIG=$(echo $(find ~/.kube -type f -name config.\*.yaml) | sed 's/[[:space:]]/:/g')
Very handy when dealing with multiple ephemeral clusters (like one tends to do when using cluster-api). You just drop the config.clustername.yaml file into the .kube directory and you're good to go.
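For what it's worth, the same colon-separated list can be built without the echo/sed round-trip, e.g. with paste. The `kubeconfig_list` helper name below is made up for illustration; adjust the glob to your layout:

```shell
#!/bin/sh
# kubeconfig_list DIR: print a colon-separated list of the
# config.*.yaml files under DIR (hypothetical helper name).
# sort keeps the order deterministic; paste joins lines with ':'.
kubeconfig_list() {
  find "$1" -type f -name 'config.*.yaml' | sort | paste -sd: -
}

# Then, in your shell profile:
# export KUBECONFIG=$(kubeconfig_list ~/.kube)
```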