I'm in the process of wrapping Kops inside another tool that exposes a REST API and does some other work in AWS via Terraform. The idea of this tool is that an Ops Engineer spins it up somewhere as a server, and then developers in their org can hit it with a simple HTTP API to get a running Kubernetes cluster as well as a well-organized VPC that lets them provision things like RDS databases painlessly. The tool also handles things like automatic cleanup of the cluster after an
Right now, Kops is designed for a single user running locally. As such, it continually updates the ~/.kube directory when doing things like update and export. I really don't want this behavior and would like to avoid it altogether. The best solution I've come up with so far is to set $HOME to some alternative filesystem path and copy around AWS credentials as necessary. Is there a better solution? Is there a way to skip updating the ~/.kube directory and just allow Kops to dump what would normally be written to ~/.kube/config, on a per-cluster basis, to a file of my choosing?
In general I'm looking for guidance / advice / caution around attempting to do what I am doing. I scanned the docs and --help for various commands and didn't see any way to redirect kubecfg updates to alternative locations.
Correction:
kubectl natively uses an environment variable called $KUBECONFIG (if set), which points at your relevant config file.
I've personally found explaining the "context switching" of kubectl to be a step too far for my team (new to k8s)... so we export them to individual files and move on with our lives :smile:
The syntax is a little like this:
# where the cluster is called "foo.example.com"
export KUBECONFIG=~/.kube/foo.example.com
kops export kubecfg --name foo.example.com --config=$KUBECONFIG
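From there kubectl can be pointed straight at that standalone file, so the default ~/.kube/config never gets touched. A quick check looks something like this (cluster name is just the example from above):

# use the standalone file explicitly, leaving ~/.kube/config alone
kubectl --kubeconfig="$HOME/.kube/foo.example.com" get nodes

# or keep KUBECONFIG exported in the shell and use kubectl as normal
kubectl config current-context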
@starkers That's not really the problem. I agree kubectl with multiple configs is kind of a PITA. Let me see if I can describe the situation better:
I have a server that exposes a HTTP API and hides a bunch of AWS and kops complexity behind a couple of URL endpoints.
One of those endpoints is GET /cluster/:name/config
When you run kops export kubecfg --name=${CLUSTER_NAME}, kops goes off and updates the kubeconfig on the API server. This is a merged kubeconfig. What I actually want is to be given the unmerged kubeconfig for that ${CLUSTER_NAME}, which can then be returned via the API call.
My solution so far has been to create a temporary directory for each cluster's kops executions and then set $HOME to that directory. Kops then ends up creating isolated kubeconfigs per cluster in a file that I can write back to the API consumer.
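Concretely, the hack looks something like this (the state store bucket below is just a placeholder):

# create an isolated fake $HOME so kops writes its kubeconfig there
WORKDIR=$(mktemp -d)
mkdir -p "$WORKDIR/.aws"
cp ~/.aws/credentials "$WORKDIR/.aws/credentials"   # copy AWS creds into the fake home

# run kops with $HOME pointed at the temp dir
HOME="$WORKDIR" kops export kubecfg --name=${CLUSTER_NAME} --state=s3://example-state-store

# the isolated, single-cluster kubeconfig can now be returned to the API consumer
cat "$WORKDIR/.kube/config"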
Ideally Kops would add two options:
- export the kubeconfig for ${CLUSTER_NAME} to a file of my choosing
- export the kubeconfig for ${CLUSTER_NAME} to stdout

ok, you can still get the unmerged file this way:
kops export kubecfg --name=$CLUSTER_NAME --config=filename.yml
@starkers You sure about that?
plombardi@palwork ~/datawire> kops export kubecfg --name default.k736.net --state=<REDACTED> --config=/home/plombardi/datawire/default.yaml
Kops has set your kubectl context to default.k736.net
plombardi@palwork ~/datawire> kops version
Version 1.5.3 (git-46364f6)
plombardi@palwork ~/datawire> cat /home/plombardi/datawire/default.yaml
cat: /home/plombardi/datawire/default.yaml: No such file or directory
Pretty sure, but yes... it looks like the .yml extension possibly breaks things?
I haven't looked at the --state flag, so I don't know what that does either.
Here's a screenshot of me running it on kops 1.5.1; awssudo data-stg was just me retrieving AWS credentials for the account, so you can skip that.
HTH

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
According to https://github.com/kubernetes/kops/issues/2378#issuecomment-296488846 there's no need to specify --config, as it "is actually not for kubeconfig". So it looks like changing the env var does the trick:
ᐅ KUBECONFIG=~/.kube/dev.k8s.local kops export kubecfg dev.k8s.local
kops has set your kubectl context to dev.k8s.local
ᐅ cat ~/.kube/dev.k8s.local
...
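For the server use case above, that means the per-cluster file can be produced and returned without ever merging into ~/.kube/config. Roughly like this (the cluster name and state store bucket are placeholders):

# dump the unmerged kubeconfig for one cluster to a file of our choosing
export_cluster_config() {
  local name="$1" out="$2"
  # kops honors $KUBECONFIG, so pointing it at an empty file yields a single-cluster config
  KUBECONFIG="$out" kops export kubecfg "$name" --state=s3://example-state-store
  cat "$out"
}

export_cluster_config dev.k8s.local /tmp/dev.k8s.local.yaml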
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
/close