I just ran a build via Google Cloud Build, but I forgot to switch my active kubectl context to the GKE cluster.
I think picking up the active kubectl context is not a great idea, especially because users can switch back and forth very frequently.
It appears that deploy.kubectl does not currently let me specify a context name. If it did, I would do something like:
profiles:
- name: local
  deploy:
    kubectl:
      context: docker-for-desktop   # <---- new field
      manifests:
      - ./k8s-specifications/*
- name: remote
  deploy:
    kubectl:
      context: my-gke-cluster   # <---- new field
      manifests:
      - ./k8s-specifications/*
then run skaffold dev -p {local,remote}.
If unspecified, this would still pick up the active context name from kubectl (backwards compatible).
(Skaffold v0.5.0)
I think we could support context via a flag to the command. Adding the context to the yaml file itself would be tricky, since we want these yaml files to be portable between developers and projects. Even if two devs have contexts pointing to the same cluster/namespace, the names will likely be different.
I was not aware of those requirements. For my use case, it's simply that I keep deploying to the wrong context (because I wasn't using kube-ps1 etc. to see what the current context is), so anything to mitigate that would probably help me.
What if the context could be set at a directory level with a file that wasn't checked in? Like echo "my-gke-cluster" > .kubecontext?
Something like that or a standard env var would definitely work.
This will probably be implemented as a contextual config parameter. Global config is something we are working on actively: #886
@nkubala @dgageot @balopat WDYT of adding a profile section to each context in the global config?
If a user passes in a profile which is mapped to a context in the global config, skaffold will set it as the current context before proceeding.
@priyawadhwa I definitely think using the global config is the right approach here, but skaffold supports activating multiple profiles at once so that wouldn't work here.
One thing we could do is allow users to "name" their contexts, e.g. skaffold config set name test_context (since every config entry is paired with a specific kubectl context). Then we could support a CLI flag for users to specify that context by name (typing test_context is a lot easier than typing gke_test-project_us-west1-a_test-cluster) and skaffold would set that context, and also as a result use all of the global config values associated with it.
@nkubala I recommend not inventing an indirection to kubectl context names. kubectl has a command to rename the context, people who need to type things can use that. Similarly, for GKE clusters, we assume people script things out, so a simple env var like KUBE_CONTEXT can solve it.
I am facing this limitation as well, not being able to work on two separate projects residing on different minikube instances.
Both an environment variable and a CLI flag would be ideal. I would prefix the env var to avoid potential clashes, e.g.
env: SKAFFOLD_CONTEXT
cli: --context
or
env: SKAFFOLD_K8S_CONTEXT
cli: --k8s-context
I had actually implemented @ahmetb's suggestion (#1540) before discovering this issue. I'm open to suggestions to address @dlorenc's comments regarding portability. Should an environmental override be the way to go? If so, does that mean the context property shouldn't be included in the schema?
I would suggest supporting the context property, falling back to using the current kubectl context, if an environmental override isn't set. This supports both users and organisations using developer-specific context names, and those using constant context names. The order of precedence would be environment, schema, kubectl context.
Update: I've implemented the above.
I think picking up the active kubectl context is not a great idea, especially because users can switch back and forth very frequently.
I ran into the same problem; sometimes I found skaffold pushing/deploying images to a remote cluster while I was developing locally. Being able to set a kube-context would be a valuable option here.
Also, when the kube-context changes, it seems that skaffold dev responds to that. In my case, I was running skaffold dev for the local project I'm working on next, but had to fix a bug in production. When switching the kube-context, I suddenly found skaffold deploying to that cluster.
@vdboor can you check if the latest release still has the problem of responding to context changes? Context handling received some attention and that specific issue should be fixed now. BTW, further options to configure the kube-context are in the pipeline.
This is now implemented both as a flag and as deploy.kubeContext in the YAML.
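For reference, usage would look roughly like the fragment below. This is a sketch based on the field name mentioned in this comment (deploy.kubeContext); consult the skaffold docs linked below for the authoritative schema.

```yaml
# skaffold.yaml (sketch, assuming the deploy.kubeContext field described above)
deploy:
  kubeContext: my-gke-cluster
  kubectl:
    manifests:
    - ./k8s-specifications/*
```

The equivalent CLI flag (per the thread, the flag takes precedence over the YAML value) would be passed on the command line, e.g. skaffold dev --kube-context my-gke-cluster.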
Curious, does the flag take precedence?
@ivanporty Yes, CLI _always_ takes precedence. The docs for this are in the making: https://github.com/GoogleContainerTools/skaffold/pull/2992/files?short_path=3223dfc#diff-3223dfc377d418dcd1f5839d0235685b