Kustomize currently interprets each overlay as a complete set of resources and patches: a patch can only modify a resource that is listed directly in `resources:` or inherited indirectly through `bases:`.
This means it is impossible to group a resource and its patches together for later reuse.
Currently I can create the following overlay hierarchy:
1. `base` without any `bases:` defines the cluster skeleton with services and pods
2. `base_debug` inherits `base` and enables debug tools included in the containers (these debug tools are disabled by default so the same container can be used in production and in test environments)
3. `base_debug_aws` adds AWS configs and secrets for the services
4. `base_debug_aws_scale_hard` adds many replicas for each service to test horizontal scaling
5. `test_develop` contains the configuration for a concrete test environment available at a concrete domain

This looks like inheritance in OOP (multiple inheritance, when you have multiple bases).
Of course, I could fold the changes from steps 2-4 into the base or into the final overlays. I could also create several bases (one for the skeleton and a few for things like secrets/configmaps) and keep patches in a shared place from which multiple overlays can reuse them.
But from my point of view, it would be better to allow mixins: overlays that contain patches for resources the overlay itself does not have. Like this:
- `base` defines pods `a_deployment.yaml` and `b_deployment.yaml`
- `debug` defines the patch `a_deployment_debug.yaml`, but includes neither `base` in its `bases:` nor `a_deployment.yaml` in its `resources:`
- `aws` adds a new resource `aws_secret.yaml` and a new patch `b_deployment.yaml`
- `scale_hard` adds patches with many replicas for `a_deployment.yaml` and `b_deployment.yaml`
- `test_develop` combines `base`, `debug`, `aws`, and `scale_hard` in a predictable order

It would be possible to change kustomize in (at least) two ways:
1. Allow entries in the `bases:` list to patch resources defined by a previous base in the same list:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base          # defines pods A and B
- ../mixins/debug  # defines a patch for A, but does not add A as a resource or `base` as its base kustomization
- ../mixins/aws    # defines a patch for B, but does not define B
```

2. Add a `mixins:` key that introduces a Mixin: an overlay that can patch resources it does not define, processed after all bases have been processed.

Looks related to #727.
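For illustration, the proposed `mixins:` key might look like the following. This is hypothetical syntax: no such key exists in kustomize, and the paths are made up.

```yaml
# Hypothetical syntax for the proposed mixins: key
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base           # defines pods A and B
mixins:
- ../mixins/debug   # patches A without declaring it; applied after all bases are processed
- ../mixins/aws     # adds aws_secret.yaml and patches B
```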
It would be very useful to be able to "compose" an application from individual components, without having to rely on inheritance chain.
For example, an application base layer should be able to refer to a database service that is not part of the application, without the application base layer having to extend it. Instead, the final kustomize layer should be able to mix and match the application with different DB sizes / bases.
I really like the idea of a separate key like `mixins:`, allowing a group of patches to be loaded without having to reference bases there.
We would also like to have a structure like
```
.
├── kustomize
│   ├── base
│   │   └── # all the base services / resources
│   ├── overlay
│   │   └── # a collection of base services for different environments (prod/dev/etc.)
│   └── patches
│       └── # different sets of general patches (e.g. high availability changes etc.)
├── playbooks
│   └── # <*n physical environments>
└── patches
    └── # specific patches
        └── # uses an overlay as base, adds specific patches and some general patches
```
The last part (within `playbooks`) is kind of a pain, because I have to point at each patch/resource within a general patch, and that list has to be maintained in duplicate (across multiple playbooks) instead of pointing at a single collection.
Also, doing patches this way is not possible right now: kustomize raises a security error (you can't reference files above the current directory tree).
Edit: to get around the security issue, a workaround is to create a symlink pointing back up the file tree.
One use case for this would be a multi-tenant system with multiple release channels and resource allocations per tenant.
To do this currently, one would need to create one base for each size/release-channel combination.
With mixins, this could be achieved with one mixin to override images and another to set resource allocations.
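A per-tenant overlay could then pick one mixin from each axis instead of materializing every combination as a base. The sketch below uses the hypothetical `mixins:` syntax proposed above; all names are illustrative.

```yaml
# Hypothetical per-tenant overlay: one mixin per axis instead of
# one base per (channel x size) combination
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
mixins:
- ../../mixins/channel-stable  # overrides image tags for the stable release channel
- ../../mixins/size-large      # sets CPU/memory requests and replica counts
```

With N channels and M sizes this needs N + M mixins instead of N × M bases.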
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
This issue should stay alive. Lack of activity is no measure of interest here. We are waiting for something to actually happen.
Bump.
I created a PR for #1251 that essentially does this - #2168, though I added a new Kind (KustomizationPatch) that works as a 'mixin' from the initial comment in this thread. A better name for KustomizationPatch is needed though!
/remove-lifecycle rotten
@pgpx looks like your https://github.com/kubernetes-sigs/kustomize/pull/2168 is related to https://github.com/kubernetes/enhancements/pull/1803
Looks like components have solved this issue: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/components.md
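For readers landing here later: the components feature covers the "mixin" use case from this thread. A minimal sketch (directory and file names are illustrative, not from the linked example):

```yaml
# kustomization.yaml — an overlay composing the base with components
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base               # defines a_deployment.yaml and b_deployment.yaml
components:
- ../components/debug   # patches base resources it does not itself declare
- ../components/aws
```

Each component uses the `Component` kind, e.g.:

```yaml
# components/debug/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patchesStrategicMerge:
- a_deployment_debug.yaml  # patch for a resource declared only in the base
```

Components are applied after the enclosing kustomization's `resources:` are loaded, which is exactly the "patch what you don't define" behavior requested here.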
/close
@Shell32-Natsu: Closing this issue.
In response to this:

> Looks like components has solved this issue. https://github.com/kubernetes-sigs/kustomize/blob/master/examples/components.md
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.