I don't think #868 was adequately resolved by adding a CLI flag as a workaround (`--load-restrictor none`). The security concerns that led to the flag are real, but the solution is head-buttingly frustrating. There is simply no sane way to share patches between overlays without that CLI flag, and IMO it shouldn't be necessary to switch off a security feature in order to do a "normal" build. What users need is a way to restrict patch locations to non-remote locations while still allowing patches to be shared between overlays of the same application. A few possible approaches:

- Define the "true root" of all the overlays (normally a parent or grandparent directory) in some convenient way, e.g. via a marker file such as a `.git` directory.
- Allow patches to always live in sibling (`../**`) or cousin (`../../**`) directories.
- Allow the `kustomization.yaml` itself to declare what paths are allowed for its own patches.
Another option would be to support environment variables for configuring kustomize's behavior. Yet another would be to allow any kustomization to state that no external dependencies are allowed further down the tree (my project doesn't use any external dependencies, so this security concern doesn't apply to it).
@dsyer I've hit so many issues like this that I now have a wrapper script my entire team uses to paper over the missing pieces. From this operator's perspective, the kustomize project is an experiment in what managing config could be like in an idealized world. That world doesn't map to any reality most of us will experience outside of a totally greenfield project with no external dependencies and no complexity, authored by someone intimately familiar with this codebase.
Interestingly, you'll find that similar issues to this one keep popping up every
now and then. Even the wrapper script route does not surprise me, since I'm
aware of at least one more project that has implemented something along these
lines.
I've created #1251 to define what is, in my opinion, the underlying root cause behind all these issues. It goes one step further than patch sharing, but a solution for it should be general enough to cover your needs as well. Feel free to chime in if it fits your use case.
An (admittedly cumbersome) solution for now is to define your shared patches as `PatchTransformer`s, and point to them from your various overlays. These transformers are not subject to load restrictions, so you can use them from any overlay, without the `--load-restrictor none` flag.
> An (admittedly cumbersome) solution for now is to define your shared patches as `PatchTransformer`s, and point to them from your various overlays.
Would you be so kind as to provide an example for this?
I have tried it, and sadly the patches do not get applied at all.
My failed approach looked like this:

```yaml
# Referencing the remote resource in my overlay like this
resources:
  - my-base/
  - https://gitlab.myhost.com/kustomize-presets.git//deployment-patch-test
```

```yaml
# deployment-patch-test/kustomization.yaml
resources:
  - test-patch.yml
```

```yaml
# test-patch.yml
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: not-important-to-example
patch: '[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "nginx:latest"}]'
target:
  kind: Deployment
```
Sure, we have a repo with an example actually, to illustrate how it's done: https://github.com/ioandr/kustomize-example/tree/master/diamond-with-patches-transformers.
In your case, the problem is that a `PatchTransformer` cannot be applied via the `resources:` directive like this. The process is a bit more involved: you have to use the `transformers:` directive, which should point to a kustomization directory, which in turn should include the custom `PatchTransformer`. That's where the "cumbersome" part of my previous comment comes in...
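To make this concrete against the failed approach above, here is a sketch of how the overlay would reference the shared patch via `transformers:` instead of `resources:`. The paths and URL are the hypothetical ones from that example, not a verified setup:

```yaml
# overlay/kustomization.yaml (sketch; paths follow the example above)
resources:
  - my-base/

# The transformers directive points at a kustomization directory,
# whose own kustomization.yaml lists test-patch.yml as a resource.
transformers:
  - https://gitlab.myhost.com/kustomize-presets.git//deployment-patch-test
```

The `deployment-patch-test/kustomization.yaml` and `test-patch.yml` files would stay as in the example above; only the referencing directive in the overlay changes.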
Proposed an enhancement in #2167 to address this problem.
I believe this is related to #1251
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
How does this work? It's definitely not stale, unless it has somehow been ruled out that anything might change.
Now that Kustomize has support for components, there is an easy way to share patches across overlays. Simply create a component directory, add the patches to it, and include the component in each overlay using the `components:` directive.
Here's an example on how to do this: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/components.md
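As a minimal sketch of what that looks like (the directory and file names here are illustrative, not taken from the thread):

```yaml
# components/common-patches/kustomization.yaml
# Note the Component kind and its dedicated apiVersion.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

patches:
  - path: deployment-patch.yaml
```

```yaml
# overlays/production/kustomization.yaml
resources:
  - ../../base

# Shared patches come in via the components directive;
# no --load-restrictor flag is needed.
components:
  - ../../components/common-patches
```

Each overlay that needs the shared patches just lists the same component, so the patch files live in one place outside any single overlay's root.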
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
@dsyer Does components address your problem?
Not sure, really. Still waiting for some docs (https://kubernetes-sigs.github.io/kustomize/api-reference/kustomization/components/ is empty).
UPDATE: scratch that; I just found this: https://kubernetes-sigs.github.io/kustomize/guides/components/ (same content as the link posted by @apyrgio above). It's a bit thin as user docs go, but it's nice to have an example, and I could figure it out from there. It seems to work for me.
Great, I have filed an issue for the components doc: #3090.