Creating the entries in the vars section can become quite a long process as the number of variables grows, and it makes the kustomization.yaml hard to read.
For instance:
- name: SoftwareVersions.software-versions.spec.images.mysql.tag
  objref:
    apiVersion: my.group.org/v1alpha1
    kind: SoftwareVersions
    name: software-versions
  fieldref:
    fieldpath: spec.images.mysql.tag
- name: CommonAddresses.common-addresses.spec.dns.upstream_servers[2]
  objref:
    apiVersion: my.group.org/v1alpha1
    kind: CommonAddresses
    name: common-addresses
  fieldref:
    fieldpath: spec.dns.upstream_servers[2]
On the varReference side of the process, things get complicated very quickly.
For K8s standard objects (if not part of the default configuration):
varReference:
- kind: Deployment
  path: spec/template/spec/containers/image
- kind: Deployment
  path: metadata/labels
- kind: Deployment
  path: spec/template/metadata/labels
- kind: Deployment
  path: spec/selector/matchLabels
- kind: Deployment
  path: spec/template/spec/initContainers
or, for a CRD:
varReference:
- kind: Chart
  path: spec/values/endpoints/messaging/auth/user/password
- kind: Chart
  path: spec/source
- kind: Chart
  path: spec/values/images
- kind: Chart
  path: spec/values/labels
This proposal aims at creating the var and varReference sections automatically, by simply scanning the resources.
If we take the following example:
apiVersion: my.group.org/v1alpha1
kind: Chart
metadata:
  name: wordpress
spec:
  source: $(SoftwareVersions.software-versions.spec.charts.wordpress)
  values:
    ......
    pod:
      replicas:
        api: 1
we can conclude that we need to create the following varReference:
varReference:
- kind: Chart
  path: spec/source
...
and the following var:
vars:
- name: SoftwareVersions.software-versions.spec.charts.wordpress
  objref:
    apiVersion: GROUP/VERSION
    kind: SoftwareVersions
    name: software-versions
  fieldref:
    fieldpath: spec.charts.wordpress
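The detection step above can be sketched as follows. This is only an illustration of the idea, not the actual kustomize implementation: the `scan_for_vars` helper and the reference regex are assumptions based on the `$(Kind.name.fieldpath)` naming convention used in the examples in this thread.

```python
import re

# Matches references such as
# $(SoftwareVersions.software-versions.spec.charts.wordpress)
# assuming the Kind.name.fieldpath convention from the examples above.
VAR_RE = re.compile(r"\$\(([A-Za-z0-9]+)\.([A-Za-z0-9-]+)\.([A-Za-z0-9_.\[\]]+)\)")

def scan_for_vars(text):
    """Return auto-generated var entries for every $(Kind.name.fieldpath) found."""
    entries = []
    for kind, name, fieldpath in VAR_RE.findall(text):
        entries.append({
            "name": f"{kind}.{name}.{fieldpath}",
            # apiVersion would still have to be resolved from the loaded resources
            "objref": {"kind": kind, "name": name},
            "fieldref": {"fieldpath": fieldpath},
        })
    return entries

resource = "spec:\n  source: $(SoftwareVersions.software-versions.spec.charts.wordpress)\n"
print(scan_for_vars(resource)[0]["name"])
# → SoftwareVersions.software-versions.spec.charts.wordpress
```

The corresponding varReference entry (kind plus the path where the reference was found) can be derived from the same scan, which is why the user would only need to add entries manually when this detection fails.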
By turning on the autoconfig feature, the user would only have to create the entries for which the detection process failed.
ack, will circulate -
@monopole @Liujingfang1 @ian-howell
For info, we have not really changed anything in that code for a week, just rebasing on a regular basis to ensure the PR still works as expected. Going through the issues, we found multiple people who hit this problem or were confused because they forgot to change the varReference, even for standard objects. See this issue.
A key concept of this PR is that it assumes the user knows best: whatever was loaded through the varReference kustomizeconfig.yaml and varReference.go, as well as the kustomization.yaml, is considered correct.
This is really useful when the automatic discovery algorithm fails: the user just has to add the entry manually, as has always been done until now. It also ensures backward compatibility with the current kustomize 2.x.
The real conceptual problem is that, even without the current PR, varReference.go is not really up to date.
We started to update it, like here, to test int as a variable, but conceptually every field of every K8s object could end up in that file, and it would become huge. With the current PR, we don't need to add anything to it. Understanding the varReference syntax is also really difficult, especially when navigating map/slice/map structures.
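To illustrate why map/slice/map navigation is hard to reason about, here is a minimal sketch of what resolving a slash-separated varReference path such as spec/template/spec/containers/image involves. The `walk_path` helper is hypothetical, written for this explanation, and is not the kustomize code.

```python
def walk_path(obj, path):
    """Yield every node reached by a slash-separated varReference-style path.

    When a segment lands on a list (e.g. 'containers'), the walk has to fan
    out over every element; this implicit branching is part of what makes
    the syntax hard to follow for map/slice/map structures.
    """
    segments = path.split("/")

    def _walk(node, remaining):
        if not remaining:
            yield node
            return
        head, *tail = remaining
        if isinstance(node, list):
            for item in node:  # a list consumes no segment, only branches
                yield from _walk(item, remaining)
        elif isinstance(node, dict) and head in node:
            yield from _walk(node[head], tail)

    yield from _walk(obj, segments)

deployment = {
    "spec": {"template": {"spec": {"containers": [
        {"name": "app",
         "image": "$(SoftwareVersions.software-versions.spec.images.mysql.tag)"},
        {"name": "sidecar", "image": "busybox"},
    ]}}}
}
print(list(walk_path(deployment, "spec/template/spec/containers/image")))
```

One path can therefore match several fields (here both container images), and every matched field must be checked for a `$(...)` reference.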
We always check that those PRs work together on top of the latest version of kustomize, because we merge them into the allinone branch on a regular basis. We added a lot of tests in examples/allinone and examples/issues... but we have one test that pushes kustomize much harder than all the others: in treasuremap, run "make deploy-airsloop".
Finally, the Airship community, which owns Treasuremap, really likes what kustomize can do. Treasuremap is organized like a tree, which fits the current kustomize code perfectly.
But during the latest meeting discussing the organization of those folders, the idea of facets/services... to compose the overall document came up. We are really close to the multiple inheritance issues known from C++, Java, or Go and how you are supposed to use interfaces. Still, it explains our interest in proposing a solution to compose a CRD or K8s native object from multiple kustomize base folders. See
Is there any plan to merge this PR? That's a really valuable feature.
If not a merge, some confirmation that this feature is out of scope or otherwise undesirable to the kustomize team would be useful for folks who are presently depending on the fork that adds it. At the moment I'm perhaps foolishly depending on/socializing this feature with my team (with fingers crossed it will someday appear in kubectl proper).
I'm sensitive to the fact that my needs are far from a driving motivation in the development of this free software (thank you, by the way!) but it seems worth sharing if it might help nudge a review of this functionality into being.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@tkellen looks like they abandoned all the work done for this: https://github.com/kubernetes-sigs/kustomize/pull/1217
:sadface: