When I include a base twice (through two overlays), and that base defines and uses a variable, the variable is not expanded.
Example:
component1/kustomization.yaml:
resources:
- resources.yaml
vars:
- name: VAR
  objref:
    apiVersion: v1
    kind: Pod
    name: component1
component1/resources.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: component1
spec:
  containers:
  - name: component1
    image: bash
    env:
    - name: VAR
      value: $(VAR)
app1/kustomization.yaml:
bases:
- ../component1
namePrefix: app1-
app2/kustomization.yaml:
bases:
- ../component1
namePrefix: app2-
kustomization.yaml:
bases:
- app1
- app2
Output:
apiVersion: v1
kind: Pod
metadata:
  name: app1-component1
spec:
  containers:
  - env:
    - name: VAR
      value: $(VAR)
    image: bash
    name: component1
---
apiVersion: v1
kind: Pod
metadata:
  name: app2-component1
spec:
  containers:
  - env:
    - name: VAR
      value: $(VAR)
    image: bash
    name: component1
Output when you comment out one of the app bases in the top-level kustomization:
apiVersion: v1
kind: Pod
metadata:
  name: app1-component1
spec:
  containers:
  - env:
    - name: VAR
      value: app1-component1
    image: bash
    name: component1
Note: this example is silly, but it shows the problem succinctly. In my case, I have an application that uses a variable to discover a database service name to connect to. That application is then used as a base multiple times in order to deploy multiple versions of it.
I can explain why it behaves this way. When kustomize resolves $(VAR), it first resolves the object that the var's objref points to. Since $(VAR) is defined in the base, and the base is included twice, there are two objects in the top-level overlay that $(VAR) could be associated with, so the substitution is skipped. When you comment one out, there is a unique object for $(VAR) to be associated with, and the substitution happens.
Sure, but the variable is resolved at the lower level, right? Does that mean some kustomizations cannot be reused because they use variables?
Would you say this is a design decision or could it be "fixed"?
I wouldn't say this is a design decision. A common base should be usable in different overlays. However, fixing it could be tricky.
Unfortunately, I do not know enough Go to be useful here. :disappointed:
To fix this issue, we could filter the resources by either prefix or namespace, as we did for the nameReference transformation.
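To make that suggestion concrete, here is a minimal Go sketch of the idea. The Resource type and function are hypothetical simplifications, not the actual kustomize API: when a var's objref matches several resources, narrow the candidates down by the prefix of the overlay currently being built before giving up.

package main

import (
	"fmt"
	"strings"
)

// Resource is a simplified stand-in for a kustomize resource; the real code
// works on resmap.ResMap / resource.Resource, not on this type.
type Resource struct {
	Kind         string
	OriginalName string // name as written in the base
	Name         string // name after the overlay's prefix/suffix transformation
}

// resolveVarTarget sketches the suggestion above: when several resources match
// the var's objref (because the same base was pulled in through several
// overlays), filter the candidates by the prefix of the overlay being built,
// the same way the nameReference transformer disambiguates references.
func resolveVarTarget(all []Resource, objrefName, overlayPrefix string) (*Resource, error) {
	var matches []*Resource
	for i := range all {
		r := &all[i]
		if r.OriginalName != objrefName {
			continue
		}
		if overlayPrefix == "" || strings.HasPrefix(r.Name, overlayPrefix) {
			matches = append(matches, r)
		}
	}
	switch len(matches) {
	case 1:
		return matches[0], nil
	case 0:
		return nil, fmt.Errorf("var objref %q matches no resource", objrefName)
	default:
		return nil, fmt.Errorf("var objref %q is ambiguous: %d candidates", objrefName, len(matches))
	}
}

func main() {
	all := []Resource{
		{Kind: "Pod", OriginalName: "component1", Name: "app1-component1"},
		{Kind: "Pod", OriginalName: "component1", Name: "app2-component1"},
	}
	// Without the prefix filter this lookup is ambiguous; with it, each
	// overlay resolves $(VAR) to its own copy of the Pod.
	r, err := resolveVarTarget(all, "component1", "app1-")
	fmt.Println(r, err)
}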
I'm also having this problem. Is there any news on if and when this will be fixed?
@jcassee @Shalucik Please take a look and help test the PR.
There is still a big issue with variables pointing at names that get transformed, but 1253 could potentially be a path forward. The environment used to reproduce your issue is here.
The kustomization.yaml contains two variables, one pointing to the name and the other to the image:
resources:
- resources.yaml
vars:
- name: POD_NAME
  objref:
    apiVersion: v1
    kind: Pod
    name: component1
  fieldref:
    fieldpath: metadata.name
- name: IMAGE_NAME
  objref:
    apiVersion: v1
    kind: Pod
    name: component1
  fieldref:
    fieldpath: spec.containers[0].image
The resources.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: component1
spec:
  containers:
  - name: component1
    image: bash
    env:
    - name: POD_NAME
      value: $(POD_NAME)
    - name: IMAGE_NAME
      value: $(IMAGE_NAME)
The output is the following. Note that POD_NAME in app2-component1 is still wrong, but we are making progress.
apiVersion: v1
kind: Pod
metadata:
  name: app1-component1
spec:
  containers:
  - env:
    - name: POD_NAME
      value: app1-component1
    - name: IMAGE_NAME
      value: bash
    image: bash
    name: component1
---
apiVersion: v1
kind: Pod
metadata:
  name: app2-component1
spec:
  containers:
  - env:
    - name: POD_NAME
      value: app1-component1
    - name: IMAGE_NAME
      value: bash
    image: bash
    name: component1
/cc @monopole @Liujingfang1
@jcassee @Shalucik I think we finally nailed it. Have a look here and at the automated Go tests. You need to use this PR.
kustomize build .
apiVersion: v1
kind: Pod
metadata:
  name: app1-component1
spec:
  containers:
  - env:
    - name: POD_NAME
      value: app1-component1
    - name: IMAGE_NAME
      value: bash
    image: bash
    name: component1
---
apiVersion: v1
kind: Pod
metadata:
  name: app2-component1
spec:
  containers:
  - env:
    - name: POD_NAME
      value: app2-component1
    - name: IMAGE_NAME
      value: bash
    image: bash
    name: component1
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
An idea I'm experimenting with is to define a kustomize generator (or kpt functions) that simply imports the result of running kustomize build on the base kustomization.
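To sketch what that could look like: the small Go program below is only an illustration of the idea, assuming the exec-style plugin convention of an executable that writes generated resources to stdout; the base path is hypothetical.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// A tiny exec-style generator: it runs `kustomize build` on the base
// kustomization and writes the already-rendered result to stdout. Because the
// base is flattened before the importing overlay runs its transformers, the
// vars defined in the base have already been substituted exactly once.
func main() {
	baseDir := "../component1" // hypothetical path; point this at the base to import
	out, err := exec.Command("kustomize", "build", baseDir).Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "kustomize build %s failed: %v\n", baseDir, err)
		os.Exit(1)
	}
	os.Stdout.Write(out)
}

Whether this gets wired in as a legacy exec plugin, a KRM function, or a kpt function depends on the tool version; the point is only that each overlay imports an already-built copy of the base.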
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.