Hello 👋
When using a ConfigMapGenerator in an overlay with nested bases, the generated hash is not applied to the configMap referenced in a patch. Here are the details:
File tree:
.
├── base1
│   ├── kustomization.yaml
│   └── app-deployment.yaml
├── base2
│   ├── kustomization.yaml
│   ├── app-deployment-patch.yaml
│   └── app-other-service.yaml
└── overlay
    ├── kustomization.yaml
    ├── config-file.json
    └── app-other-deployment-patch.yaml
base1/kustomization.yaml only references some resources:
resources:
- app-deployment.yaml
base2/kustomization.yaml references the first base, along with another resource, a patch, and some metadata:
bases:
- ../base1
namespace: appName
namePrefix: prefix-
commonLabels:
  app: appName
resources:
- app-other-service.yaml
patches:
- app-deployment-patch.yaml
overlay/kustomization.yaml references the second base, with a configMapGenerator and a patch that uses the generated configMap:
bases:
- ../base2
namespace: appName
commonLabels:
  app: appName
patches:
- app-other-deployment-patch.yaml
configMapGenerator:
- name: config
  files:
  - config-file.json
overlay/app-other-deployment-patch.yaml patches an existing deployment to mount the configMap as a volume:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  template:
    spec:
      containers:
      - name: app
        #[REDACTED]
        volumeMounts:
        - name: config-volume
          mountPath: "/path/to/file"
      volumes:
      - name: config-volume
        configMap:
          name: config
In the generated YAML, the hash is expected to appear in the configMap volume definition, but it does not:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: appName
  name: prefix-app
  namespace: appName
spec:
  selector:
    matchLabels:
      app: appName
      name: app
  template:
    metadata:
      labels:
        app: appName
        name: app
    spec:
      containers:
      - name: name
        volumeMounts:
        - mountPath: /path/to/file
          name: config-volume
      volumes:
      - configMap:
          name: config
        name: config-volume
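For comparison, the expected volumes section would reference the generated configMap name with its hash suffix, along these lines (the hash value shown matches the generated ConfigMap in the build output later in this thread):

      volumes:
      - configMap:
          name: config-79tktd9hkb
        name: config-volume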
This issue might be related to #561 and #662.
Might it be caused by one of the nested bases using the `namePrefix` field?
Any help would be greatly appreciated :wink:
Temporary fix: use a dummy configMapGenerator in the base, and override it with behavior: replace in the overlay. This way the hash is correctly generated and bound to the deployment.
But this is not the expected behavior, and it produces noise in the kustomization.yaml.
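For reference, a minimal sketch of this workaround, assuming the layout above (the placeholder file name is illustrative). In base2/kustomization.yaml, declare a dummy generator:

configMapGenerator:
- name: config
  files:
  - dummy-config-file.json  # placeholder content; only here so the base declares the configMap

Then in overlay/kustomization.yaml, override it with the real content:

configMapGenerator:
- name: config
  behavior: replace
  files:
  - config-file.json

With behavior: replace, the overlay's generated configMap replaces the base's dummy one, so the hash suffix is computed from the real content and applied to the references within the same build.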
I have the same issue with something like:
base/
app1/
    base/
    overlays/
        web/
        worker/
        [...]
overlays/
    prod/
In this case I create a ConfigMap in the prod overlay and the hash isn't modified in the app1 overlays. Unfortunately adding an empty ConfigMap in app1 base would create multiple resources.
@Liujingfang1 maybe add a test to cover this, like #1278.
@bgauduch Can you check whether it is fixed by this PR?
Reproduced your environment here: Issue 710
kustomize build overlay/
apiVersion: v1
data:
  config-file.json: ""
kind: ConfigMap
metadata:
  labels:
    app: appName
  name: config-79tktd9hkb
  namespace: appName
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: appName
  name: prefix-app
  namespace: appName
spec:
  selector:
    app: appName
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: appName
  name: prefix-app
  namespace: appName
spec:
  selector:
    matchLabels:
      app: appName
  template:
    metadata:
      labels:
        app: appName
    spec:
      containers:
      - env:
        - name: ANOTHERENV
          value: ANOTHERVALUE
        name: app
        volumeMounts:
        - mountPath: /path/to/file
          name: config-volume
      - image: anotherimage
        name: anothercontainer
      volumes:
      - configMap:
          name: config-79tktd9hkb
        name: config-volume
Hello @jbrette!
Awesome work on the example for testing 👍 It makes it way easier to dive back in after a few months 😅
I just had to make one update to make it work on my machine: replace `resources:` with `bases:` for the overlays (see the sketch below). I've just tested and I still see the issue using kustomize v2.0.3.
Your PR is not merged yet, so we have to wait for a release to test again and close this issue, right?
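For anyone following along, the change is of this shape in each kustomization.yaml that points at a base directory (illustrative; kustomize v2.0.3 only accepts directories under bases:, while later versions also accept them under resources:):

# repro as published
resources:
- ../base2

# adjusted for kustomize v2.0.3
bases:
- ../base2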
@bgauduch We just released version 3.1.0, can you try it?
@Liujingfang1 I just tested on 3.1.0 and I have the same issue.
@bgauduch According to the test we created to reproduce the original issue, the bug is fixed.
$HOME/bin/kustomize.3.1.0 version
Version: {KustomizeVersion:3.1.0 GitCommit:95f3303493fdea243ae83b767978092396169baf BuildDate:2019-07-26T18:11:16Z GoOs:linux GoArch:amd64}
pwd
~/src/sigs.k8s.io/kustomize/examples/issues/issue_0710
$HOME/bin/kustomize.3.1.0 build overlay/
apiVersion: v1
data:
  config-file.json: ""
kind: ConfigMap
metadata:
  labels:
    app: appName
  name: config-79tktd9hkb
  namespace: appName
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: appName
  name: prefix-app
  namespace: appName
spec:
  selector:
    app: appName
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: appName
  name: prefix-app
  namespace: appName
spec:
  selector:
    matchLabels:
      app: appName
  template:
    metadata:
      labels:
        app: appName
    spec:
      containers:
      - env:
        - name: ANOTHERENV
          value: ANOTHERVALUE
        name: app
        volumeMounts:
        - mountPath: /path/to/file
          name: config-volume
      - image: anotherimage
        name: anothercontainer
      volumes:
      - configMap:
          name: config-79tktd9hkb
        name: config-volume
I noticed that in my case, I have an overlay that is nested one level deeper.
@jbrette I cannot test again at the moment, so I'm gonna have to take your word for it! I believe this issue can now be closed.
@benjamin-bergia that's a lot of overlays 😉 Maybe you should open a new issue with this specific use case?
/close
@bgauduch Same, not much time at the moment. But you're right, when I get back to working on it I will open a different issue.