I hit this problem with kustomize 2.1.0, but it works fine with kustomize 2.0.3. I'm not sure whether this is a regression in 2.1.0 or an intentional behavior change.
Base kustomization.yaml
[root@jinchi1 local]# cat ../base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
...
vars:
...
- fieldref:
    fieldPath: data.batchSize
  name: batchSize
  objref:
    apiVersion: v1
    kind: ConfigMap
    name: mnist-map-training
...
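For context, the elided parts of the base reference this var via $(batchSize) substitution. A hypothetical consumer might look like the following; the Job name, image, and args are illustrative assumptions and are not taken from the original report:

apiVersion: batch/v1
kind: Job
metadata:
  name: mnist-train              # hypothetical consumer of the var
spec:
  template:
    spec:
      containers:
      - name: trainer
        image: example/mnist-trainer:latest   # placeholder image
        args:
        - --batch-size=$(batchSize)           # substituted from the ConfigMap through the var
      restartPolicy: Never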
Local kustomization.yaml
[root@jinchi1 local]# cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
.....
configMapGenerator:
- literals:
  - name=mnist-train-local
  - batchSize=100
  ....
  name: mnist-map-training
[root@jinchi1 local]# kustomize build .
Error: var '{batchSize ~G_~V_ConfigMap {data.batchSize}}' cannot be mapped to a field in the set of known resources
[root@jinchi1 local]# kustomize version
Version: {KustomizeVersion:2.1.0 GitCommit:af67c893d87c5fb8200f8a3edac7fdafd61ec0bd BuildDate:2019-06-18T22:01:59Z GoOs:linux GoArch:amd64}
But this works fine with kustomize 2.0.3. Could someone take a look? Thanks a lot!
/cc @monopole @Liujingfang1
Any suggestions for this? Or could you recommend someone to take a look? Thanks a lot!
I did some deeper testing: if vars and configMapGenerator are both in the same kustomization.yaml, it works. In other words, they either both need to be in base/kustomization.yaml or both in local/kustomization.yaml. It fails when vars is defined in the base and configMapGenerator is defined in local/kustomization.yaml (see the sketch below).
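As a sketch of the layout that does work, everything can live in a single kustomization.yaml; this is only an illustration of the working combination under the assumptions above, not the exact files from the report (the elided fields are omitted):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: mnist-map-training
  literals:
  - name=mnist-train-local
  - batchSize=100
vars:
- name: batchSize
  objref:
    apiVersion: v1
    kind: ConfigMap
    name: mnist-map-training
  fieldref:
    fieldPath: data.batchSize

With this single-file layout, kustomize build resolves the var; moving only the vars block into ../base while keeping the configMapGenerator in the overlay is what triggers the "cannot be mapped to a field" error.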
Personally, I think we should fix this: the var's field path is the same across overlays, so the user wants to define it once in the base, but the value differs per environment, so the user wants to set it in the local overlay.

Nice report, thanks.
Any chance you could help us by writing a test demonstrating this, which we can keep for regression coverage?
Like this, but focused on your particular case?
We're a little shorthanded here. :)
Not sure I'll have a chance to do this, since I have a lot of work piling up :-( Apologies... but I will do it if I get the chance. I think we can fix it first and then write a test demonstrating the behavior.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
I see https://github.com/kubernetes-sigs/kustomize/pull/1316 was closed without merging. The example that led to this still has the problem in the most recent versions of kustomize v2 and v3. Are we any closer to deciding whether this should be considered a bug to fix, or whether it is by design?
The problem still exists in kustomize v3.4.0.
# /test123/kustomize version
{Version:kustomize/v3.4.0 GitCommit:2c9635967a2b1469d605a91a1d040bd27c73ca7d BuildDate:2019-11-12T05:00:57Z GoOs:linux GoArch:amd64}
# /test123/kustomize build .
Error: var '{batchSize ~G_v1_ConfigMap {data.batchSize}}' cannot be mapped to a field in the set of known resources
/remove-lifecycle rotten
/remove-lifecycle stale
@jbrette @monopole Could you please help confirm and fix this? Thanks a lot!
/P1
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.