Hi,
I'm trying to simplify the structure of my YAML files. I separated out the common configs and secrets so I can re-use them in my deployment manifests.
I used:
# sample kustomization.yaml
configMapGenerator:
- name: my-config
  envs: # also tried "env"
  - config.env
Then I ran:
$ kubectl apply -k .
error: AccumulateTarget: couldn't make target for path/to/config/folder: json: unknown field "envs"
couldn't make target for path/to/config/folder: json: unknown field "envs"
AccumulateTarget: couldn't make target for base: json: unknown field "envs"
After changing it from `envs` to `env`, I ran:
$ kustomize build .
Error: accumulating resources: recursed accumulation of path 'path/to/configs/folder': accumulating resources: couldn't make target for path 'path/to/configs/folder/subfolder': json: unknown field "env"
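For reference, the accepted field name depends on which kustomize is doing the parsing. A minimal kustomization.yaml sketch that standalone kustomize v3+ accepts (`envs` is a list of env files), with the older singular form that the kustomize v2.0.3 embedded in kubectl expects shown in comments:

```yaml
# kustomization.yaml — accepted by standalone kustomize v3+
configMapGenerator:
- name: my-config
  envs:          # newer schema: a list of env files
  - config.env

# The kustomize v2.0.3 embedded in kubectl instead expects the
# singular form, taking a single file path:
# configMapGenerator:
# - name: my-config
#   env: config.env
```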
/help
@tamipangadil:
This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/bug
Seems like this may be a backwards-compatibility issue between the two versions of kustomize (the one embedded in kubectl and the standalone binary).
The same thing happens with other fields such as `bases`. How can we find documentation regarding compatibility between `kustomize build` and `kubectl apply -k`?
I no longer rely on kubectl to paper over the differences. I just run
`kustomize build . | kubectl apply -f -` whenever I want to deploy.
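As a sketch, that workaround can be wrapped in a small deploy script (assumes both the standalone `kustomize` and `kubectl` binaries are on your PATH):

```sh
#!/bin/sh
# Build with the standalone kustomize (newer schema, understands "envs"),
# then pipe the rendered manifests to kubectl, bypassing the old
# kustomize embedded in "kubectl apply -k".
set -e
kustomize build . | kubectl apply -f -
```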
On Sat, 28 Mar 2020 at 21:57, Marián Hlaváč notifications@github.com
wrote:
Had a similar error message; it seems there might be some kind of issue with YAML parsing. My kustomization.yaml contained two secretGenerator entries, each pointing at an env file:

secretGenerator:
- name: platform
  env: secrets/platform.env
- name: database
  env: secrets/database.env

This was processed correctly by `kubectl apply -k ...`, but `kustomize build ...` failed with:

Error: accumulating resources: couldn't make target for path '.../platform/spec/base': json: unknown field "env"

After re-indenting the contents of the file (notice the indentation), it goes through without a problem.
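Since the fix above hinged on indentation, here is a small sketch (assuming PyYAML is installed) of how indentation changes what kustomize sees: `env` nested under a list item is a per-generator field, while a mis-indented `env` escapes the list and becomes an unknown top-level key.

```python
import yaml  # PyYAML, assumed available

# Correctly indented: "env" belongs to the secretGenerator entry.
nested = """
secretGenerator:
- name: platform
  env: secrets/platform.env
"""

# Mis-indented: "env" sits at column 0, so it becomes a top-level
# key, which kustomize then rejects as an unknown field.
flat = """
secretGenerator:
- name: platform
env: secrets/platform.env
"""

ok = yaml.safe_load(nested)
bad = yaml.safe_load(flat)

print(ok["secretGenerator"][0])  # {'name': 'platform', 'env': 'secrets/platform.env'}
print("env" in bad)              # True: top-level "env", not under the entry
```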
So, what is the proposed solution here?
I just ran into this issue as well and didn't realize until I created #2378.
The options seem to be:
- change `kubectl apply -k` to use `envs` instead of `env`, or
- change `kustomize build` to use `env` instead of `envs`, or
- support both `env` and `envs`.
I think it makes sense to fix `kubectl apply -k` to use `envs`, but then this issue belongs to kubectl?
To the maintainers...
I really don't know what the desired state is, but kubectl and kustomize differ:
Can we easily see which kustomize module is part of kubectl?
Why don't we finalize v1 of the kustomize spec (it is still v1beta1), given that we are already advertising the use of kustomize in normal Kubernetes kubectl operations?
I can find this: https://github.com/kubernetes/kubectl/blob/master/go.mod#L46, which shows that kubectl still uses kustomize 2.0.3. Seems very old!
The latest kustomize version that understands `env` in configMapGenerator is 3.2.3.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Please take a look at this issue for the reason why the latest kustomize cannot be shipped with kubectl.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten