➜ kustomize version
Version: {KustomizeVersion:3.2.0 GitCommit:a3103f1e62ddb5b696daa3fd359bb6f2e8333b49 BuildDate:2019-09-20T10:10:22+02:00 GoOs:darwin GoArch:amd64}
The command kustomize edit add base ../base should add a base to the kustomization file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
Instead, running the kustomize edit add base ../base command results in this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
~
➜ mkdir test
~
➜ cd test
~/test
➜ mkdir base
~/test
➜ mkdir overlay
~/test
➜ cd base
~/test/base
➜ touch kustomization.yaml
~/test/base
➜ cd ..
~/test
➜ cd overlay
~/test/overlay
➜ touch kustomization.yaml
~/test/overlay
➜ kustomize edit add base ../base
~/test/overlay
➜ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
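One manual workaround, sketched below: rewrite the resources: key that kustomize edit wrote back to the deprecated bases: key. This assumes, per the docs linked in this thread, that bases: is still accepted by the kustomize version vendored into kubectl; it is not an official kustomize feature, just a text edit.

```shell
# Recreate the file that `kustomize edit add base ../base` produced.
cat > kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
EOF

# Rewrite the top-level `resources:` key to `bases:` so an older
# kubectl (with an older vendored kustomize) can still load the file.
# `-i.bak` keeps a backup and works with both GNU and BSD sed.
sed -i.bak 's/^resources:/bases:/' kustomization.yaml
cat kustomization.yaml
```

This only makes sense while the directory entries live under resources:; file entries (plain YAML manifests) should stay under resources:.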
Hi, see https://github.com/kubernetes-sigs/kustomize/blob/master/docs/v2.1.0.md#resources-expanded-bases-deprecated
Would it be possible for kustomize to explicitly warn about this deprecation on stderr?
Hi, see https://github.com/kubernetes-sigs/kustomize/blob/master/docs/v2.1.0.md#resources-expanded-bases-deprecated
Thanks for pointing that out!
The issue is that when I try to apply my newly created kustomization.yaml after the relocation, I get:
$ kubectl apply -k .
error: rawResources failed to read Resources: Load from path ../base failed: '../base' must be a file (got d='/Users/bruno.medeiros/git/apps-portal/base')
It seems that kubectl is failing when there is a resource that is a directory, not a file.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T23:42:50Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.7-eks-e9b1d0", GitCommit:"e9b1d0551216e1e8ace5ee4ca50161df34325ec2", GitTreeState:"clean", BuildDate:"2019-09-21T08:33:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Do you know where can I report that?
Another thing that is quite confusing about the deprecation is that using only resources breaks kubectl apply -k, since kubectl doesn't know about that new feature yet...
So now, if you edit kustomizations with the kustomize CLI rather than by hand, they won't be compatible with kubectl.
eg.:
➜ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
➜ kubectl apply -k local
error: rawResources failed to read Resources: Load from path ../base failed: '../base' must be a file (got d='/Users/reegnz/example/base')
My kustomization in that case looks like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
When can one expect a feature to trickle downstream into kubectl?
Also, how can I find out which features are in the downstream kubectl, and what I cannot use with kubectl apply -k?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
I am still getting this error.
$ kustomize edit add base ../../../bases/environment/staging-base
resources:
- Ingress.yml
- ../../../bases/microservice/services/account
- ../../../bases/environment/staging-base
/remove-lifecycle stale
@tomjohnburton @reegnz https://github.com/kubernetes-sigs/kustomize/issues/1647#issuecomment-567367909 It worked for me as well, using bases: instead of resources:.
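For reference, the form that worked in that comment would look like the sketch below (assuming the same single-base layout as earlier in this thread):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Deprecated in newer kustomize, but still what the kustomize version
# vendored into kubectl understands:
bases:
- ../base
```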
Working on Windows with Docker Desktop:
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Hey, is there any news?
@pommelinho I haven't used kustomize for a while now, but checking kubectl's go.mod:
https://github.com/kubernetes/kubectl/blob/18e781fa774127786b5a2092ed2dc0351dafdb87/go.mod#L46
Nope, it's still the same kustomize version that's used. :(
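A quick way to check which kustomize version kubectl pins is to grep its go.mod for the require line. The snippet below is illustrative, not fetched from the linked commit: the module path is real, but the version string is an assumption based on kubectl's long-standing kustomize v2.0.3 pin; verify against the go.mod link above.

```shell
# Illustrative local copy of the relevant part of kubectl's go.mod
# (the version shown is an assumption; check the linked commit).
cat > go.mod <<'EOF'
module k8s.io/kubectl

require (
	sigs.k8s.io/kustomize v2.0.3+incompatible
)
EOF

# The pinned kustomize version is whatever this line reports.
grep 'sigs.k8s.io/kustomize' go.mod
```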
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
/reopen
@pierluigilenoci: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@Shell32-Natsu: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.