apiVersion and kind are currently optional in a kustomization.yaml file. When kustomize build executes on a kustomization.yaml file without those fields, it prints some warning messages:
apiVersion is not defined. This will not be allowed in the next release.
Please run `kustomize edit fix`
kind is not defined. This will not be allowed in the next release.
Please run `kustomize edit fix`
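For context, here is a minimal kustomization.yaml that triggers both warnings, and the same file after running `kustomize edit fix`. This is just an illustrative sketch: the file contents are made up, and the apiVersion shown assumes the kustomize.config.k8s.io/v1beta1 group that this thread eventually settles on.

```yaml
# Before: no apiVersion/kind, so kustomize build prints both warnings
commonLabels:
  app: my-app
resources:
- deployment.yaml

# After `kustomize edit fix`: the GVK fields are added and the warnings go away
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: my-app
resources:
- deployment.yaml
```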
When integrating kustomize into kubectl, these warning messages don't make sense, since `kustomize edit fix` is not available through kubectl.
This is a good time to think about how to handle apiVersion/kind. Should they be required or optional? I'll try my best to list the trade-offs.
Option 1: make them required. apiVersion is v1beta1; kind is Kustomization. kustomize build exits when it doesn't see apiVersion and kind.
Option 2: make them optional, defaulting to v1beta1. In the future, when we move the version to v1, we can default it to v1.
- apiVersion is not defined, defaults to v1beta1
- kind is not defined, defaults to Kustomization
@monopole @pwittrock @liggitt Any thoughts about this?
@droot
- kustomization.yaml that is missing apiVersion/kind will be treated the same as with v1beta1.
In the future, when we move the version to v1, we can default it to v1.
not sure I understand that option... that sounds like a file that does not declare its version is treated as v1beta1 and v1 depending on which version of kustomize processes it, which doesn't seem good
if there's a significant number of existing files that would be affected by making it required, loudly complaining/warning (as we do now), indicating what version (v1beta1) the file is being treated as, and indicating it will be made required in the next version of kustomize seems ok
out of curiosity, in what version did this warning show up (and what specific "next version" of kustomize was this anticipated to be made required in?)
not sure I understand that option... that sounds like a file that does not declare its version is treated as v1beta1 and v1 depending on which version of kustomize processes it, which doesn't seem good
Maybe I shouldn't say which version it defaults to. Later when we have a v1 version, kustomize should be able to accept the kustomization.yaml with an implicit v1beta1.
out of curiosity, in what version did this warning show up (and what specific "next version" of kustomize was this anticipated to be made required in?)
So far, we have releases up to 1.0.11. We added this warning recently, after the 1.0.11 release. We plan to release 2.0.0 soon since there are some other backward-incompatible changes. Then the kustomize kubectl integration will vendor kustomize 2.0.0.
If we keep this warning, we will keep it for the whole cycle of version 2. Then in version 3.0.0, we can make those fields required.
We added this warning recently after the 1.0.11 release. We plan to release 2.0.0 soon since there are some other backward-incompatible changes. Then the kustomize kubectl integration will vendor kustomize 2.0.0.
Considering 2.0.0 the "next version" that makes it required would be my preference, especially so kubectl doesn't start life with this command with deprecated warnings. That said, my perspective is definitely that of a newcomer on this command, so I'd be glad to hear others' thoughts.
_requiring_ a Group/Version/Kind
Upside :+1:
Downside :-1:
commonLabel: foo
to
kind: Kustomization
apiVersion: configuration.k8s.io/v1beta1
commonLabel: foo
See also the noise added to the examples in #735
At the point where we say _let's make this an API object_, we can decide that any Kustomization missing the fields (but otherwise parsing OK) is grandfathered in with the proper GVK, and we already offer a tool to add the fields to Kustomization files (`kustomize edit fix`).
Does a door close if we don't require it immediately?
We must immediately decide which Group to use in apiVersion.
The group in apiVersion is not necessary. A similar case is kubectl configuration files where the apiVersion is v1 and the kind is Config.
User annoyance with no benefit (a Kustomization is not an API object, and there are no plans to make it one)
Agreed.
The group in apiVersion is not necessary. A similar case is kubectl configuration files where the apiVersion is v1 and the kind is Config.
For server types, unspecified implies the core group, which is definitely wrong. Not sure what this means for client-only types.
For prior art - the kubeconfig file has a Kind Config and version v1 (with no group).
Can we make the kind / version required in the kubectl version but defer it as required in kustomize so we don't break folks?
For prior art - the kubeconfig file has a Kind `Config` and version `v1` (with no group).
fyi, that predated the existence of API groups as a concept. today, new config types are being placed in a <component>.config.k8s.io API group
@liggitt Who defines what the component is? Is that sig-arch? The sig owning the piece?
Can we make the kind / version required in the kubectl version but defer it as required in kustomize so we don't break folks?
We can do this, but that will be an inconsistency between kustomize and kubectl kustomize.
We can do this, but that will be an inconsistency between kustomize and kubectl kustomize.
How big of a problem is that?
We can do this, but that will be an inconsistency between kustomize and kubectl kustomize.
How big of a problem is that?
A kustomization.yaml that works in kustomize wouldn't work in kubectl kustomize when missing those fields. For local bases/overlays, users can add those fields manually or via `kustomize edit fix`. For remote bases that users don't control, this will fail and they couldn't fix it.
@Liujingfang1 - good point re remote bases. we shouldn't worry about it because
If we're worried about the ResourceBuilder in kubectl apply -f complaining about kustomization.yaml files we could change that code path to ignore the file name [K|k]ustomization[.yaml|.yml], until if and when that code path is supposed to actively honor them.
So i'm for passively allowing the fields
kind: Kustomization
apiVersion: configuration.k8s.io/v1beta1
erroring on any other values (if the fields are present), and not requiring the fields until we have a use case for the Kustomization as an API object, e.g. server-side apply uses them (a wild new use case).
Yeah, i'm for passively allowing the fields
kind: Kustomization
apiVersion: configuration.k8s.io/v1beta1
erroring on any other values (if the fields are present), and not requiring the fields until we have a use case for the Kustomization as an API object.
Does this solution sound good to you @liggitt @pwittrock ?
We could mention this issue at the sig-cli meeting and see if anyone else has thoughts.
We're eager to put a new release out. We already have to go to v2 because of other changes.
But if we push v2 right now, and then later decide these fields should be required, we'd be faced with v3 :)
+1. Sending out an email to sig-cli saying it will be discussed at the next sig meeting and we plan to make a decision if we have lazy consensus at that time.
@liggitt Who defines what the component is?
it has typically been a form of the consuming binary name. config groups we have are:
apiserver.config.k8s.io
cloudcontrollermanager.config.k8s.io
kubelet.config.k8s.io
kubeproxy.config.k8s.io
kubescheduler.config.k8s.io
kubecontrollermanager.config.k8s.io
i'm for passively allowing the fields
kind: Kustomization
apiVersion: configuration.k8s.io/v1beta1
erroring on any other values (if the fields are present), and not requiring the fields until we have a use case for the Kustomization as an API object.
Does this solution sound good to you @liggitt @pwittrock ?
As an approach, yes. As a specific group name, kustomize.config.k8s.io fits the mold and seems reasonable if the command is going to continue to run as a distinct command
As an approach, yes. As a specific group name, kustomize.config.k8s.io fits the mold and seems reasonable if the command is going to continue to run as a distinct command
:+1: from me for kustomize.config.k8s.io, and from the start I was pushing for requiring apiVersion and kind, so :+1: on that too, especially since for kubectl users it'll be a new thing and, as @monopole mentioned, for existing users v2 is a viable option. v1 can be with the warning.
Fine with kustomize.config.k8s.io.
Still haven't heard a user or developer benefit to _requiring_ these API fields in a non-API object.
In favor of _allowing_ them to be present, and demanding that if present they have particular values, and that those particular values are the assumed values if the fields are missing. Everyone wins :)
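To illustrate the behavior proposed above, these two files would be treated identically, while any other apiVersion/kind values (if present) would be rejected. A sketch only: the group name reflects the kustomize.config.k8s.io choice made earlier in this thread, and the commonLabels content is made up.

```yaml
# Explicit fields with the expected values: accepted
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: example
---
# Fields omitted: the values above are assumed
commonLabels:
  app: example
```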
documented the versioning policy here
Kustomize newbie here, had to do a fair bit of searching to understand why I can't use kubectl with kustomization.yaml. I saw quite a lot of articles about why kustomize is so much better than helm that it was even merged into kubectl.
Right now I'm getting:
$ kubectl apply -f ./
configmap/the-map unchanged
error: unable to recognize "kustomization.yaml": no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"
with
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
in my kustomization.yaml file. Is it still ok?
Thanks in advance!
@yellowmegaman Starting from 1.14, you can use `kubectl apply -k ./`.
Before 1.14 is released, you need to download the kustomize binary and run the following command:
kustomize build ./ | kubectl apply -f -
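For quick reference, the two workflows side by side (the `./` directory is just an example path; both assume a kustomization.yaml in that directory):

```shell
# kubectl < 1.14: build with the standalone kustomize binary, pipe the output to apply
kustomize build ./ | kubectl apply -f -

# kubectl >= 1.14: use the built-in kustomize support via -k
kubectl apply -k ./
```

Note that `kubectl apply -f ./` is not equivalent: `-f` treats kustomization.yaml as an ordinary resource file, which is what produces the "no matches for kind" error above.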
Thanks a bunch @Liujingfang1, awesome!
@Liujingfang1 Really baffled trying to get `kubectl apply -f .` to work here, and I'm hoping you could give me a hand.
We are running v1.15.0:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Adding the following
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
to kustomization.yaml spits out
$ kubectl apply -f ./
...
unable to recognize "kustomization.yaml": no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"
and removing the above returns
$ kubectl apply -f ./
error validating "kustomization.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
In addition, when creating a cluster role binding, it errs out with
Error from server (Invalid): error when creating "cluster-role-binding.yaml": ClusterRoleBinding.rbac.authorization.k8s.io "xxx-clusterrole-binding" is invalid: subjects[0].namespace: Required value
My understanding is that we don't need to specify the namespace for a ClusterRole?
Nevertheless, it works with
kubectl kustomize | kubectl apply -f -
@mapshen use kubectl apply -k ./ instead. -f doesn't work with kustomization.
Argh...everything worked like a charm with -k. Thanks so much for pointing this out!!! @liuhuiping2013