Kustomize: require apiVersion/kind in Kustomization.yaml?

Created on 28 Jan 2019 · 29 comments · Source: kubernetes-sigs/kustomize

apiVersion and kind are currently optional in a kustomization.yaml file. When kustomize build executes on a kustomization.yaml file without those fields, it prints some warning messages.

apiVersion is not defined. This will not be allowed in the next release.
Please run `kustomize edit fix`
kind is not defined. This will not be allowed in the next release.
Please run `kustomize edit fix`

When integrating kustomize into kubectl, the warning messages don't make sense since kustomize edit fix is not available through kubectl.

It is a good time to think about how to handle apiVersion/kind. Should they be required or optional? I'll try my best to list the trade-offs.

  • required

    • current version is v1beta1 ; kind is Kustomization

    • kustomize build exits when it doesn't see apiVersion and kind

    • In the future, Kustomize can switch among different behaviors based on different versions.

    • not backward compatible

  • optional

    • kustomization.yaml that is missing apiVersion/kind will be treated the same as with v1beta1.

      In the future, when we move the version to v1, we can default it to v1.

    • Not showing warning message or showing a different warning message. Maybe

      apiVersion is not defined, defaults to v1beta1
      kind is not defined, defaults to Kustomization

    • backward compatible
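For concreteness, under the "optional" behavior the two files below would be treated identically; the second simply makes the assumed version and kind explicit (a sketch using the group-less v1beta1 discussed at this point in the thread):

```yaml
# kustomization.yaml with the fields made explicit; omitting the first
# two lines would be treated the same under the "optional" behavior
apiVersion: v1beta1
kind: Kustomization
commonLabels:
  app: myapp
```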

@monopole @pwittrock @liggitt Any thoughts about this?

Most helpful comment

@mapshen use kubectl apply -k ./ instead. -f doesn't work with kustomization.

All 29 comments

@droot

  • kustomization.yaml that is missing apiVersion/kind will be treated the same as with v1beta1.
    In the future, when we move the version to v1, we can default it to v1.

not sure I understand that option... that sounds like a file that does not declare its version is treated as v1beta1 and v1 depending on which version of kustomize processes it, which doesn't seem good

if there's a significant number of existing files that would be affected by making it required, loudly complaining/warning (as we do now), indicating what version (v1beta1) the file is being treated as, and indicating it will be made required in the next version of kustomize seems ok

out of curiosity, in what version did this warning show up (and what specific "next version" of kustomize was this anticipated to be made required in?)

not sure I understand that option... that sounds like a file that does not declare its version is treated as v1beta1 and v1 depending on which version of kustomize processes it, which doesn't seem good

Maybe I shouldn't say which version it defaults to. Later when we have a v1 version, kustomize should be able to accept the kustomization.yaml with an implicit v1beta1.

out of curiosity, in what version did this warning show up (and what specific "next version" of kustomize was this anticipated to be made required in?)

So far, we have releases up to 1.0.11. We added this warning recently, after the 1.0.11 release. We plan to release 2.0.0 soon since there are some other backward-incompatible changes. Then the kustomize kubectl integration will vendor kustomize 2.0.0.

If we keep this warning, we will keep it for the whole cycle of version 2. Then in version 3.0.0, we can make those fields required.

We added this warning recently, after the 1.0.11 release. We plan to release 2.0.0 soon since there are some other backward-incompatible changes. Then the kustomize kubectl integration will vendor kustomize 2.0.0.

Considering 2.0.0 the "next version" that makes it required would be my preference, especially so kubectl doesn't start life with this command with deprecated warnings. That said, my perspective is definitely that of a newcomer on this command, so I'd be glad to hear others' thoughts.

_requiring_ a Group/Version Kind

Upside :+1:

  • obviously, it gives us a way to do versioning in a way that k8s people recognize. This hasn't been an issue yet; we've just treated the file like a proto - if we can read it, even in a backward compatible way, then yay, else just complain about whatever field causes the problem. Someone who downloaded a binary six months ago is going to see some errors on our recent examples - but not a message saying "Hey, this K is version x, and this binary only supports x-1."

Downside :-1:

  • We must immediately decide which Group to use in apiVersion. Leaving it blank implies core, which is wrong. Does sig-api control this? I'd like configuration.k8s.io.
  • User annoyance with no benefit (a Kustomization is not an API object, and there are no plans to make it one). A simple kustomization to add a common label goes from

commonLabels:
  app: foo

to

kind: Kustomization
apiVersion: configuration.k8s.io/v1beta1
commonLabels:
  app: foo

See also the noise added to the examples in #735

At the point where we say _let's make this an API object_ we can decide that any Kustomization missing the fields (but otherwise parsing OK) is grandfathered in with the proper GVK, and we already offer a tool to add the fields to Kustomization files (kustomize edit fix).

Does a door close if we don't require it immediately?

We must immediately decide which Group to use in apiVersion.

The group in apiVersion is not necessary. A similar case is kubectl configuration files where the apiVersion is v1 and the kind is Config.

User annoyance with no benefit (a Kustomization is not an API object, and there are no plans to make it one)

Agreed.

The group in apiVersion is not necessary. A similar case is kubectl configuration files where the apiVersion is v1 and the kind is Config.

For server types, an unspecified group implies core, which is definitely wrong. Not sure what this means for client-only types.

Can we make the kind / version required in the kubectl version but defer it as required in kustomize so we don't break folks?

For prior art - the kubeconfig file has a Kind Config and version v1 (with no group).

https://github.com/kubernetes/kubernetes/blob/06e1af74ef39c2cc32a5b950adb8f6cc95e03007/staging/src/k8s.io/client-go/tools/clientcmd/api/types.go
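As a sketch, the header of such a kubeconfig file looks like this (a group-less apiVersion with kind Config; the body is abridged):

```yaml
# ~/.kube/config (abridged)
apiVersion: v1
kind: Config
clusters: []
contexts: []
users: []
```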

fyi, that predated the existence of API groups as a concept. today, new config types are being placed in a <component>.config.k8s.io API group

@liggitt Who defines what the component is? Is that sig-arch? The sig owning the piece?

Can we make the kind / version required in the kubectl version but defer it as required in kustomize so we don't break folks?

We can do this, but that will be an inconsistency between kustomize and kubectl kustomize.

We can do this, but that will be an inconsistency between kustomize and kubectl kustomize.

How big of a problem is that?

We can do this, but that will be an inconsistency between kustomize and kubectl kustomize.

How big of a problem is that?

A kustomization.yaml that works in kustomize wouldn't work in kubectl kustomize when missing those fields. For local bases/overlays, users can add those fields manually or via kustomize edit fix. For remote bases that users don't control, this will fail and they couldn't fix it.
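For context, a remote base is referenced roughly like this (the repository path here is hypothetical); a user consuming such a base cannot edit it to add the missing fields:

```yaml
# overlay's kustomization.yaml pulling a base the user doesn't control
# (github.com/org/repo is a hypothetical repository)
bases:
- github.com/org/repo//base?ref=v1.0.6
```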

@Liujingfang1 - good point re remote bases. we shouldn't worry about it because

  • We support remote repos as bases, but that doesn't mean we recommend that someone use a repo not under their control. The use case here is different teams in the same org; they are motivated to cooperate.
  • If the desired base is not an owned repo (i.e. something found via web search) then - as we tell people in our workflow example - the right way to use it is to fork its containing repo, make local edits as desired (e.g. one could add GVK), commit it locally and rebase from time to time. That way your config is always known to you.

If we're worried about the ResourceBuilder in kubectl apply -f complaining about kustomization.yaml files we could change that code path to ignore the file name [K|k]ustomization[.yaml|.yml], until if and when that code path is supposed to actively honor them.

So i'm for passively allowing the fields

kind: Kustomization
apiVersion: configuration.k8s.io/v1beta1

erroring on any other values (if the fields are present), and not requiring the fields until we have a use case for the Kustomization as an API object, e.g. server-side apply uses them (a wild new use case).

Yeah, i'm for passively allowing the fields

kind: Kustomization
apiVersion: configuration.k8s.io/v1beta1

erroring on any other values (if the fields are present), and not requiring the fields until we have a use case for the Kustomization as an API object.

Does this solution sound good to you @liggitt @pwittrock ?

We could mention this issue at the sig-cli meeting and see if anyone else has thoughts.

We're eager to put a new release out. We already have to go to v2 because of other changes.

But if we push v2 right now, and then later decide these fields should be required, we'd be faced with v3 :)

+1. Sending out an email to sig-cli saying it will be discussed at the next sig meeting and we plan to make a decision if we have lazy consensus at that time.

@liggitt Who defines what the component is?

it has typically been a form of the consuming binary name. config groups we have are:

apiserver.config.k8s.io
cloudcontrollermanager.config.k8s.io
kubelet.config.k8s.io
kubeproxy.config.k8s.io
kubescheduler.config.k8s.io
kubecontrollermanager.config.k8s.io

i'm for passively allowing the fields

kind: Kustomization
apiVersion: configuration.k8s.io/v1beta1

erroring on any other values (if the fields are present), and not requiring the fields until we have a use case for the Kustomization as an API object.

Does this solution sound good to you @liggitt @pwittrock ?

As an approach, yes. As a specific group name, kustomize.config.k8s.io fits the mold and seems reasonable if the command is going to continue to run as a distinct command

As an approach, yes. As a specific group name, kustomize.config.k8s.io fits the mold and seems reasonable if the command is going to continue to run as a distinct command

:+1: from me for kustomize.config.k8s.io, and from the start I was pushing for requiring apiVersion and kind, so :+1: on that too, especially since for kubectl users it'll be a new thing, and as @monopole mentioned, for existing users v2 is a viable option. v1 can keep the warning.

Fine with kustomize.config.k8s.io.

Still haven't heard a user or developer benefit to _requiring_ these API fields in a non API object.

In favor of _allowing_ them to be present, and demanding that if present they have particular values, and that those particular values are the assumed values if the fields are missing. Everyone wins :)
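The "allow if present, demand particular values, assume them if missing" policy can be sketched as a small validation function (a sketch only, not kustomize's actual code; the accepted strings follow the group agreed on above):

```go
package main

import "fmt"

const (
	acceptedAPIVersion = "kustomize.config.k8s.io/v1beta1"
	acceptedKind       = "Kustomization"
)

// validateGVK sketches the policy: missing fields default to the
// accepted values; present fields must match them exactly.
func validateGVK(apiVersion, kind string) (string, string, error) {
	if apiVersion == "" {
		apiVersion = acceptedAPIVersion
	} else if apiVersion != acceptedAPIVersion {
		return "", "", fmt.Errorf("unsupported apiVersion %q, want %q", apiVersion, acceptedAPIVersion)
	}
	if kind == "" {
		kind = acceptedKind
	} else if kind != acceptedKind {
		return "", "", fmt.Errorf("unsupported kind %q, want %q", kind, acceptedKind)
	}
	return apiVersion, kind, nil
}

func main() {
	// An empty file header is grandfathered in with the proper GVK.
	gv, k, err := validateGVK("", "")
	fmt.Println(gv, k, err)
}
```

Under this sketch, a file with no header parses as if it carried the accepted GVK, while a file declaring any other value is rejected outright.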

documented the versioning policy here

Kustomize newbie here, had to do a fair bit of searching to understand why I can't use kubectl with kustomization.yaml. Saw quite a lot of articles about why kustomize is so much better than helm that it was even merged into kubectl.

Right now I'm getting:

λ kubectl apply -f ./
configmap/the-map unchanged
error: unable to recognize "kustomization.yaml": no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"

with

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

in my kustomization.yaml file. Is it still ok?

Thanks in advance!

@yellowmegaman Starting from 1.14, you can use kubectl apply -k ./.
Before 1.14 is released, you need to download the kustomize binary and run the following command:

kustomize build ./ | kubectl apply -f -

Thanks a bunch @Liujingfang1, awesome!

@Liujingfang1

Really baffled about getting

kubectl apply -f .

to work here, and am hoping you could give me a hand.

We are running v1.15.0:

$ kubectl version 
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Adding the following

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

to kustomization.yaml spits out

$ kubectl apply -f ./
...
unable to recognize "kustomization.yaml": no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"

and removing the above returns

$ kubectl apply -f ./
error validating "kustomization.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false

In addition, when creating a cluster role binding, it errs out with

Error from server (Invalid): error when creating "cluster-role-binding.yaml": ClusterRoleBinding.rbac.authorization.k8s.io "xxx-clusterrole-binding" is invalid: subjects[0].namespace: Required value

My understanding is that we don't need to specify the namespace for a ClusterRole?

Nevertheless, it works with

kubectl kustomize | kubectl apply -f -

@mapshen use kubectl apply -k ./ instead. -f doesn't work with kustomization.

Argh...everything worked like a charm with -k. Thanks so much for pointing this out!!! @liuhuiping2013
