Kustomize: Allow excluding some label selectors from commonLabels

Created on 10 Jul 2018 · 24 comments · Source: kubernetes-sigs/kustomize

I thought about this while implementing https://github.com/kubernetes-sigs/kustomize/pull/140. Sometimes you will have label selectors that you don't want Kustomize to make more specific by adding the commonLabels.

The example use case for me is NetworkPolicy. I want to allow access to my application pods from my ingress controllers, but the ingress controllers are deployed separately, since they are part of the cluster infrastructure. Adding an application-specific label to those NetworkPolicy label selectors might prevent the ingress controllers from accessing my pods.
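To make the use case concrete, here is a minimal sketch of such a policy (names and labels are hypothetical): the first selector targets my own pods, while the second targets ingress-controller pods that live outside this kustomization.

```yaml
# Hypothetical NetworkPolicy: allow traffic from the cluster's ingress
# controllers (deployed outside this kustomization) to the app pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-application
spec:
  podSelector:
    matchLabels:
      app: my-application      # our pods; adding commonLabels here is fine
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: ingress-nginx   # foreign pods; adding commonLabels here
                               # would break the match
```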

What I imagine would be a new configuration option, like this:

unmodifiedLabelPaths:
- objref:
    kind: NetworkPolicy
    name: my-application
  fieldref:
    fieldpath: spec.ingress.from.podSelector.matchLabels

Without this, the alternatives would be:

  • use lots of patches instead of commonLabels to apply my extra labels to everything but the areas I don't want
  • have a separate NetworkPolicy resource outside of the Kustomization, which is unwieldy
  • potentially generic transformation https://github.com/kubernetes-sigs/kustomize/issues/91
Labels: kind/documentation, lifecycle/rotten


All 24 comments

In #140, you added the path spec.ingress.from.podSelector.matchLabels. What if you remove this path in your PR?

@Liujingfang1 I think it's more consistent if Kustomize treats all selectors the same and lets you configure exclusions. Regarding NetworkPolicy, I might have one that refers to pods in the same kustomization, like several components of an application working together, or I might have selectors that refer to pods outside. Kustomize cannot know which one I mean but it would be good to support both use cases.

I just realized that in the context of NetworkPolicy at least, I can use matchExpressions instead of matchLabels. I'm not sure if that's the way to go in general.
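For reference, that workaround might look like this (a sketch with hypothetical labels; as far as I can tell the commonLabels transformer only rewrites matchLabels paths, so matchExpressions is left alone):

```yaml
# Selecting the ingress controllers via matchExpressions instead of
# matchLabels, so commonLabels does not rewrite the selector.
ingress:
- from:
  - podSelector:
      matchExpressions:
      - key: app
        operator: In
        values: ["ingress-nginx"]   # hypothetical label
```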

I just ran into this exact issue after upgrading kustomize to 1.0.5 (I was a little behind and running 1.0.3). I had a NetworkPolicy in a kustomization with commonLabels set. The NetworkPolicy was targeting pods created outside of this kustomization. On 1.0.3, commonLabels did not affect spec.podSelector.matchLabels or spec.ingress.from.podSelector.matchLabels. But on 1.0.5, those selectors were updated with the commonLabels and no longer matched the pods the policy was supposed to apply to (the pods do not have the same labels). This effectively disabled the NetworkPolicy.

I don't know what the best solution is but I agree that it would be nice if we had a little more control here.

@mxey @ryane You can solve this problem by putting your kustomization in a different layout. Create another overlay on top of your current kustomization and put the network policy as a resource in that new overlay. commonLabels is not applied there.
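A sketch of the suggested layout (paths and file names hypothetical):

```yaml
# Hypothetical layout for this workaround:
#
#   base/                  <- app resources; base/kustomization.yaml
#                             sets commonLabels
#   overlay/
#     kustomization.yaml   <- this file; no commonLabels here
#     networkpolicy.yaml   <- its selectors are left untouched
#
# overlay/kustomization.yaml:
resources:
- ../base
- networkpolicy.yaml
```

Depending on the kustomize version, the base may need to be listed under `bases:` instead of `resources:`.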

@Liujingfang1 I see two problems with that.

  1. I still want the labels on the network policy itself, so I can find it again, and for pruning.
  2. Adding ever more kustomization levels increases the complexity.

@mxey @ryane Agreed: the tradeoff is either more complexity in the kustomization directives and implementation, or more complexity in the file system layout.

We just have to see what happens in practice. If folks commonly have a set of resources that should be labelled, and another set that shouldn't, then we're just talking about two subdirectories. To add/remove labels, you just move the resource from one directory to the other.

Or yes, we could add more complexity to labelling directives, and annotations, etc. Lists of group, version, kind, name, fieldpath etc.

Would like more feedback. Once new directives are offered, it's hard or impossible to remove them.

I've run into this problem as well, but with Deployments. Because a Deployment's spec.selector.matchLabels is an immutable field, commonLabels can't be changed. One use case for changing common labels over time might be a label that describes who to call if this particular Deployment is causing trouble, or which version of the app is being deployed.

Perhaps spec.selector.* should be completely omitted from commonLabels, or perhaps what @mxey suggests is the better way to go? In any case, it is a problem that commonLabels can't be changed over time.
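To illustrate the immutability problem (a sketch; label names and values are hypothetical): with commonLabels set, kustomize injects the label into the Deployment's selector, and since apps/v1 rejects selector changes, any later change to commonLabels makes the apply fail.

```yaml
# With `commonLabels: {team: a}`, kustomize emits roughly:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    team: a
spec:
  selector:
    matchLabels:
      team: a      # injected; apps/v1 forbids changing this later
  template:
    metadata:
      labels:
        team: a
```

Changing commonLabels to `team: b` afterwards produces a selector update that the API server rejects.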

We will shortly start a design for a new approach to declaring path configs for different transformers. The desire to exclude some field when applying a transformer will be included in this design. The design is tracked by https://github.com/kubernetes-sigs/kustomize/issues/91

We changed pathConfig to fieldSpec. Need to document how fieldSpec can work with this.

Does fieldSpec only match by GVK, or would it allow distinguishing resources based on their name too?

I don't think commonLabels should add to ANY selector. apps/v1 doesn't allow changes to selectors, and changes to deployments/ds in extensions will cause orphaned daemon sets. It should be opt in to add label selectors for common labels as if you don't know what you're doing you're very liable to end up with a lot of orphaned resources, or unable to deploy at all.

I also think commonLabels should NEVER be added to ANY selector, just like Benjamintf1 stated.
For me, however, the simple reason is that commonLabels are for the resources you want to deploy.
Selectors are meant to express which resource to connect to, so their labels point to foreign objects.
Adding commonLabels to selectors is simply a bug because it tampers with a given specification.

Compare this to the following example which is conceptually identical. Imagine that the commonLabels directive would add tags to SQL statements.

commonLabels:
  myTag: myValue
sql:
  SELECT a, b, c FROM mytable

In this example, the SQL statement as such is the resource under our auspices and it should receive a label. A possible result could be:
SELECT /* myTag: myValue */ a, b, c FROM mytable

But would it make sense if the labels modified the name of the target table "mytable"?
Obviously not, because that table is not our resource: it is not owned by us, and it has been given a name/identity that is fixed. By rewriting this name into "mytable_myTag=myValue" we would end up referencing a table that does not exist.
With Kubernetes, it's the same story. The selector is just a pointer to a foreign resource which, for all intents and purposes, has nothing to do with the resources we are controlling in our current scope.

The selector is just a pointer to a foreign resource which, for all intents and purposes, has nothing to do with the resources we are controlling in our current scope.

I disagree. A selector can point either to the resources that are in the Kustomization scope or those that are not. Both are valid options. If Kustomize did not extend the selectors, it could not be used to deploy multiple overlays in the same namespace.
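For illustration, this is why the selector extension matters (hypothetical names and labels): when two overlays of the same base are deployed into one namespace, a Service only routes to its own overlay's pods because kustomize adds a distinguishing commonLabel to both the pods and the Service selector.

```yaml
# Overlay "staging" sets `commonLabels: {variant: staging}`;
# the rendered Service selector then matches only that overlay's pods:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    variant: staging
spec:
  selector:
    app: my-app
    variant: staging   # without this, pods from both overlays would match
```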

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Hello,

I'm facing the same issue of matchLabels interfering with my NetworkPolicy definition, and I'm confused by the state of things. I see the selectors are indeed still targeted: https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/transformers/config/defaultconfig/commonlabels.go#L148. But I'm wondering if there is a workaround, or if I basically have to drop commonLabels altogether?

Thanks,

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

How is this rotten?

What ??

The robot changed the lifecycle automatically but the label is a bit harsh :)

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

/reopen :)

@nlamirault: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
