I'm omitting files that are not relevant to the issue.
I have set a `commonLabels` label `app: idp`, but Kustomize also uses it to overwrite my network policy selectors: the `commonLabels` entries get merged into `podSelector.matchLabels` (`app: logstash`).
I expected the labels to be applied to the metadata alone and not spread into the network policy spec. If this is the expected behavior, what would be the recommended way to preserve the `matchLabels` defined in my network policies?
This is a partial representation of my setup.
```
idp/
|-- base/
|   |-- kustomization.yaml
|   |-- network.yaml
|   |-- [...]
|-- overlay/ti/
    |-- kustomization.yaml
    |-- [...]
```
`idp/base/kustomization.yaml`:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../../base
commonLabels:
  app: idp
resources:
- network.yaml
- service.yaml
- deployment.yaml
```
`idp/base/network.yaml`:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-egress-idp
spec:
  podSelector:
    matchLabels:
      app: idp
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: logging
      podSelector:
        matchLabels:
          app: logstash
    ports:
    - port: 5000
  - to:
    - ipBlock:
        cidr: 10.64.8.40/32
    ports:
    - port: 443
```
`overlay/ti/kustomization.yaml`:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
namespace: cba
commonLabels:
  env: ti
patchesStrategicMerge:
- image.yaml
- replicas.yaml
- memory.yaml
```
`kubectl get netpol allow-egress-idp -n cba -o yaml`:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"NetworkPolicy","metadata":{"annotations":{},"labels":{"app":"idp","env":"ti"},"name":"allow-egress-idp","namespace":"cba"},"spec":{"egress":[{"ports":[{"port":5000}],"to":[{"namespaceSelector":{"matchLabels":{"name":"logging"}},"podSelector":{"matchLabels":{"app":"idp","env":"ti"}}}]},{"ports":[{"port":443}],"to":[{"ipBlock":{"cidr":"10.64.8.40/32"}}]}],"podSelector":{"matchLabels":{"app":"idp","env":"ti"}},"policyTypes":["Egress"]}}
  creationTimestamp: "2020-01-06T12:11:20Z"
  generation: 3
  labels:
    app: idp
    env: ti
  name: allow-egress-idp
  namespace: cba
  resourceVersion: "7933785"
  selfLink: /apis/networking.k8s.io/v1/namespaces/cba/networkpolicies/allow-egress-idp
  uid: 8c07543a-b6a1-4448-a784-2093095d42bf
spec:
  egress:
  - ports:
    - port: 5000
      protocol: TCP
    to:
    - namespaceSelector:
        matchLabels:
          name: logging
      podSelector:
        matchLabels:
          app: idp
          env: ti
  - ports:
    - port: 443
      protocol: TCP
    to:
    - ipBlock:
        cidr: 10.64.8.40/32
  podSelector:
    matchLabels:
      app: idp
      env: ti
  policyTypes:
  - Egress
```
Same behaviour for me.

My `kustomization.yaml` file:

```yaml
commonLabels:
  managed-by: foo
  part-of: bar
resources:
- toto-networkpolicy.yaml
```
My `toto-networkpolicy.yaml` file:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: toto-network-policy
spec:
  podSelector:
    matchLabels:
      app: toto-server
  policyTypes:
  - Egress
  - Ingress
```
The command

```shell
kubectl kustomize test-toto-network-policy/
```

generates the following YAML:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    managed-by: foo
    part-of: bar
  name: toto-network-policy
spec:
  podSelector:
    matchLabels:
      app: toto-server
      managed-by: foo
      part-of: bar
  policyTypes:
  - Egress
  - Ingress
```
whereas I would have expected:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    managed-by: foo
    part-of: bar
  name: toto-network-policy
spec:
  podSelector:
    matchLabels:
      app: toto-server
  policyTypes:
  - Egress
  - Ingress
```
I have the same issue :(. The difference is that I have two levels of Kustomize: the first level is a specification for a generic type of NetworkPolicy, and the second level defines the specific instance.
It looks like this:
```
|-- base
|   |-- deployment.yaml
|   |-- network-policy.yaml      // base NetworkPolicy
|-- overlays
|   |-- xs
|   |   |-- kustomize.yaml       // kustomize A
|   |   |-- network-policy.yaml  // patch A
|   |-- s
|   |   |-- kustomize.yaml and network-policy.yaml
|   |-- ...
inventory
|-- xs-1
|   |-- kustomize.yaml       // kustomize B
|   |-- network-policy.yaml  // patch B
|-- s-1
|-- ...
```
My inventory defines my different instances.
My Kustomize version: `{Version:kustomize/v3.5.4 GitCommit:3af514fa9f85430f0c1557c4a0291e62112ab026 BuildDate:2020-01-11T03:12:59Z GoOs:linux GoArch:amd64}`
The "base NetworkPolicy":

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: "access-postgres"
  namespace: "product"
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: "postgres"
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: "pgadmin"
      namespaceSelector:
        matchLabels:
          name: "admin"
```
The "Kustomize A":

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- "../../base"
patchesStrategicMerge:
- "network-policy.yaml"
nameSuffix: "-xs"
commonLabels:
  product.com/type: "xs"
```
The "Patch A":

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: "access-postgres"
  namespace: "product"
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: "postgres"
      company.com/type: "xs"
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: "product"
          company.com/type: "xs"
```
The "Kustomize B":

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- "../../definition/overlays/xs"
patchesStrategicMerge:
- "network-policy.yaml"
nameSuffix: "-1"
commonLabels:
  product.com/instance: "1"
```
The "Patch B":

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: "access-postgres"
  namespace: "product"
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: "postgres"
      product.com/instance: "1"
```
I have this result for the NetworkPolicy part:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    app.kubernetes.io/name: postgres
    product.com/instance: "1"
    product.com/type: xs
  name: access-postgres-xs-1
  namespace: product
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: product
          product.com/instance: "1"
          product.com/type: xs
  podSelector:
    matchLabels:
      app.kubernetes.io/name: postgres
      product.com/instance: "1"
      product.com/type: xs
```
where I was expecting (as a diff):

```diff
 apiVersion: networking.k8s.io/v1
 kind: NetworkPolicy
 metadata:
   labels:
     app.kubernetes.io/name: postgres
     product.com/instance: "1"
     product.com/type: xs
   name: access-postgres-xs-1
   namespace: product
 spec:
   ingress:
+  - from:
+    - podSelector:
+        matchLabels:
+          app.kubernetes.io/name: pgadmin
+      namespaceSelector:
+        matchLabels:
+          name: "admin"
   - from:
     - podSelector:
         matchLabels:
           app.kubernetes.io/name: product
-          product.com/instance: "1"
           product.com/type: xs
   podSelector:
     matchLabels:
       app.kubernetes.io/name: postgres
       product.com/instance: "1"
       product.com/type: xs
```
I don't know why the first `- from` entry in `ingress` disappears. As for the actual issue: the `matchLabels` at `/spec/ingress/0/from/0/podSelector/matchLabels` in the result has a `product.com/instance: "1"` added by the `commonLabels` configuration in "Kustomize B".
After digging into the sources, I found this is entirely intended and not a bug.

commonlabels.go (#077c7b2d20 L135):

```yaml
- path: spec/podSelector/matchLabels
  create: false
  group: networking.k8s.io
  kind: NetworkPolicy
- path: spec/ingress/from/podSelector/matchLabels
  create: false
  group: networking.k8s.io
  kind: NetworkPolicy
```
I don't think the specs are going to change for this feature.
@michaelkrupp said in his comment (issue #1459):

> This bug has been repeatedly reported over the last 6 months now, with every single issue going rotten without any clear statement from any maintainer.

I disagree with describing this as a bug, but the fact is that the maintainers didn't provide any workaround.
For now, I would advise avoiding kustomize's `commonLabels` when you're also using `podSelector.matchLabels` and don't want them altered.
This is a more specific variant of issue https://github.com/kubernetes-sigs/kustomize/issues/157. In some settings it makes sense for commonLabels to be included in selectors, and in some settings it does not. Kustomize includes them by default, and there is no way to opt out.
There is a stupid workaround: convert `matchLabels` to `matchExpressions` and Kustomize won't touch them. API docs

Example:

```yaml
- podSelector:
    matchLabels:
      app: mongodb-backup
```

is equivalent to

```yaml
- podSelector:
    matchExpressions:
    - key: app
      operator: In
      values:
      - mongodb-backup
```

and Kustomize will keep its hands off.
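Applied to the egress rule from the first post, the rewrite might look like this (a sketch showing only the logstash selector; the rest of the policy is unchanged):

```yaml
egress:
- to:
  - namespaceSelector:
      matchLabels:
        name: logging
    podSelector:
      matchExpressions:
      - key: app
        operator: In
        values:
        - logstash
  ports:
  - port: 5000
```

Because the `commonLabels` fieldspecs target `matchLabels` paths, a `matchExpressions` selector should be left alone while selecting the same pods.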
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Still valid
/remove-lifecycle stale
Sorry for no response to this issue for a long time. @Sryther is correct, we cannot change the default configs just for this situation. However, you can explicitly use LabelTransformer instead of using commonLabels in kustomization file (which is a simplified version of LabelTransformer).
Let's use @salanfe's example since it's simpler:
```yaml
# kustomization.yaml
transformers:
- label_transformer.yaml
resources:
- toto-networkpolicy.yaml
---
# label_transformer.yaml
apiVersion: builtin
kind: LabelTransformer
metadata:
  name: notImportantHere
labels:
  managed-by: foo
  part-of: bar
fieldSpecs:
- kind: NetworkPolicy
  path: metadata/labels
  create: true
---
# toto-networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: toto-network-policy
spec:
  podSelector:
    matchLabels:
      app: toto-server
  policyTypes:
  - Egress
  - Ingress
```
Result:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    managed-by: foo
    part-of: bar
  name: toto-network-policy
spec:
  podSelector:
    matchLabels:
      app: toto-server
  policyTypes:
  - Egress
  - Ingress
```
Note: the `fieldSpecs:` in label_transformer.yaml entirely overwrites the default configs, so ONLY `metadata/labels` on NetworkPolicy resources will be updated. You need to add more field specs if you want the transformer to work on other resources.
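For instance, a transformer that also labels Deployments (metadata, pod template, but still not the selector) might look like this. This is a sketch; the Deployment entries are my own addition, not part of the original example:

```yaml
# label_transformer.yaml
apiVersion: builtin
kind: LabelTransformer
metadata:
  name: notImportantHere
labels:
  managed-by: foo
  part-of: bar
fieldSpecs:
# NetworkPolicy: labels on metadata only, selectors untouched
- kind: NetworkPolicy
  path: metadata/labels
  create: true
# Deployment: object labels and pod-template labels
- kind: Deployment
  path: metadata/labels
  create: true
- kind: Deployment
  path: spec/template/metadata/labels
  create: true
```

Omitting `spec/selector/matchLabels` from the Deployment entries keeps the selector stable, at the cost of having to keep it in sync with the pod-template labels yourself.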
The `commonLabels` field in kustomization.yaml allows an abbreviated configuration of the LabelTransformer.
It has a builtin configuration that both adds labels and modifies selectors, as pointed out in
https://github.com/kubernetes-sigs/kustomize/issues/2034#issuecomment-614549559
I disagree with @Sryther, however. This example
https://github.com/kubernetes-sigs/kustomize/blob/master/examples/configureBuiltinPlugin.md
was put up in November. It's not so much a workaround as an example of how to use the full power of plugins:
it shows how to specify the full configuration data for the LabelTransformer, to get detailed (and still declarative) control.
There are other examples. Here's a tricky one showing how to make a
custom transformer spec that's reusable in any number of kustomization files (like a base)
https://github.com/kubernetes-sigs/kustomize/blob/master/api/krusty/customconfigreusable_test.go#L54
Reopen if you have any questions; trying to help here.
A new feature request could be filed to propose a syntax to make this easier.
Could you use a patch? Patches can be applied to multiple resources.
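One way that suggestion might be realized is with a JSON 6902 patch in a separate kustomization layered on top of the one that applied `commonLabels`, stripping the injected label back out of the selector after the fact. This is an untested sketch against the first example in this thread; the `../overlay/ti` path and file names are assumptions, and transformer ordering within a single kustomization is why the extra layer is used:

```yaml
# kustomization.yaml (a new layer on top of the ti overlay)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../overlay/ti
patchesJson6902:
- target:
    group: networking.k8s.io
    version: v1
    kind: NetworkPolicy
    name: allow-egress-idp
  path: strip-selector-labels.yaml
---
# strip-selector-labels.yaml
- op: remove
  path: /spec/podSelector/matchLabels/env
- op: remove
  path: /spec/egress/0/to/0/podSelector/matchLabels/env
```

The downside is that the patch hard-codes index paths into the egress rules, so it breaks silently if the rule order changes.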