When a Deployment container defines two differently named ports that share the same containerPort value, a PatchTransformer renames the second port to match the first.
Affected versions:
{Version:kustomize/v3.8.0 GitCommit:6a50372dd5686df22750b0c729adaf369fbf193c BuildDate:2020-07-05T14:08:42Z GoOs:linux GoArch:amd64}
{Version:kustomize/v3.8.1 GitCommit:0b359d0ef0272e6545eda0e99aacd63aef99c4d0 BuildDate:2020-07-16T00:58:46Z GoOs:linux GoArch:amd64}
kustomization.yaml
resources:
- resources.yaml
transformers:
- transformers.yaml
resources.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  template:
    spec:
      containers:
      - image: test-image
        name: test-deployment
        ports:
        - containerPort: 8080
          name: take-over-the-world
          protocol: TCP
        - containerPort: 8080
          name: disappearing-act
          protocol: TCP
transformers.yaml
---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: test-transformer
patch: |-
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test-transformer
    labels:
      test-transformer: did-my-job
target:
  kind: Deployment
  name: test-deployment
This is the behavior seen in v3.7.0. The two port entries have distinct names, reflecting their definitions in resources.yaml.
$ ./kustomize-v3.7.0 version
{Version:kustomize/v3.7.0 GitCommit:42d1f7b792a80af34828ec4c89af99e1043351a7 BuildDate:2020-07-04T19:15:46Z GoOs:linux GoArch:amd64}
$ ./kustomize-v3.7.0 build .
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    test-transformer: did-my-job
  name: test-deployment
spec:
  template:
    spec:
      containers:
      - image: test-image
        name: test-deployment
        ports:
        - containerPort: 8080
          name: take-over-the-world
          protocol: TCP
        - containerPort: 8080
          name: disappearing-act
          protocol: TCP
$
This is the behavior seen in v3.8.0 and v3.8.1. The two port entries have matching names.
$ ./kustomize-v3.8.1 version
{Version:kustomize/v3.8.1 GitCommit:0b359d0ef0272e6545eda0e99aacd63aef99c4d0 BuildDate:2020-07-16T00:58:46Z GoOs:linux GoArch:amd64}
$ ./kustomize-v3.8.1 build .
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    test-transformer: did-my-job
  name: test-deployment
spec:
  template:
    spec:
      containers:
      - image: test-image
        name: test-deployment
        ports:
        - containerPort: 8080
          name: take-over-the-world
          protocol: TCP
        - containerPort: 8080
          name: take-over-the-world
          protocol: TCP
$
Thanks again for regression reports!
We'd not need them if we had tests in place for these cases.
e.g. a test in sigs.k8s.io/kustomize/plugin/builtin/patchtransformer/PatchTransformer_test.go
Any chance you could add a test that defines the current behavior? Then a fix can change that.
Happy to contribute some; I've asked before and not gotten an answer. If something is broken, is it OK to submit a test that currently "fails" but will "pass" once the issue it covers is corrected? It's a chicken-and-egg question :-)
My A/B test is to submit two different tests (for two different sets of problems). One "passes" but documents current broken behaviour; the other fails because it expects the correct answer.
I will happily fix the polarity of the incorrect one. I suspect you'll prefer https://github.com/kubernetes-sigs/kustomize/pull/2777 and reject https://github.com/kubernetes-sigs/kustomize/pull/2776 until it documents the current broken results - please advise.
Sorry, was out last week.
It's fine and desirable to merge a test that passes with undesirable (issue related) behavior.
That way when a purported fix goes in, it will have to edit the tests in the same PR, making the impact of the PR clear, and providing regression coverage pointing to an issue discussing the behavior.
I am not sure whether the current behavior is expected. From the k8s API doc, ports should be merged according to the value of containerPort. However, I can't find an exact definition of how the merge works. What I did find is the "Merging Lists of Objects" section on this page:
Merge Key: The patch merge key is used to identify same elements in a list. Unlike map elements (keyed by key) and object fields (keyed by field name), lists don't have a built-in merge identity for elements (index does not define identity). Instead an object field is used as a synthetic key/value for merging elements. This field is the patch merge key. List elements with the same patch merge key will be merged when lists are merged.
At least in the kyaml implementation, ports with the same containerPort are merged (containerPort is the merge key). What actually happens is not that the name is replaced, but that the entire port definition object is replaced by the later one in the ports list.
My understanding is that k8s supports multiple port definitions that have matching containerPort values. With that, I am surprised that the patch merge key is the containerPort, since that key is at odds with the notion of multiple port definitions.
I'm also surprised that a patch that applies a label would alter the Deployment's ports.
Applying a patch is now implemented by merging the two resources. Merging the ports is an implicit step of that merge.
I follow your explanation about the technical reason for the behavior. I still find the port merge surprising. The patch is for a label; the ports that are getting merged are from a single resource. For me, the port merge feels more like an unexpected side effect than it does an intentional modification.
@monopole What do you think about this problem? If there are multiple container ports with the same port number in the input file, kyaml merge2 will implicitly merge these ports regardless of what the patch is. This affects all code that calls merge2 in kyaml.
Not only ports: I think all objects that can be merged will be merged implicitly.
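The mechanism described above can be sketched in miniature. The Go below is a hypothetical illustration (not kustomize's actual merge2 code) of merging a ports list whose patch merge key is containerPort: the merge walks the list keyed on that one field, so when two elements share a containerPort, the later element replaces the earlier one wholesale, even when the patch never touches ports at all.

```go
package main

import "fmt"

// Port mirrors the fields of a Kubernetes containerPort entry.
type Port struct {
	ContainerPort int
	Name          string
	Protocol      string
}

// mergeByKey is a toy model of list merging with a patch merge key:
// elements are identified by ContainerPort alone, and when two elements
// share that key, the later one replaces the earlier one entirely.
func mergeByKey(dest, src []Port) []Port {
	out := []Port{}
	index := map[int]int{} // containerPort -> position in out
	for _, p := range append(append([]Port{}, dest...), src...) {
		if i, ok := index[p.ContainerPort]; ok {
			out[i] = p // same merge key: later element wins entirely
			continue
		}
		index[p.ContainerPort] = len(out)
		out = append(out, p)
	}
	return out
}

func main() {
	ports := []Port{
		{8301, "serflan-tcp", "TCP"},
		{8301, "serflan-udp", "UDP"},
	}
	// Even with an empty patch, walking the list by merge key collapses
	// the two same-port entries into one (the later entry survives).
	fmt.Println(mergeByKey(ports, nil))
}
```

This is why protocol and name are lost along with everything else: identity is decided by containerPort alone, and the merge has no way to tell intentionally distinct TCP/UDP entries apart.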
What about cases with multiple protocols that use the same port number (an example from consul)? Is there a workaround?
- containerPort: 8301
  name: serflan-tcp
  protocol: TCP
- containerPort: 8301
  name: serflan-udp
  protocol: UDP
@ephesused Sorry, currently there is no workaround. We are working on a quick fix.
Is there any update on this issue? I am also running into this for the consul helm chart - it's not just the port name that gets changed, but the protocol as well.
{Version:kustomize/v3.8.2 GitCommit:e2973f6ecc9be6187cfd5ecf5e180f842249b3c6 BuildDate:2020-08-29T17:44:01Z GoOs:linux GoArch:amd64}
client-daemonset.yaml
# Source: consul/templates/client-daemonset.yaml
# DaemonSet to run the Consul clients on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: consul
  namespace: consul-k8s
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: kubestack
spec:
  selector:
    matchLabels:
      app: consul
      chart: consul-helm
      release: kubestack
      component: client
      hasDNS: "true"
  template:
    metadata:
      labels:
        app: consul
        chart: consul-helm
        release: kubestack
        component: client
        hasDNS: "true"
      annotations:
        "consul.hashicorp.com/connect-inject": "false"
        "consul.hashicorp.com/config-checksum": ca3d163bab055381827226140568f3bef7eaac187cebd76878e0b63e9e442356
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: consul-client
      volumes:
      - name: data
        emptyDir: {}
      - name: config
        configMap:
          name: consul-client-config
      containers:
      - name: consul
        image: "consul:1.8.2"
        env:
        - name: ADVERTISE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: GOSSIP_KEY
          valueFrom:
            secretKeyRef:
              name: consul-encryption-secret
              key: encryption-secret
        command:
        - "/bin/sh"
        - "-ec"
        - |
          CONSUL_FULLNAME="consul"
          exec /bin/consul agent \
            -node="${NODE}" \
            -advertise="${ADVERTISE_IP}" \
            -bind=0.0.0.0 \
            -client=0.0.0.0 \
            -node-meta=pod-name:${HOSTNAME} \
            -hcl='leave_on_terminate = true' \
            -hcl='ports { grpc = 8502 }' \
            -config-dir=/consul/config \
            -datacenter=prod-us-central1 \
            -data-dir=/consul/data \
            -encrypt="${GOSSIP_KEY}" \
            -retry-join="provider=gce tag_value=prod-us-central1-consul-cluster zone_pattern=us-central1-a" \
            -domain=consul
        volumeMounts:
        - name: data
          mountPath: /consul/data
        - name: config
          mountPath: /consul/config
        ports:
        - containerPort: 8500
          hostPort: 8500
          name: http
        - containerPort: 8502
          hostPort: 8502
          name: grpc
        - containerPort: 8301
          protocol: "TCP"
          name: serflan-tcp
        - containerPort: 8301
          protocol: "UDP"
          name: serflan-udp
        - containerPort: 8302
          name: serfwan
        - containerPort: 8300
          name: server
        - containerPort: 8600
          name: dns-tcp
          protocol: "TCP"
        - containerPort: 8600
          name: dns-udp
          protocol: "UDP"
        readinessProbe:
          # NOTE(mitchellh): when our HTTP status endpoints support the
          # proper status codes, we should switch to that. This is temporary.
          exec:
            command:
            - "/bin/sh"
            - "-ec"
            - |
              curl http://127.0.0.1:8500/v1/status/leader \
              2>/dev/null | grep -E '".+"'
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- client-daemonset.yaml
patchesStrategicMerge:
- toleration-client.yaml
toleration-client.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: consul
  namespace: consul-k8s
spec:
  template:
    spec:
      tolerations:
      - key: cloud.google.com/gke-preemptible
        operator: Equal
        value: "true"
        effect: NoSchedule
kustomize build output
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: kubestack
  name: consul
  namespace: consul-k8s
spec:
  selector:
    matchLabels:
      app: consul
      chart: consul-helm
      component: client
      hasDNS: "true"
      release: kubestack
  template:
    metadata:
      annotations:
        consul.hashicorp.com/config-checksum: ca3d163bab055381827226140568f3bef7eaac187cebd76878e0b63e9e442356
        consul.hashicorp.com/connect-inject: "false"
      labels:
        app: consul
        chart: consul-helm
        component: client
        hasDNS: "true"
        release: kubestack
    spec:
      containers:
      - command:
        - /bin/sh
        - -ec
        - |
          CONSUL_FULLNAME="consul"
          exec /bin/consul agent \
            -node="${NODE}" \
            -advertise="${ADVERTISE_IP}" \
            -bind=0.0.0.0 \
            -client=0.0.0.0 \
            -node-meta=pod-name:${HOSTNAME} \
            -hcl='leave_on_terminate = true' \
            -hcl='ports { grpc = 8502 }' \
            -config-dir=/consul/config \
            -datacenter=prod-us-central1 \
            -data-dir=/consul/data \
            -encrypt="${GOSSIP_KEY}" \
            -retry-join="provider=gce tag_value=prod-us-central1-consul-cluster zone_pattern=us-central1-a" \
            -domain=consul
        env:
        - name: ADVERTISE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: GOSSIP_KEY
          valueFrom:
            secretKeyRef:
              key: encryption-secret
              name: consul-encryption-secret
        image: consul:1.8.2
        name: consul
        ports:
        - containerPort: 8500
          hostPort: 8500
          name: http
        - containerPort: 8502
          hostPort: 8502
          name: grpc
        - containerPort: 8301
          name: serflan-tcp
          protocol: TCP
        - containerPort: 8301
          name: serflan-tcp
          protocol: TCP
        - containerPort: 8302
          name: serfwan
        - containerPort: 8300
          name: server
        - containerPort: 8600
          name: dns-tcp
          protocol: TCP
        - containerPort: 8600
          name: dns-tcp
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -ec
            - |
              curl http://127.0.0.1:8500/v1/status/leader \
              2>/dev/null | grep -E '".+"'
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - mountPath: /consul/data
          name: data
        - mountPath: /consul/config
          name: config
      serviceAccountName: consul-client
      terminationGracePeriodSeconds: 10
      tolerations:
      - effect: NoSchedule
        key: cloud.google.com/gke-preemptible
        operator: Equal
        value: "true"
      volumes:
      - emptyDir: {}
        name: data
      - configMap:
          name: consul-client-config
        name: config
Top of the list https://github.com/kubernetes-sigs/kustomize/projects/2
@natasha41575
My nightly comparison runs show a recent behavior change at master for this scenario. The change arrived sometime between 1d4b3fa3 and 460c5406.
I'll investigate further and post back in a little while.
With recent master runs (such as in 460c5406), the test case behavior now results in one of the two port definitions getting eliminated:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    test-transformer: did-my-job
  name: test-deployment
spec:
  template:
    spec:
      containers:
      - image: test-image
        name: test-deployment
        ports:
        - containerPort: 8080
          name: disappearing-act
          protocol: TCP
@ephesused Yes, I expect that result, since the merge key problem hasn't been fixed yet. @natasha41575 is working on a solution with a modified schema.
Fixed by https://github.com/kubernetes-sigs/kustomize/pull/3065
We're working on a process to update the OpenAPI data at the core of this issue.
Thanks! My nightly run (which used 0a1fde1e) showed the fix is in place and working as expected.
For posterity...
With the arrival of #3159, the test case now drops disappearing-act. The behavior is expected, given the discussions and the corresponding implementation. The test case results now are:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    test-transformer: did-my-job
  name: test-deployment
spec:
  template:
    spec:
      containers:
      - image: test-image
        name: test-deployment
        ports:
        - containerPort: 8080
          name: take-over-the-world
          protocol: TCP