Kustomize: commonLabels creates volumeClaimTemplates in StatefulSet

Created on 4 Jan 2019 · 7 comments · Source: kubernetes-sigs/kustomize

Setting commonLabels on a StatefulSet produces a malformed volumeClaimTemplates field, populated only with metadata containing the labels, even though the input manifest doesn't define the field at all.

Version: commit:746c7b0b5b6c7661c160cdeb62390ef0bb5ac4fe (Dec 30, 2018)

Steps to reproduce:

Run kustomize build ./

kustomization.yaml

commonLabels:
  api: crew.example.com
resources:
- manager/manager.yaml

manager/manager.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: controller-manager
  labels:
    control-plane: controller-manager
    controller-tools.k8s.io: "1.0"
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
      controller-tools.k8s.io: "1.0"
  serviceName: controller-manager-service
  template:
    metadata:
      labels:
        control-plane: controller-manager
        controller-tools.k8s.io: "1.0"
    spec:
      containers:
      - command:
        - /manager
        image: controller:latest
        imagePullPolicy: Always
        name: manager
        env:
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: SECRET_NAME
            value: $(WEBHOOK_SECRET_NAME)
        resources:
          limits:
            cpu: 100m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 20Mi
        ports:
        - containerPort: 9876
          name: webhook-server
          protocol: TCP
        volumeMounts:
        - mountPath: /tmp/cert
          name: cert
          readOnly: true
      terminationGracePeriodSeconds: 10
      volumes:
      - name: cert
        secret:
          defaultMode: 420
          secretName: webhook-server-secret

output:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    api: crew.example.com
    control-plane: controller-manager
    controller-tools.k8s.io: "1.0"
  name: controller-manager
spec:
  selector:
    matchLabels:
      api: crew.example.com
      control-plane: controller-manager
      controller-tools.k8s.io: "1.0"
  serviceName: controller-manager-service
  template:
    metadata:
      labels:
        api: crew.example.com
        control-plane: controller-manager
        controller-tools.k8s.io: "1.0"
    spec:
      containers:
      - command:
        - /manager
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: SECRET_NAME
          value: $(WEBHOOK_SECRET_NAME)
        image: controller:latest
        imagePullPolicy: Always
        name: manager
        ports:
        - containerPort: 9876
          name: webhook-server
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 20Mi
        volumeMounts:
        - mountPath: /tmp/cert
          name: cert
          readOnly: true
      terminationGracePeriodSeconds: 10
      volumes:
      - name: cert
        secret:
          defaultMode: 420
          secretName: webhook-server-secret
  volumeClaimTemplates: # this shouldn't exist
    metadata:
      labels:
        api: crew.example.com
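
For reference, volumeClaimTemplates in a StatefulSet spec is a list of PersistentVolumeClaim templates, not a map, so a well-formed entry (hypothetical name and size) would look like this:

  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

The map that kustomize emits here is what the API server later rejects with "cannot restore slice from map".
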
kind/bug lifecycle/rotten

Most helpful comment

As a workaround, if you specify volumeClaimTemplates: [] in the input document, Kustomize will leave it alone.

All 7 comments

Same issue as #504. @twz123 proposed a solution in #610.

As a workaround, if you specify volumeClaimTemplates: [] in the input document, Kustomize will leave it alone.
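
A minimal sketch of that workaround applied to the manager.yaml from the issue: declaring the field as an explicit empty list keeps kustomize from synthesizing it (selector, template, and the rest of the spec stay as in the original manifest).

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: controller-manager
spec:
  serviceName: controller-manager-service
  # selector, template, etc. unchanged from the original manifest
  volumeClaimTemplates: []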

:+1: This has a nasty side effect in kubectl too:

(datascience:cluster-commons) (INFRA-139 $%=) giuseppe:datascience$ kubectl diff -k . -v=4
I0611 17:29:39.169314   24783 helpers.go:196] server response object: [{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "cannot restore slice from map",
  "code": 500
}]
F0611 17:29:39.169393   24783 helpers.go:114] Error from server: cannot restore slice from map

The suggested workaround solves the issue.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
