Kustomize: volumeMount names being overwritten when using patchesStrategicMerge

Created on 18 Oct 2020 · 3 comments · Source: kubernetes-sigs/kustomize

Describe the bug

I have a deployment with several volume mounts on the same path: one from a PV, one from a secret, and one from a config map. I also have an overlay that injects an extra container to perform backups of one of the volumes, so it too has a volume mount for the PV.

When I build the resources using kustomize build, I find that in the output all of the volume mounts' names have been overwritten to an incorrect value.

Files that can reproduce the issue

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-assistant
  labels:
    app: home-assistant
spec:
  selector:
    matchLabels:
      app: home-assistant
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: home-assistant
    spec:
      containers:
      - image: homeassistant/home-assistant:latest
        name: home-assistant
        readinessProbe:
          httpGet:
            path: /
            port: 8123
          initialDelaySeconds: 5
        volumeMounts:
        - name: home-assistant
          mountPath: /config
        - name: home-assistant-config
          mountPath: /config
        - name: home-assistant-secret
          mountPath: /config
      volumes:
      - name: home-assistant-config
        configMap:
          name: home-assistant
      - name: home-assistant-secret
        secret:
          secretName: home-assistant
      - name: home-assistant
        persistentVolumeClaim:
          claimName: home-assistant

overlay.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-assistant
spec:
  template:
    spec:
      imagePullSecrets:
      - name: registry
      containers:
      - image: ghcr.io/davidsbond/homelab:latest
        name: volume-backup
        command:
        - /bin/volume-backup
        env:
        - name: BUCKET_DSN
          value: s3://volumes?endpoint=minio.storage.svc.cluster.local:9000&region=none&s3ForcePathStyle=true&disableSSL=true
        - name: BUCKET_DIR
          value: home-assistant
        - name: VOLUME_DIR
          value: /data
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              key: minio.access_key
              name: minio
        - name: AWS_SECRET_KEY
          valueFrom:
            secretKeyRef:
              key: minio.secret_key
              name: minio
        - name: TRACER_DISABLED
          value: 'true'
        readinessProbe:
          httpGet:
            path: /__/health
            port: 8081
          initialDelaySeconds: 5
        volumeMounts:
        - name: home-assistant
          mountPath: /data
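
The kustomization.yaml is not included above; a minimal one consistent with the output below would look roughly like the sketch here. The namespace and file references are assumptions, and the configMapGenerator/secretGenerator entries that produce the hashed names (e.g. minio-9ft9m4chfm) are omitted.

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: utilities   # assumed from the namespace in the output below
resources:
- deployment.yaml
patchesStrategicMerge:
- overlay.yaml
# configMapGenerator/secretGenerator entries omitted; they would account
# for the hashed suffixes on the minio, registry, and home-assistant names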

Expected output

output.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: home-assistant
  name: home-assistant
  namespace: utilities
spec:
  replicas: 1
  selector:
    matchLabels:
      app: home-assistant
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: home-assistant
    spec:
      containers:
      - image: homeassistant/home-assistant:latest
        name: home-assistant
        readinessProbe:
          httpGet:
            path: /
            port: 8123
          initialDelaySeconds: 5
        volumeMounts:
        - mountPath: /config
          name: home-assistant-config
        - mountPath: /config
          name: home-assistant-secret
        - mountPath: /config
          name: home-assistant
      - command:
        - /bin/volume-backup
        env:
        - name: BUCKET_DSN
          value: s3://volumes?endpoint=minio.storage.svc.cluster.local:9000&region=none&s3ForcePathStyle=true&disableSSL=true
        - name: BUCKET_DIR
          value: home-assistant
        - name: VOLUME_DIR
          value: /data
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              key: minio.access_key
              name: minio-9ft9m4chfm
        - name: AWS_SECRET_KEY
          valueFrom:
            secretKeyRef:
              key: minio.secret_key
              name: minio-9ft9m4chfm
        - name: TRACER_DISABLED
          value: "true"
        image: ghcr.io/davidsbond/homelab:latest
        name: volume-backup
        readinessProbe:
          httpGet:
            path: /__/health
            port: 8081
          initialDelaySeconds: 5
        volumeMounts:
        - mountPath: /data
          name: home-assistant
      imagePullSecrets:
      - name: registry-9cdggtddk4
      volumes:
      - configMap:
          name: home-assistant-m99f77d5hh
        name: home-assistant-config
      - name: home-assistant-secret
        secret:
          secretName: home-assistant-t48272ct9t
      - name: home-assistant
        persistentVolumeClaim:
          claimName: home-assistant

Actual output

output.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: home-assistant
  name: home-assistant
  namespace: utilities
spec:
  replicas: 1
  selector:
    matchLabels:
      app: home-assistant
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: home-assistant
    spec:
      containers:
      - image: homeassistant/home-assistant:latest
        name: home-assistant
        readinessProbe:
          httpGet:
            path: /
            port: 8123
          initialDelaySeconds: 5
        volumeMounts:
        - mountPath: /config
          name: home-assistant
        - mountPath: /config
          name: home-assistant
        - mountPath: /config
          name: home-assistant
      - command:
        - /bin/volume-backup
        env:
        - name: BUCKET_DSN
          value: s3://volumes?endpoint=minio.storage.svc.cluster.local:9000&region=none&s3ForcePathStyle=true&disableSSL=true
        - name: BUCKET_DIR
          value: home-assistant
        - name: VOLUME_DIR
          value: /data
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              key: minio.access_key
              name: minio-9ft9m4chfm
        - name: AWS_SECRET_KEY
          valueFrom:
            secretKeyRef:
              key: minio.secret_key
              name: minio-9ft9m4chfm
        - name: TRACER_DISABLED
          value: "true"
        image: ghcr.io/davidsbond/homelab:latest
        name: volume-backup
        readinessProbe:
          httpGet:
            path: /__/health
            port: 8081
          initialDelaySeconds: 5
        volumeMounts:
        - mountPath: /data
          name: home-assistant
      imagePullSecrets:
      - name: registry-9cdggtddk4
      volumes:
      - configMap:
          name: home-assistant-m99f77d5hh
        name: home-assistant-config
      - name: home-assistant-secret
        secret:
          secretName: home-assistant-t48272ct9t
      - name: home-assistant
        persistentVolumeClaim:
          claimName: home-assistant

Notice that in the first container's volumeMounts, all three mounts now use the same volume name.

Kustomize version

{Version:kustomize/v3.8.4 GitCommit:8285af8cf11c0b202be533e02b88e114ad61c1a9 BuildDate:2020-09-19T15:39:21Z GoOs:linux GoArch:amd64}

Platform

Linux, amd64

Additional context

The issue persists when I remove the volumeMounts from the overlay file. If I do not use the overlay at all, the output is as expected.

Labels: area/kyaml, area/openapi, kind/bug, priority/important-longterm

All 3 comments

This is the same as #2767. mountPath is the merge key, and items with the same merge key will be merged. We should have a solution similar to #3111.
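
As a simplified sketch of the effect: all three base entries share the merge key value /config, so the merge machinery cannot tell them apart, and one element's name ends up written into every item that matches the key (this is an inference from the output above, not a walkthrough of the kyaml internals):

# base: three distinct volume names, but a single merge key value (/config)
volumeMounts:
- name: home-assistant
  mountPath: /config
- name: home-assistant-config
  mountPath: /config
- name: home-assistant-secret
  mountPath: /config

# after the strategic merge keyed on mountPath, every /config entry
# resolves to the same element, so the names collapse:
volumeMounts:
- name: home-assistant
  mountPath: /config
- name: home-assistant
  mountPath: /config
- name: home-assistant
  mountPath: /config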

@monopole @natasha41575

volumeMounts does not have an x-kubernetes-list-map-keys annotation, so merging falls back on x-kubernetes-patch-merge-key, which is mountPath. The current schema doesn't allow multiple volumeMounts with the same mountPath and different names, so the solution for #3111 (and PR #3159) will not fix this issue.
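
For reference, the Container.volumeMounts field in the Kubernetes OpenAPI schema carries only the patch-merge-key annotations (shown here in YAML form; the upstream swagger.json is JSON, and the description field is elided):

volumeMounts:
  items:
    $ref: '#/definitions/io.k8s.api.core.v1.VolumeMount'
  type: array
  x-kubernetes-patch-merge-key: mountPath
  x-kubernetes-patch-strategy: merge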

@monopole @apelisse what do you think? Is this correct behavior?

@natasha41575 I think you're correct; I forgot to come back to this issue after solving my problem. What I should be doing instead is using subPath, like so:

        volumeMounts:
        - mountPath: /config
          name: home-assistant
        - mountPath: /config/configuration.yaml
          name: home-assistant-config
          subPath: configuration.yaml
        - mountPath: /config/scripts.yaml
          name: home-assistant-config
          subPath: scripts.yaml
        - mountPath: /config/secrets.yaml
          name: home-assistant-secret
          subPath: secrets.yaml
        - mountPath: /config/automations.yaml
          name: home-assistant-config
          subPath: automations.yaml
        - mountPath: /config/groups.yaml
          name: home-assistant-config
          subPath: groups.yaml

This allows me to mount the files individually from the config map, and it doesn't have the issue described above. It's probably more the fault of kubectl for not warning me that I was doing something wrong, since when I look at the actual state in Kubernetes it has merged all of those volume mounts together. This issue can likely just be closed.
