Kustomize: left-over configmap with configMapGenerator

Created on 16 Nov 2019 · 7 comments · Source: kubernetes-sigs/kustomize

Hi there,

I have a configMapGenerator as below (see the kustomization.yaml that follows). When I update any of the configuration files (conf/*.conf), the configMap gets a new hash suffix, and applying the new configuration (kustomize build | kubectl apply -f-) points the daemonset/deployment at the new configmap, but

  1. The old configmap (with the old hash suffix) is not removed; I have to delete it manually
  2. A new rollout is not triggered; I have to delete the deployment and deploy again to see new pods

I expect that after I deploy the app with the new configuration, the pods would be restarted and the old configmap removed. Am I misunderstanding something or doing something wrong?

I'm using kustomize-3.4.0

Thanks a lot.

Daemonset.yaml

See https://gist.github.com/icy/2f778f0e15ffcfa6f9b00f23b297a2a8 (it's copied from fluentbit documentation with a tiny update)

kustomization.yaml

resources:
- fluent-bit-service-account.yaml
- fluent-bit-role.yaml
- fluent-bit-role-binding.yaml
- fluent-bit-ds.yaml

namespace: logging

configMapGenerator:
- name: fluent-bit-config
  files:
  - conf/filter-kubernetes.conf
  - conf/fluent-bit.conf
  - conf/input-kubernetes.conf
  - conf/output-elasticsearch.conf
  - conf/parsers.conf

All 7 comments

The daemonset was deployed with an old API version, whose updateStrategy was not RollingUpdate. I have updated the daemonset with the new strategy, and now a daemonset update triggers a new pod rollout.
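For illustration, a minimal sketch of the relevant part of an apps/v1 DaemonSet spec with a rolling update strategy (the name and maxUnavailable value are just examples; selector and pod template are omitted):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
spec:
  updateStrategy:
    type: RollingUpdate        # with OnDelete, pods keep using the old configmap until they are deleted manually
    rollingUpdate:
      maxUnavailable: 1
  # selector and pod template omitted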

However, there are still left-over configmaps

This is WAI. The old configmaps should remain live in the cluster until garbage collectors in the cluster decide to get rid of them. It's a bit risky to manually modify or delete configmaps.

There's a side project to develop a general approach to prune-on-apply, but it's not ready for GA yet.

but just to be clear... there isn't actually a garbage collector that currently exists in k8s that can GC these automatically, right?

@monopole I am also seeing configmaps produced by configmapgenerator build up and never be garbage-collected (well, 84 days and counting). I can't seem to find any documentation about this. As @glasser asked, is there any k8s-supported way of dealing with this? Or should we write cronjobs to identify disused configmaps and delete them?

Is there any way to resolve this problem?

Add a special label to your resources, for example managed-by-kustomize="true", and then you can use kustomize build | kubectl apply --prune -f- -l managed-by-kustomize="true" to prune old resources. I haven't tested this myself, so please do more testing before using it in production.
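A rough sketch of what that could look like, assuming the label is attached to every generated resource via commonLabels in kustomization.yaml (the label name is only an example, and kubectl apply --prune is still an alpha feature):

kustomization.yaml

commonLabels:
  managed-by-kustomize: "true"

Then apply and prune in one pass:

kustomize build . | kubectl apply --prune -l managed-by-kustomize="true" -f-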

An alternative, if you have marked all configMaps and secrets with a label like currently-used-by-kustomize="true" via generatorOptions:

kustomization.yaml

generatorOptions:
  labels:
    currently-used-by-kustomize: "true"

  1. Switch the configMap & secret label currently-used-by-kustomize to false (example for configMaps):
    kubectl label configMaps -n <your-namespace> -l currently-used-by-kustomize="true" --overwrite currently-used-by-kustomize="false"
  2. Apply your config (I prefer the kustomize support built into kubectl):
    kubectl apply -k ./your-folder
  3. Delete old configs with:
    kubectl delete configMaps -n <your-namespace> -l currently-used-by-kustomize="false"

Pros

  • Configuration & secrets are always up to date
  • Can be used for any resource

Cons

  • Not fault tolerant, so always execute the commands in the right order from a process such as a shell script or Makefile (see the sketch below); otherwise you can lose config and secrets
  • Not aware of the actual usage of resources
  • Affected resources are marked as configured but do not get a rolling update
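A minimal sketch of that sequence as a shell script, assuming the generatorOptions label shown above; the namespace and folder are placeholders taken from the steps:

#!/bin/sh
set -e   # stop on the first error so step 3 never runs against a half-applied state

NAMESPACE=logging      # example namespace; adjust to your setup

# 1. Mark the currently deployed configmaps and secrets as "old"
kubectl label configmaps -n "$NAMESPACE" -l currently-used-by-kustomize="true" --overwrite currently-used-by-kustomize="false"
kubectl label secrets -n "$NAMESPACE" -l currently-used-by-kustomize="true" --overwrite currently-used-by-kustomize="false"

# 2. Apply the current kustomization; newly generated resources get the "true" label again
kubectl apply -k ./your-folder

# 3. Delete everything still marked as "old"
kubectl delete configmaps -n "$NAMESPACE" -l currently-used-by-kustomize="false"
kubectl delete secrets -n "$NAMESPACE" -l currently-used-by-kustomize="false"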
