Hi there,
I have a configMapGenerator as below (see the kustomization.yaml that follows). When I update any of the configuration files (conf/*.conf), the configMap gets a new hash suffix, and applying the new configuration (kustomize build | kubectl apply -f -) points the daemonset/deployment at the new configmap.
I expected that after I deploy the app with the new configuration, the pods would be restarted and the old configmap would be removed. Am I misunderstanding something, or doing something wrong?
I'm using kustomize-3.4.0
Thanks a lot.
See https://gist.github.com/icy/2f778f0e15ffcfa6f9b00f23b297a2a8 (it's copied from fluentbit documentation with a tiny update)
resources:
- fluent-bit-service-account.yaml
- fluent-bit-role.yaml
- fluent-bit-role-binding.yaml
- fluent-bit-ds.yaml
namespace: logging
configMapGenerator:
- name: fluent-bit-config
  files:
  - conf/filter-kubernetes.conf
  - conf/fluent-bit.conf
  - conf/input-kubernetes.conf
  - conf/output-elasticsearch.conf
  - conf/parsers.conf
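For context on why old objects accumulate: the generator appends a content hash to the ConfigMap name, so every edit to conf/*.conf yields a new, differently named object. A sketch of the generated metadata (the hash suffix below is purely illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  # the suffix changes whenever any of the conf/*.conf files change
  name: fluent-bit-config-<content-hash>
  namespace: logging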
The daemonset was deployed with an old API version, whose updateStrategy was not RollingUpdate. I have updated the daemonset with the new strategy, and a daemonset update now triggers a new pod rollout.
However, there are still left-over configmaps.
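For reference, a minimal sketch of that strategy change, assuming the apps/v1 DaemonSet API and a daemonset named fluent-bit (older extensions/v1beta1 daemonsets defaulted to OnDelete, so a configmap rename in the pod template did not restart pods by itself):

fluent-bit-ds.yaml (excerpt):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1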
This is WAI. The old configmaps should remain live in the cluster until a garbage collector in the cluster decides to get rid of them; it's a bit risky to manually modify or delete configmaps.
There's a side project to develop a general approach to prune-on-apply, but it's not ready for GA yet.
but just to be clear... there isn't actually a garbage collector that currently exists in k8s that can GC these automatically, right?
@monopole I am also seeing configmaps produced by configmapgenerator build up and never be garbage-collected (well, 84 days and counting). I can't seem to find any documentation about this. As @glasser asked, is there any k8s-supported way of dealing with this? Or should we write cronjobs to identify disused configmaps and delete them?
Any way to resolve this problem?
Add some special label to your resources, for example managed-by-kustomize="true", and then you can use kustomize build | kubectl apply --prune -f - -l managed-by-kustomize="true" to prune old resources. I haven't tested it myself, so please do more testing before using it in production.
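A sketch of that approach, assuming the label is stamped onto every rendered resource via commonLabels (the label name is just an example; note that commonLabels is also injected into selector fields, which are immutable on an already-deployed daemonset/deployment):

kustomization.yaml (excerpt):
commonLabels:
  managed-by-kustomize: "true"

Apply, and prune anything carrying the label that is no longer in the build output:
kustomize build . | kubectl apply --prune -f - -l managed-by-kustomize=true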
An alternative could work if you have marked all configMaps and Secrets with a label like currently-used-by-kustomize="true" via generatorOptions:
kustomization.yaml
generatorOptions:
  labels:
    currently-used-by-kustomize: "true"
Then, before each apply, flip currently-used-by-kustomize to "false" on the existing objects, apply, and delete whatever is still marked "false" (example for configMaps):
kubectl label configMaps -n <your-namespace> -l currently-used-by-kustomize="true" --overwrite currently-used-by-kustomize="false"
kubectl apply -k ./your-folder
kubectl delete configMaps -n <your-namespace> -l currently-used-by-kustomize="false"
Pros
Cons