Kustomize: GC issues with kustomize-based workflows

Created on 9 Aug 2018 · 12 comments · Source: kubernetes-sigs/kustomize

There's an issue with unused k8s objects accumulating when using kustomize in a GitOps-style workflow (though I don't think it's limited to such cases).
In examples/combineConfigs.md, there's the following sentence:

A GC process in the k8s master eventually deletes unused configMaps.

This is not quite true. For example, here's what I see on a cluster where I use kustomize for deployments:

sc-scraper-5t97th7t2k             1         16d
sc-scraper-6h7g546ckt             1         19d
sc-scraper-744687ccfc             1         15d
sc-scraper-88kmmkfdk6             1         16d
sc-scraper-fh44t8758h             1         19d
sc-scraper-h72429g4d6             1         7d
sc-scraper-hbck7hdtht             1         19d
sc-scraper-hg446dfb4m             1         15d
sc-scraper-kbh8kbdbf6             1         14d
sc-scraper-kd98tc426k             1         15d

ConfigMaps / Secrets with hashed names are one example; another is objects that are no longer
used in a new version of some app and thus were removed from the config repo.
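
For context on where the hashed names above come from, here is a minimal sketch of a generator (the file name is an assumption, not from my actual repo):

```yaml
# Hypothetical kustomization.yaml excerpt: configMapGenerator appends a
# content hash to the name, producing names like sc-scraper-5t97th7t2k.
# Every change to config.properties yields a new hash (and a new object),
# and nothing ever deletes the old one.
configMapGenerator:
- name: sc-scraper
  files:
  - config.properties
```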

For k8s GC to actually work, ownerReferences must be set in the objects' metadata, for example (see here for more info):

apiVersion: v1
kind: Service
metadata:
  name: wiki
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: wiki
    uid: f491b0f0-2522-4b62-8f81-bde62999f825
spec:
  ports:
  - port: 80
    name: web
  selector:
    app: wiki

There's an obvious problem with this approach (kubernetes/kubernetes#66068): you can't set a fixed uid for a k8s object, yet you must know the owner's uid to attach the objects to be GC'd to it. This is incompatible with the kustomize approach of first generating YAML via kustomize build and then applying it without getting any info from the cluster, so some compromises need to be made.
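
One ugly compromise is to render the manifests with a placeholder uid and substitute the live value just before applying. A sketch (the placeholder convention and names are mine, not a kustomize feature):

```shell
# Hypothetical workaround: the rendered manifest carries a placeholder uid.
manifest='apiVersion: v1
kind: Service
metadata:
  name: wiki
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: wiki
    uid: OWNER_UID_PLACEHOLDER'

# On a real cluster you would fetch the owner uid first, e.g.:
#   owner_uid=$(kubectl get deployment wiki -o jsonpath='{.metadata.uid}')
owner_uid='f491b0f0-2522-4b62-8f81-bde62999f825'   # stand-in value

# Substitute the live uid; on a real cluster, pipe this into kubectl apply -f -
echo "$manifest" | sed "s/OWNER_UID_PLACEHOLDER/${owner_uid}/"
```

This defeats the "build once, apply anywhere" property of kustomize build, which is exactly the tension described above.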

Frankly, I can't think of a pretty approach right now. Some of the workarounds people use for this kind of problem:

  • referencing a per-app CRD from each object (see here and here)
  • referencing a pre-created namespace

I'm not sure such a cleanup helper can be implemented entirely within the bounds of kustomize, but some assistance on the kustomize side is definitely required, such as making it easy to inject ownerReferences, or at least just uids.

Thoughts?


All 12 comments

There is a KEP for garbage collection. Once this feature is available, those unused resources will be garbage collected.

Thanks for the pointer! Maybe it's worth linking from the kustomize docs?

Sounds good, I'll add it there.

Thanks!

Is this feature available in K8s cluster version 1.14?

@Liujingfang1 the linked KEP is closed as they all moved to https://github.com/kubernetes/enhancements but I can't find the corresponding KEP there. Do you happen to know if there's still something relevant open somewhere?

Thanks

The trail is cold here as far as I can see. Any update @Liujingfang1 ? Do generated objects persist forever at this point?

Would also like to know of any solution/workaround for this.

I think the KEP never reached any consensus and died. There's no easy way to do this now.

So far I see two approaches:

  1. Manually delete the extra configmaps once in a while
  2. If your kustomization covers everything in a namespace, you can use kubectl apply --prune, and resources not listed will be deleted. But of course you have to be careful with this one, since it will delete anything not in the file, which is a bit unfortunate.
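
For option 2, the usual pattern is to stamp every object a kustomization manages with a common label and prune on that label. A sketch (the label and file names are assumptions):

```yaml
# Hypothetical kustomization.yaml: commonLabels puts app=sc-scraper on every
# rendered object, so the apply step can target exactly this set:
#   kustomize build . | kubectl apply -l app=sc-scraper --prune -f -
# --prune then deletes previously applied objects carrying that label that
# are missing from the piped manifests -- hence the danger of a selector
# that matches more than this kustomization's objects.
commonLabels:
  app: sc-scraper
resources:
- deployment.yaml
- service.yaml
```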

I think a more ideal solution would be a (custom) resource, created by kustomize, that remembers what it created before. When you apply a new configuration to that resource, it can do a diff and delete anything that was removed.

I found kubectl apply --prune quite limited and error-prone. I'm using kapp now, which does a really nice job of k8s app deployment and management.

Should the documentation be updated?

The older configMap, when no longer referenced by any other resource, is eventually garbage collected.

This part is pretty confusing, since it's not working this way.

Can this issue be reopened? It's a big gap in kustomize for practical usage. I also found prune to be very limited and not really usable in production.

