Charts: [stable/prometheus-operator] customresourcedefinitions not cleaned on delete

Created on 10 Nov 2018 · 7 comments · Source: helm/charts

Version of Helm and Kubernetes:

> helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
> kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-30T21:39:38Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.3-eks", GitCommit:"58c199a59046dbf0a13a387d3491a39213be53df", GitTreeState:"clean", BuildDate:"2018-09-21T21:00:04Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Which chart:
stable/prometheus-operator

What happened:
Reinstalling stable/prometheus-operator fails with the following error:
_object is being deleted: customresourcedefinitions.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" already exists_

What you expected to happen:
I should be able to install / delete the chart as many times as I want.

How to reproduce it (as minimally and precisely as possible):

  • helm install stable/prometheus-operator --name
  • helm delete
  • helm install stable/prometheus-operator --name <-- this will fail with the above error
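For example, with a hypothetical release name "po" (the commands above omit the actual name), the sequence looks like this; --purge is used on delete so the same release name can be reused:

helm install stable/prometheus-operator --name po      # first install succeeds
helm delete --purge po                                  # removes the release, but the CRDs stay behind
helm install stable/prometheus-operator --name po      # fails: "alertmanagers.monitoring.coreos.com" already exists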

Anything else we need to know:
Notice that deleting the release doesn't clean up all of the customresourcedefinitions the chart created:

> helm delete <release name>
...
> kubectl get customresourcedefinitions
prometheuses.monitoring.coreos.com      2018-11-09T23:38:49Z
prometheusrules.monitoring.coreos.com   2018-11-09T23:38:49Z
servicemonitors.monitoring.coreos.com   2018-11-09T23:38:49Z

The installation will succeed once customresourcedefinitions are manually deleted:

> kubectl delete customresourcedefinitions prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com
customresourcedefinition.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" deleted
customresourcedefinition.apiextensions.k8s.io "prometheusrules.monitoring.coreos.com" deleted
customresourcedefinition.apiextensions.k8s.io "servicemonitors.monitoring.coreos.com" deleted

Most helpful comment

@alexsn, CRDs currently cannot be fully managed by Helm and must be installed using hooks, which leaves them as unmanaged resources. Cleaning up the CRDs deletes the corresponding custom resources and, depending on how long it takes the operator to act on the resulting change, can orphan StatefulSet resources.

The chart documentation at https://github.com/helm/charts/tree/master/stable/prometheus-operator#uninstalling-the-chart has instructions on cleaning up resources when uninstalling it.

To handle the CI process in this repository, there is a _not recommended_ option to attempt cleanup on chart removal: https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml#L458. However, if this fails, you may end up in a worse situation, with multiple stateful sets and pods left running after the release is deleted.

All 7 comments

@alexsn, CRDs currently cannot be fully managed by Helm and must be installed using hooks, which leaves them as unmanaged resources. Cleaning up the CRDs deletes the corresponding custom resources and, depending on how long it takes the operator to act on the resulting change, can orphan StatefulSet resources.

The chart documentation at https://github.com/helm/charts/tree/master/stable/prometheus-operator#uninstalling-the-chart has instructions on cleaning up resources when uninstalling it.

To handle the CI process in this repository, there is a _not recommended_ option to attempt cleanup on chart removal: https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml#L458. However, if this fails, you may end up in a worse situation, with multiple stateful sets and pods left running after the release is deleted.
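For reference, the cleanup described in that documentation boils down to removing the release and then deleting the chart-created CRDs by hand; the release name "po" below is hypothetical:

> helm delete --purge po
> kubectl delete crd alertmanagers.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com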

I have the same problem. I did:
kubectl delete crd alertmanagers.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com

helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}

kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

@alexsn, CRDs currently cannot be fully managed by Helm and must be installed using hooks, which leaves them as unmanaged resources. Cleaning up the CRDs deletes the corresponding custom resources and, depending on how long it takes the operator to act on the resulting change, can orphan StatefulSet resources.

The chart documentation at https://github.com/helm/charts/tree/master/stable/prometheus-operator#uninstalling-the-chart has instructions on cleaning up resources when uninstalling it.

To handle the CI process in this repository, there is a _not recommended_ option to attempt cleanup on chart removal: https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml#L458. However, if this fails, you may end up in a worse situation, with multiple stateful sets and pods left running after the release is deleted.

Having the same issues with the latest operator. I did:

kubectl delete crd alertmanagers.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com

and I get

kubectl delete crd alertmanagers.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "prometheusrules.monitoring.coreos.com" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "servicemonitors.monitoring.coreos.com" not found

But when I try to install it, I still get:

* helm_release.prometheus: 1 error(s) occurred:

* helm_release.prometheus: rpc error: code = Unknown desc = object is being deleted: customresourcedefinitions.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" already exists

Any ideas why we get this?

@botzill The issue is still the same - helm can only handle CRDs as unmanaged resources. The error you're seeing comes from the fact that the CRD finalizer hasn't completed. I suggest you delete the resources and wait a little while before trying again.

Helm 2.14 is meant to be better at dealing with the initial install of CRDs than the versions before it.
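A minimal sketch of that delete-and-wait approach, assuming a Bash shell, the four CRD names from this thread, and a hypothetical release name "po":

# Delete the chart's CRDs (any that are already gone just report NotFound).
kubectl delete crd alertmanagers.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com
# Poll until the finalizers have run and no matching CRDs remain.
while kubectl get crd 2>/dev/null | grep -q 'monitoring.coreos.com'; do
  sleep 5
done
# Only then reinstall the chart.
helm install stable/prometheus-operator --name po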

Sorry, I solved the issue now by deleting them manually.

Why don't we delete them automatically when the helm release is removed?

I think this question was answered here: https://github.com/helm/charts/issues/9161#issuecomment-437619901

Having the same issues with the latest operator. I did:

kubectl delete crd alertmanagers.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com

and I get

kubectl delete crd alertmanagers.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "prometheusrules.monitoring.coreos.com" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "servicemonitors.monitoring.coreos.com" not found

But when I try to install it, I still get:

* helm_release.prometheus: 1 error(s) occurred:

* helm_release.prometheus: rpc error: code = Unknown desc = object is being deleted: customresourcedefinitions.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" already exists

Any ideas why we get this?

The command below actually worked for me:

kubectl delete crd alertmanagers.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com