My Rook version is 1.1.7.
I am testing Rook/Ceph with BlueStore and have cleaned up the cluster several times, following the https://rook.io/docs/rook/v1.1/ceph-teardown.html process.
I can delete the rook-ceph namespace, but every time I rebuild the cluster I find that
the CephCluster resource is still there,
and the previous cluster config automatically starts working again after I apply common.yaml.
I tried to delete the CRD with
kubectl -n rook-ceph patch crd cephclusters.ceph.rook.io --type merge -p '{"metadata":{"finalizers": [null]}}'
and also restarted my node, but neither of these worked.
Is there a way to force delete the CephCluster?
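A quick way to check what is keeping the resource around (assuming the default resource name rook-ceph) is to look at its finalizers and deletion timestamp:
kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.metadata.finalizers}'
kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.metadata.deletionTimestamp}'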
I think some resources remain in /var/lib/rook. You should remove that directory on all Kubernetes nodes (and the volume group and logical volume, if they exist).
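For reference, a rough sketch of that node-level cleanup, run on every node that hosted OSDs (the volume group and device names below are placeholders, not taken from this cluster):
# remove Rook's state directory
sudo rm -rf /var/lib/rook
# list any LVM volumes ceph-volume created, then remove them
sudo lvs && sudo vgs
sudo vgremove -y ceph-<vg-name>        # placeholder VG name
# optionally wipe the disk that Rook used (example device)
sudo sgdisk --zap-all /dev/vda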
I deleted these files.
=====================================
Finally, I found a way to delete this:
I used
kubectl -n rook-ceph edit cephcluster
and then deleted all the contents of the nodes section, and deleted the cluster again.
This works.
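A non-interactive equivalent of that edit might look like the following (assuming the resource is named rook-ceph; a JSON merge patch replaces the whole nodes list, so this empties it):
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p '{"spec":{"storage":{"nodes":[]}}}'
kubectl -n rook-ceph delete cephcluster rook-ceph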
Your solution did not work for me; however, I figured out that the finalizer may also block the deletion of the resource:
kubectl -n rook-ceph get cephclusters.ceph.rook.io rook-ceph -o json
{
  "apiVersion": "ceph.rook.io/v1",
  "kind": "CephCluster",
  "metadata": {
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"ceph.rook.io/v1\",\"kind\":\"CephCluster\",\"metadata\":{\"annotations\":{},\"name\":\"rook-ceph\",\"namespace\":\"rook-ceph\"},\"spec\":{\"annotations\":null,\"cephVersion\":{\"allowUnsupported\":false,\"image\":\"ceph/ceph:v14.2.10\"},\"cleanupPolicy\":{\"confirmation\":\"\"},\"continueUpgradeAfterChecksEvenIfNotHealthy\":false,\"crashCollector\":{\"disable\":false},\"dashboard\":{\"enabled\":true,\"ssl\":true},\"dataDirHostPath\":\"/var/lib/rook\",\"disruptionManagement\":{\"machineDisruptionBudgetNamespace\":\"openshift-machine-api\",\"manageMachineDisruptionBudgets\":false,\"managePodBudgets\":false,\"osdMaintenanceTimeout\":30},\"mgr\":{\"modules\":[{\"enabled\":true,\"name\":\"pg_autoscaler\"}]},\"mon\":{\"allowMultiplePerNode\":true,\"count\":3},\"monitoring\":{\"enabled\":false,\"rulesNamespace\":\"rook-ceph\"},\"network\":null,\"placement\":{\"all\":{\"nodeAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":{\"nodeSelectorTerms\":[{\"matchExpressions\":[{\"key\":\"role\",\"operator\":\"In\",\"values\":[\"storage-node\"]}]}]}}}},\"rbdMirroring\":{\"workers\":0},\"removeOSDsIfOutAndSafeToRemove\":false,\"resources\":null,\"skipUpgradeChecks\":false,\"storage\":{\"config\":null,\"nodes\":[{\"devices\":[{\"name\":\"vda\"}],\"name\":\"k8s-master-3\"}],\"useAllDevices\":false,\"useAllNodes\":false}}}\n"
    },
    "creationTimestamp": "2020-07-15T02:52:42Z",
    "deletionGracePeriodSeconds": 0,
    "deletionTimestamp": "2020-07-15T12:32:16Z",
    "finalizers": [
      "cephcluster.ceph.rook.io"   <--- that's what blocks the deletion
    ],
    "generation": 70,
    "managedFields": [
    ...
You need to patch the resource:
kubectl -n rook-ceph patch cephclusters.ceph.rook.io rook-ceph --type merge -p '{"metadata":{"finalizers": [null]}}'
and then delete the CRD.
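Putting the whole workaround together, the sequence looks roughly like this (resource and CRD names taken from the output above):
# remove the finalizer from the CephCluster resource itself (not from the CRD)
kubectl -n rook-ceph patch cephclusters.ceph.rook.io rook-ceph --type merge -p '{"metadata":{"finalizers": []}}'
# the pending delete should now complete; issue it again in case it was never run
kubectl -n rook-ceph delete cephcluster rook-ceph --ignore-not-found
# finally drop the CRD once no CephCluster objects are left
kubectl delete crd cephclusters.ceph.rook.io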