The CHANGELOG.md now contains detailed instructions relating to PVCs and the PVs that the cloud provider provisions for them. It is probably good practice not to delete the associated PV whenever a PVC is deleted, but that is what their current reclaimPolicy does.
I suggest we add something like the following somewhere in the guide:

To ensure that the hub's and users' data won't be lost, you could choose to do the following. Note, though, that this affects not only your namespace but the whole cluster.
```sh
# 1. List the storage classes and identify the default one
kubectl get storageclass
# 2. Save its name
DEFAULT_STORAGE_CLASS=
# 3. Save and modify the storage class
kubectl get storageclass $DEFAULT_STORAGE_CLASS -o yaml | sed -E 's/reclaimPolicy: Delete/reclaimPolicy: Retain/g' > /tmp/updated_storageclass.yaml
# 4. Delete and create the modified storage class
kubectl delete storageclass $DEFAULT_STORAGE_CLASS
kubectl create -f /tmp/updated_storageclass.yaml
```
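If swapping out the cluster-wide default StorageClass feels too invasive, a narrower alternative is to change the reclaim policy of individual PVs. This is a sketch using the standard `kubectl patch` syntax; `<pv-name>` is a placeholder for the PV bound to the PVC you want to protect (visible in the VOLUME column of `kubectl get pvc`):

```sh
# Protect a single volume instead of changing the cluster-wide default:
# patch the reclaim policy on the individual PV.
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
# Verify the change took effect.
kubectl get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
```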
What is the default reclaimPolicy of the provided storageClass in each cloud provider? It seems to be `Delete`, though I haven't verified all of them.

I think setting the reclaim policy to something other than 'delete' is almost never desirable. I imagine most folks deleting resources via kubernetes would expect them to actually go away, without having to shift over to another API in order to delete the same thing again.
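To check what the default actually is on a given cluster, one way (a sketch, assuming `kubectl` access to the cluster in question) is:

```sh
# Show each storage class together with its reclaim policy; the default
# class is marked "(default)" next to its name in the standard output.
kubectl get storageclass -o custom-columns=NAME:.metadata.name,RECLAIMPOLICY:.reclaimPolicy
```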
@minrk do you mean the shift over from helm to kubectl?
Until I learned about PVCs, PVs, StorageClasses, and their many properties, I couldn't properly guess what to expect. I remember at some point reading the tear-down instructions for GKE in the guide on how to ensure we delete Disks, but I never understood that the cloud provider automatically cleans up created PVs when the PVC is deleted.
I think it is a good fail-safe that many may want to have in case they, for example, delete the namespace where the PVCs reside or do a Helm purge, etc. I'm happy to have this under an advanced section of the guide or similar, though, as it seems most relevant for larger deployments.
With a policy of Retain: the chart creates PVCs for the users, and the cloud provider provisions PVs for the PVCs. If we remove the PVCs, the PVs will remain until we delete them manually with kubectl, or in another way such as Google Cloud console -> Disks.
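That manual cleanup step could look like the following (a sketch; `<pv-name>` is a placeholder for whatever name shows up in the listing):

```sh
# After the PVCs are gone, retained PVs stay behind in the "Released" phase.
kubectl get pv
# Delete a leftover PV once you are sure its data is no longer needed.
# Note: with a Retain reclaim policy, deleting the PV object does NOT delete
# the underlying cloud disk; that still has to be removed via the cloud
# provider (e.g. Google Cloud console -> Disks).
kubectl delete pv <pv-name>
```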
Does the user culler delete the PVCs, btw? With user culling, this would be a lot less suitable.
With more experience, I now think having a Delete reclaimPolicy is a good default, and it is out of scope for this chart to suggest a cluster-wide policy change.