After running oc cluster down secret mount points remain mounted in the --host-volumes-dir
oc v1.5.0-alpha.1+71d3fa9
kubernetes v1.4.0+776c994
features: Basic-Auth GSSAPI Kerberos SPNEGO
Steps to reproduce:

oc cluster up --host-volumes-dir=/tmp/test-volumes
oc cluster down
rm -rf /tmp/test-volumes

The volumes directory cannot be cleared out afterwards, e.g.
rm: cannot remove '/tmp/test-volumes/pods/61cadc36-fab1-11e6-abd2-507b9dcf147a/volumes/kubernetes.io~secret/router-token-imy4z': Device or resource busy
Expected: oc cluster down should return the host to an unmounted state.
A small workaround one-liner to unmount them (but make sure the grep matches on your system!):
for m in $(mount | grep pods | awk '{print $3}'); do umount $m ; done
I'd recommend findmnt -r -n -o TARGET |grep /var/lib/origin | xargs -r umount as it's less string parsing. (In general, I always use findmnt nowadays to display mounts)
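Building on the findmnt suggestion, here is a minimal sketch that scopes the unmounting to the directory passed to --host-volumes-dir, so unrelated mounts are left alone. The /tmp/test-volumes path is just the one from the reproduction above, not anything oc itself tracks; adjust it to your own setup:

```shell
# Assumption: the cluster was started with --host-volumes-dir=/tmp/test-volumes.
VOLUMES_DIR=/tmp/test-volumes

# findmnt -r -n -o TARGET prints one raw, headerless mount point per line.
# grep keeps only mount points under the volumes dir, and xargs -r skips
# running umount entirely when nothing matched.
findmnt -r -n -o TARGET | grep "^${VOLUMES_DIR}/" | xargs -r umount
```

The anchored grep pattern avoids accidentally matching a similarly named path elsewhere (e.g. a different user's /home/.../test-volumes), which the looser `grep pods` one-liner above could hit.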
Those mounts are eventually removed by the kubelet when you start the cluster again, but yes, I agree they could be cleaned up on cluster down.
This is an enhancement; I've created a Trello card for it:
https://trello.com/c/rPosa4HO/1253-oc-cluster-down-remove-mounts-created-during-openshift-run