Origin: after oc cluster down openshift.local.volumes remain which cannot be deleted

Created on 23 Oct 2018 · 6 comments · Source: openshift/origin


I am running the 3.11 version from Red Hat using oc cluster up. After doing an oc cluster down and trying to remove all data so I can do a fresh start, I am unable to remove a number of items:

> [root@ip-XXXXX ec2-user]# rm -Rf persistence2/
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/3c839c0a-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/serving-cert’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/3c839c0a-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/openshift-service-cert-signer-operator-token-rhkhc’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/3c910ae8-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/kube-dns-token-xt8ws’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/3c90dc7d-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/kube-proxy-token-sdtxb’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/3c93954d-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/openshift-apiserver-token-z5nd4’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/3c93954d-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/serving-cert’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/412ba087-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/signing-key’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/412ba087-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/serving-cert’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/412ba087-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/service-serving-cert-signer-sa-token-zh86w’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/427ae244-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/apiservice-cabundle-injector-sa-token-nxs4c’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/427ae244-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/serving-cert’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/6282df44-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/openshift-controller-manager-token-2spvq’: Device or resource busy

The only way I have found to remove these is to restart the VM and then delete the files before starting OpenShift again.
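The entries that resist rm are tmpfs mounts that the kubelet creates for secret volumes and leaves mounted after oc cluster down; rm cannot remove a directory that still has something mounted below it. A sketch for confirming this and unmounting without a reboot (the helper name and the default path are my own, not OpenShift tooling; run as root, as in the session above):

```shell
#!/bin/sh
# Sketch, not project tooling: find and unmount leftover kubernetes.io~secret
# tmpfs mounts so the data directory can be deleted without rebooting.
# BASE_DIR is an assumption -- point it at your --base-dir.
BASE_DIR="${BASE_DIR:-/home/ec2-user/persistence2}"

# Print every mount point below the given prefix, reading `mount`-style
# lines ("dev on /path type fstype (opts)") from stdin.
mount_points_under() {
    awk -v prefix="$1" 'index($3, prefix) == 1 { print $3 }'
}

# Unmount deepest paths first (sort -r) so nested mounts do not keep
# their parent directories busy.
mount | mount_points_under "$BASE_DIR" | sort -r | while read -r mnt; do
    echo "unmounting $mnt"
    umount "$mnt" || echo "failed: $mnt" >&2
done
```

Once the loop prints nothing further, the rm -Rf above should succeed.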

[root@ip-XXXXX ec2-user]# oc version
oc v3.11.23
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://127.0.0.1:8443
kubernetes v1.11.0+d4cacc0

Steps To Reproduce

1. oc cluster up --base-dir='/home/ec2-user/persistence2/' --skip-registry-check=true --public-hostname="ec2-xxxx.ap-southeast-2.compute.amazonaws.com" --routing-suffix="xxxxx.nip.io"
2. After the cluster is up and running, run oc cluster down
3. Run rm -Rf /home/ec2-user/persistence2/
Current Result
The same "Device or resource busy" errors quoted above.
Expected Result

All files under the directory can be deleted.

Additional Information

The diagnostics below were run after a fresh start.

[root@ip-XXXXXX ec2-user]# oc adm diagnostics
[Note] Determining if client configuration exists for client/cluster diagnostics
Info:  Successfully read a client config file at '/root/.kube/config'
[Note] Could not configure a client with cluster-admin permissions for the current server, so cluster diagnostics will be skipped

[Note] Running diagnostic: ConfigContexts[rhpam71-install-developer/127-0-0-1:8443/developer]
       Description: Validate client config context is complete and has connectivity

Info:  For client config context 'rhpam71-install-developer/127-0-0-1:8443/developer':
       The server URL is 'https://127.0.0.1:8443'
       The user authentication is 'developer/127-0-0-1:8443'
       The current project is 'rhpam71-install-developer'
       Successfully requested project list; has access to project(s):
         [myproject]

[Note] Running diagnostic: ConfigContexts[openshift-web-console/127-0-0-1:8443/system:admin]
       Description: Validate client config context is complete and has connectivity

Info:  For client config context 'openshift-web-console/127-0-0-1:8443/system:admin':
       The server URL is 'https://127.0.0.1:8443'
       The user authentication is 'system:admin/127-0-0-1:8443'
       The current project is 'openshift-web-console'
       Successfully requested project list; has access to project(s):
         [default kube-dns kube-proxy kube-public kube-system myproject openshift openshift-apiserver openshift-controller-manager openshift-core-operators ...]

[Note] Running diagnostic: ConfigContexts[default/ec2-XXXXXX-ap-southeast-2-compute-amazonaws-com:8443/system:admin]
       Description: Validate client config context is complete and has connectivity

Info:  For client config context 'default/ec2-XXXXXX-ap-southeast-2-compute-amazonaws-com:8443/system:admin':
       The server URL is 'https://ec2-XXXXXX.ap-southeast-2.compute.amazonaws.com:8443'
       The user authentication is 'system:admin/127-0-0-1:8443'
       The current project is 'default'
       Successfully requested project list; has access to project(s):
         [default kube-dns kube-proxy kube-public kube-system myproject openshift openshift-apiserver openshift-controller-manager openshift-core-operators ...]

[Note] Running diagnostic: DiagnosticPod
       Description: Create a pod to run diagnostics from the application standpoint

WARN:  [DCli2006 from diagnostic DiagnosticPod@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/client/pod/run_diagnostics_pod.go:187]
       Timed out preparing diagnostic pod logs for streaming, so this diagnostic cannot run.
       It is likely that the image 'registry.redhat.io/openshift3/ose-deployer:v3.11.23' was not pulled and running yet.
       Last error: (*errors.StatusError[2]) container "pod-diagnostics" in pod "pod-diagnostic-test-fnnlb" is waiting to start: image can't be pulled: 

[Note] Summary of diagnostics execution (version v3.11.23):
[Note] Warnings seen: 1


Label: lifecycle/rotten


All 6 comments

Same here

https://github.com/openshift/origin/issues/19141 shows a workaround:

$ for i in $(mount | grep openshift | awk '{ print $3}'); do sudo umount "$i"; done && sudo rm -rf ./openshift.local.clusterup
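One caveat with this loop: grep openshift matches any mount line containing that substring, not just the cluster-up volumes. A variant anchored to the base directory (the path below is the one from this report; adjust to your --base-dir) only touches mounts under that tree and unmounts deepest-first so parents are never busy:

```shell
#!/bin/sh
# Only unmount mount points below the cluster-up base dir (path taken
# from this report -- adjust to yours). sort -r puts deeper paths first.
base=/home/ec2-user/persistence2
for m in $(mount | awk -v b="$base/" 'index($3, b) == 1 { print $3 }' | sort -r); do
    sudo umount "$m"
done
# then: sudo rm -rf "$base"
```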

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
