Is this a BUG REPORT or FEATURE REQUEST?: Bug report
Minikube version: 0.20.0
Environment:
What happened:
Used kubectl apply on a Deployment to create a new rollout; the old rollout's pods are stuck in Terminating indefinitely. I later saw the same behavior when running kubectl delete on individual pods.
What you expected to happen:
The pods terminate in a timely manner.
How to reproduce it:
Run kubectl delete pods/my-pod under unknown conditions (sorry, I cannot be more helpful)
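For reference, the rough sequence that exhibited the problem for me (the pod name here is just an example, not the actual workload):

```shell
# Delete a pod; on an affected minikube it never finishes terminating
kubectl delete pod my-pod

# In another terminal the pod stays stuck in Terminating indefinitely
kubectl get pods --all-namespaces | grep Terminating
```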
Anything else we need to know:
I am seeing a lot of messages like this in the minikube logs:
Jun 22 00:54:46 minikube localkube[24961]: W0622 00:54:46.755261 24961 docker_sandbox.go:263] Couldn't find network status for default/marketplace-fulfillment-worker-2843865375-qcbbl through plugin: invalid network status for
Jun 22 00:54:46 minikube localkube[24961]: E0622 00:54:46.769107 24961 remote_runtime.go:273] ContainerStatus "ccd9f747a0fa986bc503b31c8db29e6305903e85ff1bf07d45a83d4880fea571" from runtime service failed: rpc error: code = 2 desc = Error: No such container: ccd9f747a0fa986bc503b31c8db29e6305903e85ff1bf07d45a83d4880fea571
Jun 22 00:54:46 minikube localkube[24961]: E0622 00:54:46.769278 24961 kuberuntime_container.go:385] ContainerStatus for ccd9f747a0fa986bc503b31c8db29e6305903e85ff1bf07d45a83d4880fea571 error: rpc error: code = 2 desc = Error: No such container: ccd9f747a0fa986bc503b31c8db29e6305903e85ff1bf07d45a83d4880fea571
Jun 22 00:54:46 minikube localkube[24961]: E0622 00:54:46.769407 24961 kuberuntime_manager.go:858] getPodContainerStatuses for pod "marketplace-fulfillment-worker-2843865375-qcbbl_default(91b4e3e9-56df-11e7-9956-56d557a39cd2)" failed: rpc error: code = 2 desc = Error: No such container: ccd9f747a0fa986bc503b31c8db29e6305903e85ff1bf07d45a83d4880fea571
Jun 22 00:54:46 minikube localkube[24961]: E0622 00:54:46.838184 24961 generic.go:269] PLEG: pod marketplace-fulfillment-worker-2843865375-qcbbl/default failed reinspection: rpc error: code = 2 desc = Error: No such container: ccd9f747a0fa986bc503b31c8db29e6305903e85ff1bf07d45a83d4880fea571
I am also intermittently losing connectivity to the VM, so kubectl commands fail.
I am also seeing pods that stay in a Terminating state forever. (minikube 0.20, Kubernetes 1.6.4, VirtualBox, macOS)
In my case it happens after I deploy a helm chart and then undeploy it.
The pods are "deleted", but they stay stuck in Terminating.
Deleting with --grace-period=0 causes kubectl to hang
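For anyone looking for the usual workaround: the standard force-delete incantation is below, though in my case kubectl simply hung on it (pod name is illustrative; on some kubectl versions --force is also required alongside --grace-period=0):

```shell
# Attempt to force-delete a stuck pod; hangs on this minikube build
kubectl delete pod my-pod --grace-period=0 --force
```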
There is a steady stream of errors in the minikube logs, and very high CPU usage
A sample:
Jun 22 01:54:59 minikube localkube[3573]: E0622 01:54:57.762799 3573 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/configmap/3c208ca8-56eb-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_scripts.deleting~250044545.deleting~783644684.deleting~360907959\" (\"3c208ca8-56eb-11e7-9035-080027f5f6bb\")" failed. No retries permitted until 2017-06-22 01:55:13.762785862 +0000 UTC (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/configmap/3c208ca8-56eb-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_scripts.deleting~250044545.deleting~783644684.deleting~360907959" (volume.spec.Name: "wrapped_wrapped_wrapped_scripts.deleting~250044545.deleting~783644684.deleting~360907959") pod "3c208ca8-56eb-11e7-9035-080027f5f6bb" (UID: "3c208ca8-56eb-11e7-9035-080027f5f6bb") with: rename /var/lib/kubelet/pods/3c208ca8-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~configmap/wrapped_wrapped_wrapped_scripts.deleting~250044545.deleting~783644684.deleting~360907959 /var/lib/kubelet/pods/3c208ca8-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~configmap/wrapped_wrapped_wrapped_wrapped_scripts.deleting~250044545.deleting~783644684.deleting~360907959.deleting~074689311: file exists
Jun 22 01:54:59 minikube localkube[3573]: E0622 01:54:57.762926 3573 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/empty-dir/3c283251-56eb-11e7-9035-080027f5f6bb-openam-root.deleting~009863097.deleting~388940059.deleting~648706143\" (\"3c283251-56eb-11e7-9035-080027f5f6bb\")" failed. No retries permitted until 2017-06-22 01:55:13.762910129 +0000 UTC (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/empty-dir/3c283251-56eb-11e7-9035-080027f5f6bb-openam-root.deleting~009863097.deleting~388940059.deleting~648706143" (volume.spec.Name: "openam-root.deleting~009863097.deleting~388940059.deleting~648706143") pod "3c283251-56eb-11e7-9035-080027f5f6bb" (UID: "3c283251-56eb-11e7-9035-080027f5f6bb") with: rename /var/lib/kubelet/pods/3c283251-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/openam-root.deleting~009863097.deleting~388940059.deleting~648706143 /var/lib/kubelet/pods/3c283251-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/openam-root.deleting~009863097.deleting~388940059.deleting~648706143.deleting~489980402: file exists
Jun 22 01:54:59 minikube localkube[3573]: E0622 01:54:57.763045 3573 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/empty-dir/36aed755-56db-11e7-9035-080027f5f6bb-git.deleting~061543830.deleting~484929630.deleting~950787319\" (\"36aed755-56db-11e7-9035-080027f5f6bb\")" failed. No retries permitted until 2017-06-22 01:55:13.763021823 +0000 UTC (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/empty-dir/36aed755-56db-11e7-9035-080027f5f6bb-git.deleting~061543830.deleting~484929630.deleting~950787319" (volume.spec.Name: "git.deleting~061543830.deleting~484929630.deleting~950787319") pod "36aed755-56db-11e7-9035-080027f5f6bb" (UID: "36aed755-56db-11e7-9035-080027f5f6bb") with: rename /var/lib/kubelet/pods/36aed755-56db-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/git.deleting~061543830.deleting~484929630.deleting~950787319 /var/lib/kubelet/pods/36aed755-56db-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/git.deleting~061543830.deleting~484929630.deleting~950787319.deleting~417803945: file exists
Jun 22 01:54:59 minikube localkube[3573]: E0622 01:54:57.763213 3573 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/empty-dir/36af0061-56db-11e7-9035-080027f5f6bb-openam-root.deleting~949726280.deleting~046797214.deleting~215740282\" (\"36af0061-56db-11e7-9035-080027f5f6bb\")" failed. No retries permitted until 2017-06-22 01:55:13.763200022 +0000 UTC (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/empty-dir/36af0061-56db-11e7-9035-080027f5f6bb-openam-root.deleting~949726280.deleting~046797214.deleting~215740282" (volume.spec.Name: "openam-root.deleting~949726280.deleting~046797214.deleting~215740282") pod "36af0061-56db-11e7-9035-080027f5f6bb" (UID: "36af0061-56db-11e7-9035-080027f5f6bb") with: rename /var/lib/kubelet/pods/36af0061-56db-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/openam-root.deleting~949726280.deleting~046797214.deleting~215740282 /var/lib/kubelet/pods/36af0061-56db-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/openam-root.deleting~949726280.deleting~046797214.deleting~215740282.deleting~560420340: file exists
Jun 22 01:54:59 minikube localkube[3573]: E0622 01:54:57.763318 3573 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/empty-dir/3c283251-56eb-11e7-9035-080027f5f6bb-openam-root.deleting~753850132.deleting~018396276.deleting~937273541\" (\"3c283251-56eb-11e7-9035-080027f5f6bb\")" failed. No retries permitted until 2017-06-22 01:55:13.763306667 +0000 UTC (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/empty-dir/3c283251-56eb-11e7-9035-080027f5f6bb-openam-root.deleting~753850132.deleting~018396276.deleting~937273541" (volume.spec.Name: "openam-root.deleting~753850132.deleting~018396276.deleting~937273541") pod "3c283251-56eb-11e7-9035-080027f5f6bb" (UID: "3c283251-56eb-11e7-9035-080027f5f6bb") with: rename /var/lib/kubelet/pods/3c283251-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/openam-root.deleting~753850132.deleting~018396276.deleting~937273541 /var/lib/kubelet/pods/3c283251-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/openam-root.deleting~753850132.deleting~018396276.deleting~937273541.deleting~517423811: file exists
Jun 22 01:54:59 minikube localkube[3573]: E0622 01:54:57.763399 3573 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/configmap/3c283251-56eb-11e7-9035-080027f5f6bb-wrapped_wrapped_log-config.deleting~913454640.deleting~126240049\" (\"3c283251-56eb-11e7-9035-080027f5f6bb\")" failed. No retries permitted until 2017-06-22 01:55:13.763387688 +0000 UTC (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/configmap/3c283251-56eb-11e7-9035-080027f5f6bb-wrapped_wrapped_log-config.deleting~913454640.deleting~126240049" (volume.spec.Name: "wrapped_wrapped_log-config.deleting~913454640.deleting~126240049") pod "3c283251-56eb-11e7-9035-080027f5f6bb" (UID: "3c283251-56eb-11e7-9035-080027f5f6bb") with: rename /var/lib/kubelet/pods/3c283251-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~configmap/wrapped_wrapped_log-config.deleting~913454640.deleting~126240049 /var/lib/kubelet/pods/3c283251-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~configmap/wrapped_wrapped_wrapped_log-config.deleting~913454640.deleting~126240049.deleting~220510022: file exists
Jun 22 01:54:59 minikube localkube[3573]: E0622 01:54:57.763490 3573 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/configmap/3c208ca8-56eb-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_scripts.deleting~740625917.deleting~243873026.deleting~207123086\" (\"3c208ca8-56eb-11e7-9035-080027f5f6bb\")" failed. No retries permitted until 2017-06-22 01:55:13.763473292 +0000 UTC (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/configmap/3c208ca8-56eb-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_scripts.deleting~740625917.deleting~243873026.deleting~207123086" (volume.spec.Name: "wrapped_wrapped_wrapped_scripts.deleting~740625917.deleting~243873026.deleting~207123086") pod "3c208ca8-56eb-11e7-9035-080027f5f6bb" (UID: "3c208ca8-56eb-11e7-9035-080027f5f6bb") with: rename /var/lib/kubelet/pods/3c208ca8-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~configmap/wrapped_wrapped_wrapped_scripts.deleting~740625917.deleting~243873026.deleting~207123086 /var/lib/kubelet/pods/3c208ca8-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~configmap/wrapped_wrapped_wrapped_wrapped_scripts.deleting~740625917.deleting~243873026.deleting~207123086.deleting~222158829: file exists
Jun 22 01:54:59 minikube localkube[3573]: E0622 01:54:57.763569 3573 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/empty-dir/3c5cdc8f-56eb-11e7-9035-080027f5f6bb-dj-backup.deleting~696363135.deleting~234190726\" (\"3c5cdc8f-56eb-11e7-9035-080027f5f6bb\")" failed. No retries permitted until 2017-06-22 01:55:13.763557443 +0000 UTC (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/empty-dir/3c5cdc8f-56eb-11e7-9035-080027f5f6bb-dj-backup.deleting~696363135.deleting~234190726" (volume.spec.Name: "dj-backup.deleting~696363135.deleting~234190726") pod "3c5cdc8f-56eb-11e7-9035-080027f5f6bb" (UID: "3c5cdc8f-56eb-11e7-9035-080027f5f6bb") with: rename /var/lib/kubelet/pods/3c5cdc8f-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/dj-backup.deleting~696363135.deleting~234190726 /var/lib/kubelet/pods/3c5cdc8f-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/dj-backup.deleting~696363135.deleting~234190726.deleting~578955324: file exists
Jun 22 01:54:59 minikube localkube[3573]: E0622 01:54:57.763663 3573 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/empty-dir/3c5cdc8f-56eb-11e7-9035-080027f5f6bb-dj-backup.deleting~293060514.deleting~240033409.deleting~359563794\" (\"3c5cdc8f-56eb-11e7-9035-080027f5f6bb\")" failed. No retries permitted until 2017-06-22 01:55:13.76365104 +0000 UTC (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/empty-dir/3c5cdc8f-56eb-11e7-9035-080027f5f6bb-dj-backup.deleting~293060514.deleting~240033409.deleting~359563794" (volume.spec.Name: "dj-backup.deleting~293060514.deleting~240033409.deleting~359563794") pod "3c5cdc8f-56eb-11e7-9035-080027f5f6bb" (UID: "3c5cdc8f-56eb-11e7-9035-080027f5f6bb") with: rename /var/lib/kubelet/pods/3c5cdc8f-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/dj-backup.deleting~293060514.deleting~240033409.deleting~359563794 /var/lib/kubelet/pods/3c5cdc8f-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/dj-backup.deleting~293060514.deleting~240033409.deleting~359563794.deleting~678995048: file exists
Jun 22 01:54:59 minikube localkube[3573]: E0622 01:54:57.763762 3573 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/configmap/3c283251-56eb-11e7-9035-080027f5f6bb-wrapped_wrapped_openam-boot.deleting~565864336.deleting~700627738\" (\"3c283251-56eb-11e7-9035-080027f5f6bb\")" failed. No retries permitted until 2017-06-22 01:55:13.763732052 +0000 UTC (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/configmap/3c283251-56eb-11e7-9035-080027f5f6bb-wrapped_wrapped_openam-boot.deleting~565864336.deleting~700627738" (volume.spec.Name: "wrapped_wrapped_openam-boot.deleting~565864336.deleting~700627738") pod "3c283251-56eb-11e7-9035-080027f5f6bb" (UID: "3c283251-56eb-11e7-9035-080027f5f6bb") with: rename /var/lib/kubelet/pods/3c283251-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~configmap/wrapped_wrapped_openam-boot.deleting~565864336.deleting~700627738 /var/lib/kubelet/pods/3c283251-56eb-11e7-9035-080027f5f6bb/volumes/kubernetes.io~configmap/wrapped_wrapped_wrapped_openam-boot.deleting~565864336.deleting~700627738.deleting~990637735: file exists
Jun 22 01:54:59 minikube localkube[3573]: E0622 01:54:57.763846 3573 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/empty-dir/36af0061-56db-11e7-9035-080027f5f6bb-openam-root.deleting~077162911.deleting~127222673.deleting~884728814\" (\"36af0061-56db-11e7-9035-080027f5f6bb\")" failed. No retries permitted until 2017-06-22 01:55:13.763834104 +0000 UTC (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/empty-dir/36af0061-56db-11e7-9035-080027f5f6bb-openam-root.deleting~077162911.deleting~127222673.deleting~884728814" (volume.spec.Name: "openam-root.deleting~077162911.deleting~127222673.deleting~884728814") pod "36af0061-56db-11e7-9035-080027f5f6bb" (UID: "36af0061-56db-11e7-9035-080027f5f6bb") with: rename /var/lib/kubelet/pods/36af0061-56db-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/openam-root.deleting~077162911.deleting~127222673.deleting~884728814 /var/lib/kubelet/pods/36af0061-56db-11e7-9035-080027f5f6bb/volumes/kubernetes.io~empty-dir/openam-root.deleting~077162911.deleting~127222673.deleting~884728814.deleting~639639514: file exists
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.810345 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/configmap/36aed755-56db-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_scripts.deleting~401613306.deleting~258267840.deleting~035286730" (spec.Name: "wrapped_wrapped_wrapped_scripts.deleting~401613306.deleting~258267840.deleting~035286730") from pod "36aed755-56db-11e7-9035-080027f5f6bb" (UID: "36aed755-56db-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.810548 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/3c283251-56eb-11e7-9035-080027f5f6bb-openam-root.deleting~571838268.deleting~213735135.deleting~945140026" (spec.Name: "openam-root.deleting~571838268.deleting~213735135.deleting~945140026") from pod "3c283251-56eb-11e7-9035-080027f5f6bb" (UID: "3c283251-56eb-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.810646 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/configmap/3c208ca8-56eb-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_scripts.deleting~429837231.deleting~904682691.deleting~457146539" (spec.Name: "wrapped_wrapped_wrapped_scripts.deleting~429837231.deleting~904682691.deleting~457146539") from pod "3c208ca8-56eb-11e7-9035-080027f5f6bb" (UID: "3c208ca8-56eb-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.810776 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/36aed755-56db-11e7-9035-080027f5f6bb-git.deleting~296354577.deleting~512954261.deleting~399580764" (spec.Name: "git.deleting~296354577.deleting~512954261.deleting~399580764") from pod "36aed755-56db-11e7-9035-080027f5f6bb" (UID: "36aed755-56db-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.810899 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/36af0061-56db-11e7-9035-080027f5f6bb-openam-root.deleting~833208316.deleting~625230020.deleting~466087021" (spec.Name: "openam-root.deleting~833208316.deleting~625230020.deleting~466087021") from pod "36af0061-56db-11e7-9035-080027f5f6bb" (UID: "36af0061-56db-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.811024 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/3c208ca8-56eb-11e7-9035-080027f5f6bb-git.deleting~538330946.deleting~478598931.deleting~158173546" (spec.Name: "git.deleting~538330946.deleting~478598931.deleting~158173546") from pod "3c208ca8-56eb-11e7-9035-080027f5f6bb" (UID: "3c208ca8-56eb-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.811179 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/configmap/36af0061-56db-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_openam-boot.deleting~178131608.deleting~890207986.deleting~769780177" (spec.Name: "wrapped_wrapped_wrapped_openam-boot.deleting~178131608.deleting~890207986.deleting~769780177") from pod "36af0061-56db-11e7-9035-080027f5f6bb" (UID: "36af0061-56db-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.811283 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/configmap/36af0061-56db-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_log-config.deleting~024774842.deleting~323724772.deleting~365930500" (spec.Name: "wrapped_wrapped_wrapped_log-config.deleting~024774842.deleting~323724772.deleting~365930500") from pod "36af0061-56db-11e7-9035-080027f5f6bb" (UID: "36af0061-56db-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.811389 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/3c283251-56eb-11e7-9035-080027f5f6bb-openam-root.deleting~009863097.deleting~725682259.deleting~187179817" (spec.Name: "openam-root.deleting~009863097.deleting~725682259.deleting~187179817") from pod "3c283251-56eb-11e7-9035-080027f5f6bb" (UID: "3c283251-56eb-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.811490 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/3c208ca8-56eb-11e7-9035-080027f5f6bb-git.deleting~024102186.deleting~204106291.deleting~629637238" (spec.Name: "git.deleting~024102186.deleting~204106291.deleting~629637238") from pod "3c208ca8-56eb-11e7-9035-080027f5f6bb" (UID: "3c208ca8-56eb-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.811600 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/3c5cdc8f-56eb-11e7-9035-080027f5f6bb-dj-backup.deleting~098528350.deleting~412122835.deleting~154765303" (spec.Name: "dj-backup.deleting~098528350.deleting~412122835.deleting~154765303") from pod "3c5cdc8f-56eb-11e7-9035-080027f5f6bb" (UID: "3c5cdc8f-56eb-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.811708 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/36c74db0-56db-11e7-9035-080027f5f6bb-dj-backup.deleting~071149860.deleting~533890128.deleting~770414206" (spec.Name: "dj-backup.deleting~071149860.deleting~533890128.deleting~770414206") from pod "36c74db0-56db-11e7-9035-080027f5f6bb" (UID: "36c74db0-56db-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.811859 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/configmap/36aed755-56db-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_scripts.deleting~725067105.deleting~043104475.deleting~890084700" (spec.Name: "wrapped_wrapped_wrapped_scripts.deleting~725067105.deleting~043104475.deleting~890084700") from pod "36aed755-56db-11e7-9035-080027f5f6bb" (UID: "36aed755-56db-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.811969 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/36af0061-56db-11e7-9035-080027f5f6bb-openam-root.deleting~949726280.deleting~058207803.deleting~824241812" (spec.Name: "openam-root.deleting~949726280.deleting~058207803.deleting~824241812") from pod "36af0061-56db-11e7-9035-080027f5f6bb" (UID: "36af0061-56db-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.812073 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/3c5cdc8f-56eb-11e7-9035-080027f5f6bb-dj-backup.deleting~205098446.deleting~580097784" (spec.Name: "dj-backup.deleting~205098446.deleting~580097784") from pod "3c5cdc8f-56eb-11e7-9035-080027f5f6bb" (UID: "3c5cdc8f-56eb-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.812303 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/3c5cdc8f-56eb-11e7-9035-080027f5f6bb-dj-backup.deleting~680434739.deleting~176092589.deleting~907450805" (spec.Name: "dj-backup.deleting~680434739.deleting~176092589.deleting~907450805") from pod "3c5cdc8f-56eb-11e7-9035-080027f5f6bb" (UID: "3c5cdc8f-56eb-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.812434 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/3c283251-56eb-11e7-9035-080027f5f6bb-openam-root.deleting~978385388.deleting~493079688.deleting~741981533" (spec.Name: "openam-root.deleting~978385388.deleting~493079688.deleting~741981533") from pod "3c283251-56eb-11e7-9035-080027f5f6bb" (UID: "3c283251-56eb-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.812617 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/configmap/36af0061-56db-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_openam-boot.deleting~280350834.deleting~718838496.deleting~055867134" (spec.Name: "wrapped_wrapped_wrapped_openam-boot.deleting~280350834.deleting~718838496.deleting~055867134") from pod "36af0061-56db-11e7-9035-080027f5f6bb" (UID: "36af0061-56db-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.813088 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/configmap/3c283251-56eb-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_openam-boot.deleting~266937435.deleting~041337035.deleting~965983860" (spec.Name: "wrapped_wrapped_wrapped_openam-boot.deleting~266937435.deleting~041337035.deleting~965983860") from pod "3c283251-56eb-11e7-9035-080027f5f6bb" (UID: "3c283251-56eb-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.813213 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/36aed755-56db-11e7-9035-080027f5f6bb-git.deleting~080943424.deleting~397033111.deleting~738235862" (spec.Name: "git.deleting~080943424.deleting~397033111.deleting~738235862") from pod "36aed755-56db-11e7-9035-080027f5f6bb" (UID: "36aed755-56db-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.813342 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/3c5cdc8f-56eb-11e7-9035-080027f5f6bb-dj-backup.deleting~205098446.deleting~242361450" (spec.Name: "dj-backup.deleting~205098446.deleting~242361450") from pod "3c5cdc8f-56eb-11e7-9035-080027f5f6bb" (UID: "3c5cdc8f-56eb-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.813498 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/configmap/36af0061-56db-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_openam-boot.deleting~984662566.deleting~062404482.deleting~826881748" (spec.Name: "wrapped_wrapped_wrapped_openam-boot.deleting~984662566.deleting~062404482.deleting~826881748") from pod "36af0061-56db-11e7-9035-080027f5f6bb" (UID: "36af0061-56db-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.813600 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/configmap/3c283251-56eb-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_openam-boot.deleting~802682477.deleting~572069260.deleting~869132844" (spec.Name: "wrapped_wrapped_wrapped_openam-boot.deleting~802682477.deleting~572069260.deleting~869132844") from pod "3c283251-56eb-11e7-9035-080027f5f6bb" (UID: "3c283251-56eb-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.813697 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/configmap/3c283251-56eb-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_log-config.deleting~845774119.deleting~823875221.deleting~238246609" (spec.Name: "wrapped_wrapped_wrapped_log-config.deleting~845774119.deleting~823875221.deleting~238246609") from pod "3c283251-56eb-11e7-9035-080027f5f6bb" (UID: "3c283251-56eb-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.813813 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/empty-dir/36c74db0-56db-11e7-9035-080027f5f6bb-dj-backup.deleting~365052576.deleting~862664128.deleting~631217262" (spec.Name: "dj-backup.deleting~365052576.deleting~862664128.deleting~631217262") from pod "36c74db0-56db-11e7-9035-080027f5f6bb" (UID: "36c74db0-56db-11e7-9035-080027f5f6bb").
Jun 22 01:54:59 minikube localkube[3573]: I0622 01:54:57.813917 3573 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/configmap/3c208ca8-56eb-11e7-9035-080027f5f6bb-wrapped_wrapped_wrapped_scripts.deleting~429837231.deleting~375792607.deleting~940953291" (spec.Name: "wrapped_wrapped_wrapped_scripts.deleting~429837231.deleting~375792607.deleting~940953291") from pod "3c208ca8-56eb-11e7-9035-080027f5f6bb" (UID: "3c208ca8-56eb-11e7-9035-080027f5f6bb").
A little more info:
Once minikube gets into this state, it seems you cannot recover with a simple
minikube stop
minikube start
I needed to blow away the VM with minikube delete.
My guess is that etcd still has stale data cached.
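To spell out the recovery steps that did and didn't work for me:

```shell
# A stop/start cycle does NOT clear the bad state; the errors return
minikube stop
minikube start

# Only a full VM teardown recovered things
minikube delete
minikube start
```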
Lots of these errors on startup:
Jun 22 02:17:40 minikube localkube[3434]: E0622 02:17:40.607778 3434 kubelet_volumes.go:114] Orphaned pod "36af0061-56db-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:40 minikube localkube[3434]: E0622 02:17:40.827542 3434 kubelet_volumes.go:114] Orphaned pod "36c74db0-56db-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:41 minikube localkube[3434]: E0622 02:17:41.248235 3434 kubelet_volumes.go:114] Orphaned pod "3c208ca8-56eb-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:41 minikube localkube[3434]: E0622 02:17:41.961442 3434 kubelet_volumes.go:114] Orphaned pod "3c283251-56eb-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:42 minikube localkube[3434]: E0622 02:17:42.132810 3434 kubelet_volumes.go:114] Orphaned pod "3c5cdc8f-56eb-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:42 minikube localkube[3434]: E0622 02:17:42.284638 3434 kubelet_volumes.go:114] Orphaned pod "36aed755-56db-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:42 minikube localkube[3434]: E0622 02:17:42.511586 3434 kubelet_volumes.go:114] Orphaned pod "36af0061-56db-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:42 minikube localkube[3434]: E0622 02:17:42.545734 3434 kubelet_volumes.go:114] Orphaned pod "36c74db0-56db-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:42 minikube localkube[3434]: E0622 02:17:42.620989 3434 kubelet_volumes.go:114] Orphaned pod "3c208ca8-56eb-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:42 minikube localkube[3434]: E0622 02:17:42.834316 3434 kubelet_volumes.go:114] Orphaned pod "3c283251-56eb-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:42 minikube localkube[3434]: E0622 02:17:42.871839 3434 kubelet_volumes.go:114] Orphaned pod "3c5cdc8f-56eb-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:44 minikube localkube[3434]: E0622 02:17:44.233344 3434 kubelet_volumes.go:114] Orphaned pod "36aed755-56db-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:44 minikube localkube[3434]: E0622 02:17:44.366751 3434 kubelet_volumes.go:114] Orphaned pod "36af0061-56db-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:44 minikube localkube[3434]: E0622 02:17:44.402685 3434 kubelet_volumes.go:114] Orphaned pod "36c74db0-56db-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Jun 22 02:17:44 minikube localkube[3434]: E0622 02:17:44.476345 3434 kubelet_volumes.go:114] Orphaned pod "3c208ca8-56eb-11e7-9035-080027f5f6bb" found, but volume paths are still present on disk.
Trying to repro this now. Are you only seeing this on xhyve? Anything special in the pod YAMLs?
@dlorenc I am on VirtualBox. I don't think this is a VM driver issue
Did the docker server version possibly change with 0.20?
This issue seems to have something to do with resource cleanup of configmaps, secrets, emptyDirs, etc.
I was able to induce this state using helm, and trying to install into two different namespaces:
helm install stable/grafana
helm install --namespace=test stable/grafana
Then delete the helm releases and watch the minikube logs
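Roughly, the full repro (the release names are auto-generated by helm, so substitute whatever helm list shows):

```shell
# Install the same chart into two different namespaces
helm install stable/grafana
helm install --namespace=test stable/grafana

# Find the generated release names, then delete both releases
helm list
helm delete <release-1> <release-2>   # substitute names from `helm list`

# Watch the UnmountVolume / "file exists" errors start streaming
minikube logs
```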
Did the docker server version possibly change with 0.20?
I don't think so. It should still be 1.11.1.
This issue seems to have something to do with resource cleanup of configmaps, secrets, emptyDirs, etc.
Thanks, I'll try with those helm charts. A simple busybox.yaml pod isn't enough to trigger this state.
Was able to repro. Looks like this might be related to https://github.com/kubernetes/kubernetes/issues/43534
We might have switched Go versions in this release. I'll check.
Yeah, I can confirm that minikube 0.20.0 was built with Go 1.8.3, and Kubernetes v1.6.4 doesn't have the fix for the emptyDir teardown issue introduced by that version of Go: https://github.com/kubernetes/kubernetes/blob/v1.6.4/pkg/volume/empty_dir/empty_dir.go#L332
I think we'll have to rebuild minikube with Go 1.7.6, or take a patch to empty_dir.go.
cc @r2d4 what do you think?
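For anyone who wants to check which Go toolchain a given minikube/localkube binary was actually built with: Go binaries embed the toolchain version as a plain string, so grepping for it works as a quick heuristic (not an official interface):

```shell
# Go binaries embed their toolchain version as a string like "go1.8.3"
strings "$(which minikube)" | grep -o 'go1\.[0-9.]*' | sort -u
```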
I think we can rebuild with 1.7.5. A bit unfortunate, since I just upgraded the Go version on the build slaves yesterday in preparation for k8s-1.7, which requires >= Go 1.8.
Should we just re-upload the 0.20 binaries rebuilt with the proper go version? Or do a patch release? Either way I can handle it
Should we just re-upload the 0.20 binaries rebuilt with the proper go version? Or do a patch release? Either way I can handle it
I think re-uploading would make sense, unless we actually need to take a code change to switch the builds back to 1.7.6.
Ok - I've downgraded the build slave to go 1.7.5 and I'm rebuilding the binaries.
I've rebuilt all the binaries and installers with Go 1.7.5. The GitHub download links and the https://storage.googleapis.com links should now all be correct, and the updated releases.json has been pushed.
I did a smoke test with https://github.com/kubernetes/minikube/issues/1630#issuecomment-310260408 and this fixed the issue for me on OSX with virtualbox.
Edit: I was also able to verify on Linux.
Sorry for all the confusion, I'll keep this open for a bit in case others run into the problem.
tl;dr
Redownload the 0.20.0 binaries and recreate your minikube vms if you encountered this bug.
Thanks @r2d4 (and @dlorenc) :) for those installing via Homebrew, I submitted a PR for Homebrew Cask to update the checksums -- https://github.com/caskroom/homebrew-cask/pull/35770
@r2d4 The pods seem to go away, but I'm still getting a flood of messages like this after deleting them:
Jun 22 06:40:48 minikube localkube[3376]: W0622 06:40:48.372452 3376 docker_sandbox.go:263] Couldn't find network status for default/marketplace-billing-worker-2132744087-10712 through plugin: invalid network status for
Jun 22 06:40:48 minikube localkube[3376]: E0622 06:40:48.373911 3376 remote_runtime.go:273] ContainerStatus "62c3a0ee7423fe116b9efcde3cb8f04e087b080862ffb34bcb2854a510d9d40e" from runtime service failed: rpc error: code = 2 desc = Error: No such container: 62c3a0ee7423fe116b9efcde3cb8f04e087b080862ffb34bcb2854a510d9d40e
Jun 22 06:40:48 minikube localkube[3376]: E0622 06:40:48.373929 3376 kuberuntime_container.go:385] ContainerStatus for 62c3a0ee7423fe116b9efcde3cb8f04e087b080862ffb34bcb2854a510d9d40e error: rpc error: code = 2 desc = Error: No such container: 62c3a0ee7423fe116b9efcde3cb8f04e087b080862ffb34bcb2854a510d9d40e
Jun 22 06:40:48 minikube localkube[3376]: E0622 06:40:48.373935 3376 kuberuntime_manager.go:858] getPodContainerStatuses for pod "marketplace-billing-worker-2132744087-10712_default(b8f0ee87-5714-11e7-8b7d-425b16832032)" failed: rpc error: code = 2 desc = Error: No such container: 62c3a0ee7423fe116b9efcde3cb8f04e087b080862ffb34bcb2854a510d9d40e
Jun 22 06:40:48 minikube localkube[3376]: E0622 06:40:48.373945 3376 generic.go:269] PLEG: pod marketplace-billing-worker-2132744087-10712/default failed reinspection: rpc error: code = 2 desc = Error: No such container: 62c3a0ee7423fe116b9efcde3cb8f04e087b080862ffb34bcb2854a510d9d40e
but no pods around:
λ kubectl get pods
No resources found.
something still seems to be a little weird
I think somehow the checksum for the linux binary was mixed up or something. See #1632.
This should be fixed now with the newly built binaries.
I'm not entirely sure that this issue is fixed; I'm experiencing what looks like the same issue still with minikube v0.22.3 and Kubernetes v1.6.4. (minikube start --kubernetes-version v1.6.4)
If I run a pod with a volume and then delete the pod, it gets stuck in Terminating state, while localkube/kubelet logs these errors, as it apparently keeps trying and failing again and again to unmount the volume:
Oct 24 12:07:00 minikube localkube[2632]: I1024 12:07:00.079833 2632 reconciler.go:190] UnmountVolume operation started for volume "kubernetes.io/configmap/a6a2e13b-b8b3-11e7-9caf-aa8585e9a6b6-volumedebug-config" (spec.Name: "volumedebug-config") from pod "a6a2e13b-b8b3-11e7-9caf-aa8585e9a6b6" (UID: "a6a2e13b-b8b3-11e7-9caf-aa8585e9a6b6").
Oct 24 12:07:00 minikube localkube[2632]: E1024 12:07:00.080785 2632 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/configmap/a6a2e13b-b8b3-11e7-9caf-aa8585e9a6b6-volumedebug-config\" (\"a6a2e13b-b8b3-11e7-9caf-aa8585e9a6b6\")" failed. No retries permitted until 2017-10-24 12:08:04.080738066 +0000 UTC (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/configmap/a6a2e13b-b8b3-11e7-9caf-aa8585e9a6b6-volumedebug-config" (volume.spec.Name: "volumedebug-config") pod "a6a2e13b-b8b3-11e7-9caf-aa8585e9a6b6" (UID: "a6a2e13b-b8b3-11e7-9caf-aa8585e9a6b6") with: rename /var/lib/kubelet/pods/a6a2e13b-b8b3-11e7-9caf-aa8585e9a6b6/volumes/kubernetes.io~configmap/volumedebug-config /var/lib/kubelet/pods/a6a2e13b-b8b3-11e7-9caf-aa8585e9a6b6/volumes/kubernetes.io~configmap/wrapped_volumedebug-config.deleting~280377916: file exists
I can reproduce the issue consistently both with the virtualbox and xhyve driver with the following example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumedebug
spec:
  volumes:
  - configMap:
      defaultMode: 420
      name: volumedebug
      optional: true
    name: volumedebug-config
  containers:
  - name: volumedebug
    image: nginx:1.13.5
    volumeMounts:
    - mountPath: /volumedebug
      name: volumedebug-config
      readOnly: true
```
It seems to work fine with Kubernetes 1.7.5, but I'm using minikube to run integration/e2e tests for a piece of software against Kubernetes 1.6.x, so it would be ideal if that worked too.
Is there anything I can do to help get this resolved?
@oyvindio The problem is a little tricky, but can you try deleting ~/.minikube/cache/localkube and try the 1.6.4 version again?
Minikube ships with a localkube version inside the binary, using go-bindata. I updated the minikube and localkube binaries earlier in this issue, but when users pass the --kubernetes-version flag for a non-default version, minikube fetches a localkube binary from gs://minikube/k8sReleases/*, which I don't think I updated.
I've reuploaded the right v1.6.4 localkube
@r2d4 It appears to be working now; I can't reproduce the error. Thanks for the help!