Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG
Minikube version (use minikube version): v0.15.0
Environment:
- VM driver (use cat ~/.minikube/machines/minikube/config.json | grep DriverName): xhyve
- Docker version (use docker -v): Docker version 1.13.0, build 49bf474
What happened:
The minikube node reported an OutOfDisk condition. Node events included repeated warnings: "failed to garbage collect required amount of images. Wanted to free 3832631296, but freed 0".
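To confirm where the space went (a quick diagnostic sketch, assuming the standard minikube VM layout with Docker storage under /var/lib/docker):

> minikube ssh
# inside the VM: check the Docker storage partition and the accumulated images
$ df -h /var/lib/docker
$ docker images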
What you expected to happen:
I expected the kubelet to garbage-collect unused images on its own, based on the default values of the --eviction-hard and --eviction-soft flags.
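For reference, image garbage collection is governed by these kubelet flags; the values below are illustrative, not the defaults, and on minikube they would have to be passed through to the kubelet at start time:

# illustrative kubelet settings that drive image GC
--image-gc-high-threshold=75    # start image GC once disk usage passes 75%
--image-gc-low-threshold=70     # delete images until usage drops below 70%
--eviction-hard=nodefs.available<10%,imagefs.available<10%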
How to reproduce it (as minimally and precisely as possible):
Created and deleted a deployment multiple times with the same spec. Each iteration pulled an updated image (~5 GB) published under the same tag, latest. The node started reporting OutOfDisk after the third iteration.
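A minimal sketch of the loop (the image name here is hypothetical; any large image repeatedly republished under :latest should reproduce it):

> kubectl run big-app --image=registry.example.com/big-app:latest
> kubectl delete deployment big-app
# push a new ~5 GB build to registry.example.com/big-app:latest, then repeat;
# each pull leaves the previous, now-untagged image's layers on disk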
Anything else we need to know:
> kubectl describe nodes
Name: minikube
Role:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=minikube
Taints: <none>
CreationTimestamp: Wed, 11 Jan 2017 11:16:29 -0500
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk True Tue, 24 Jan 2017 02:33:14 -0500 Mon, 23 Jan 2017 22:27:53 -0500 KubeletOutOfDisk out of disk space
MemoryPressure False Tue, 24 Jan 2017 02:33:14 -0500 Wed, 11 Jan 2017 11:16:29 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 24 Jan 2017 02:33:14 -0500 Wed, 11 Jan 2017 11:16:29 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Tue, 24 Jan 2017 02:33:14 -0500 Mon, 23 Jan 2017 04:34:14 -0500 KubeletReady kubelet is posting ready status
Addresses: 192.168.64.92,192.168.64.92,minikube
Capacity:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 2
memory: 6114232Ki
pods: 110
Allocatable:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 2
memory: 6114232Ki
pods: 110
System Info:
Machine ID: 4afeb72c83e84b6b83c27b30041e0f50
System UUID: 7ADB4CBB-0000-0000-B162-33912B15AEA2
Boot ID: df85f990-1cf7-4eb0-bff2-90b292efe77e
Kernel Version: 4.7.2
OS Image: Buildroot 2016.08
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.11.1
Kubelet Version: v1.5.1
Kube-Proxy Version: v1.5.1
ExternalID: minikube
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default default-http-backend-1k5l7 10m (0%) 10m (0%) 20Mi (0%) 20Mi (0%)
default jupyter-pyspark-3001675398-j53mx 0 (0%) 0 (0%) 0 (0%) 0 (0%)
default nginx-ingress-controller-576wk 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-addon-manager-minikube 5m (0%) 0 (0%) 50Mi (0%) 0 (0%)
kube-system kube-dns-v20-xmczh 110m (5%) 0 (0%) 120Mi (2%) 220Mi (3%)
kube-system kubernetes-dashboard-rh8tl 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
125m (6%) 10m (0%) 190Mi (3%) 240Mi (4%)
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
17h 13h 48 {kubelet minikube} Warning FreeDiskSpaceFailed failed to garbage collect required amount of images. Wanted to free 3832631296, but freed 0
17h 13h 48 {kubelet minikube} Warning ImageGCFailed failed to garbage collect required amount of images. Wanted to free 3832631296, but freed 0
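A possible stopgap (a manual workaround sketch, not a fix for the failing kubelet GC) is to remove the dangling images inside the VM by hand:

> minikube ssh
# untagged (dangling) images are the old :latest pulls; removing them may reclaim space
$ docker images --filter dangling=true -q | xargs docker rmi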