Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT: a critical bug in Kubernetes v1.10.x
See: https://github.com/kubernetes/kubernetes/issues/62382
Please provide the following details:
Environment:
Win64
Minikube version (use minikube version): 0.28.2
What happened:
When a Job's pod fails, Kubernetes ignores .spec.backoffLimit and keeps creating replacement pods indefinitely. See:
https://github.com/kubernetes/kubernetes/issues/62382
What you expected to happen:
The job should stop creating pods after reaching the backoff limit.
How to reproduce it (as minimally and precisely as possible):
Create a failing Job with .spec.backoffLimit set to 3.
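For example, a minimal sketch of such a Job (the name backoff-test and the failing command are placeholders, not from the original report):

apiVersion: batch/v1
kind: Job
metadata:
  name: backoff-test
spec:
  backoffLimit: 3              # controller should give up after 3 retries
  template:
    spec:
      restartPolicy: Never     # each failure produces a new pod
      containers:
      - name: fail
        image: busybox
        command: ["sh", "-c", "exit 1"]   # always fails

With the bug present, pods for this Job keep appearing without bound; the expected behavior is roughly four pods (the initial run plus three retries) and then a failed Job.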
Output of minikube logs (if applicable):
Anything else we need to know:
This was fixed upstream in Kubernetes v1.10.5.
Thank you!
Until a minikube release ships with this fix, you should be able to specify v1.10.5 yourself:
$ minikube start --kubernetes-version=v1.10.5
Starting local Kubernetes v1.10.5 cluster...
Starting VM...
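Once the cluster is up, you can verify the fix by re-running the failing Job and checking its Failed condition (this uses the placeholder Job name from the manifest above):

$ kubectl get job backoff-test -o jsonpath='{.status.conditions[?(@.type=="Failed")].reason}'

On a fixed version this should print BackoffLimitExceeded once the retry budget is exhausted; on an affected v1.10.x cluster the Job never reaches that state and pods keep piling up.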
Still not fixed in the latest minikube release. This bug seems critical to me.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Obsolete, since we're at v1.13.2 nowadays.