Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Minikube version (use `minikube version`): v0.9.0

Environment:

- VM driver (use `cat ~/.minikube/machines/minikube/config.json | grep DriverName`): VMWare
- kubectl version (use `kubectl version`): v1.3.5
- Docker version (use `docker -v`): 1.11.1

What happened:
New pods terminate instantly, and no logs or errors are provided. There is no way to figure out why they were terminated so quickly.
What you expected to happen:
They should stay running, or show some kind of error explaining why they were terminated.
How to reproduce it (as minimally and precisely as possible):
Create a Minikube VM with the default memory (1024 MB), create a few pods, and then try to run Elasticsearch (gcr.io/google_containers/elasticsearch:1.9); see the sketch below.
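For concreteness, a minimal reproduction sketch (the `elasticsearch` name is just an example):

```
# Start a VM with the default 1024 MB of memory
minikube start

# Run the Elasticsearch image; the pod terminates almost immediately,
# with no logs or error explaining why
kubectl run elasticsearch --image=gcr.io/google_containers/elasticsearch:1.9
kubectl get pods -w
```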
Anything else we need to know:
The default memory should be more than 1024 MB, and this should maybe be listed under FAQ WTF.
Does Kubernetes not give any indication that it's out of memory? Have you tried debugging your pods like this: http://kubernetes.io/docs/user-guide/debugging-pods-and-replication-controllers/ ?
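For example (hypothetical pod name), the pod's events and last container state are usually where an out-of-memory kill shows up:

```
# The Events section and container state often show why the pod died;
# a container killed for exceeding memory reports "Reason: OOMKilled"
kubectl describe pod elasticsearch

# Logs from the previous (terminated) container instance, if any exist
kubectl logs elasticsearch --previous
```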
I'm a little worried about increasing the default memory, but I agree that this should be better documented. If this gets a few thumbs up, we can consider increasing the default VM memory.
For now we'll definitely try to document this better, and then additionally see if there's any helpful way to tell the user it's out of memory.
For non-production Elasticsearch clusters (e.g. ones listening on localhost) it should run fine. Once you specify a non-loopback interface in the Elasticsearch configuration, the cluster undergoes a cascade of production-mode bootstrap checks, and the JVM will complain if the heap size is less than 2048 MB.
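For illustration, a sketch of the settings in play (flag and variable names per the Elasticsearch 1.x/2.x docs, not verified against this particular image):

```
# Binding to a non-loopback interface is what triggers the
# production-mode checks; ES_HEAP_SIZE raises the JVM heap so the
# heap-size check passes
ES_HEAP_SIZE=2g elasticsearch -Des.network.host=0.0.0.0
```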
The minikube default memory can be increased with `minikube config set memory <memory>`.
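For example (4096 MB is just an illustrative value); the setting only applies when the VM is created, so recreate it afterwards:

```
minikube config set memory 4096
minikube delete   # recreate the VM so the new memory setting takes effect
minikube start
```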