Minikube: none: Allow host IP to be settable

Created on 23 Apr 2018 · 17 comments · Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST

Please provide the following details:

Environment:

  • Minikube version (use minikube version): v0.26.1

  • OS (e.g. from /etc/os-release): Debian GNU/Linux 9 (stretch)
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): none
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION):
  • Install tools:
  • Others:

What happened: I'm getting a broken cluster every time my host IP changes. It looks like the Pods keep trying to reach the host using the old IP. I tried to set the host IP by starting minikube with the following command, but it did not work:

minikube start --vm-driver=none --apiserver-name=localhost --apiserver-ips=127.0.0.1 --extra-config=kubelet.node-ip=127.0.0.1

What you expected to happen: I would like to be able to set the host IP to make my minikube installation survive network changes.

How to reproduce it (as minimally and precisely as possible):

  1. Start a minikube cluster using --vm-driver=none
  2. Deploy an application to the cluster (it can be the dashboard)
  3. Stop minikube
  4. Change the host IP address
  5. Restart minikube
  6. Try to access the application (see the shell sketch after this list).
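
A shell sketch of those steps, assuming the none driver on a systemd host; the deployment image, the interface name, and the way the IP changes are all placeholders:

# Steps 1-2: start the cluster and deploy something.
sudo minikube start --vm-driver=none
kubectl create deployment hello --image=k8s.gcr.io/echoserver:1.4
# Step 3: stop minikube.
sudo minikube stop
# Step 4: change the host IP, e.g. by renewing the DHCP lease (eth0 is a placeholder).
sudo dhclient -r eth0 && sudo dhclient eth0
# Steps 5-6: restart minikube and try to reach the application again.
sudo minikube start --vm-driver=none
kubectl get pods --all-namespaces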

Output of minikube logs (if applicable):

Anything else we need to know:

Labels: area/networking, co/none-driver, help wanted, kind/feature, lifecycle/stale, priority/backlog, 2019q2


All 17 comments

I believe you are supposed to specify a bridge network.
From the README:

NOTE: Minikube also supports a --vm-driver=none option that runs the Kubernetes components on the host and not in a VM. Docker is required to use this driver but no hypervisor. If you use --vm-driver=none, be sure to specify a bridge network for docker. Otherwise it might change between network restarts, causing loss of connectivity to your cluster.

@margh can you provide more details about this?

My daemon.json looks like this:

{
    "bip": "172.17.1.0/16"
}
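
For reference, a minimal sketch of applying that kind of setting, assuming a systemd host; the subnet is a placeholder and should not clash with your local network:

# Pin the Docker bridge so its address survives network restarts.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
    "bip": "172.17.0.1/16"
}
EOF
sudo systemctl restart docker   # the bip change only takes effect after a restart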

I second the above question. I'm struggling to find clear instructions on how to create a bridge network. The closest thing I can find is this, which doesn't really explain anything or solve the problem.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

--vm-driver=none isn't just for hosts; it can also be used on VMs that were already created, which is what I use it for.

Except I'd like to be able to choose which interface or IP the cluster binds to.

Right now I sort of cheat: I use the Docker IP network and route to it from outside the VM. But trying to use Ingress with minikube doesn't work, since it won't let me set the IP I want the cluster to be available on (to connect to the pods within the cluster), and Flannel with minikube doesn't work either (the example YAML fails for some reason).
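
A minimal sketch of that routing cheat, assuming the default Docker bridge subnet; the VM address below is a placeholder:

# On the machine outside the VM, route the Docker bridge subnet via the VM:
sudo ip route add 172.17.0.0/16 via 192.168.122.10   # 192.168.122.10 = the VM's reachable IP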

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Looks like I may have been mistaken about this being fixed.

minikube start --vm-driver=none --apiserver-name=localhost --apiserver-ips=127.0.0.1 --extra-conf=kubelet.node-ip=127.0.0.1

Did you mean --extra-config, not --extra-conf?

I think so. Just fixed it.

@elioengcomp

Maybe you can reuse minikube after the IP changes by running:

export CHANGE_MINIKUBE_NONE_USER=true && minikube update-context && sudo systemctl restart kubelet && sudo minikube start --vm-driver=none --memory=8192 --kubernetes-version=v1.14.3

^ That does not solve the problem, but at least you can keep using minikube without reinstalling it.

I updated this comment, replacing the command with the one I have tested on my machine.

You may have to run minikube stop before running it.
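
Put together as a script, a sketch of the full recovery sequence using the same flags as the command above (the memory and Kubernetes version are just the values tested there):

#!/bin/sh
# Recover a --vm-driver=none cluster after the host IP has changed.
set -e
export CHANGE_MINIKUBE_NONE_USER=true
sudo minikube stop || true      # per the note above; harmless if already stopped
minikube update-context         # repoint kubeconfig at the new IP
sudo systemctl restart kubelet
sudo minikube start --vm-driver=none --memory=8192 --kubernetes-version=v1.14.3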

Thanks for the tip @rodjjo. It has been a while since I last worked on this. I will give it a try when I get the chance.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Closing as there is apparently a workaround.

"Closing as there is apparently a workaround."

And what is that? Can you copy it again here?
