minikube start does not check if it is already running

Created on 24 Mar 2018 · 11 comments · Source: kubernetes/minikube

Environment:

  • Minikube version : _v0.25.2_
  • OS : _Mac OS X 10.13.3_
  • VM Driver : _virtualbox_
  • ISO version : _v0.25.1_

What happened:
When I start minikube with minikube start, I get:

Starting local Kubernetes v1.9.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

And minikube starts, and I can use it perfectly.
But even if I repeat minikube start, I get the same message:

Starting local Kubernetes v1.9.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

What you expected to happen:
I expected to get:

The 'minikube' VM is already running.

Or something else?!

How to reproduce it (as minimally and precisely as possible):
Normal installation, no special configuration or tweaks applied

Output of minikube logs (if applicable):
N/A

Anything else we need to know:
N/A

co/virtualbox good first issue kind/bug triage/obsolete

Most helpful comment

The issue cannot be closed, as it is still present now:
minikube version: v0.28.0

All 11 comments

What does minikube status say?

Running minikube status gives:

minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
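
As a stopgap, the status output above can be parsed before starting. This is a minimal workaround sketch, assuming the v0.25.x minikube status output format shown above (the is_running helper is an illustrative name, not part of minikube):

```shell
# Return success only when the status text reports the VM as Running.
is_running() {
  printf '%s\n' "$1" | grep -q '^minikube: Running$'
}

# Only attempt a start when minikube is installed and not already up.
if command -v minikube >/dev/null 2>&1; then
  status_text="$(minikube status 2>/dev/null || true)"
  if is_running "$status_text"; then
    echo "The 'minikube' VM is already running."
  else
    minikube start
  fi
fi
```

Note this depends on the exact status wording, so it is brittle across minikube versions.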

My bad, I thought there was something wrong with the detection.

But I see that there is nothing in start that checks the status...

Yep, the function already exists in _minishift_:

func ensureNotRunning(client *libmachine.Client, machineName string) {
    // Nothing to do if the VM has not been created yet.
    if !cmdUtil.VMExists(client, machineName) {
        return
    }

    hostVm, err := client.Load(machineName)
    if err != nil {
        atexit.ExitWithMessage(1, err.Error())
    }

    // Exit early with a friendly message when the VM is already up.
    if cmdUtil.IsHostRunning(hostVm.Driver) {
        atexit.ExitWithMessage(0, fmt.Sprintf("The '%s' VM is already running.", machineName))
    }
}

I don't have any golang skills; otherwise, I would do the refactoring to solve the issue.

I don't think any of those utils (cmdUtil) even exist in minikube, so probably minishift only.

The problem with the detection is that it doesn't remember which bootstrapper was used...

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

The issue cannot be closed, as it is still present now:
minikube version: v0.28.0

/remove-lifecycle stale

/assign @ravsa

@ravsa: GitHub didn't allow me to assign the following users: ravsa.

Note that only kubernetes members and repo collaborators can be assigned.
For more information please see the contributor guide

In response to this:

/assign @ravsa

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

There seems to be an implicit behavior expectation that a second 'minikube start' certifies that all the components are up and running, making any changes necessary to do so. I think that behavior is OK.

We however don't hint on the console that this is the case, except by saying that we're 'restarting components'. We can do better than that, I think.

That said, this bug is obsolete - minikube start does in fact check nowadays.
