Minikube: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1

Created on 15 May 2020 · 15 Comments · Source: kubernetes/minikube


Steps to reproduce the issue:

1.
2.
3.


Full output of failed command:

Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command:

Labels: co/docker-driver  kind/support  needs-solution-message  priority/important-soon  top-10-issues  triage/needs-information

Most helpful comment

I performed a cleanup using "minikube delete" and then started using
"minikube start --driver=docker"
it worked

All 15 comments

Thank you for sharing your experience! If you don't mind, could you please provide:

  • The exact command-lines used, so that we may replicate the issue
  • The version of minikube.
  • The driver you are using.
  • The operating system you are using.
  • The output of "minikube logs".

This will help us isolate the problem further. Thank you!

I too am getting the same issue.
  • Restarting existing docker container for "minikube" ...
  • Failed to start docker container. "minikube start" may fix it: provision: get ssh host-port: get port 22 for "minikube": docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
    stdout:

    stderr:
    Template parsing error: template: :1:4: executing "" at : error calling index: index of untyped nil
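A quick way to see what the failing inspect template is operating on is to dump the raw port map yourself; this is an illustrative command, not part of the original report (the -f '{{json ...}}' form is standard docker CLI):

    docker inspect -f '{{json .NetworkSettings.Ports}}' minikube

If the container was created without port 22 published, this prints null or an empty map, which would be consistent with the "index of untyped nil" template error above.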

If someone runs into this, can they please share the output of:

docker inspect minikube

Thank you.

  • The exact command-lines used, so that we may replicate the issue => minikube start --driver='docker'
  • The version of minikube : minikube version: v1.10.1
    commit: 63ab801ac27e5742ae442ce36dff7877dcccb278
  • The driver you are using: docker
  • The operating system you are using: Debian Buster (10.04)
  • The output of "minikube logs" => 🤷 The control plane node must be running for this command. 👉 To fix this, run: "minikube start"

However, the Debian logs display the following:
INFO: ensuring we can execute /bin/mount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: fix cgroup mounts for all subsystems
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: setting iptables to detected mode: legacy
update-alternatives: error: no alternatives for iptables
INFO: ensuring we can execute /bin/mount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: fix cgroup mounts for all subsystems
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: setting iptables to detected mode: legacy
update-alternatives: error: no alternatives for iptables
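These lines look like the kind base-image entrypoint output, so even when "minikube logs" refuses to run because the control plane is down, they can usually be pulled straight from the container (illustrative command, assuming the minikube container still exists):

    docker logs minikube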

If someone runs into this, can they please share the output of:
docker inspect minikube

[
    {
        "Id": "f8e6c1bf16b2a0d351906ab6d7612aabddfd054a7472ca4f2377d06ea699562a",
        "Created": "2020-05-28T20:04:59.482457358Z",
        "Path": "/usr/local/bin/entrypoint",
        "Args": [
            "/sbin/init"
        ],
        "State": {
            "Status": "running",
            "Running": true,
...

So, it works now! What did I do? The only thing I remember (after several docker rm attempts and retrying minikube start) is running minikube delete and then minikube start again!

Was facing the same issue after a fresh minikube install. Unlike @j75, a minikube delete alone did not solve it for me; I did a docker system prune, then minikube delete, then minikube start --driver=docker, and it worked fine.
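For reference, a sketch of that cleanup sequence written out as commands (note that docker system prune also removes stopped containers and dangling images unrelated to minikube, so only run it if that is acceptable):

    docker system prune
    minikube delete
    minikube start --driver=docker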

I performed a cleanup using "minikube delete" and then started using
"minikube start --driver=docker"
it worked

If anyone runs into this, could they please provide the output of:

  • docker inspect minikube
  • docker logs minikube

Still trying to reproduce this issue. I tried seeing if https://github.com/kubernetes/minikube/issues/8203 was related by forcing docker to start in systemd without waiting for containerd, but still wasn't able to repro the error

I have a feeling this might be related to https://github.com/kubernetes/minikube/issues/8179, since the logs from the failed container provided in this comment https://github.com/kubernetes/minikube/issues/8163#issuecomment-635563507 are the same

This is an issue the Cloud Code team has faced too; we should provide a better solution message before exiting, or provide better logs.

I believe this happens when minikube tries to create a container and docker fails; on the second start there is a stuck container left behind, which minikube cannot create on top of.

Currently, if users specify "--delete-on-failure", as in PR https://github.com/kubernetes/minikube/pull/8628, it will fix the problem.

However, we could detect that this is not a recoverable state and just delete it for them, even if they don't specify this flag.
This would require some extra care, or maybe a prompt for the user to confirm the delete (if they don't have any interesting data inside the dead container).
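Until that automatic handling exists, the flag mentioned above can be passed directly; a sketch of the invocation (the flag deletes and recreates the cluster if start fails):

    minikube start --driver=docker --delete-on-failure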

The current work-around:

  • restart docker and ensure it is running
  • minikube delete
  • minikube start
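On a systemd-based host, that work-around might look like the following (assuming Docker is managed by systemd; adjust the service commands for other setups):

    sudo systemctl restart docker
    systemctl is-active docker    # confirm the daemon is running
    minikube delete
    minikube start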

Was facing the same issue after a fresh minikube install. Unlike @j75, a minikube delete alone did not solve it for me; I did a docker system prune, then minikube delete, then minikube start --driver=docker, and it worked fine.

This was the solution for me!! Thanks a lot.

Hey @dadav glad it's working for you now -- could you please provide the output of minikube version ?

dupe of var race condition

