Minikube: Docker: Failed to setup kubeconfig: inspect IP bridge network

Created on 13 May 2020 · 47 comments · Source: kubernetes/minikube

srahmed@hp:~$ minikube start
😄 minikube v1.10.1 on Ubuntu 20.04
✨ Automatically selected the docker driver
👍 Starting control plane node minikube in cluster minikube
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
🐳 Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
E0513 19:32:35.665322 20138 start.go:95] Unable to get host IP: inspect IP bridge network "de7f841a3590\n13c6d9205f9b".: docker inspect --format "{{(index .IPAM.Config 0).Gateway}}" de7f841a3590
13c6d9205f9b: exit status 1
stdout:

stderr:
Error: No such object: de7f841a3590
13c6d9205f9b

💣 failed to start node: startup failed: Failed to setup kubeconfig: inspect IP bridge network "de7f841a3590\n13c6d9205f9b".: docker inspect --format "{{(index .IPAM.Config 0).Gateway}}" de7f841a3590
13c6d9205f9b: exit status 1
stdout:

stderr:
Error: No such object: de7f841a3590
13c6d9205f9b

😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose

Labels: co/docker-driver, kind/bug, priority/important-soon, top-10-issues

Most helpful comment

Temporary workaround:

docker swarm leave --force
docker network prune

to make the node leave the Swarm and clean the unused networks, then minikube starts correctly.

All 47 comments

What docker version are you using?

@afbjorklund Would this be related to having an old docker version?

> What docker version are you using?
>
> @afbjorklund Would this be related to having an old docker version?

Docker version 19.03.8

@srehanpk
the error says that the minikube container doesn't exist anymore.
are you running a system with a low amount of RAM?
do you mind also pasting the output of:

docker ps -a | grep minikube

and also:

docker network ls

does deleting and recreating help?
minikube delete
minikube start --driver=docker


It is worth noting that I just tried it on Ubuntu 20.04 with the same Docker version and I have no issues, and our integration tests run on Ubuntu and Debian too and have never hit this problem. But I really appreciate you filing this issue, so we can find the root cause of why this happens and prevent it for others.

@sharifelgamal: I don't think so, it seems to be more about the weird line break in the ID?

I think it is due to the filter finding two bridge networks, possibly due to #8034

docker network ls --filter name=bridge --format "{{.ID}}"

$ docker network create mybridge
69b9393cfa195860f9ac6d14a11e137de1acd03f83dda0c40ed3ac694f51187a
$ docker network ls --filter name=bridge
NETWORK ID          NAME                DRIVER              SCOPE
6d92bb116996        bridge              bridge              local
69b9393cfa19        mybridge            bridge              local
$ docker network ls --filter name=bridge --format "{{.ID}}"
6d92bb116996
69b9393cfa19
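For illustration, a minimal sketch of what goes wrong, using the IDs from the listing above (the exact IDs will differ per host): the two filtered IDs get joined by a newline into a single argument, which Docker treats as one non-existent network name, whereas inspecting each matched network on its own works fine.

# Both IDs passed as one newline-joined argument: Docker sees a single,
# non-existent network name and fails, just like in the minikube logs.
docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" "6d92bb116996
69b9393cfa19"
# Error: No such network: 6d92bb116996
# 69b9393cfa19

# Inspecting each matched network separately works:
for id in $(docker network ls --filter name=bridge --format "{{.ID}}"); do
  docker network inspect --format "{{.Name}}: {{(index .IPAM.Config 0).Gateway}}" "$id"
done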

srahmed@hp:~$ docker ps -a | grep minikube
srahmed@hp:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
de7f841a3590        bridge              bridge              local
13c6d9205f9b        docker_gwbridge     bridge              local
ee09a402584e        host                host                local
k8rroic09yd5        ingress             overlay             swarm
7c859310e263        none                null                local

I'm on Ubuntu 18.04, Docker 19.03.8.

$ docker network ls --filter name=bridge --format "{{.ID}}"
580ef811af7b
58a53cab81bd

Those two are absolutely normal in a Docker installation: they correspond to the default bridge network plus docker_gwbridge, the default gateway network that lets containers reach the host they run on when the node is part of a Swarm.
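As a quick check, those two IDs can be resolved back to their names (IDs taken from the listing above, so they are specific to this host):

docker network inspect --format "{{.Name}} ({{.Driver}})" 580ef811af7b 58a53cab81bd
# bridge (bridge)
# docker_gwbridge (bridge)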

Starting minikube doesn't work:

minikube start --driver=docker
😄  minikube v1.10.0 on Ubuntu 18.04
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=7900MB) ...
❗  This container is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.18.1 on Docker 19.03.2 ...
E0513 22:14:50.386152    3767 start.go:95] Unable to get host IP: inspect IP bridge network "580ef811af7b\n58a53cab81bd".: docker inspect --format "{{(index .IPAM.Config 0).Gateway}}" 580ef811af7b
58a53cab81bd: exit status 1
stdout:


stderr:
Error: No such object: 580ef811af7b
58a53cab81bd

💣  failed to start node: startup failed: Failed to setup kubeconfig: inspect IP bridge network "580ef811af7b\n58a53cab81bd".: docker inspect --format "{{(index .IPAM.Config 0).Gateway}}" 580ef811af7b
58a53cab81bd: exit status 1
stdout:


stderr:
Error: No such object: 580ef811af7b
58a53cab81bd


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

But the docker container is running:

$ docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS                                                                                                      NAMES
cf03dabc522b        gcr.io/k8s-minikube/kicbase:v0.0.10   "/usr/local/bin/entr…"   54 seconds ago      Up 50 seconds       127.0.0.1:32775->22/tcp, 127.0.0.1:32774->2376/tcp, 127.0.0.1:32773->5000/tcp, 127.0.0.1:32772->8443/tcp   minikube

Even if there is no Kubernetes at all in it:

$ docker top minikube
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                4223                4171                0                   22:14               ?                   00:00:00            /sbin/init
root                4504                4223                0                   22:14               ?                   00:00:01            /usr/local/bin/containerd
root                5267                4223                0                   22:14               ?                   00:00:00            /usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
root                4526                4223                0                   22:14               ?                   00:00:00            /usr/sbin/sshd -D
root                4379                4223                0                   22:14               ?                   00:00:00            /lib/systemd/systemd-journald

> minikube delete
> minikube start --driver=docker

These commands do not work.


srahmed@hp:~$ docker ps -a | grep minikube
8199c44ea6a7 gcr.io/k8s-minikube/kicbase:v0.0.10 "/usr/local/bin/entr…" About a minute ago Up 56 seconds 127.0.0.1:32807->22/tcp, 127.0.0.1:32806->2376/tcp, 127.0.0.1:32805->5000/tcp, 127.0.0.1:32804->8443/tcp minikube

> when the node is part of a Swarm

I think this was the untested bit, I don't think we have any tests with both Swarm and Kubernetes...

Anyway, most likely the code was meant to hit only the default "bridge" network and not docker_gwbridge.
It was just an error, due to the word "bridge" appearing in both network names. The mentioned PR will fix it.
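For reference, hitting only the default bridge doesn't need a filter at all, since it can be addressed by its literal name; a minimal sketch:

# The default bridge network always exists and can be inspected by name:
docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" bridge
# 172.17.0.1   (a typical value; the actual gateway depends on the host)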

Here is some more information about docker_gwbridge:

https://docs.docker.com/network/overlay/

When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:

- an overlay network called `ingress`, which handles control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
- a bridge network called `docker_gwbridge`, which connects the individual Docker daemon to the other daemons participating in the swarm.

I do think that Swarm must have been enabled manually?

It's simply enabled with `docker swarm init` on a host; in my case it's a single-node Swarm.
Having a Swarm on a host means multiple bridge networks are in use anyway.
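For anyone reproducing this, a single-node swarm is enough: after `docker swarm init`, Docker creates docker_gwbridge, and the substring filter on "bridge" then matches two networks (IDs omitted below since they vary per host); a sketch:

docker swarm init
docker network ls --filter name=bridge
# NETWORK ID     NAME              DRIVER    SCOPE
# <id>           bridge            bridge    local
# <id>           docker_gwbridge   bridge    local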

This seems to be a bug; we could do a better job of finding the correct bridge.

@afbjorklund I wonder if we still need to warn the user to disable swarm, even if we detect the bridge correctly?

@medyagh: no idea, I haven't tried to mix Kubernetes and Docker Swarm myself.
But I thought that e.g. Docker Desktop has both of them installed out of the box.

So we probably shouldn't worry about Swarm, but rather get the minikube code working?
The current code breaks the same way if you add "mybridge" or "bridge2" or anything else...

Temporary workaround:

docker swarm leave --force
docker network prune

to make the node leave the Swarm and clean the unused networks, then minikube starts correctly.


Thanks a lot, now minikube is running perfectly.
Thanks to all of you for the support ;)

> docker swarm leave --force
> docker network prune

Finally, this is the solution to my problem. Many thanks again to all of you.

@srehanpk you should not close this issue, minikube should detect the bridge network correctly without forcing the host to leave the Swarm

> minikube should detect the bridge network correctly without forcing the host to leave the Swarm

I agree, this is on the milestone for minikube v1.12.0.

Very good thank you

I think we should just go with something simple for now, like hardcoding "bridge".

Since PR #8034 was closed, I'm counting on @medyagh or @priyawadhwa.

dupe of other one.. in v1.13.0

In case this helps anyone ...

The Lando development environment's network name is 'lando_bridge_network', so you also get this issue - minikube fails to start.

A temporary fix allowing both minikube and Lando to start is to:

# Stop any running Lando containers
lando poweroff

# Delete the Lando network
docker network rm lando_bridge_network

# Start minikube
minikube start

# Recreate the Lando network etc
lando rebuild -y

# * Obviously?? ... You may get conflicts if attempting to use the same ports from multiple containers

> dupe of other one.. in v1.13.0

They can't _both_ be dupes? But ~#8516~ duplicated this one.

> In case this helps anyone ...
>
> The Lando development environment's network name is 'lando_bridge_network', so you also get this issue - minikube fails to start.

Thanks for reporting, same root cause.

Hi @afbjorklund, @medyagh I've run into this issue again. I thought it was already solved by #8034 but it was closed. So just to be sure, is nobody working on fixing this issue? If not, I can give it a go.

@srehanpk on the latest minikube I tried enabling swarm on a Mac and I have no issues; do we still have this issue?

With the latest code, if you try this, it will fail:

docker network create testing_bridge
docker network list
NETWORK ID          NAME                DRIVER              SCOPE
ddb969f50f43        bridge              bridge              local
089b251a4b6a        host                host                local
442c7ccfda68        none                null                local
24546dec8368        testing_bridge      bridge              local
./out/minikube start --memory 2048 --cpus 2 -p testing2
😄  [testing2] minikube v1.12.3 on Ubuntu 20.04
✨  Automatically selected the docker driver. Other choices: kvm2, virtualbox
👍  Starting control plane node testing2 in cluster testing2
🔥  Creating docker container (CPUs=2, Memory=2048MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
E0820 08:17:07.515986  314891 start.go:97] Unable to get host IP: inspect IP bridge network "ddb969f50f43\n24546dec8368".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" ddb969f50f43
24546dec8368: exit status 1
stdout:


stderr:
Error: No such network: ddb969f50f43
24546dec8368

💣  failed to start node: startup failed: Failed to setup kubeconfig: inspect IP bridge network "ddb969f50f43\n24546dec8368".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" ddb969f50f43
24546dec8368: exit status 1
stdout:


stderr:
Error: No such network: ddb969f50f43
24546dec8368


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

I don't think many users will run into this, but the way minikube filters the networks is prone to failure: it picks all networks whose names contain "bridge", joins them into one string with newline characters, and then the inspect fails.
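The same failure can be reproduced straight from a shell by passing the filter output as a single argument, which is roughly what the code ends up doing; a sketch:

# Create an extra network whose name contains "bridge":
docker network create testing_bridge

# Quoting the command substitution keeps the embedded newline, so both IDs
# arrive as one argument and Docker reports "No such network":
docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" \
  "$(docker network ls --filter name=bridge --format '{{.ID}}')"

# Clean up afterwards:
docker network rm testing_bridge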

Here I tried it on Ubuntu as well:

jenkins@instance-1:~$ docker swarm init
Swarm initialized: current node (vik7ym1zk79kxr44egvlldfnv) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token ****

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

jenkins@instance-1:~$ docker info | grep Swa
 Swarm: active
WARNING: No swap limit support
jenkins@instance-1:~$ ./minikube-linux-amd64 start 
😄  minikube v1.12.3 on Ubuntu 20.04
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
💗  Kubectl not found in your path
👉  You can use kubectl inside minikube. For more information, visit https://minikube.sigs.k8s.io/docs/handbook/kubectl/
💡  For best results, install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/

I was looking into this from the perspective of issue #8274 (which I closed in favor of this one because the underlying cause is the same).

Can you replicate this issue, @kadern0?

Yes, I just put the steps to replicate the issue two comments above, using the latest code. (Just create a Docker network whose name contains "bridge" before starting minikube.)

@kadern0 I couldn't reproduce it on a Mac:


medya@~ $ docker network create bridge-1
241bbcbecb087d0a05188c9b910f735ceeff301d3310b816440c77d2cd6d4763
medya@~ $ minikube version
minikube version: v1.12.3
commit: 2243b4b97c131e3244c5f014faedca0d846599f5-dirty
medya@~ $ minikube delete --all
🔥  Deleting "minikube" in docker ...
🔥  Removing /Users/medya/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
🔥  Successfully deleted all profiles
medya@~ $ minikube start
😄  minikube v1.12.3 on Darwin 10.15.6
✨  Automatically selected the docker driver. Other choices: hyperkit, virtualbox
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=2948MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
medya@~ $ docker network create testing_bridge
bdb3812f8ab7d7d5bbcb78dbb9e5336e9456fbaf82d4af118494188b5886e8ac


medya@~ $ minikube start
😄  minikube v1.12.3 on Darwin 10.15.6
✨  Automatically selected the docker driver. Other choices: hyperkit, virtualbox
❗  Your system has 16384MB memory but Docker has only 2996MB. For a better performance increase to at least 3GB.
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=2948MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"


medya@~ $ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
2e2f5269fbdf        bridge              bridge              local
241bbcbecb08        bridge-1            bridge              local
6ceabcdd0b73        docker_gwbridge     bridge              local
7493d3908b3b        host                host                local
cnqffuv1zryx        ingress             overlay             swarm
32dd48b9b3da        minikube            bridge              local
de4265f5379d        none                null                local
bdb3812f8ab7        testing_bridge      bridge              local

$ minikube version
minikube version: v1.12.3
commit: 2243b4b97c131e3244c5f014faedca0d846599f5-dirty

Cleaning up and doing a fresh start:

pcaderno@pcaderno-desktop$ minikube delete --all
🔥  Deleting "minikube" in docker ...
🔥  Removing /home/pcaderno/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
🔥  Successfully deleted all profiles
pcaderno@pcaderno-desktop$ minikube start 
😄  minikube v1.12.3 on Ubuntu 19.10
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=7800MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

All good, now let's break it:

pcaderno@pcaderno-desktop$ minikube stop
✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
🛑  1 nodes stopped.
pcaderno@pcaderno-desktop$ docker network create testing_bridge
8622100c89c5e6ed0f4fb957a9117a3bfe85f53aac5de40ec7e74b4c9af9b8ac






pcaderno@pcaderno-desktop$ minikube start
😄  minikube v1.12.3 on Ubuntu 19.10
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
E0821 09:11:33.527143    4903 start.go:97] Unable to get host IP: inspect IP bridge network "f631e95f4100\n8622100c89c5".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" f631e95f4100
8622100c89c5: exit status 1
stdout:


stderr:
Error: No such network: f631e95f4100
8622100c89c5

💣  failed to start node: startup failed: Failed to setup kubeconfig: inspect IP bridge network "f631e95f4100\n8622100c89c5".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" f631e95f4100
8622100c89c5: exit status 1
stdout:


stderr:
Error: No such network: f631e95f4100
8622100c89c5


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose






pcaderno@pcaderno-desktop$ docker network ls
NETWORK ID          NAME                 DRIVER              SCOPE
f631e95f4100        bridge               bridge              local
meiftgx9pkqf        devdocs-server-net   overlay             swarm
db34ce28a755        host                 host                local
9mxl3y16ro30        ingress              overlay             swarm
e945b60808c9        none                 null                local
8622100c89c5        testing_bridge       bridge              local






pcaderno@pcaderno-desktop$ docker network rm testing_bridge
testing_bridge
pcaderno@pcaderno-desktop$ minikube start
😄  minikube v1.12.3 on Ubuntu 19.10
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🏃  Updating the running docker "minikube" container ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

It works fine again. Do you have a Linux box available for testing?

Hi,

I think I have been hit by this issue as well. I am on the Gentoo distro. I think the docker_gwbridge on my system is created by Docker itself, so I don't think I can just delete it. Is there a workaround/fix for this issue?

NETWORK ID          NAME                DRIVER              SCOPE
83eca32b5df3        bridge              bridge              local
0f6b47c0882d        docker_gwbridge     bridge              local
21d9995cf792        host                host                local
tgv0ut9u3sgw        ingress             overlay             swarm
8ab892bf9aac        none                null                local
Client:
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.14.7
 Git commit:        48a66213fe
 Built:             Sat Aug  8 06:09:21 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.14.7
  Git commit:       48a66213fe
  Built:            Sat Aug  8 06:08:46 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        35bd7a5f69c13e1563af8a93431411cd9ecf5021
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683b971d9c3ef73f284f176672c44b448662
minikube version: v1.11.0
commit: 57e2f55f47effe9ce396cea42a1e0eb4f611ebbd

@davidshen84 the PR that will close this issue is this one -> https://github.com/kubernetes/minikube/pull/9094. If you need a quick temporary fix, you can always use my code from here -> https://github.com/kubernetes/minikube/pull/9062

Thanks Pablo!



Fixed by #9062

I'm going to leave this open until the v1.13 release since so many people are running into it. As a workaround until the release later this week, you should be able to use minikube from HEAD (or delete the conflicting bridge network; see the sketch below):

https://storage.googleapis.com/minikube-builds/9160/minikube-darwin-amd64
https://storage.googleapis.com/minikube-builds/9160/minikube-linux-amd64
https://storage.googleapis.com/minikube-builds/9160/minikube-windows-amd64.exe

If this updated build does not help, please let us know immediately so that we can fix it before the release.
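For the second workaround, the idea is to remove whichever extra network with "bridge" in its name is confusing minikube (shown here with the testing_bridge network from the repro above; a swarm's docker_gwbridge instead needs `docker swarm leave --force`, as noted earlier); a sketch:

# List the networks the faulty filter would match:
docker network ls --filter name=bridge

# Remove the user-created one, then start minikube again:
docker network rm testing_bridge
minikube start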

Upgrading to v1.13.0 fixed the issue for me (was using 1.12.3).

Thanks!

@RoSk0 - thank you for the confirmation that minikube v1.13.0 fixes the issue!

> Upgrading to v1.13.0 fixed the issue for me (was using 1.12.3).
>
> Thanks!

This works!!! I had Docker swarm initialized, and even after I disabled it and deleted the .docker/config.json file, I kept encountering the same error. Upgrading minikube works. Thanks @RoSk0
