Minikube: Minikube fails to start if multiple docker networks' names contain "bridge"

Created on 26 May 2020  ·  7 comments  ·  Source: kubernetes/minikube


Steps to reproduce the issue:

  1. Run
 ./out/minikube start
😄  minikube v1.10.1 on Ubuntu 20.04
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
🐳  Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
E0526 19:22:02.868188   12550 start.go:95] Unable to get host IP: inspect IP bridge network "ba7e0589cc69\n71a6519a81a0".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" ba7e0589cc69
71a6519a81a0: exit status 1
stdout:


stderr:
Error: No such network: ba7e0589cc69
71a6519a81a0

💣  failed to start node: startup failed: Failed to setup kubeconfig: inspect IP bridge network "ba7e0589cc69\n71a6519a81a0".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" ba7e0589cc69
71a6519a81a0: exit status 1
stdout:


stderr:
Error: No such network: ba7e0589cc69
71a6519a81a0


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose


Full output of failed command:

./out/minikube start  --alsologtostderr -v=4
I0526 19:25:50.531776   14404 start.go:98] hostinfo: {"hostname":"pcaderno-ThinkPad-X1-Yoga-4th","uptime":3606,"bootTime":1590481544,"procs":383,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-31-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"b395f527-45ca-4e42-9897-7692ac10a941"}
I0526 19:25:50.532500   14404 start.go:108] virtualization: kvm host
😄  minikube v1.10.1 on Ubuntu 20.04
I0526 19:25:50.534356   14404 notify.go:125] Checking for updates...
I0526 19:25:50.534587   14404 driver.go:253] Setting default libvirt URI to qemu:///system
I0526 19:25:50.586999   14404 docker.go:95] docker version: linux-19.03.8
✨  Using the docker driver based on existing profile
I0526 19:25:50.587854   14404 start.go:214] selected driver: docker
I0526 19:25:50.587863   14404 start.go:594] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:3900 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0526 19:25:50.587957   14404 start.go:600] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0526 19:25:50.587969   14404 start.go:918] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
๐Ÿ‘  Starting control plane node minikube in cluster minikube
I0526 19:25:50.588819   14404 cache.go:105] Beginning downloading kic artifacts for docker with docker
I0526 19:25:50.633155   14404 image.go:88] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
I0526 19:25:50.633192   14404 preload.go:95] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0526 19:25:50.633230   14404 preload.go:103] Found local preload: /home/pcaderno/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0526 19:25:50.633238   14404 cache.go:49] Caching tarball of preloaded images
I0526 19:25:50.633249   14404 preload.go:129] Found /home/pcaderno/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0526 19:25:50.633255   14404 cache.go:52] Finished verifying existence of preloaded tar for  v1.18.2 on docker
I0526 19:25:50.633351   14404 profile.go:156] Saving config to /home/pcaderno/.minikube/profiles/minikube/config.json ...
I0526 19:25:50.633551   14404 cache.go:148] Successfully downloaded all kic artifacts
I0526 19:25:50.633574   14404 start.go:241] acquiring machines lock for minikube: {Name:mkeb9775c0f565ca913af1fa4d4cd4e86e587234 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0526 19:25:50.633718   14404 start.go:245] acquired machines lock for "minikube" in 122.926µs
I0526 19:25:50.633737   14404 start.go:88] Skipping create...Using existing machine configuration
I0526 19:25:50.633746   14404 fix.go:53] fixHost starting: 
I0526 19:25:50.633980   14404 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Status}}
I0526 19:25:50.673036   14404 fix.go:105] recreateIfNeeded on minikube: state=Running err=<nil>
W0526 19:25:50.673072   14404 fix.go:131] unexpected machine state, will restart: <nil>
๐Ÿƒ  Updating the running docker "minikube" container ...
I0526 19:25:50.674978   14404 machine.go:88] provisioning docker machine ...
I0526 19:25:50.674995   14404 ubuntu.go:166] provisioning hostname "minikube"
I0526 19:25:50.675030   14404 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0526 19:25:50.714041   14404 main.go:110] libmachine: Using SSH client type: native
I0526 19:25:50.714196   14404 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf970] 0x7bf940 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0526 19:25:50.714215   14404 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0526 19:25:50.865254   14404 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0526 19:25:50.865312   14404 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0526 19:25:50.899045   14404 main.go:110] libmachine: Using SSH client type: native
I0526 19:25:50.899188   14404 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf970] 0x7bf940 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0526 19:25:50.899211   14404 main.go:110] libmachine: About to run SSH command:

        if ! grep -xq '.*\sminikube' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
            else 
                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
            fi
        fi
I0526 19:25:51.046928   14404 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0526 19:25:51.047022   14404 ubuntu.go:172] set auth options {CertDir:/home/pcaderno/.minikube CaCertPath:/home/pcaderno/.minikube/certs/ca.pem CaPrivateKeyPath:/home/pcaderno/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/pcaderno/.minikube/machines/server.pem ServerKeyPath:/home/pcaderno/.minikube/machines/server-key.pem ClientKeyPath:/home/pcaderno/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/pcaderno/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/pcaderno/.minikube}
I0526 19:25:51.047108   14404 ubuntu.go:174] setting up certificates
I0526 19:25:51.047141   14404 provision.go:82] configureAuth start
I0526 19:25:51.047287   14404 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0526 19:25:51.118437   14404 provision.go:131] copyHostCerts
I0526 19:25:51.118471   14404 vm_assets.go:95] NewFileAsset: /home/pcaderno/.minikube/certs/key.pem -> /home/pcaderno/.minikube/key.pem
I0526 19:25:51.118500   14404 exec_runner.go:91] found /home/pcaderno/.minikube/key.pem, removing ...
I0526 19:25:51.118556   14404 exec_runner.go:98] cp: /home/pcaderno/.minikube/certs/key.pem --> /home/pcaderno/.minikube/key.pem (1679 bytes)
I0526 19:25:51.118618   14404 vm_assets.go:95] NewFileAsset: /home/pcaderno/.minikube/certs/ca.pem -> /home/pcaderno/.minikube/ca.pem
I0526 19:25:51.118636   14404 exec_runner.go:91] found /home/pcaderno/.minikube/ca.pem, removing ...
I0526 19:25:51.118666   14404 exec_runner.go:98] cp: /home/pcaderno/.minikube/certs/ca.pem --> /home/pcaderno/.minikube/ca.pem (1042 bytes)
I0526 19:25:51.118709   14404 vm_assets.go:95] NewFileAsset: /home/pcaderno/.minikube/certs/cert.pem -> /home/pcaderno/.minikube/cert.pem
I0526 19:25:51.118727   14404 exec_runner.go:91] found /home/pcaderno/.minikube/cert.pem, removing ...
I0526 19:25:51.118752   14404 exec_runner.go:98] cp: /home/pcaderno/.minikube/certs/cert.pem --> /home/pcaderno/.minikube/cert.pem (1082 bytes)
I0526 19:25:51.118794   14404 provision.go:105] generating server cert: /home/pcaderno/.minikube/machines/server.pem ca-key=/home/pcaderno/.minikube/certs/ca.pem private-key=/home/pcaderno/.minikube/certs/ca-key.pem org=pcaderno.minikube san=[172.17.0.4 localhost 127.0.0.1]
I0526 19:25:51.246111   14404 provision.go:159] copyRemoteCerts
I0526 19:25:51.246144   14404 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0526 19:25:51.246165   14404 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0526 19:25:51.275187   14404 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/pcaderno/.minikube/machines/minikube/id_rsa Username:docker}
I0526 19:25:51.368075   14404 vm_assets.go:95] NewFileAsset: /home/pcaderno/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0526 19:25:51.368211   14404 ssh_runner.go:215] scp /home/pcaderno/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1042 bytes)
I0526 19:25:51.417528   14404 vm_assets.go:95] NewFileAsset: /home/pcaderno/.minikube/machines/server.pem -> /etc/docker/server.pem
I0526 19:25:51.417669   14404 ssh_runner.go:215] scp /home/pcaderno/.minikube/machines/server.pem --> /etc/docker/server.pem (1123 bytes)
I0526 19:25:51.457306   14404 vm_assets.go:95] NewFileAsset: /home/pcaderno/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0526 19:25:51.457402   14404 ssh_runner.go:215] scp /home/pcaderno/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0526 19:25:51.494063   14404 provision.go:85] duration metric: configureAuth took 446.883409ms
I0526 19:25:51.494106   14404 ubuntu.go:190] setting minikube options for container-runtime
I0526 19:25:51.494420   14404 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0526 19:25:51.545834   14404 main.go:110] libmachine: Using SSH client type: native
I0526 19:25:51.545999   14404 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf970] 0x7bf940 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0526 19:25:51.546014   14404 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0526 19:25:51.677139   14404 main.go:110] libmachine: SSH cmd err, output: <nil>: overlay

I0526 19:25:51.677161   14404 ubuntu.go:71] root file system type: overlay
I0526 19:25:51.677282   14404 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0526 19:25:51.677321   14404 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0526 19:25:51.713557   14404 main.go:110] libmachine: Using SSH client type: native
I0526 19:25:51.713705   14404 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf970] 0x7bf940 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0526 19:25:51.713779   14404 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0526 19:25:51.863317   14404 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0526 19:25:51.863667   14404 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0526 19:25:51.931475   14404 main.go:110] libmachine: Using SSH client type: native
I0526 19:25:51.931617   14404 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf970] 0x7bf940 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0526 19:25:51.931640   14404 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0526 19:25:52.092297   14404 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0526 19:25:52.092380   14404 machine.go:91] provisioned docker machine in 1.417384498s
I0526 19:25:52.092417   14404 start.go:204] post-start starting for "minikube" (driver="docker")
I0526 19:25:52.092450   14404 start.go:214] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0526 19:25:52.092578   14404 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0526 19:25:52.092678   14404 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0526 19:25:52.161768   14404 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/pcaderno/.minikube/machines/minikube/id_rsa Username:docker}
I0526 19:25:52.252482   14404 ssh_runner.go:148] Run: cat /etc/os-release
I0526 19:25:52.259618   14404 main.go:110] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0526 19:25:52.259701   14404 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0526 19:25:52.259757   14404 main.go:110] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0526 19:25:52.259789   14404 info.go:96] Remote host: Ubuntu 19.10
I0526 19:25:52.259832   14404 filesync.go:118] Scanning /home/pcaderno/.minikube/addons for local assets ...
I0526 19:25:52.259963   14404 filesync.go:118] Scanning /home/pcaderno/.minikube/files for local assets ...
I0526 19:25:52.260047   14404 start.go:207] post-start completed in 167.59653ms
I0526 19:25:52.260082   14404 fix.go:55] fixHost completed within 1.626331587s
I0526 19:25:52.260114   14404 start.go:75] releasing machines lock for "minikube", held for 1.626375964s
I0526 19:25:52.260270   14404 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0526 19:25:52.325749   14404 ssh_runner.go:148] Run: systemctl --version
I0526 19:25:52.325801   14404 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0526 19:25:52.325810   14404 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0526 19:25:52.325880   14404 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0526 19:25:52.363645   14404 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/pcaderno/.minikube/machines/minikube/id_rsa Username:docker}
I0526 19:25:52.365695   14404 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/pcaderno/.minikube/machines/minikube/id_rsa Username:docker}
I0526 19:25:52.452567   14404 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0526 19:25:52.478381   14404 cruntime.go:185] skipping containerd shutdown because we are bound to it
I0526 19:25:52.478546   14404 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0526 19:25:52.500626   14404 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0526 19:25:52.621228   14404 ssh_runner.go:148] Run: sudo systemctl start docker
I0526 19:25:52.631316   14404 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
๐Ÿณ  Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
I0526 19:25:52.719827   14404 cli_runner.go:108] Run: docker network ls --filter name=bridge --format {{.ID}}
I0526 19:25:52.788917   14404 cli_runner.go:108] Run: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" ba7e0589cc69
71a6519a81a0
E0526 19:25:52.828300   14404 start.go:95] Unable to get host IP: inspect IP bridge network "ba7e0589cc69\n71a6519a81a0".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" ba7e0589cc69
71a6519a81a0: exit status 1
stdout:


stderr:
Error: No such network: ba7e0589cc69
71a6519a81a0
I0526 19:25:52.828558   14404 exit.go:58] WithError(failed to start node)=startup failed: Failed to setup kubeconfig: inspect IP bridge network "ba7e0589cc69\n71a6519a81a0".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" ba7e0589cc69
71a6519a81a0: exit status 1
stdout:


stderr:
Error: No such network: ba7e0589cc69
71a6519a81a0
 called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0x0)
    /usr/lib/go-1.13/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1adde8f, 0x14, 0x1da1200, 0xc000146220)
    /home/pcaderno/go/src/github.com/kadern0/minikube3/minikube/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2af7100, 0xc0004775e0, 0x0, 0x2)
    /home/pcaderno/go/src/github.com/kadern0/minikube3/minikube/cmd/minikube/cmd/start.go:203 +0x7f7
github.com/spf13/cobra.(*Command).execute(0x2af7100, 0xc0004775c0, 0x2, 0x2, 0x2af7100, 0xc0004775c0)
    /home/pcaderno/go/pkg/mod/github.com/spf13/[email protected]/command.go:846 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x2af6140, 0x0, 0x1, 0xc000257e60)
    /home/pcaderno/go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
    /home/pcaderno/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
    /home/pcaderno/go/src/github.com/kadern0/minikube3/minikube/cmd/minikube/cmd/root.go:112 +0x747
main.main()
    /home/pcaderno/go/src/github.com/kadern0/minikube3/minikube/cmd/minikube/main.go:66 +0xea
W0526 19:25:52.828713   14404 out.go:201] failed to start node: startup failed: Failed to setup kubeconfig: inspect IP bridge network "ba7e0589cc69\n71a6519a81a0".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" ba7e0589cc69
71a6519a81a0: exit status 1
stdout:


stderr:
Error: No such network: ba7e0589cc69
71a6519a81a0

💣  failed to start node: startup failed: Failed to setup kubeconfig: inspect IP bridge network "ba7e0589cc69\n71a6519a81a0".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" ba7e0589cc69
71a6519a81a0: exit status 1
stdout:


stderr:
Error: No such network: ba7e0589cc69
71a6519a81a0


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

I've been looking at the error and then at the code and I've found the cause. These are the docker networks on my system:

docker network ls
NETWORK ID          NAME                                       DRIVER              SCOPE
ba7e0589cc69        bridge                                     bridge              local
71a6519a81a0        docker_gwbridge                            bridge              local

So the problem resides within this function: https://github.com/kubernetes/minikube/blob/ad437c2c9ca930629ab4c7c077a27d94b73ccf2e/pkg/drivers/kic/oci/network.go#L63

If you filter the networks the same way this function does, both networks are returned:

docker network ls --filter name=bridge --format "{{.ID}}"
ba7e0589cc69
71a6519a81a0

This makes the following assignment contain "ba7e0589cc69\n71a6519a81a0", as shown in the error output above:

bridgeID := strings.TrimSpace(rr.Stdout.String())
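
For illustration only, here is a minimal Go sketch of a more defensive lookup. It is not minikube's actual fix (see PR #8034 below), and the helper name bridgeNetworkID is hypothetical. Because --filter name=bridge matches any network whose name merely contains "bridge", the sketch lists IDs together with names and keeps only the network whose name is exactly "bridge":

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// bridgeNetworkID returns the ID of the network whose name is exactly
// "bridge". "docker network ls --filter name=bridge" also matches
// docker_gwbridge and any user network containing "bridge", so the
// exact-name comparison below filters those out.
func bridgeNetworkID() (string, error) {
	out, err := exec.Command("docker", "network", "ls",
		"--filter", "name=bridge", "--format", "{{.ID}} {{.Name}}").Output()
	if err != nil {
		return "", fmt.Errorf("listing docker networks: %w", err)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[1] == "bridge" {
			return fields[0], nil
		}
	}
	return "", fmt.Errorf("no network named exactly \"bridge\" found")
}

func main() {
	id, err := bridgeNetworkID()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// Inspect the gateway the same way minikube does.
	gw, err := exec.Command("docker", "network", "inspect",
		"--format", "{{(index .IPAM.Config 0).Gateway}}", id).Output()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("bridge gateway:", strings.TrimSpace(string(gw)))
}

Matching on the exact name avoids trimming a multi-line stdout into a single string with an embedded newline, which is exactly the malformed ID that docker network inspect rejects in the error output above.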

co/docker-driver kind/bug triage/duplicate

Most helpful comment

I made sure docker swarm was deactivated with: docker swarm leave --force
then did: docker network prune (this removes the docker swarm network since it's not being used, although beware, it will remove all other unused networks too).
After this, minikube ran OK.

All 7 comments

This is a duplicate of #8131, even if it is not the swarm bridge but a user-created bridge.

Should be fixed by PR #8034

Thanks @afbjorklund, it is the same issue.

@kadern0
Cool, I will close this in favor of https://github.com/kubernetes/minikube/issues/8274

I made sure docker swarm was deactivated with: docker swarm leave --force
then did: docker network prune (this removes the docker swarm network since it's not being used, although beware, it will remove all other unused networks too).
After this, minikube ran OK.

@markymo5115 I wonder if you still have this issue with the latest version?

I also just installed minikube and ran into this problem. It happened because my installed docker instance was still in swarm mode. The forced leave of the swarm and the network pruning did the job, though.

A simple solution to the problem (a quick diagnostic check is sketched after the list):

  1. docker swarm leave
  2. docker network prune
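
For anyone who wants to check the swarm state before pruning, here is a small, hypothetical diagnostic sketch in Go (not part of minikube or this thread); it only reads information and changes nothing. It prints whether the Docker daemon is still part of a swarm and lists every network whose name contains "bridge", i.e. the same candidates the failing substring filter in minikube v1.10.x picks up:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// "active" means the daemon is still part of a swarm, so the
	// docker_gwbridge network is likely present; "inactive" means it is not.
	state, err := exec.Command("docker", "info",
		"--format", "{{.Swarm.LocalNodeState}}").Output()
	if err != nil {
		fmt.Println("error querying docker info:", err)
		return
	}
	fmt.Println("swarm state:", strings.TrimSpace(string(state)))

	// Every network whose name merely contains "bridge" -- the same
	// substring match used by the failing minikube lookup.
	nets, err := exec.Command("docker", "network", "ls",
		"--filter", "name=bridge", "--format", "{{.ID}} {{.Name}}").Output()
	if err != nil {
		fmt.Println("error listing networks:", err)
		return
	}
	fmt.Println("networks matching \"bridge\":")
	fmt.Println(strings.TrimSpace(string(nets)))
}

If the swarm state is inactive and only the default bridge network remains after docker network prune, minikube start should no longer hit the "No such network" error, as reported above.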