Steps to reproduce the issue:
ashoknn@ashoknn-yoga:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ashoknn@ashoknn-yoga:~$
❗ 'docker' driver reported an issue: exit status 1
💡 Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
I0404 19:52:48.590447 14510 start.go:1100] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0404 19:52:48.726483 14510 start.go:1004] Using suggested 4000MB memory alloc based on sys=16180MB, container=0MB
I0404 19:52:48.727148 14510 start.go:1210] Wait components to verify : map[apiserver:true system_pods:true]
👍 Starting control plane node m01 in cluster minikube
🚜 Pulling base image ...
I0404 19:52:48.729744 14510 cache.go:104] Beginning downloading kic artifacts
I0404 19:52:48.730338 14510 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0404 19:52:48.730964 14510 cache.go:106] Downloading gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0404 19:52:48.731451 14510 image.go:84] Writing gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0404 19:52:48.731233 14510 preload.go:97] Found local preload: /home/ashoknn/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0404 19:52:48.732895 14510 cache.go:46] Caching tarball of preloaded images
I0404 19:52:48.734157 14510 preload.go:123] Found /home/ashoknn/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0404 19:52:48.734763 14510 cache.go:49] Finished downloading the preloaded tar for v1.18.0 on docker
I0404 19:52:48.737874 14510 profile.go:138] Saving config to /home/ashoknn/.minikube/profiles/minikube/config.json ...
E0404 19:52:49.383329 14510 cache.go:114] Error downloading kic artifacts: error loading image: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Optional: Full output of minikube logs command:
💣 Unable to get machine status: state: "docker inspect -f {{.State.Status}} minikube" failed: exit status 1: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
That is correct: we currently use Docker without respecting the DOCKER_HOST env var (mostly because of the complexity of handling minikube docker-env).
@nnashok do you mind sharing your DOCKER_HOST env ?
Excuse me for my ignorance on Windows and WSL2, but is this a problem that anyone running it on WSL would have (will they all have a custom DOCKER_HOST)? Or do you happen to have a custom DOCKER_HOST because of your own company or settings?
Alternatively, I am curious: is it possible to install docker inside WSL so you don't have to use the Docker daemon outside the WSL VM?
@nnashok do you mind sharing your DOCKER_HOST env ?
It's set to tcp://0.0.0.0:2375, which is standard if you want to access Docker from within a WSL shell.
Excuse me for my ignorance on Windows and WSL2, but is this a problem that anyone running it on WSL would have (will they all have a custom DOCKER_HOST)? Or do you happen to have a custom DOCKER_HOST because of your own company or settings?
This is a standard setting, not specific to myself or my company.
Alternatively, I am curious: is it possible to install docker inside WSL so you don't have to use the Docker daemon outside the WSL VM?
I have not tried to install the Docker daemon within my WSL. With the way I have set this up, I can use Docker for Windows and use the same daemon to run my containers from the Windows cmd or from Ubuntu's bash, both showing the same state (images, containers, etc.).
This is not yet a priority for the minikube project, but if someone is interested in making this work, I suggest starting by commenting out:
It may be possible that things just work if this method was updated to only overwrite DOCKER_HOST if it points to 127.0.0.1.
I'm interested in fixing this. I'll create the PR etc, will have to wait for clearance from my company though.
@nnashok are you still interested in doing this PR ?
@medyagh @nnashok: I'm also very interested in this PR/a fix.
My team and I are currently looking for a way to launch minikube instances on Kubernetes pods using the Docker-in-Docker (DinD) sidecar method. We're having issues getting minikube running in one pod container to communicate with a Docker daemon running in another pod container (and listening at 0.0.0.0:2375).
Seems as though the DOCKER_HOST envvar is not being respected by minikube nor is the --docker-opt[=-H tcp://localhost:2375] cmd-flag.
Note that docker ... commands run fine.
btw - operating on Linux/Fedora32/Centos7
Yes, I'm still interested.
As long as your external docker daemon is able to run privileged containers, it "should" work. The URLs will all be 127.0.0.1, but that should not be an issue as long as everything is running on the same host.
Like Thomas said, probably we should look for DOCKER_HOST=tcp://localhost:2375 as a special case (with variants 127.0.0.1 and 2376) and not wipe the DOCKER_HOST in those cases.
Or we could just leave the variables as they were, if they were not originally set by minikube ?
Currently it wipes all of them - always, which is why it is not respecting the user settings anymore.
```go
// DockerHostEnv is used for docker daemon settings
DockerHostEnv = "DOCKER_HOST"
// DockerCertPathEnv is used for docker daemon settings
DockerCertPathEnv = "DOCKER_CERT_PATH"
// DockerTLSVerifyEnv is used for docker daemon settings
DockerTLSVerifyEnv = "DOCKER_TLS_VERIFY"

// DockerDaemonEnvs is list of docker-daemon related environment variables.
DockerDaemonEnvs = [3]string{DockerHostEnv, DockerTLSVerifyEnv, DockerCertPathEnv}
```
The original idea was to not talk to the docker-in-docker in minikube, but to the host:
```go
// Universally ensure that we never speak to the wrong DOCKER_HOST
if err := oci.PointToHostDockerDaemon(); err != nil {
	glog.Errorf("oci env: %v", err)
}
```
The intent was not to reset the user's docker settings, such as using tcp: rather than the default unix: socket.
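For illustration, here is a rough sketch of the special-casing discussed above. This is not minikube's actual code: userSetDockerHost is a hypothetical helper, and it keeps a DOCKER_HOST that names a standard local daemon endpoint rather than wiping it, only falling through to PointToHostDockerDaemon otherwise.

```go
package main

import (
	"fmt"
	"net/url"
	"os"
)

// userSetDockerHost (hypothetical name) reports whether DOCKER_HOST looks
// like a deliberately configured local daemon (tcp://localhost:2375 and
// the 127.0.0.1 / 0.0.0.0 / :2376 variants) rather than a leftover from
// `minikube docker-env`, which publishes on a random port.
func userSetDockerHost() bool {
	host := os.Getenv("DOCKER_HOST") // DockerHostEnv in the constants above
	if host == "" {
		return false
	}
	u, err := url.Parse(host)
	if err != nil {
		return false
	}
	local := u.Hostname() == "localhost" || u.Hostname() == "127.0.0.1" || u.Hostname() == "0.0.0.0"
	standardPort := u.Port() == "2375" || u.Port() == "2376"
	return local && standardPort
}

func main() {
	fmt.Println("preserve DOCKER_HOST:", userSetDockerHost())
}
```

PointToHostDockerDaemon could then return early when userSetDockerHost() is true, leaving DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH untouched.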
Eventually we will need to fix the scenario where docker / podman is not running on localhost.
I don't really mean remotely, but where the VM has a _different_ IP. Currently we expect tunneling.
That is, if you publish a port in the docker daemon, we expect it to be available from 127.0.0.1, even if the daemon is running in a virtual machine. That holds for Docker Desktop and Kubernetes, just not for Podman: #8003
@0x0I : you might want to open a separate issue, about your minikube in Kubernetes use case.
It would be nice to document it, next to the minikube in CI scenario:
https://minikube.sigs.k8s.io/docs/tutorials/continuous_integration/
PR added: 4 bytes... (moved two brackets) - and some go fmt.
The user will still be disappointed if their docker or podman environment is not running on localhost (like WSL or a pod), but I didn't add any code to validate that they only use e.g. tcp://localhost:2375.
I suppose you _could_ also set up your own tunneling, but it gets a bit tedious since minikube uses random ports. So ultimately it will need to tunnel those ports too, just like it tunnels k8s for docker.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a77bb53b10a3 gcr.io/k8s-minikube/kicbase:v0.0.10 "/usr/local/bin/entr…" About a minute ago Up About a minute 127.0.0.1:32771->22/tcp, 127.0.0.1:32770->2376/tcp, 127.0.0.1:32769->5000/tcp, 127.0.0.1:32768->8443/tcp minikube
But for these scenarios (i.e. WSL and pod), I think it is accessible from "localhost", since there is some kind of networking bridge between them. In the case of WSL2, I guess it's a VM?
https://docs.microsoft.com/en-us/windows/wsl/compare-versions#accessing-network-applications
https://kubernetes.io/docs/concepts/cluster-administration/networking/
I was trying to download the binaries from this PR to test locally but couldn't find a link. Do I need to build it locally? I thought every PR's binaries could be downloaded.
Sounds good and will do @afbjorklund.
@nnashok - Here are the binaries for that PR:
https://storage.googleapis.com/minikube-builds/8164/docker-machine-driver-kvm2-amd64
https://storage.googleapis.com/minikube-builds/8164/e2e-darwin-amd64
https://storage.googleapis.com/minikube-builds/8164/e2e-linux-amd64
https://storage.googleapis.com/minikube-builds/8164/e2e-windows-amd64
https://storage.googleapis.com/minikube-builds/8164/e2e-windows-amd64.exe
https://storage.googleapis.com/minikube-builds/8164/minikube-darwin-amd64
https://storage.googleapis.com/minikube-builds/8164/minikube-darwin-amd64.tar.gz
https://storage.googleapis.com/minikube-builds/8164/minikube-linux-amd64
https://storage.googleapis.com/minikube-builds/8164/minikube-linux-amd64.tar.gz
https://storage.googleapis.com/minikube-builds/8164/minikube-windows-amd64
https://storage.googleapis.com/minikube-builds/8164/minikube-windows-amd64.exe
https://storage.googleapis.com/minikube-builds/8164/minikube-windows-amd64.tar.gz
Generated by gsutil ls -la 'gs://minikube-builds/8164' | grep amd64 | perl -pe 's#gs://#https://storage.googleapis.com/#g' | awk '{ print $3 }' | cut -d"#" -f1
Thanks @tstromberg . I tried to use the minikube-linux-amd64.tar.gz from my Ubuntu (running over WSL) with minikube start --driver=docker (minikube -> minikube-linux-amd64). I didn't know what to do with the docker-machine-driver-kvm2 included in the tarball. Please let me know if the command I used was incorrect.
Observations:
ashoknn@ashoknn-yoga:~/temp/minikube-7420/out$ minikube start --driver=docker
😄 minikube v1.10.1 on Ubuntu 18.04
✨ Using the docker driver based on user configuration
🎉 minikube 1.11.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.11.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
👍 Starting control plane node minikube in cluster minikube
💾 Downloading Kubernetes v1.18.2 preload ...
> preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4: 525.43 MiB
🔥 Creating docker container (CPUs=2, Memory=4000MB) ...
🐳 Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🔎 Verifying Kubernetes components...
❗ Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get https://172.17.0.2:8443/apis/storage.k8s.io/v1/storageclasses: dial tcp 172.17.0.2:8443: connect: connection refused]
🌟 Enabled addons: default-storageclass, storage-provisioner
💣 failed to start node: startup failed: Wait failed: node pressure: list nodes: Get https://172.17.0.2:8443/api/v1/nodes: dial tcp 172.17.0.2:8443: connect: connection refused
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
(base) ashoknn@ashoknn-yoga:~/temp/minikube-7420/out
ashoknn@ashoknn-yoga:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3b23b009d69 gcr.io/k8s-minikube/kicbase:v0.0.10 "/usr/local/bin/entr…" 25 minutes ago Up 25 minutes 127.0.0.1:32775->22/tcp, 127.0.0.1:32774->2376/tcp, 127.0.0.1:32773->5000/tcp, 127.0.0.1:32772->8443/tcp minikube
ashoknn@ashoknn-yoga:~$ curl https://127.0.0.1:32772/apis/storage.k8s.io/v1/storageclasses -k
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "storageclasses.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope",
"reason": "Forbidden",
"details": {
"group": "storage.k8s.io",
"kind": "storageclasses"
},
"code": 403
}
(base) ashoknn@ashoknn-yoga:~$
- The Kubernetes node was created as a container in the Docker daemon running on the host (Docker for Windows).
- However, minikube tries to reach the API server over the internal IP address/port (172.17.0.2:8443) instead of the exposed port (127.0.0.1:32772) shown in the logs above; the correct URL to use would be https://127.0.0.1:32772, as the curl output demonstrates.
Seems like using the 127.0.0.1 IP in WSL would work; the question is, how can we detect we are running in WSL? I have not used WSL myself. When you are in WSL, do you use the minikube Linux binary?
Is there anything we can use to detect that we are in WSL?
related https://github.com/microsoft/WSL/issues/423
```js
const os = require("os");

// heuristic from the WSL issue linked above: WSL kernels report
// "Microsoft" in the OS release string
var isWindowsLikeFilesystem = function () {
    return process.platform === "win32" ||
        (os.release().indexOf("Microsoft") > -1);
};
```
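For comparison, here is a Go sketch of the same heuristic (the function name is made up; minikube would need something along these lines). WSL1 kernels report "Microsoft" in /proc/version while WSL2 kernels report "microsoft-standard", hence the case-insensitive match:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"strings"
)

// isWSL (hypothetical name) reports whether we appear to be running inside
// the Windows Subsystem for Linux, by looking for "microsoft" in the kernel
// version string. WSL1 reports "Microsoft", WSL2 "microsoft-standard".
func isWSL() bool {
	if runtime.GOOS != "linux" {
		return false
	}
	b, err := os.ReadFile("/proc/version")
	if err != nil {
		return false
	}
	return strings.Contains(strings.ToLower(string(b)), "microsoft")
}

func main() {
	fmt.Println("running in WSL:", isWSL())
}
```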
@nnashok
I just made a PR for this. Do you mind trying the binary from this PR to see if it fixes the problem? Here is the link:
http://storage.googleapis.com/minikube-builds/8368/minikube-linux-amd64
It's set to tcp://0.0.0.0:2375 which is standard if you want to access Docker from within a WSL shell.
I should mention that this is not the case for me - my DOCKER_HOST env is empty, and docker runs just fine here on my WSL2 Ubuntu 20.04 setup (same as in https://github.com/kubernetes/minikube/issues/5392#issuecomment-629778839):
About Docker screen (screenshot)
docker version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b7f0
Built: Wed Mar 11 01:25:46 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:29:16 2020
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
$DOCKER_HOST
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
67d87a46bdf5 jaegertracing/all-in-one:1.8 "/go/bin/all-in-one-…" 9 seconds ago Up 8 seconds 0.0.0.0:5775->5775/udp, 0.0.0.0:5778->5778/tcp, 0.0.0.0:9411->9411/tcp, 0.0.0.0:14268->14268/tcp, 0.0.0.0:6831-6832->6831-6832/udp, 0.0.0.0:16686->16686/tcp jaeger
docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
EDIT: More than that, trying to set DOCKER_HOST to the proposed value results in error:
❯ DOCKER_HOST=tcp://0.0.0.0:2375 docker ps
Cannot connect to the Docker daemon at tcp://127.0.0.1:2375. Is the docker daemon running?
Same happens with localhost and 127.0.0.1.
P.S. Also added more details about the setup.
Having said that, I've just tried the Linux binary from https://github.com/kubernetes/minikube/issues/7420#issuecomment-638477659 and it worked!
./minikube-linux-amd64 start --vm-driver=docker --disk-size 30g --memory 3072 --cpus 2 --kubernetes-version v1.14.10 --extra-config=kube-proxy.IPTables.SyncPeriod.Duration=5000000000 --extra-config=kube-proxy.IPTables.MinSyncPeriod.Duration=3000000000
😄 minikube v1.11.0 on Ubuntu 20.04
✨ Using the docker driver based on user configuration
🆕 Kubernetes 1.18.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.18.3
👍 Starting control plane node minikube in cluster minikube
🔥 Creating docker container (CPUs=2, Memory=3072MB) ...
🐳 Preparing Kubernetes v1.14.10 on Docker 19.03.2 ...
▪ kube-proxy.IPTables.SyncPeriod.Duration=5000000000
▪ kube-proxy.IPTables.MinSyncPeriod.Duration=3000000000
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
Having said that, I've just tried the Linux binary from #7420 (comment) and it worked!
Excellent, glad to see that worked for you. It will be included in the next release; we plan to have a beta release in two weeks.
Re-opening because I don't think the fix for supporting an external DOCKER_HOST has been merged yet? It isn't clear whether #8368 and #8164 are complementary or contradictory. Please feel free to close if #8164 is obsolete.
I can confirm that this build _does_ fix the DOCKER_HOST uptake: it connects from my WSL to my Docker Desktop on tcp://localhost:2375.
BUT there may be other issues in the same build, such as the /preloaded.tar directory problem I'm having:
kic.go:137] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v /home/salsa/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: exit status 2
stderr:
tar (child): /preloaded.tar: Cannot read: Is a directory
tar (child): At beginning of tape, quitting now
tar (child): Error is not recoverable: exiting now
/usr/bin/tar: Child returned status 2
/usr/bin/tar: Error is not recoverable: exiting now
Sadly, this did not fix the issue on my setup:
(base) ashoknn@ashoknn-yoga:~/temp/minikube-7420/second/out$ which minikube
/home/ashoknn/temp/minikube-7420/second/out/minikube
(base) ashoknn@ashoknn-yoga:~/temp/minikube-7420/second/out$ minikube start --driver=docker
😄 minikube v1.11.0 on Ubuntu 18.04
✨ Using the docker driver based on existing profile
❗ 'docker' driver reported an issue: "docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
💡 Suggestion: Start the Docker service
📘 Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
💣 Failed to validate 'docker' driver
(base) ashoknn@ashoknn-yoga:~/temp/minikube-7420/second/out$ echo $DOCKER_HOST
tcp://0.0.0.0:2375
(base) ashoknn@ashoknn-yoga:~/temp/minikube-7420/second/out$
I changed DOCKER_HOST to use 127.0.0.1, but same result:
(base) ashoknn@ashoknn-yoga:~/temp/minikube-7420/second/out$ export DOCKER_HOST=tcp://127.0.0.1:2375
(base) ashoknn@ashoknn-yoga:~/temp/minikube-7420/second/out$ minikube start --driver=docker
😄 minikube v1.11.0 on Ubuntu 18.04
✨ Using the docker driver based on existing profile
❗ 'docker' driver reported an issue: "docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
💡 Suggestion: Start the Docker service
📘 Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
💣 Failed to validate 'docker' driver
@dandosi @nezorflame
@0x0I @nnashok
Our latest beta release has this feature:
https://github.com/kubernetes/minikube/releases/tag/v1.12.0-beta.0
Closing as it was fixed.
Why was this closed? I said in https://github.com/kubernetes/minikube/issues/7420#issuecomment-647935937 that it's not fixed, I think? I tried again with the release binaries and they still don't work.
@medyagh @nnashok there must've been some confusion. This is _definitely_ not fixed... DOCKER_HOST is not picked up in this build:
minikube version: v1.12.0-beta.0
commit: 275d827088c304049eb0b042c00fde5706520fec
I forward data from the unix socket to the tcp port, and it works for me:
sudo socat UNIX-LISTEN:/var/run/docker.sock,user=<your_user_name>,fork TCP:127.0.0.1:2375
@relaxgo This works!! I need to run that command in the background, i.e.
sudo socat UNIX-LISTEN:/var/run/docker.sock,user=<your_user_name>,fork TCP:127.0.0.1:2375 &
but after that, I can run minikube start even in my WSL1, which does NOT work with minikube version 1.12.3 without your socket wiring.
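For reference, here is a minimal Go sketch of the same bridge the socat command sets up: listen on /var/run/docker.sock and forward each connection to the TCP daemon at 127.0.0.1:2375. The socket path and daemon address are taken from the command above; this is an illustration, not part of minikube, and it needs the same privileges socat does to create the socket.

```go
package main

import (
	"io"
	"log"
	"net"
	"os"
)

const (
	sockPath = "/var/run/docker.sock" // same socket the socat command listens on
	tcpAddr  = "127.0.0.1:2375"       // the TCP daemon exposed by Docker Desktop
)

func main() {
	os.Remove(sockPath) // clear a stale socket, if any
	l, err := net.Listen("unix", sockPath)
	if err != nil {
		log.Fatalf("listen on %s: %v", sockPath, err)
	}
	defer l.Close()
	for {
		client, err := l.Accept()
		if err != nil {
			log.Printf("accept: %v", err)
			continue
		}
		go forward(client)
	}
}

// forward pipes bytes between the unix-socket client and the TCP daemon
// until either side closes the connection.
func forward(client net.Conn) {
	defer client.Close()
	daemon, err := net.Dial("tcp", tcpAddr)
	if err != nil {
		log.Printf("dial %s: %v", tcpAddr, err)
		return
	}
	defer daemon.Close()
	go io.Copy(daemon, client)
	io.Copy(client, daemon)
}
```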