Steps to reproduce the issue:

Results of the usual commands look normal:
❯ minikube service -n kong kong-kong-proxy
|-----------|-----------------|--------------------|------------------------|
| NAMESPACE |      NAME       |    TARGET PORT     |          URL           |
|-----------|-----------------|--------------------|------------------------|
| kong      | kong-kong-proxy | kong-proxy/80      | http://10.88.0.4:32399 |
|           |                 | kong-proxy-tls/443 | http://10.88.0.4:30415 |
|-----------|-----------------|--------------------|------------------------|
The following returns a good result (from WSL):
curl -i -H "Host: api.gengo.dev" http://10.88.0.4:32399/
minikube dashboard gives a good URL that I can access on localhost.
Some investigation:
sudo nc -l 80 binds to localhost correctly and the browser can hit it.
Am I missing a step?
Thanks!
Looks like my understanding of minikube tunnel was incomplete.
It would be awesome if we could pass a flag that allowed binding directly to localhost on a given port.
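In the meantime, a possible workaround for getting a localhost binding (untested here; it assumes the kong service from above) is kubectl port-forward, which binds to 127.0.0.1 by default:
# forward local port 8080 to port 80 of the kong proxy service
kubectl port-forward -n kong svc/kong-kong-proxy 8080:80
# then, from another shell:
curl -i -H "Host: api.gengo.dev" http://localhost:8080/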
@Zageron we currently don't support WSL, but we have merged a PR at HEAD that might fix this.
Do you mind trying the binary in this comment to see if it fixes podman in WSL?
https://github.com/kubernetes/minikube/issues/7420#issuecomment-638477659
Thanks for letting me know; I will look into this over the weekend.
@Zageron have you had a chance to try podman with that binary?
Sorry, I was too busy this weekend to spend any time on my project, so I haven't had the chance yet.
@Zageron
No worries, please update the issue whenever you have had a chance to try.
Here is the link to the binary:
http://storage.googleapis.com/minikube-builds/8368/minikube-linux-amd64
You can just download that and try start --driver=podman in WSL
(of course with podman installed).
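For reference, roughly like this (assuming amd64, and saving it under a different name so it does not clash with an installed minikube):
# fetch the test build from the PR, make it executable, start with podman
curl -Lo minikube-test http://storage.googleapis.com/minikube-builds/8368/minikube-linux-amd64
chmod +x minikube-test
./minikube-test start --driver=podman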
❯ minikube delete
🔥 Deleting "minikube" in podman ...
🔥 Deleting container "minikube" ...
🔥 Removing /home/zageron/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
❯ minikube start --vm-driver=podman
😄 minikube v1.11.0 on Ubuntu 20.04
▪ MINIKUBE_ACTIVE_DOCKERD=minikube
▪ MINIKUBE_ACTIVE_PODMAN=minikube
✨ Using the podman (experimental) driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating podman container (CPUs=2, Memory=12800MB) ...
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
❯ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   58s
❯ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
❯ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
deployment.apps/hello-minikube created
❯ kubectl expose deployment hello-minikube --type=NodePort --port=8080
service/hello-minikube exposed
❯ kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
hello-minikube-64b64df8c9-7fbtw   1/1     Running   0          13s
❯ minikube service hello-minikube --url
💣 error getting ssh port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Error: No such container: minikube
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
Likewise:
❯ minikube tunnel
💣 error getting ssh port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Error: No such container: minikube
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
❯ docker container list
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS          PORTS                NAMES
46906fc0e2fd   docker/getting-started   "nginx -g 'daemon of…"   57 seconds ago   Up 56 seconds   0.0.0.0:80->80/tcp   wizardly_almeida
fa297cd575dd   k8s.gcr.io/echoserver    "/usr/local/bin/run.…"   11 minutes ago   Up 10 minutes                        k8s_echoserver_hello-minikube-64b64df8c9-7fbtw_default_f667cbf4-7265-452f-870a-631f61b37aa7_0
a6e942199e2a   k8s.gcr.io/pause:3.2     "/pause"                 11 minutes ago   Up 11 minutes                        k8s_POD_hello-minikube-64b64df8c9-7fbtw_default_f667cbf4-7265-452f-870a-631f61b37aa7_0
aaaca3f57e27   67da37a9a360             "/coredns -conf /etc…"   12 minutes ago   Up 12 minutes                        k8s_coredns_coredns-66bff467f8-cn2vw_kube-system_4f691f05-0980-4c15-8c08-e58366ccaac3_0
1f0ba6664c27   67da37a9a360             "/coredns -conf /etc…"   12 minutes ago   Up 12 minutes                        k8s_coredns_coredns-66bff467f8-nh8hc_kube-system_e7ba95ba-88ed-416c-830d-e31de9b899a9_0
86e9436949f6   k8s.gcr.io/pause:3.2     "/pause"                 12 minutes ago   Up 12 minutes                        k8s_POD_coredns-66bff467f8-nh8hc_kube-system_e7ba95ba-88ed-416c-830d-e31de9b899a9_0
5174d9efb381   k8s.gcr.io/pause:3.2     "/pause"                 12 minutes ago   Up 12 minutes                        k8s_POD_coredns-66bff467f8-cn2vw_kube-system_4f691f05-0980-4c15-8c08-e58366ccaac3_0
5145a35dfd6f   4689081edb10             "/storage-provisioner"   12 minutes ago   Up 12 minutes                        k8s_storage-provisioner_storage-provisioner_kube-system_c0f5af67-60cb-4b13-abdb-c5c5d70d50a3_0
7577eadf93f7   k8s.gcr.io/pause:3.2     "/pause"                 12 minutes ago   Up 12 minutes                        k8s_POD_storage-provisioner_kube-system_c0f5af67-60cb-4b13-abdb-c5c5d70d50a3_0
ce70c650f2d6   3439b7546f29             "/usr/local/bin/kube…"   12 minutes ago   Up 12 minutes                        k8s_kube-proxy_kube-proxy-jq9dx_kube-system_26fc13e2-3dd1-476d-9679-65b4f188dfda_0
eee7838bdd9d   k8s.gcr.io/pause:3.2     "/pause"                 12 minutes ago   Up 12 minutes                        k8s_POD_kube-proxy-jq9dx_kube-system_26fc13e2-3dd1-476d-9679-65b4f188dfda_0
1e414db5e54d   76216c34ed0c             "kube-scheduler --au…"   12 minutes ago   Up 12 minutes                        k8s_kube-scheduler_kube-scheduler-minikube_kube-system_a8caea92c80c24c844216eb1d68fe417_0
fb262454906d   da26705ccb4b             "kube-controller-man…"   12 minutes ago   Up 12 minutes                        k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_a1a35f05a6e88c027f2a8abd67bbe285_0
1eed7714cf74   7e28efa976bd             "kube-apiserver --ad…"   12 minutes ago   Up 12 minutes                        k8s_kube-apiserver_kube-apiserver-minikube_kube-system_7f245b66b80274be3d0f17c93889f4a2_0
2316bb245853   303ce5db0e90             "etcd --advertise-cl…"   12 minutes ago   Up 12 minutes                        k8s_etcd_etcd-minikube_kube-system_e9f8ae67bee0b523b784959777806875_0
791bf7401c0b   k8s.gcr.io/pause:3.2     "/pause"                 12 minutes ago   Up 12 minutes                        k8s_POD_kube-scheduler-minikube_kube-system_a8caea92c80c24c844216eb1d68fe417_0
c9b53dce3d1e   k8s.gcr.io/pause:3.2     "/pause"                 12 minutes ago   Up 12 minutes                        k8s_POD_kube-controller-manager-minikube_kube-system_a1a35f05a6e88c027f2a8abd67bbe285_0
1eeaf3afbe89   k8s.gcr.io/pause:3.2     "/pause"                 12 minutes ago   Up 12 minutes                        k8s_POD_kube-apiserver-minikube_kube-system_7f245b66b80274be3d0f17c93889f4a2_0
8efb05ad582f   k8s.gcr.io/pause:3.2     "/pause"                 12 minutes ago   Up 12 minutes                        k8s_POD_etcd-minikube_kube-system_e9f8ae67bee0b523b784959777806875_0
So it seems like start works, but when you then run the service command it is executing "docker" instead of podman:
docker container inspect -f
For the podman driver it should not say docker container inspect -f ....
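As a sanity check (just a suggestion; minikube's podman driver runs podman via sudo), you could ask podman directly whether the container exists and what host port its ssh port 22 maps to:
# does podman see the minikube container at all?
sudo podman ps --filter name=minikube
# which host port is mapped to the container's port 22?
sudo podman port minikube 22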
Hmm... could you please do a fresh start and try again:
minikube delete --all
minikube start --driver=podman
(with the binary in that PR)
Seems to work fine if I just use
minikube start --driver=docker
I don't recall why I ended up having to use podman...
(Same behaviour with your requested steps; I validated that I am using the correct build as well.)
Not sure what's broken, though.
❯ skaffold dev
creating runner: creating builder: getting docker client: unable to parse minikube docker-env keyvalue: [# To point your shell to minikube's docker-daemon, run:], line: # To point your shell to minikube's docker-daemon, run:, output: DOCKER_TLS_VERIFY=1
DOCKER_HOST=tcp://127.0.0.1:32770
DOCKER_CERT_PATH=/home/zageron/.minikube/certs
MINIKUBE_ACTIVE_DOCKERD=minikube
# To point your shell to minikube's docker-daemon, run:
# eval $(minikube -p minikube docker-env)
❯ eval $(minikube -p minikube docker-env)
❯ skaffold dev
creating runner: creating builder: getting docker client: unable to parse minikube docker-env keyvalue: [# To point your shell to minikube's docker-daemon, run:], line: # To point your shell to minikube's docker-daemon, run:, output: DOCKER_TLS_VERIFY=1
DOCKER_HOST=tcp://127.0.0.1:32770
DOCKER_CERT_PATH=/home/zageron/.minikube/certs
MINIKUBE_ACTIVE_DOCKERD=minikube
# To point your shell to minikube's docker-daemon, run:
# eval $(minikube -p minikube docker-env)
Which is identical behaviour to podman.
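A possible workaround sketch (untested; the values are just copied from the output above, the port can change between starts, and skaffold may still invoke docker-env itself) is to export the variables by hand instead of relying on the docker-env parsing:
# point the shell at minikube's docker daemon manually
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://127.0.0.1:32770
export DOCKER_CERT_PATH=/home/zageron/.minikube/certs
export MINIKUBE_ACTIVE_DOCKERD=minikube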
So to summarize, WSL works fine now?
Yes, I'll summarize my steps.
## Shell 1
minikube delete --all
minikube start --driver=docker
kubectl create namespace kong
helm install kong kong/kong --namespace kong --set ingressController.installCRDs=false
## Shell 2
minikube tunnel
## Shell 1
HOST=$(kubectl get svc --namespace kong kong-kong-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
PORT=$(kubectl get svc --namespace kong kong-kong-proxy -o jsonpath='{.spec.ports[0].port}')
export PROXY_IP=${HOST}:${PORT}
curl $PROXY_IP #Success
On the local system, going to localhost returns:
"no Route matched with those values"
Success!
Thanks for your assistance!
I think the main problem was that there were still a few hard-coded "docker" references left in KIC...
cmd/minikube/cmd/service.go: port, err := oci.ForwardedPort(oci.Docker, configName, 22)
cmd/minikube/cmd/tunnel.go: port, err := oci.ForwardedPort(oci.Docker, cname, 22)
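A quick way to audit for any remaining ones (assuming a local checkout of the minikube repo):
# list every place the command layer still hard-codes the docker runtime
grep -rn 'oci\.Docker' cmd/minikube/cmd/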