What happened:
port-forwarding to a Kind cluster doesn't appear to work after 1.18.2 (works OK on 1.18.0)
How to reproduce it (as minimally and precisely as possible):
kind create cluster --image=kindest/node:v1.18.2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-html
  labels:
    app: example-html
spec:
  selector:
    matchLabels:
      app: example-html
  template:
    metadata:
      labels:
        app: example-html
    spec:
      containers:
      - name: example-html
        image: busybox
        command: ["busybox", "httpd", "-f", "-p", "8000"]
        ports:
        - containerPort: 8000
Wait until the pod is ready
Run
kubectl port-forward deployment/example-html 8000:8000
Expected result:
a 404 response
Actual result:
Connection hangs indefinitely
In the kubectl port-forward terminal, I see:
E0710 19:56:07.104125 2250228 portforward.go:400] an error occurred forwarding 8000 -> 8000: error forwarding port 8000 to pod 7e6be7fb9a3b95c5c80f206f931b339d955f4fd9423a393f7ee072c477d9f370, uid : failed to execute portforward in network namespace "/var/run/netns/cni-1b6beba4-efb4-40fc-741e-fcce3506a3d3": socat command returns error: exit status 1, stderr: "2020/07/10 23:56:07 socat[1814] E connect(5, AF=2 92.242.140.21:8000, 16): Connection timed out\n"
Anything else we need to know?:
If I start Kind with kind create cluster --image=kindest/node:v1.18.0, port-forwarding works fine
I am so curious what the problem is here, and would love tips on how to debug this further. I've been banging my head against the wall for a while.
Environment:
kind version: kind v0.8.1 go1.14.3 linux/amd64
kubectl version:
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-07-07T14:04:52Z", GoVersion:"go1.13.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Docker version (from docker info):
Server Version: 19.03.11
OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="20.04 LTS (Focal Fossa)"
First, a reminder that using arbitrary images like this is NOT supported. If you're consuming the images we build, please check the release notes for the exact supported images for your release. We do wind up pushing new images later for other releases.
https://github.com/kubernetes-sigs/kind/releases has the images with digests for each kind version.
@aojea this smells like something with gocat https://github.com/containerd/cri/pull/1470
yes, sorry!! To be clear, port-forwarding is broken with the default image. I was using the --image= flag to try to diagnose why it's broken, didn't mean to imply that I expected this to be a good way to operate kind :joy:
i need to check when we picked up that change; the socat mechanism previously had some issues downstream (and upstream). The change linked above should have resolved those, but i think it may not be in the 0.8.1 images.
I'm a bit surprised this is the first we've heard of this breaking on the default.
can you test at HEAD? we have an as-of-yet unadvertised binary build used for kubernetes CI, if that helps: https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64
I wonder if the gocat change solved this for us :crossed_fingers:
(sorry for the delay, off looking into https://github.com/kubernetes/kubernetes/issues/92937)
nope :\
Tried it with kind v0.9.0-alpha+aa1758fefe4384 go1.15beta1 linux/amd64
Now I see:
E0710 22:17:44.256714 2813422 portforward.go:400] an error occurred forwarding 8000 -> 8000: error forwarding port 8000 to pod a5a58fd29dc401dfd137eb3bb07b43eef12f0b8834656d26bafd9aa6ccd2660c, uid : failed to execute portforward in network namespace "/var/run/netns/cni-43788b14-632a-bca7-00f1-184de8474b49": failed to dial 8000: dial tcp4 92.242.140.21:8000: connect: connection timed out
(also, totally not urgent, good luck fixing all kubernetes PRs :grimacing:)
This is weird because there is an e2e covering this.
@nicks how is it trying to dial to a public ip? 92.242.140.21:8000
Is there something "special" in your KIND environment?
@aojea I have no idea why it's doing that! Is there a diagnostic tool I can run that would help figure out the significance of that IP? Maybe it's the IP it thinks the pod is at for some reason?
The only thing "special" I can think of is that I installed Ubuntu 20.04 just a month or two ago.
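One quick diagnostic (a minimal sketch, an editor's suggestion rather than a tool from the thread): ask the OS resolver what `localhost` maps to. `socket.getaddrinfo` goes through the same NSS path most programs use, so a non-loopback answer here would point at broken name resolution rather than at kind itself.

```python
import socket

# Resolve "localhost" the way ordinary client programs do (via NSS,
# honoring the /etc/nsswitch.conf source order). On a healthy host this
# yields only loopback addresses; a public IP here means resolution is
# being hijacked somewhere.
addrs = {info[4][0] for info in socket.getaddrinfo("localhost", 8000)}
loopback = {"127.0.0.1", "::1"}

if addrs <= loopback:
    print("localhost resolves to loopback only:", sorted(addrs))
else:
    print("suspicious non-loopback answer:", sorted(addrs - loopback))
```

Run inside the kind node (e.g. via `docker exec`) this would have surfaced the bogus address immediately.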
can you paste the output of kubectl get nodes -o wide and kubectl get pods -A -o wide ?
$ kubectl get nodes -o wide && kubectl get pods -A -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-control-plane Ready master 63m v1.18.2 172.22.0.2 <none> Ubuntu 19.10 5.4.0-40-generic containerd://1.3.3-14-g449e9269
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default example-html-b9fdd4cb5-g7pcm 1/1 Running 0 61m 10.244.0.5 kind-control-plane <none> <none>
kube-system coredns-66bff467f8-2h666 1/1 Running 0 62m 10.244.0.2 kind-control-plane <none> <none>
kube-system coredns-66bff467f8-xnfn5 1/1 Running 0 62m 10.244.0.4 kind-control-plane <none> <none>
kube-system etcd-kind-control-plane 1/1 Running 0 62m 172.22.0.2 kind-control-plane <none> <none>
kube-system kindnet-79sdv 1/1 Running 0 62m 172.22.0.2 kind-control-plane <none> <none>
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 62m 172.22.0.2 kind-control-plane <none> <none>
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 62m 172.22.0.2 kind-control-plane <none> <none>
kube-system kube-proxy-98s6q 1/1 Running 0 62m 172.22.0.2 kind-control-plane <none> <none>
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 62m 172.22.0.2 kind-control-plane <none> <none>
local-path-storage local-path-provisioner-bd4bb6b75-w4wgw 1/1 Running 0 62m 10.244.0.3 kind-control-plane <none> <none>
it seems you are using an old node image, KIND 0.8.1 uses ubuntu 20.04 as base
https://github.com/kubernetes-sigs/kind/commit/6548f19dfabd09e93a69cdcd387c6cb37715f98e
I don't see the relation, but it's better to have a consistent state before continuing to investigate. Can you update your kind image manually and recreate the cluster?
docker pull kindest/node:v1.18.2@sha256:7b27a6d0f2517ff88ba444025beae41491b016bc6af573ba467b70c5e8e0d85f
Hmmmm...that doesn't fix the issue:
re: "KIND 0.8.1 uses ubuntu 20.04 as base"
are you sure? that's not what I'm seeing.
nick@dopey:~/src/scratch$ kind version
kind v0.8.1 go1.14.3 linux/amd64
nick@dopey:~/src/scratch$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-control-plane Ready master 7m27s v1.18.2 172.22.0.2 <none> Ubuntu 19.10 5.4.0-40-generic containerd://1.3.3-14-g449e9269
nick@dopey:~/src/scratch$ docker inspect /kind-control-plane | grep kindest
"Image": "kindest/node:v1.18.2@sha256:7b27a6d0f2517ff88ba444025beae41491b016bc6af573ba467b70c5e8e0d85f",
are you sure? that's not what I'm seeing.
oops, I was using kind from master, never mind. Let's get more verbose output from the forwarding command:
kubectl port-forward deployment/example-html 8000:8000 -v7
$ kubectl port-forward deployment/example-html 8000:8000 -v7
I0712 17:41:11.588292 3994501 loader.go:375] Config loaded from file: /home/nick/.kube/config
I0712 17:41:11.588827 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/api?timeout=32s
I0712 17:41:11.588835 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.588840 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.588862 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.594893 3994501 round_trippers.go:446] Response Status: 200 OK in 6 milliseconds
I0712 17:41:11.596875 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis?timeout=32s
I0712 17:41:11.596886 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.596890 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.596893 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.597672 3994501 round_trippers.go:446] Response Status: 200 OK in 0 milliseconds
I0712 17:41:11.600223 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/authorization.k8s.io/v1?timeout=32s
I0712 17:41:11.600243 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600253 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600261 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600238 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/api/v1?timeout=32s
I0712 17:41:11.600280 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/discovery.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.600291 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600301 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600309 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600313 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/apiregistration.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.600328 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600335 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/extensions/v1beta1?timeout=32s
I0712 17:41:11.600336 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600353 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600283 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600377 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600387 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600396 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/authentication.k8s.io/v1?timeout=32s
I0712 17:41:11.600494 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600501 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/apiregistration.k8s.io/v1?timeout=32s
I0712 17:41:11.600520 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600527 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600537 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600539 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/storage.k8s.io/v1?timeout=32s
I0712 17:41:11.600552 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600564 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600573 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600522 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.600597 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600599 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/authentication.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.600644 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/autoscaling/v1?timeout=32s
I0712 17:41:11.600646 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600662 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600670 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600687 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/storage.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.600696 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600707 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/autoscaling/v2beta1?timeout=32s
I0712 17:41:11.600720 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600728 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600654 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600748 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600754 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600348 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600762 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600768 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600778 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600786 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/admissionregistration.k8s.io/v1?timeout=32s
I0712 17:41:11.600798 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600804 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600604 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600818 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600274 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/apps/v1?timeout=32s
I0712 17:41:11.600843 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600853 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600861 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600865 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/autoscaling/v2beta2?timeout=32s
I0712 17:41:11.600874 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600702 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600875 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.600887 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600588 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/authorization.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.600919 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600926 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600930 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/apiextensions.k8s.io/v1?timeout=32s
I0712 17:41:11.600891 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.600950 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600963 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600994 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/rbac.authorization.k8s.io/v1?timeout=32s
I0712 17:41:11.601010 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.601013 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/apiextensions.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.601025 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.601031 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.600932 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.601037 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600504 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.601071 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/batch/v1beta1?timeout=32s
I0712 17:41:11.601011 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/policy/v1beta1?timeout=32s
I0712 17:41:11.601099 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.601108 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.601116 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.601151 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/networking.k8s.io/v1?timeout=32s
I0712 17:41:11.601161 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/networking.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.601197 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.601210 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.601225 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/node.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.601225 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.601210 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/scheduling.k8s.io/v1?timeout=32s
I0712 17:41:11.601252 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.601261 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.601265 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/scheduling.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.601283 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.601294 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.601305 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.601152 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/coordination.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.601350 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.601954 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.601990 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.601236 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.602091 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.602106 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.601269 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.601018 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.602243 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.601076 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600881 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.602382 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600943 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.602456 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.602477 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600943 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/batch/v1?timeout=32s
I0712 17:41:11.602544 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.602556 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.602566 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600810 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.600702 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/events.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.602691 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.602702 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.602717 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.601090 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.602785 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.602793 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.601164 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.602862 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.602870 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.601229 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/certificates.k8s.io/v1beta1?timeout=32s
I0712 17:41:11.602931 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.602945 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.602953 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.601340 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/coordination.k8s.io/v1?timeout=32s
I0712 17:41:11.603630 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.603641 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.603650 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.603724 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.603749 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.603772 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.603773 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.604109 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.604136 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.604244 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.604283 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.604303 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.604313 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.604418 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.604435 3994501 round_trippers.go:446] Response Status: 200 OK in 4 milliseconds
I0712 17:41:11.604443 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.604460 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.604543 3994501 round_trippers.go:446] Response Status: 200 OK in 4 milliseconds
I0712 17:41:11.604550 3994501 round_trippers.go:446] Response Status: 200 OK in 1 milliseconds
I0712 17:41:11.604553 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.604564 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.604852 3994501 round_trippers.go:446] Response Status: 200 OK in 2 milliseconds
I0712 17:41:11.604894 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.604917 3994501 round_trippers.go:446] Response Status: 200 OK in 2 milliseconds
I0712 17:41:11.604901 3994501 round_trippers.go:446] Response Status: 200 OK in 2 milliseconds
I0712 17:41:11.604996 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.605191 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.605302 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.606058 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.606093 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.606122 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.606150 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.606176 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.606199 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.606258 3994501 round_trippers.go:446] Response Status: 200 OK in 2 milliseconds
I0712 17:41:11.606329 3994501 round_trippers.go:446] Response Status: 200 OK in 3 milliseconds
I0712 17:41:11.658001 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/apis/apps/v1/namespaces/default/deployments/example-html
I0712 17:41:11.658016 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.658021 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.658025 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.659729 3994501 round_trippers.go:446] Response Status: 200 OK in 1 milliseconds
I0712 17:41:11.666235 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/api/v1/namespaces/default/pods?labelSelector=app%3Dexample-html
I0712 17:41:11.666253 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.666260 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.666266 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.668103 3994501 round_trippers.go:446] Response Status: 200 OK in 1 milliseconds
I0712 17:41:11.673459 3994501 round_trippers.go:420] GET https://127.0.0.1:35045/api/v1/namespaces/default/pods/example-html-b9fdd4cb5-sd4zl
I0712 17:41:11.673477 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.673484 3994501 round_trippers.go:431] Accept: application/json, */*
I0712 17:41:11.673489 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.675508 3994501 round_trippers.go:446] Response Status: 200 OK in 2 milliseconds
I0712 17:41:11.682912 3994501 round_trippers.go:420] POST https://127.0.0.1:35045/api/v1/namespaces/default/pods/example-html-b9fdd4cb5-sd4zl/portforward
I0712 17:41:11.682940 3994501 round_trippers.go:427] Request Headers:
I0712 17:41:11.682978 3994501 round_trippers.go:431] User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8
I0712 17:41:11.682994 3994501 round_trippers.go:431] X-Stream-Protocol-Version: portforward.k8s.io
I0712 17:41:11.710796 3994501 round_trippers.go:446] Response Status: 101 Switching Protocols in 27 milliseconds
Forwarding from 127.0.0.1:8000 -> 8000
Forwarding from [::1]:8000 -> 8000
Handling connection for 8000
Handling connection for 8000
E0712 17:43:33.374304 3994501 portforward.go:400] an error occurred forwarding 8000 -> 8000: error forwarding port 8000 to pod 014a29434c9b86ff4cf321ca7d3c05b7fa01b11671930245d39d7455426ad6f6, uid : failed to execute portforward in network namespace "/var/run/netns/cni-b5b4448b-2ad7-5d44-52ab-7139b0cc7fdc": socat command returns error: exit status 1, stderr: "2020/07/12 21:43:33 socat[9317] E connect(5, AF=2 92.242.140.21:8000, 16): Connection timed out\n"
E0712 17:43:33.376165 3994501 portforward.go:400] an error occurred forwarding 8000 -> 8000: error forwarding port 8000 to pod 014a29434c9b86ff4cf321ca7d3c05b7fa01b11671930245d39d7455426ad6f6, uid : failed to execute portforward in network namespace "/var/run/netns/cni-b5b4448b-2ad7-5d44-52ab-7139b0cc7fdc": socat command returns error: exit status 1, stderr: "2020/07/12 21:43:33 socat[9318] E connect(5, AF=2 92.242.140.21:8000, 16): Connection timed out\n"
Handling connection for 8000
TIL :/
Question about that IP https://askubuntu.com/q/587895
and its answer https://askubuntu.com/a/587954
hahahahaha
As an experiment, I enabled airplane mode on my laptop, created a kind cluster, disabled airplane mode, then ran the kubectl commands to create the deployment and port-forward.
That fixed the problem :grimacing:
So I guess something is caching the DNS lookup?
OK, I tcpdump'd kind on startup, and my current theory is that this is the problem:
https://github.com/kubernetes-sigs/kind/blob/0c6a477b744e07a108e9889acc8d975a2e46c388/images/base/files/usr/local/bin/entrypoint#L175
It's doing a dnslookup on host.docker.internal, and if it resolves to an IP, it assumes that must be docker for mac. But that's not a safe assumption on all environments
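The failure mode of that heuristic is easy to demonstrate in isolation. A minimal sketch (hypothetical function names; the real entrypoint is a shell script): "if the name resolves, assume Docker Desktop" gives a false positive whenever the upstream resolver answers for every name, as NXDOMAIN-hijacking ISP resolvers do.

```python
def looks_like_docker_desktop(resolve):
    """Mimic the entrypoint heuristic: if host.docker.internal resolves
    to an IP, assume a Docker Desktop environment.
    `resolve` maps hostname -> IP, raising KeyError for unknown names."""
    try:
        resolve("host.docker.internal")
        return True
    except KeyError:
        return False

# A well-behaved resolver: NXDOMAIN (KeyError) for names it doesn't know.
honest = {"kind-control-plane": "172.22.0.2"}

# A wildcard resolver (e.g. ISP NXDOMAIN hijacking): every name "exists".
def wildcard(name):
    return "92.242.140.21"

assert looks_like_docker_desktop(lambda n: honest[n]) is False
assert looks_like_docker_desktop(wildcard) is True  # false positive
```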
It's doing a dnslookup on host.docker.internal, and if it exists, it assumes that must be docker for mac.
I don't think that's the problem; that would break Docker on Windows and Docker on Mac :/, and it seems that only port-forwarding is broken in your cluster.
https://docs.docker.com/docker-for-windows/networking/#use-cases-and-workarounds
I think the problem has to be in the port-forwarding area; it is the socat command that is trying to connect to the weird IP address:
": socat command returns error: exit status 1, stderr: "2020/07/12 21:43:33 socat[9317] E connect(5, AF=2 92.242.140.21:8000, 16)
I've replaced the socat code recently in containerd, but if you look at the code, it always tries to connect to localhost, so my theory is that localhost is resolving to 92.242.140.21.
does it make sense?
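The theory can be stated concretely: if the forwarder dials the name `localhost`, it inherits whatever name resolution returns; dialing a literal loopback IP involves no lookup at all. A minimal sketch of that distinction (a hypothetical helper, not the containerd code):

```python
import ipaddress

def dial_target(host, port):
    """Classify the address a forwarder would connect to. A literal IP
    is used verbatim (no DNS/NSS lookup); a hostname must be resolved,
    and that resolution can be hijacked."""
    try:
        ipaddress.ip_address(host)
        return (host, port, "literal IP, no lookup")
    except ValueError:
        return (host, port, "hostname, resolved via NSS")

assert dial_target("127.0.0.1", 8000)[2] == "literal IP, no lookup"
assert dial_target("localhost", 8000)[2] == "hostname, resolved via NSS"
```

This is why forwarding to `127.0.0.1` directly would sidestep the broken resolver entirely.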
It's doing a dnslookup on host.docker.internal, and if it resolves to an IP, it assumes that must be docker for mac. But that's not a safe assumption on all environments
This shouldn't matter, we're just selecting an address to use for something else in iptables.
@aojea is there a reason we're connecting to localhost instead of a loopback IP?
@nicks can you dump the hosts file in the node?
ya, i think you're right. tcpdump is showing that localhost is resolving to that IP inside the node
11:55:56.251963 IP dopey.37404 > _gateway.domain: 3304+ A? localhost. (27)
11:55:56.256234 IP dopey.55685 > _gateway.domain: 10802+ A? localhost. (27)
11:55:56.265134 IP _gateway.domain > dopey.55685: 10802 1/0/0 A 92.242.140.21 (43)
11:55:56.265289 IP _gateway.domain > dopey.37404: 3304 1/0/0 A 92.242.140.21 (43)
Inside the kind node:
# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.22.0.2 kind-control-plane
# getent ahostsv4 localhost
92.242.140.21 STREAM localhost
92.242.140.21 DGRAM
92.242.140.21 RAW
Did we break /etc/resolv.conf or is kubelet / socat not respecting /etc/hosts??
Did we break /etc/resolv.conf or is kubelet / socat not respecting /etc/hosts??
I don't think it's kind; with airplane mode it works. Anyway, I'd like to get to the bottom of this, it's something we should prevent if we can, IMHO.
# cat /etc/resolv.conf
nameserver 92.242.140.21
options ndots:0
:thinking:
I think I have a workaround
FROM kindest/node:v1.18.2
RUN sed -i -e 's/hosts: dns files/hosts: files dns/' /etc/nsswitch.conf
RUN cat /etc/nsswitch.conf
for some reason, nsswitch.conf is configured to check dns before it checks /etc/hosts?
If I use the image built with that dockerfile, everything works correctly
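The sed line above just swaps the order of the sources on the `hosts:` line; glibc consults them left to right, so with `files` first the `127.0.0.1 localhost` entry in `/etc/hosts` wins before DNS is ever asked. The same rewrite expressed in Python (a sketch of what the sed expression does, not kind code):

```python
import re

def prefer_files(nsswitch_text):
    """Reorder the hosts: line so 'files' is consulted before 'dns',
    mirroring: sed -e 's/hosts: dns files/hosts: files dns/'"""
    return re.sub(r"^hosts:\s*dns files", "hosts: files dns",
                  nsswitch_text, flags=re.MULTILINE)

before = "passwd: files\nhosts: dns files\n"
after = prefer_files(before)
assert "hosts: files dns" in after
assert "passwd: files" in after  # other databases untouched
```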
Yeah, I think that's the regression, maybe a regression in the base image?
This is a familiar bug in the docker / kubernetes / golang in a container space.
I think I have a workaround
FROM kindest/node:v1.18.2
RUN sed -i -e 's/hosts: dns files/hosts: files dns/' /etc/nsswitch.conf
RUN cat /etc/nsswitch.conf
for some reason, nsswitch.conf is configured to check dns before it checks /etc/hosts?
If I use the image built with that dockerfile, everything works correctly
I think that is needed for the reboots
https://github.com/kubernetes-sigs/kind/pull/1521
we should try to solve it with something like this
https://www.freedesktop.org/software/systemd/man/nss-myhostname.html#
@aojea we should revert to preferring files then DNS, and ensure we just don't leave any stale hosts entries in /etc/hosts?
@aojea we should revert to preferring files then DNS, and ensure we just don't leave any stale hosts entries in /etc/hosts?
I tested locally: deployed a cluster, edited /etc/hosts, restarted the node, and it boots with the right /etc/hosts file.
I'd say yes to preferring files, but I'm still not sure whether the reboot was the only reason :man_shrugging:
Anyway, it seems the only way to know is reverting to files first, so +1
I just remembered, we had to use DNS instead of files for ipv6
I need to understand better how systemd-resolved works in the system:
https://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html#:~:text=systemd%2Dresolved%20is%20a%20system,and%20MulticastDNS%20resolver%20and%20responder.
maybe we can simplify the dns magic ... I need to experiment with this; do you mind holding off on this, or is it very critical?
scratch my systemd-resolved comment; it's disabled inside the container and would overcomplicate everything
I tested it locally and it seems to work using files first. I submitted a PR to test it in CI:
https://github.com/kubernetes-sigs/kind/pull/1731
if it works I think we should make the change