What happened:
When going through the steps described on https://kind.sigs.k8s.io/docs/user/ingress/ for the NGINX Controller on a completely new cluster:
curl localhost/foo and curl localhost/bar both return curl: (52) Empty reply from server. Going through kubectl proxy (curl http://127.0.0.1:8001/api/v1/namespaces/default/services/bar-service/proxy/) or via kubectl port-forward to a pod, everything is fine and the expected output is shown.
What you expected to happen:
# should output "foo"
curl localhost/foo
# should output "bar"
curl localhost/bar
How to reproduce it (as minimally and precisely as possible):
Follow the steps on https://kind.sigs.k8s.io/docs/user/ingress/ and configure the NGINX Controller.
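For reference, the cluster config from that guide (the extraPortMappings are what expose ports 80/443 on localhost) looks like this:
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF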
Anything else we need to know?:
There's nothing else listening on port 80. Here's some info:
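(The listing below is lsof output; the original command wasn't pasted, but a roughly equivalent invocation would be:)
sudo lsof -nP -iTCP -sTCP:LISTEN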
idea 493 42u IPv4 0xf70fa8bd3ae1fa53 0t0 TCP 127.0.0.1:6942 (LISTEN)
idea 493 502u IPv4 0xf70fa8bd401778f3 0t0 TCP 127.0.0.1:63342 (LISTEN)
rapportd 495 4u IPv4 0xf70fa8bd40602e13 0t0 TCP *:53442 (LISTEN)
rapportd 495 5u IPv6 0xf70fa8bd37a85523 0t0 TCP *:53442 (LISTEN)
Adobe\x20 884 10u IPv4 0xf70fa8bd405ff2d3 0t0 TCP 127.0.0.1:15292 (LISTEN)
com.docke 41569 10u IPv4 0xf70fa8bd5ca20433 0t0 TCP 127.0.0.1:52252 (LISTEN)
kubectl 92304 8u IPv4 0xf70fa8bd5a6bda53 0t0 TCP 127.0.0.1:8001 (LISTEN)
com.docke 97972 30u IPv6 0xf70fa8bd551af3c3 0t0 TCP *:80 (LISTEN)
com.docke 97972 31u IPv6 0xf70fa8bd551b0c43 0t0 TCP *:443 (LISTEN)
com.docke 97972 32u IPv4 0xf70fa8bd5659d8f3 0t0 TCP 127.0.0.1:51270 (LISTEN)
Environment:
kind version (kind version): kind v0.8.1 go1.14.2 darwin/amd64
kubectl version (kubectl version): v1.18.2 for both client and server
docker version (docker info): 19.03.8, Docker for Mac
OS (/etc/os-release): Mac OS X 10.15.4
Are the deployments running?
Were the images able to pull from quay?
The most recent issue ... https://github.com/kubernetes-sigs/kind/issues/1617#issuecomment-632089112
/assign @amwat
com.docke 97972 30u IPv6 0xf70fa8bd551af3c3 0t0 TCP *:80 (LISTEN)
com.docke 97972 31u IPv6 0xf70fa8bd551b0c43 0t0 TCP *:443 (LISTEN)
This seems related to IPv6 and not ingress-nginx?
https://github.com/kubernetes-sigs/kind/issues/1326#issuecomment-584618690
hmm, we don't have IPv6 enabled by default in the config, though I could imagine the Docker port forwards still wind up on IPv6 somehow. cc @aojea
Most probably localhost is resolving to ::1 (which used to be the default in all Linux distros) and, as @aledbf correctly linked, we realized in that issue that Docker does port mapping using a userland proxy.
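A quick generic check of what localhost resolves to on the host (a sketch, not from the original report):
# see what the host maps localhost to
grep localhost /etc/hosts
# or force each address family explicitly
curl -4 -v localhost/foo   # via 127.0.0.1
curl -6 -v localhost/foo   # via ::1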
@arch1ve can you paste the output of curl -v localhost/foo and curl 127.0.0.1/foo?
~ curl -v localhost/foo
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /foo HTTP/1.1
> Host: localhost
> User-Agent: curl/7.64.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
* Closing connection 0
~ curl -v 127.0.0.1/foo
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET /foo HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.64.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host 127.0.0.1 left intact
curl: (52) Empty reply from server
* Closing connection 0
Here you go, @aojea. I had stumbled across the issue @aledbf linked before opening this one and tried to figure out what's happening, but to no avail.
Any IPv6 queries also time out:
~ curl -v -6 "[::1]/foo"
* Trying ::1...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 80 failed: Operation timed out
* Failed to connect to ::1 port 80: Operation timed out
* Closing connection 0
curl: (7) Failed to connect to ::1 port 80: Operation timed out
@arch1ve I can't reproduce it; something is replying on port 80. Let's see what's inside the container, can you paste the output of:
docker exec -it kind-control-plane iptables-save | grep 80
and kubectl get pods -A
@aojea here you go:
docker exec -it kind-control-plane iptables-save | grep 80
:PREROUTING ACCEPT [1139780:372011057]
:INPUT ACCEPT [1139780:372011057]
:POSTROUTING ACCEPT [1061786:244515803]
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
:OUTPUT ACCEPT [28:1680]
-A CNI-DN-fd5be791c68c2eba402d9 -s 10.244.0.17/32 -p tcp -m tcp --dport 80 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-fd5be791c68c2eba402d9 -s 127.0.0.1/32 -p tcp -m tcp --dport 80 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-fd5be791c68c2eba402d9 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.244.0.17:80
-A CNI-HOSTPORT-DNAT -p tcp -m comment --comment "dnat name: \"kindnet\" id: \"966e586eae7ccc89195d7a17e61f7a3ab9fe232580721c328a2c91fbabe5ed58\"" -m multiport --dports 80,443 -j CNI-DN-fd5be791c68c2eba402d9
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-SEP-TGD4WP4KR4SPBQBV -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:http" -m tcp -j DNAT --to-destination 10.244.0.17:80
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.101.178.138/32 -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:http cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.101.178.138/32 -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:http cluster IP" -m tcp --dport 80 -j KUBE-SVC-CG5I4G2RS3ZVWGLK
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default bar-app 1/1 Running 0 2m49s
default foo-app 1/1 Running 0 2m49s
ingress-nginx ingress-nginx-admission-create-9h5qc 0/1 Completed 0 3m45s
ingress-nginx ingress-nginx-admission-patch-67ld5 0/1 Completed 0 3m43s
ingress-nginx ingress-nginx-controller-cc8dd9868-fwf86 1/1 Running 0 3m23s
kube-system coredns-66bff467f8-h2zl6 1/1 Running 0 104m
kube-system coredns-66bff467f8-sxv9c 1/1 Running 0 104m
kube-system etcd-kind-control-plane 1/1 Running 0 104m
kube-system kindnet-7czpg 1/1 Running 0 104m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 104m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 104m
kube-system kube-proxy-n52rm 1/1 Running 0 104m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 104m
local-path-storage local-path-provisioner-bd4bb6b75-ssttz 1/1 Running 0 104m
Hmm, that seems ok ... if curl works inside the node:
docker exec -it kind-control-plane curl localhost/foo
foo
then it has to be something between your host and the Docker node. Can you verify that the Docker port mapping is working correctly? Something like:
docker run -p 8000:80 -d nginx
curl localhost:8000
@aojea Both of your suggestions worked as expected (unfortunately...).
What I ended up doing was getting rid of Docker for Mac and rebuilding everything using docker-machine. Then, querying the VM's IP with /foo and /bar worked as expected.
So, even without Docker binding anything on ports 80/443, something would come back with the Empty reply from server error, and I still haven't figured out where it's coming from. However, it seems to be a problem with my local setup, so I'll close the issue.
Thanks for your help!
@aojea I'm seeing the same thing with kind on Docker for Mac. Is the empty address field for the ingress a clue?
$ kind --version
kind version 0.8.1
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress <none> * 80 9s
$ curl localhost/foo
curl: (52) Empty reply from server
Running nginx and exposing it on port 80 on localhost behaves as expected (with a caveat that I'll come to):
$ docker run -p 80:80 -d nginx
<id>
$ curl localhost
<returns nginx page>
My caveat here: Docker was able to bind to port 80 on the Mac for this container, so the ingress can't be listening on it as well.
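(As a generic check, this is how you can see which process is actually holding port 80 on the Mac:)
sudo lsof -nP -iTCP:80 -sTCP:LISTEN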
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default bar-app 1/1 Running 1 88m
default foo-app 1/1 Running 1 88m
kube-system coredns-66bff467f8-49qzt 1/1 Running 1 134m
kube-system coredns-66bff467f8-dzr4n 1/1 Running 1 134m
kube-system etcd-kind-control-plane 1/1 Running 0 26m
kube-system kindnet-ztg5g 1/1 Running 1 134m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 27m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 1 134m
kube-system kube-proxy-pmhcx 1/1 Running 1 134m
kube-system kube-scheduler-kind-control-plane 1/1 Running 1 134m
local-path-storage local-path-provisioner-bd4bb6b75-v5xpd 1/1 Running 2 134m
projectcontour contour-6ff596f8f-v6xph 1/1 Running 1 98m
projectcontour contour-6ff596f8f-x6gmg 1/1 Running 1 98m
projectcontour contour-certgen-v1.6.0-mmh7v 0/1 Completed 0 98m
I don't see anything related to port 80 with the iptables-save command mentioned above:
$ docker exec -it kind-control-plane iptables-save | grep 80
:FORWARD ACCEPT [28:1680]
:OUTPUT ACCEPT [316561:80577426]
:POSTROUTING ACCEPT [316589:80579106]
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-SERVICES -d 10.103.139.135/32 -p tcp -m comment --comment "projectcontour/envoy:http has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
:OUTPUT ACCEPT [223:13380]
-A DOCKER_OUTPUT -d 192.168.65.2/32 -p udp -m udp --dport 53 -j DNAT --to-destination 127.0.0.11:36804
-A DOCKER_POSTROUTING -s 127.0.0.11/32 -p udp -m udp --sport 36804 -j SNAT --to-source 192.168.65.2:53
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-SEP-LALXIVDIK52RYWNI -p tcp -m comment --comment "projectcontour/contour:xds" -m tcp -j DNAT --to-destination 10.244.0.6:8001
-A KUBE-SEP-S5TSDZMJHAVUVUYO -p tcp -m comment --comment "projectcontour/contour:xds" -m tcp -j DNAT --to-destination 10.244.0.5:8001
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.109.217.37/32 -p tcp -m comment --comment "projectcontour/contour:xds cluster IP" -m tcp --dport 8001 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.109.217.37/32 -p tcp -m comment --comment "projectcontour/contour:xds cluster IP" -m tcp --dport 8001 -j KUBE-SVC-MS6EAJRQA5KVS2EW
@lizrice what's your kind config? As you said, you should not be able to run the nginx container on port 80 if you are port-mapping that port to the KIND cluster.
Can you check that you are following these instructions: https://kind.sigs.k8s.io/docs/user/ingress/?
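One more quick check, assuming the default node name: verify that the host ports really are mapped into the node container.
docker port kind-control-plane
# expect mappings along the lines of:
# 80/tcp -> 0.0.0.0:80
# 443/tcp -> 0.0.0.0:443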
Same problem
kind.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: 80
protocol: TCP
- containerPort: 443
hostPort: 443
protocol: TCP
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.0/deploy/static/provider/baremetal/deploy.yaml
kubectl run --rm -ti --image=alpine alpine
curl ingress-nginx-controller.ingress-nginx.svc.cluster.local -H'Host: wordpress.lcl'
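(Note: the stock alpine image ships without curl, so inside the pod you'd first need something like:)
apk add --no-cache curl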
C:\Users\Admin\Documents\upiple\ops\kubernetes\local_stand\apps\helm_values>docker exec -it kind-control-plane iptables-save | findstr :80
-A KUBE-SEP-6HCLR465FHDMF5KO -p tcp -m comment --comment "wordpress/wordpress:http" -m tcp -j DNAT --to-destination 10.244.0.17:8080
-A KUBE-SEP-OHFXIPID37ML5UU7 -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:http" -m tcp -j DNAT --to-destination 10.244.0.26:80
I know this is closed, but maybe this can be of help.
curl --haproxy-protocol localhost/bar
I assume this will deliver the expected result.
The background is this: the ingress is listening with
listen 80 proxy_protocol
so without a proxy protocol preamble such as
PROXY TCP4 127.0.0.1 127.0.0.1 0 80
in the request, it will answer with
curl: (52) Empty reply from server
This means that the ingress is configured to sit behind a load balancer that speaks the proxy protocol.
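You can confirm this by inspecting the controller's ConfigMap; a sketch assuming the default ingress-nginx deployment names:
kubectl -n ingress-nginx get configmap ingress-nginx-controller \
  -o jsonpath='{.data.use-proxy-protocol}'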
@hingstarne I'm seeing this behaviour. Do you know how I can resolve it?
Does
curl --haproxy-protocol $myurl
work for you? Then you can validate in your settings that the proxy protocol is disabled.
In case it's deployed via the default helm chart, the value for use-proxy-protocol needs to be false if you do not have haproxy, an ELB, or a similar reverse proxy in front:
use-proxy-protocol: "false"
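If it was installed with helm, a hypothetical one-liner to flip it (release and repo names here are assumptions):
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set-string controller.config.use-proxy-protocol=false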
@hingstarne thanks for your reply. It worked.