Kind: Clarification on Ingress

Created on 17 Mar 2020 · 12 comments · Source: kubernetes-sigs/kind

What would you like to be documented:

The Ingress installation should be explained a bit more.

Why is this needed:

The LoadBalancer service's port is a random NodePort (30000-32767). This is the service provisioned as part of the Contour or NGINX rollout, and it is the frontend for the Envoy or NGINX pods, depending on the ingress choice.

The issue is that in the installation examples, host port 80 gets forwarded to container port 80 (the container in this case being the k8s master node, in the Contour example). However, since the LoadBalancer service for the actual Envoy or NGINX pods picks a random NodePort (the generic behavior of a Kubernetes service of type LoadBalancer), traffic forwarded from the actual host to the container (the k8s master node in kind) will be dropped.

For instance, on my MacBook I followed the installation steps for Contour, and the LoadBalancer service for Envoy picked 32761 as its random port on the kind nodes. So Ingress did not work.

What I did was spin up the kind cluster with an extra port mapping config that forwards host port 80 to container port 32000 and host port 443 to container port 32001. Then I edited the Contour manifest and manually set NodePort 32000 and NodePort 32001 to complete the data path from the client all the way to the pods on the kind nodes.
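The workaround described above can be sketched as a kind cluster config. This is a hypothetical reconstruction of the commenter's setup, not an official recipe: node ports 32000/32001 are the values they chose, and the Contour/Envoy Service's `nodePort` fields must be pinned to the same values for the path to line up.

```yaml
# Sketch of the described workaround: forward host ports 80/443 to
# fixed node ports 32000/32001; the LoadBalancer Service's nodePorts
# in the Contour manifest must then be set to 32000/32001 to match.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 32000
    hostPort: 80
    protocol: TCP
  - containerPort: 32001
    hostPort: 443
    protocol: TCP
```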

Hope this has been explanatory enough.

kind/documentation


All 12 comments

With ingress-nginx we've specifically set it to use 80 -> 80 -> 80 all the way through. Looking at it now, though, I see we have both the port 80 NodePort and the patch setting hostPort 80 on nginx, which doesn't make sense. cc @amwat

There is no type: LoadBalancer involved in the nginx deployment at least (maybe you're referring to the NodePort service?); I'm less familiar with Contour at the moment.

For contour
/cc @stevesloka

re: ingress-nginx, we don't actually need the NodePort service; that was added to suppress logs that nginx was producing (https://github.com/kubernetes-sigs/kind/issues/1245). All we need is the hostPort patch.

@amwat if there is no need for the NodePort config, then can you please explain how we are supposed to make the nginx ingress controller pod accessible from outside, so that it can receive client HTTP/HTTPS traffic? I do not see this being addressed in the "mandatory.yaml" mentioned in the installation guide. My understanding is that even if extraPortMapping is configured in kind, it only forwards traffic on a specific host port onwards to the Docker containers; on the k8s nodes themselves, the related port still needs to be exposed through a NodePort.
Thanks in advance.

it's handled by this step:
kubectl patch deployments -n ingress-nginx nginx-ingress-controller -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx-ingress-controller","ports":[{"containerPort":80,"hostPort":80},{"containerPort":443,"hostPort":443}]}],"nodeSelector":{"ingress-ready":"true"},"tolerations":[{"key":"node-role.kubernetes.io/master","operator":"Equal","effect":"NoSchedule"}]}}}}'

which applies the patch:

{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "nginx-ingress-controller",
            "ports": [
              {
                "containerPort": 80,
                "hostPort": 80
              },
              {
                "containerPort": 443,
                "hostPort": 443
              }
            ]
          }
        ],
        "nodeSelector": {
          "ingress-ready": "true"
        },
        "tolerations": [
          {
            "key": "node-role.kubernetes.io/master",
            "operator": "Equal",
            "effect": "NoSchedule"
          }
        ]
      }
    }
  }
}

amongst other things, this ensures that the nginx pod exposes its ports via hostPorts.

(it also ensures that it is scheduled on the node we mapped your real host's ports to)

this means we have:

kind extraPortMapping 80 -> 80 (roughly like --publish 80:80 in docker run)
==> which is forwarded to ==>
hostPort 80 -> 80 in the patched ingress-nginx pod, which means port 80 on that "host" (the kind "node") is mapped to port 80 in that container
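The chain described above corresponds to a cluster config along these lines (a sketch assuming the setup from the kind ingress docs; the `ingress-ready=true` node label is what the patch's nodeSelector matches, so the controller lands on the node whose ports were mapped):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # label this node so the patched ingress controller's nodeSelector
  # schedules it here, on the node whose ports are mapped to the host
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
```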

@BenTheElder apologies. I was only reviewing the steps for applying mandatory.yaml and service-nodeport.yaml. Thanks.
On the Contour side I will still look forward to @stevesloka's response, since I am pretty sure it still requires an amendment to the Contour manifest to make the ingress controller accessible. That is how I made it work on my laptop.

Hey @dumlutimuralp! Contour (by default) uses host ports for Envoy, so we bind ports 80/443 to the local node. So like @BenTheElder mentioned in the last comment, ports 80/443 get mapped via extraPortMapping to a node in the kind cluster.

The result should be that localhost:80 && localhost:443 map to an instance of Envoy running in your kind cluster.

Contour does have a service type LoadBalancer in the examples, but in your local kind cluster, you wouldn't use this service to send requests.

Does this answer your questions?

Just a side note that ports 80 and 443 require elevated privileges, e.g. cluster creation will fail if run as a non-privileged user:

Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.17.0) 🖼
 ✗ Preparing nodes 📦  
docker run error: command "docker run --hostname kind-control-plane --name kind-control-plane --label io.x-k8s.kind.role=control-plane --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --detach --tty --label io.x-k8s.kind.cluster=kind --publish=80:80/TCP --publish=443:443/TCP --publish=127.0.0.1:0:6443/TCP kindest/node:v1.17.0@sha256:9512edae126da271b66b990b6fff768fbb7cd786c7d39e86bdf55906352fdf62" failed with error: exit status 125
ERROR: failed to create cluster: docker run error: command "docker run --hostname kind-control-plane --name kind-control-plane --label io.x-k8s.kind.role=control-plane --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --detach --tty --label io.x-k8s.kind.cluster=kind --publish=80:80/TCP --publish=443:443/TCP --publish=127.0.0.1:0:6443/TCP kindest/node:v1.17.0@sha256:9512edae126da271b66b990b6fff768fbb7cd786c7d39e86bdf55906352fdf62" failed with error: exit status 125

yes, but that's also true for kind in general, unless you're giving an unprivileged user access to docker, which is itself effectively root access (!)

you can edit the ports to something else if you want; we chose to use the same ports to try to avoid some of the confusion, so it's port 80/443 all the way down
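As an example of editing the ports, an unprivileged user could map high host ports instead (a sketch; requests would then go to localhost:8080 and localhost:8443 rather than 80/443):

```yaml
# Use unprivileged host ports so cluster creation works without root;
# the in-cluster side still uses 80/443.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 8080
    protocol: TCP
  - containerPort: 443
    hostPort: 8443
    protocol: TCP
```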

Yeah, I run docker as a non-root user on my PC (https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-using-the-convenience-script). The rootless docker daemon is currently experimental (I haven't tested it yet, though). How about a note in the current documentation recapitulating your comment regarding running as non-root (use unprivileged ports, i.e. 8080 and 8443, or run kind with sudo)? Just saying, I could see this becoming a frequent newbie error that could be mitigated with a note.

fair enough, I suppose we may have many users unfamiliar with permissions on low ports -- though I would still treat kind specifically as a privileged thing to run :-)
it's more isolated than just kubeadm init directly on your host, but not by a lot.

It's been hard to figure out exactly what to cover in our docs and find time to add more quality content, we set out to develop kubernetes itself, didn't expect to be teaching linux 🙃

We can probably find a good existing page on this sort of thing to link to from a small note.

It turned out that the CNI plugin I use is causing the issue. @stevesloka made me realize that the Contour pods are using the hostPort anyway; it is obvious in the Contour manifest. I will raise a bug with the CNI plugin provider that I used in my lab. Thanks everyone.

