Kind: Support exposing additional ports to be used for NodePort services

Created on 11 Nov 2018 · 40 Comments · Source: kubernetes-sigs/kind

kind currently exposes only a single Docker port to the host machine (the Kubernetes API server).
As a result, it is impossible to reach other services running inside the cluster from the host.

In minikube, NodePort services are the solution for this.
minikube exposes all ports on the VM's IP address:

โฏ minikube service list
|----------------|-----------------------|--------------------------------|
|   NAMESPACE    |         NAME          |              URL               |
|----------------|-----------------------|--------------------------------|
| example        | example               | http://192.168.99.100:30980

This just prints the VM's IP address + port, which you can query via

โฏ kubectl get -n example service example -o wide
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
example    NodePort    10.110.138.8     <none>        8080:30980/TCP,9090:30990/TCP   22m

This is also useful for testing Ingress controllers like Contour, https://github.com/heptio/contour/blob/master/docs/minikube.md

The trivial change I made to get this working was adding extra flags to docker run in https://github.com/nilebox/kind/commit/99c4505c04b8b4893e2540ed9cab7aaeed9d9bf8.
After that I just declared a service with a NodePort type and a port from the exposed range, and I was able to access it via curl from the host machine.

Would it make sense to support such a feature in kind upstream?
e.g. a flag for exposing extra ports to be passed to docker run.

kind/feature priority/backlog


All 40 comments

Supporting extra ports makes sense; note, though, that you can already access (TCP) services via kubectl port-forward:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
UDP is not yet supported by kubernetes for this, though.
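For example, with the NodePort service from the issue description (namespace, service name, and local port are illustrative):

kubectl port-forward -n example service/example 8080:8080
# then, from another shell on the host:
curl http://localhost:8080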

I think we'd need to gate exposing extra ports on the node container behind some config, kind nominally supports running multiple clusters per machine (--name) so we couldn't just bind those ports on all clusters by default as far as I know.

/kind feature

Er, and I'm on the fence as to whether this should be a flag or in the config. I'd like to keep the flag surface from exploding, but it seems like this might be commonly used enough to put it in both, with the flag overriding.

cc @munnerz @neolit123

+1 for config only.

ExposedPortRanges []string
per cluster.

I think we'd need to gate exposing extra ports on the node container behind some config, kind nominally supports running multiple clusters per machine (--name) so we couldn't just bind those ports on all clusters by default as far as I know.

hm, if multiple clusters are supported, wouldn't it be better to expose settings for them in the config?

pseudo tree:

config:
    clusters:
        control-planes: ...
        exported-port-ranges: ...

one config file per cluster currently, though I suppose we could require named configs within a single file; that seems clunky though, and there's no guarantee that all of the clusters in the config exist.

when we add preflight checks we can scan for existing clusters by inspecting the labeled containers, and we can record metadata like this, which needs to be known, in labels.

In order to make the NodePort ports available to the host machine, it is not enough to expose them when launching the control-plane node's container via the --expose argument. Exposing only makes the ports available to other containers, not to the host machine.

It would also be necessary to bind them to ports on the host machine using the -p option. However, the port range used by default by Kubernetes is quite large (30000-32767), and it is therefore undesirable to bind them all in advance.

Therefore, it is necessary to bind them selectively to ports on the host machine and associate each with the corresponding port in the control-plane container.
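For illustration, the difference in docker terms (30980 is an arbitrary NodePort; kind does not accept extra docker run arguments, so this only shows the semantics):

docker run --expose 30980 ...      # metadata only: documents the port, binds nothing on the host
docker run -p 30980:30980 ...      # publishes the port: binds it on the host and forwards to the container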

I support this feature as well; however, @nilebox, a temporary workaround is to use alpine/socat to forward the ports from kind to your host if you need something right now
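A minimal sketch of that workaround, assuming the node container is named kind-control-plane and (on newer kind releases) is attached to the "kind" docker network; check docker ps and docker network ls for the actual names, and substitute your NodePort for 30980:

docker run -d --name kind-nodeport-proxy --network kind -p 30980:30980 \
  alpine/socat tcp-listen:30980,fork,reuseaddr tcp-connect:kind-control-plane:30980
# the NodePort is now reachable from the host:
curl http://localhost:30980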

I have confirmed that NodePort services are exposed to the host via the kind node's address on the docker bridge network. No additional configuration is required.

According to the previous comment, this feature could be implemented in a similar way to minikube's service command:

ip := getClusterIP()
services := getServices()
for _, s := range services {
    printUrl(s.ns, s.name, ip, s.port)
}

@pablochacin the bridge network should work on linux, but mac / windows will be trickier.

@BenTheElder I'll educate myself on these platforms, as I'm not familiar with them.

However, according to the Kubernetes recommendations for networking on macOS, it looks like the most reliable way is to create a container that exposes a port and forwards the traffic to the service's port. This could be implemented using socat, as suggested earlier in this issue.

This would probably make more sense as an expose service command, which creates such a container. Not sure if this satisfies the expected user experience. A more complex alternative is to mimic what kube-proxy does and create/delete the forwarding container by watching the services in the cluster, but this approach seems to introduce an unnecessary level of complexity.

In order for Kind to be truly useful as a development tool it will have to support exposing in-cluster services to development tools. IMO this concept is a "must-have"

With minikube it's usually done with nodeport services, as per the title of this issue. I understand that the simple nodeport analogy doesn't work very well, and that faking the nodeport functionality involves either a massive & premature port mapping or a complex service-monitoring approach.

I suggest that it's better to implement service monitoring, but to do it in the context of the cloud-controller-manager scheme and support the idea of LoadBalancer services in Kind. Any service that declares itself a load-balancer type can be managed by a (possibly platform-specific) controller (pod) that interacts with the docker daemon and the external host to expose the service on some port.

In order for Kind to be truly useful as a development tool ...

For what it's worth, kind is already used as a development tool for a number of projects 🙃
This will certainly be very nice for many use cases, and we'd like to see it, but there are a lot of other things to finish along the way and this can definitely be added on top of the rest.

With minikube it's usually done with nodeport services, as per the title of this issue. I understand that the simple nodeport analogy doesn't work very well, and that faking the nodeport functionality involves either a massive & premature port mapping or a complex service-monitoring approach.

NodePorts work today and actually make a lot of sense; they're just only accessible from within the docker network. Roughly: your cluster is on a private network, and only the API server has a public endpoint. You can still create and use NodePorts, but to access them you need to bridge the network.

Forwarding to a specific NodePort with a socat container is pretty trivial (see above) ... wrapping it up in a nice tool would take more work.

kubectl port-forward also works. Talking to services from within the cluster works, in-cluster services talking to the API server works, talking to the outside world works, talking from another container on the network works.

There are a number of options to work around this today (see comments above for some of them).

I suggest that it's better to implement service monitoring, but to do it in the context of the cloud-controller-manager scheme and support the idea of LoadBalancer services in Kind. Any service that declares itself a load-balancer type can be managed by a (possibly platform-specific) controller (pod) that interacts with the docker daemon and the external host to expose the service on some port.

How is this less complex than "a complex service monitoring approach"? And what do you mean by "in the cloud-controller-manager scheme" - are you suggesting that we implement a CCM provider?

Some variation on LoadBalancer probably makes sense long term, but there are some additional considerations for the specific approach mentioned, e.g. that docker is not necessarily the only container runtime long-term, #154 (near-term? we have some progress). Running as a pod in the cluster may not be particularly helpful either.

A reverse tunnel / proxy similar to kubectl port-forward, cloudflare warp, or similar may be simpler and cheaper for many ports.

I'd love to see a good design for this written up.

A CCM provider is just as complex, didn't mean to suggest otherwise, but isolates/separates the complexity.

The problem with an actual CCM (besides complexity) is that we test at least 4 versions of Kubernetes (3 supported branches + master branch development), while the CCM currently only supports -1/+0 version skew:
https://kubernetes.io/docs/setup/version-skew-policy/#kube-controller-manager-kube-scheduler-and-cloud-controller-manager.

Additionally, to my knowledge nobody has built a CCM without depending on k8s.io/kubernetes/.... We still need to cover our original use case of testing Kubernetes, for which depending on the main repo for anything other than consuming build outputs is a bit problematic.

@BenTheElder

A reverse tunnel / proxy similar to kubectl port-forward, cloudflare warp, or similar may be simpler and cheaper for many ports.

I'd love to see a good design for this written up

The main reason I haven't moved forward is that, honestly, I don't see any use case for which kubectl port-forward doesn't work.

Maybe you recall I brought up this use case because I wanted to use kind as a development platform for the gardener project and needed to access services deployed in the cluster from the host. Recently, they did it using kubectl port-forward.

I think @liztio also had a use case, but said that using kubectl would be acceptable.

[noting that we discussed this a bit more in today's meeting, but the path forward is still not certain..]

After experimenting a bit with kubectl port-forward and with the docker link socat approaches, I find they work for all my immediate purposes. Many thanks for the tips, and apologies if I seemed uncharitable.

The external socat container that connects to nodeports is a little closer to the load-balancer idea than port-forward is. It might be reasonable to automate the creation of such external containers on the fly. It would probably want to use a particular network, cf. https://github.com/kubernetes-sigs/kind/issues/273

My own case involves a handful of services (2-5) with known ports; it's easy to work with them without automation.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Hi all, I'll leave my two cents here. Why not use an extraPortMappings config, similar to what we currently do for volume mounts (https://github.com/kubernetes-sigs/kind/blob/master/pkg/cluster/internal/create/nodes.go#L186)? This way the user can specify a kind cluster config like this:

kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 8000
  extraMounts:
  - containerPath: /mounts
    hostPath: /var/lib/kind/mounts

This would translate the port mappings into -p <hostPort>:<containerPort> args for the docker command here: https://github.com/kubernetes-sigs/kind/blob/master/pkg/cluster/nodes/create.go#L125.

basically this is what we discussed at the last meeting, in addition to needing user-specified labels & taints on nodes (so you can identify the nodes you added this to for multi-node).

expect this in the v0.5.0 timeframe probably; we have a lot of other changes related to networking in flight at the moment (see the open PRs, a few of them are doozies)


er, and v0.4.0 is due ~Monday, with the last IPv6 PR in particular still outstanding..

thanks @BenTheElder! If I have the time 🤞 I can try to craft a PR with an implementation of that port mapping config to help a little

@BenTheElder I have opened this PR https://github.com/kubernetes-sigs/kind/pull/637 with a first implementation of https://github.com/kubernetes-sigs/kind/issues/99#issuecomment-504515064

alpha level support is in with https://github.com/kubernetes-sigs/kind/pull/637 by way of #654

How can I test my ingress services with kubectl port-forward?

If I try kubectl port-forward ingress/local-ingress 80:80 for the ingress below

NAME            HOSTS   ADDRESS   PORTS   AGE
local-ingress   *                 80      6s

I get the following error

error: cannot attach to *v1beta1.Ingress: selector for *v1beta1.Ingress not implemented
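kubectl port-forward cannot target an Ingress object, since an Ingress has no pod selector; the usual workaround is to port-forward to the ingress controller's service instead. A sketch, assuming an ingress-nginx install in the ingress-nginx namespace (names vary by installation):

kubectl port-forward -n ingress-nginx service/ingress-nginx-controller 8080:80
# requests to localhost:8080 now hit the controller, which routes them per your Ingress rules:
curl http://localhost:8080/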

I haven't gone through all the replies, but this could work as well:

  • Get the kind control-plane container's docker IP and the service's NodePort:

$ CONTROL_PLANE_IP=$(docker inspect <kind-clstr-1-control-plane> | jq -r '.[].NetworkSettings.Networks.bridge.IPAddress')
$ NODE_PORT=$(kubectl get svc <service_name> -o=json | jq '.spec.ports[].nodePort')

  • Access the service at <docker-ip>:<nodePort> from the host machine:

$ curl $CONTROL_PLANE_IP:$NODE_PORT

Extending kind to provide support like minikube's could be helpful:

$ kind --cluster <cluster-name> service list
|----------------|-----------------------|--------------------------------|
|   NAMESPACE    |     SERVICE NAME      |              URL               |
|----------------|-----------------------|--------------------------------|
| example        | example               | http://172.17.0.2:30980        |
|----------------|-----------------------|--------------------------------|

@junaid-ali that does not work on docker for mac or docker for windows.

What is the current working solution on Windows to connect to my app deployed in kind with NodePort?

I support this feature as well; however, @nilebox, a temporary workaround is to use alpine/socat to forward the ports from kind to your host if you need something right now

Thanks @andy9775 and @BenTheElder for explaining this.

Could you please elaborate on socat? I roughly understand what it's supposed to do (at least I think I do), but I'm having a bit of trouble getting this set up. The closest docs I found were this. I'm on a Mac.


Current docs related to this implemented, cross platform supported feature:
https://kind.sigs.k8s.io/docs/user/configuration/#extra-port-mappings
https://kind.sigs.k8s.io/docs/user/ingress/

Thanks Ben. Indeed I've been referring to those docs today; thanks for making it easy to copy-paste snippets out. 👍

After some false starts I figured out the solution to my problem, and I'm posting it here in case it helps anyone else.

In my case I only need to expose _one_ service and didn't want an Ingress.
Because by default NodePorts can only fall in a certain range ("The range of valid ports is 30000-32767"), the main trick is to match the nodePort of the service with the containerPort in kind's config.

It makes sense in retrospect, but since the docs were Ingress-oriented I went on a bit of a roundabout journey. All's well in the end.
Thanks for making kind, it's a cool project! 🙇
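A minimal sketch of that trick for anyone landing here; the port 30080, the names, and the config apiVersion (which depends on your kind release) are illustrative:

# kind-config.yaml -- pass via: kind create cluster --config kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080   # must equal the service's nodePort below
    hostPort: 30080
    protocol: TCP

# service.yaml -- applied with kubectl; pin nodePort to the mapped port
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  type: NodePort
  selector:
    app: example
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080

With both in place, the service answers at http://localhost:30080 on the host.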


How can I forward the whole valid port range, 30000-32767?
I don't want to configure this for a single pod; I will use many different ports during testing, and a random nodePort will be assigned each time.

Right, I'd like to forward all valid nodePort ports... not just a few. Is there any way to do this easily? I guess I could create a kind config yaml with 2768 ports in it :/

So if you're on linux you do not need extraPortMappings; you can instead reach the nodes by IP, and nodeports should work fine.

If you're not on linux *, docker actually needs to run a proxy process on the host to implement the port mapping, and these mappings are not free. I would not recommend mapping all of them, configuration aside.

* Actually, for IPv6 it still uses a proxy process as well, IIRC.

Thanks for the response. I'm on Windows; I ended up setting the service-node-port-range to a smaller range (40 ports) and forwarded that range of ports individually. I didn't need that many, but it leaves room to expand.
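A sketch of that approach (the apiVersion and the 40-port bounds are illustrative):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  kind: ClusterConfiguration
  apiServer:
    extraArgs:
      service-node-port-range: "30000-30039"
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
  - containerPort: 30001
    hostPort: 30001
  # ...one entry per port, up to 30039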

On a side note, k3d allows a port range to be specified, rather than having to specify individual ports.

Current docs related to this _implemented, cross platform supported_ feature:
https://kind.sigs.k8s.io/docs/user/configuration/#extra-port-mappings
https://kind.sigs.k8s.io/docs/user/ingress/

Thanks for the links!

extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
    protocol: TCP

Can this configuration be extended to support a port range instead of a 1:1 mapping?

Another question: is kubectl port-forward working for anyone on WSL2? I couldn't access anything even after starting the port forwarding.. fixed. :|

For new features please file a new feature request issue. My notifications are a tire fire, and I can more easily triage open issues (by looking through the issue tracker) than keep up with all the discussions on closed issues (by going through broken, spammed-to-death kubernetes GitHub notifications).

Much of the format of the config is based on the kubernetes CRI API to give us a common featureset we can support across backends. There's no such port-range API type IIRC, so someone will need to design our own (perhaps with only host ports and a 1:1 mapping to nodes, like docker's CLI flag) and verify that podman etc. can support this. This is doable but will require a bit of thought and research. If someone else wants to file an issue with a proposed design to discuss, that would be great.
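To seed such an issue, one purely hypothetical shape for the config (none of these fields exist in kind today):

# hypothetical, NOT implemented:
extraPortMappings:
- containerPortRange: 30000-30040   # would expand to one 1:1 mapping per port
  hostPortRange: 30000-30040
  protocol: TCP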

On Linux I still don't recommend using the feature at all, and on Windows / Mac I don't think mapping this many ports will behave well, but ...

Yeah, thanks for the response. On Linux NodePort should work without this, right.. I'm on Windows, so this was a life saver. 🚀

