What would you like to be added:
A Load Balancer
Why is this needed:
https://github.com/kubernetes-sigs/kind/issues/411#issuecomment-510480326
It would be interesting to describe the use cases in more detail, cc: @Percylau
/kind design
See previous discussions including https://github.com/kubernetes-sigs/kind/pull/691#issuecomment-510557520
On Docker for Linux you can deploy something like MetalLB and have fun today. To make something portable that we ship by default with kind, you will need to solve the networking problems on Docker for Windows, Mac, etc., and design it such that we can support e.g. Ignite or Kata later.
This is in the backlog until someone proposes and proves a workable design.
/priority backlog
see also though in the meantime: https://mauilion.dev/posts/kind-metallb/
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Another interesting project https://github.com/alexellis/inlets-operator#video-demo
It has some examples with kind. https://github.com/alexellis/inlets-operator#run-the-go-binary-with-packetcom
/remove-lifecycle stale
@BenTheElder Hey Ben, do you think an ETA can be set for this feature? I wonder whether I can try to help here.
There is no ETA because it needs a workable design to be agreed upon. So far we don't have one.
This is another workaround: https://gist.github.com/alexellis/c29dd9f1e1326618f723970185195963
hehe, I think this is the simplest and most easily bash-scriptable one:
```bash
# expose the service
kubectl expose deployment hello-world --type=LoadBalancer
# assign an IP to the load balancer
kubectl patch service hello-world -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
# it works now
kubectl get services
NAME              TYPE           CLUSTER-IP         EXTERNAL-IP     PORT(S)          AGE
example-service   NodePort       fd00:10:96::3237   <none>          8080:32677/TCP   13m
hello-world       LoadBalancer   fd00:10:96::98a5   172.31.71.218   8080:32284/TCP   5m47s
kubernetes        ClusterIP      fd00:10:96::1      <none>          443/TCP          22m
```
wow, even simpler:

```bash
kubectl expose deployment hello-world --name=testipv4 --type=LoadBalancer --external-ip=6.6.6.6
kubectl get service
NAME              TYPE           CLUSTER-IP         EXTERNAL-IP     PORT(S)          AGE
example-service   NodePort       fd00:10:96::3237   <none>          8080:32677/TCP   27m
hello-world       LoadBalancer   fd00:10:96::98a5   172.31.71.218   8080:32284/TCP   20m
kubernetes        ClusterIP      fd00:10:96::1      <none>          443/TCP          37m
testipv4          LoadBalancer   fd00:10:96::4236   6.6.6.6         8080:30164/TCP   6s
```
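Scripts that build on this workaround often need just the assigned address. A sketch for pulling the EXTERNAL-IP out, shown here against a captured sample line of the output above rather than a live cluster:

```shell
# Extract the EXTERNAL-IP column (4th field) from a captured line of
# `kubectl get service` output:
line="testipv4     LoadBalancer   fd00:10:96::4236   6.6.6.6   8080:30164/TCP   6s"
external_ip=$(echo "$line" | awk '{print $4}')
echo "$external_ip"   # prints 6.6.6.6
# On a live cluster, jsonpath is more robust than column parsing:
#   kubectl get service testipv4 -o jsonpath='{.spec.externalIPs[0]}'
```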
and using this script to set the ingress IP (see comment https://github.com/kubernetes-sigs/kind/issues/702#issuecomment-573641883)
https://gist.github.com/aojea/94e20cda0f4e4de16fe8e35afc678732
@aojea That's not a load balancer; an external IP can be set regardless of service type. If a load balancer controller is active, the ingress entries should appear in the service's status field.
Hi @adampl, thanks for clarifying; let me edit the comment.
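The distinction can be seen in the service object itself. A sketch using a sample JSON fragment standing in for a real service (the kubectl queries in the comments are the live-cluster equivalents; the service name is assumed):

```shell
# A hand-set external IP lives in .spec.externalIPs; a load balancer
# controller records the IPs it assigns in .status.loadBalancer.ingress.
svc='{"spec":{"externalIPs":["172.31.71.218"]},"status":{"loadBalancer":{}}}'
# The empty loadBalancer status shows no controller fulfilled the service,
# even though an external IP was set by hand:
echo "$svc" | grep -oF '"loadBalancer":{}'
# Live-cluster equivalents (assumed service name):
#   kubectl get service hello-world -o jsonpath='{.spec.externalIPs}'
#   kubectl get service hello-world -o jsonpath='{.status.loadBalancer.ingress}'
```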
For me, I'd love a solution similar to minikube tunnel. I test multiple services exposed via Istio's ingress-gateway and use DNS for resolution with fixed ports. The DNS config is automated: after running minikube tunnel, my script grabs the external IP and updates the DNS records.
First you have to solve the Docker for Mac / Linux issue that the VM in which containers run has no IP, and containers have no IPs reachable from the host (only port forwarding). At this time I'm not aware of a clean solution.

On Linux you don't need much; the node containers are routable out of the box, and if you want you can add a route to the service CIDR via a node.
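The Linux route mentioned above can be sketched like this. The service CIDR is kind's default and the node IP is an example value, so the command is echoed for inspection before being run with sudo:

```shell
# Sketch: route the cluster's service CIDR via a kind node so ClusterIP
# services are reachable from a Linux host. 10.96.0.0/12 is kind's default
# service CIDR; adjust if you changed it.
service_cidr="10.96.0.0/12"
# On a real host, resolve the node IP from the node container, e.g.:
#   node_ip=$(docker inspect -f '{{.NetworkSettings.Networks.kind.IPAddress}}' kind-control-plane)
node_ip="172.18.0.2"   # example value for illustration
cmd="ip route add $service_cidr via $node_ip"
echo "$cmd"   # run with sudo once it looks right
```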
Er docker for Mac / Windows*
We don't have any control over that, and it seriously limits our options
@aojea and I briefly discussed some prototypes for this, but not ready to move on anything yet.
we link to the metallb guide here https://kind.sigs.k8s.io/docs/user/resources/#how-to-use-kind-with-metalllb
FWIW, MetalLB also runs some CI with kind last I checked, but still Linux-only.
The info provided above is a bit outdated now.
This is how I managed to get it working on the latest version:
```bash
$ cat << EOF | kind create cluster --image kindest/node:v1.18.2@sha256:7b27a6d0f2517ff88ba444025beae41491b016bc6af573ba467b70c5e8e0d85f --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# 1 control plane node and 3 workers
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
```
```bash
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
```

On first install only:

```bash
$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
```

Then configure the address pool:

```bash
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.1-172.20.255.250
EOF
```

Expose a service:

```bash
$ kubectl expose replicaset echo --type=LoadBalancer
```

And it is reachable from the host:

```bash
$ curl http://172.20.255.1:8080
```
@BenTheElder @rubensa
I've been using this for 6-7 months now and it's been working pretty well for me.
-- https://github.com/Xtigyro/kindadm
If you are trying to get this working on Docker for Windows (it will probably work for Mac too), it is very similar to @rubensa's comment https://github.com/kubernetes-sigs/kind/issues/702#issuecomment-624561998, except for the addresses you need:
```bash
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 127.0.0.240/28
EOF
```
source: https://medium.com/@JockDaRock/kubernetes-metal-lb-for-docker-for-mac-windows-in-10-minutes-23e22f54d1c8
and then you can expose the service via

```bash
kubectl port-forward --address localhost,0.0.0.0 service/echo 8888:8080
```
I may update my fork of @Xtigyro's repo with the setup once I get it working properly.
update: did it https://github.com/williscool/deploy-kubernetes-kind
Adding to what @rubensa posted, this will auto-detect the correct address range for your kind network:

```bash
network=$(docker network inspect kind -f "{{(index .IPAM.Config 0).Subnet}}" | cut -d '.' -f1,2)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - $network.255.1-$network.255.250
EOF
```
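As a worked example of that pipeline: if `docker network inspect` reports a subnet of `172.20.0.0/16`, the `cut` keeps just the first two octets, which are then recombined into the pool range. Shown here against a fixed sample value instead of querying Docker:

```shell
# Same pipeline as above, fed a fixed sample subnet instead of querying Docker:
subnet="172.20.0.0/16"
network=$(echo "$subnet" | cut -d '.' -f1,2)
echo "$network"                            # prints 172.20
echo "$network.255.1-$network.255.250"     # prints 172.20.255.1-172.20.255.250
```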