Hi!
We are planning to use Kind as part of our development environment on our laptops; the idea is to use it in combination with Skaffold. We are having some difficulty making connections from the application running in pods inside Kind to the database and queues running on the host machine (usually also in another container).
To give you some context, we are basically using 2/3 approaches:
Approach 3. is fine, but it gets a bit more complicated, especially if data needs to be persisted (it can be done with persistent volumes and kind node mounts), as opposed to having these dependencies running in simple docker-compose files. Of course we have also tried running our app in docker-compose (since Kind doesn't aim to be an alternative to docker-compose), but using Kind lets us make our local environment look closer to our pre-production and production deployments, and test some features that are K8s-related (in our case: an embedded distributed cache (Hazelcast) that uses the K8s API for peer discovery).
Do you have any recommendation for this use case?
Thanks for this amazing project
I came up with a working experimental approach, but it requires some hacks and modifications in Kind. Steps:
1. Attach the Kind nodes to a pre-existing Docker network via the cluster config (`extraDockerOptions` is a field added by my patched Kind build):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.16.3
  extraDockerOptions:
  - --network
  - integration-tests
- role: worker
  image: kindest/node:v1.16.3
  extraDockerOptions:
  - --network
  - integration-tests
```

("integration-tests" is a pre-existing docker network: `docker network create integration-tests`)
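Assuming a Kind build patched with that experimental `extraDockerOptions` field, bringing the cluster up on the shared network would look roughly like this (the file name `kind-config.yaml` is just an example):

```shell
# Create the shared Docker network first (safe to re-run if it already exists)
docker network create integration-tests || true

# Create the cluster from the config above; this requires a kind binary
# patched with the experimental extraDockerOptions field
kind create cluster --config kind-config.yaml
```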
2. Create the cluster with that config. This will make the loop found at https://github.com/kubernetes-sigs/kind/pull/484#issuecomment-489414664 happen; the next steps are tricks to work around it.
3. In my experiment, I deployed a socat DaemonSet with `hostNetwork: true` to expose the Docker embedded DNS (`127.0.0.11:53`) on the K8s node IPs:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: socat-dns
  namespace: kube-system
  labels:
    k8s-app: socat-dns
spec:
  selector:
    matchLabels:
      name: socat-dns
  template:
    metadata:
      labels:
        name: socat-dns
    spec:
      hostNetwork: true
      containers:
      - name: socat
        image: alpine/socat
        args:
        - tcp-listen:5353,reuseaddr,fork
        - tcp:127.0.0.11:53
```
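To sanity-check the socat tunnel, one could query a node's IP on port 5353 directly from the host (a sketch; the `redis.integration-tests` name assumes a `redis` container is already attached to that network):

```shell
# Grab the first node's internal IP, then query Docker's embedded DNS
# through the socat tunnel (TCP, matching the tcp-listen side of socat)
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
dig +tcp @"$NODE_IP" -p 5353 redis.integration-tests
```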
4. Now modify the CoreDNS deployment, adding an env var that exposes the node's host IP:

```yaml
env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: status.hostIP
```

5. In the CoreDNS ConfigMap (Corefile), replace the `/etc/resolv.conf`-based forward with one targeting the host IP on port 5353:

```
# old config:
# forward . /etc/resolv.conf
forward . {$HOST_IP}:5353 {
    force_tcp
}
```
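For context, the resulting Corefile would look roughly like the default kind one with only the forward clause swapped. This is a sketch based on a typical CoreDNS default config for that era, not the exact file:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    # was: forward . /etc/resolv.conf
    forward . {$HOST_IP}:5353 {
        force_tcp
    }
    cache 30
    loop
    reload
    loadbalance
}
```

CoreDNS substitutes `{$HOST_IP}` from the environment variable added in step 4, and `force_tcp` matches the TCP-only socat listener from step 3.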
And profit! With this setup you can resolve the names of containers attached to the Docker network from inside Kind pods! :D :

```shell
$ docker run -d --network integration-tests --rm --name redis redis
$ kubectl run my-shell --rm -i --tty --image alpine -- sh
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ # nc -vz redis.integration-tests 6379
redis.integration-tests (172.24.0.3:6379) open
```
For steps 3, 4, and 5 I made some custom manifests with those modifications, so they can be applied easily with `kubectl apply -f coredns-deploy.yaml -f coredns-configmap.yaml -f socat-ds.yaml`.
I'm not super happy with the approach... it's kinda hacky, but it's an alternative 🤷‍♂️ , and maybe it can bring some ideas to the conversation at https://github.com/kubernetes-sigs/kind/issues/148 🤔 .
Do you think `extraDockerOptions` could be a good feature for Kind? It would allow great flexibility for adventurous users :P. If so, I'd be very happy to flesh it out and open a PR.
PS: Sorry for the long text.
Removing the forward breaks upstream DNS..?
I am working on docker networks support, please see other issues in the tracker for the problems with doing this properly. I'm currently developing a prototype.
We are not accepting docker flags anywhere in the API surface, we do not wish to be coupled to docker's CLI externally in any way and have discussed this in the past.
This should be possible without user defined networks IIRC, I will have to dig out the details.
host.docker.internal should be possible to mimic on linux
> Removing the forward breaks upstream DNS..?
It doesn't remove it; it replaces the `/etc/resolv.conf`-based forward with an IP:port forward, using the host IP and port 5353, which socat tunnels to 127.0.0.11:53 inside the kind node container. The problem with the original `/etc/resolv.conf` in CoreDNS was that it contains `nameserver 127.0.0.11`, mounted from the kind node container. Since that is a loopback address, CoreDNS detects it as a loop and crashes. My experiment "tricks" that check by using the host IP, avoiding the loop. I tested that both kube-dns and the Docker embedded DNS can be used, as well as public DNS resolution (like google.com). Anyway, this was just experimenting and playing around; I'm not happy with it as a solution.
> We are not accepting docker flags anywhere in the API surface, we do not wish to be coupled to docker's CLI externally in any way and have discussed this in the past.
👍
> host.docker.internal should be possible to mimic on linux
I will explore a bit more 🤔
Finally I found a way to achieve this with qoomon/docker-host:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dockerhost
  labels:
    k8s-app: dockerhost
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: dockerhost
  template:
    metadata:
      labels:
        k8s-app: dockerhost
    spec:
      containers:
      - name: dockerhost
        image: qoomon/docker-host
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        # Not needed on macOS:
        - name: DOCKER_HOST
          value: 172.17.0.1 # <-- docker bridge network default gateway
---
apiVersion: v1
kind: Service
metadata:
  name: dockerhost
spec:
  clusterIP: None # <-- headless service
  selector:
    k8s-app: dockerhost
```
From inside a pod running in Kind:

```shell
/ # curl dockerhost:8080 -I
HTTP/1.1 200 OK
Server: nginx/1.17.6
Date: Tue, 24 Dec 2019 11:56:11 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 19 Nov 2019 12:50:08 GMT
Connection: keep-alive
ETag: "5dd3e500-264"
Accept-Ranges: bytes
```
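If it helps anyone reproduce this, the manifests above can be applied and smoke-tested roughly like so (the file name `dockerhost.yaml` and the nginx workload on host port 8080 are just examples):

```shell
kubectl apply -f dockerhost.yaml

# Example host-side workload to hit (port 8080 is arbitrary)
docker run -d --rm --name web -p 8080:80 nginx

# From a throwaway pod, host ports are reachable via the headless service name
kubectl run curl-test --rm -i --tty --image curlimages/curl --restart=Never \
  -- curl -I dockerhost:8080
```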
Thanks @BenTheElder for pointing me in another direction! :) Do you think this trick is worth adding to the user guide documentation ("Accessing host machine ports")?
In any case, I think we can close this question. Thanks for the assistance!
@fllaca how exactly did you get the final solution to work? I did a `k apply -f` on your post above and tried to see if it worked, but it failed. Does it require the extra docker options?
edit:
ah.... found it. I left the docker bridge network default gateway empty and it worked. I'm on macOS, so I guess it's different there.
I think adding this to the docs would be great.
Hi @s12chung!
Yes, I had to set `DOCKER_HOST` when using a Linux host. On macOS, qoomon/docker-host uses the special DNS name "host.docker.internal" to forward the traffic, so `DOCKER_HOST` must not be set.
I think it is possible to get the same result just by allowing the ports in the firewall, without needing a container to NAT connections to the Docker default gateway.
I had success with this:

1. Run a local container: myapp, exposing port 8081
2. Allow the port in the firewall:

```shell
iptables -I INPUT -p tcp --dport 8081 -j ACCEPT
```

3. Now, from inside a test container in the kind cluster, you can run:

```shell
curl 172.17.0.1:8081
```

where 172.17.0.1 is your Docker bridge default gateway.
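Putting those steps together, a minimal end-to-end check might look like this (container name `myapp`, port 8081, and nginx as a stand-in workload are all illustrative):

```shell
# 1. Run the app on the host, publishing port 8081
docker run -d --rm --name myapp -p 8081:80 nginx

# 2. Allow the port through the host firewall
sudo iptables -I INPUT -p tcp --dport 8081 -j ACCEPT

# 3. From inside a pod in the kind cluster, reach the host through the
#    Docker bridge default gateway (172.17.0.1 by default)
kubectl run curl-test --rm -i --tty --image curlimages/curl --restart=Never \
  -- curl -I 172.17.0.1:8081
```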