What would you like to be added:
I would like to add host.docker.internal to the kubeadm certSAN list.
This is useful on both macOS and Windows.
I think I found the place that would need to be changed:
See SourceGraph
Why is this needed:
So I can connect to the Kind cluster from within another container (same host, but not inside the cluster).
At the moment you get an error if you change https://localhost:<port> to https://host.docker.internal:<port> in your kubeconfig:

```
Unable to connect to the server: x509: certificate is valid for traefik-control-plane, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, localhost, not host.docker.internal
```
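For context, this is ordinary x509 hostname verification: the API server's serving certificate simply does not list host.docker.internal in its subjectAltName extension. A standalone sketch of the same check with a throwaway self-signed certificate (nothing kind-specific; file paths and names are illustrative, and it assumes OpenSSL 1.1.1+ for `-addext`/`-ext`):

```shell
# Create a throwaway self-signed cert whose SAN list mirrors the one in
# the error above (kubernetes, localhost, but no host.docker.internal).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=kubernetes" \
  -addext "subjectAltName=DNS:kubernetes,DNS:localhost"

# Print the SAN list: host.docker.internal is absent, so any TLS client
# (kubectl included) must reject a connection under that hostname.
openssl x509 -noout -ext subjectAltName -in /tmp/demo.crt
```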
There are at this point two ways to get around the above error:

1. Pass --insecure-skip-tls-verify to kubectl.
2. Add host.docker.internal to the API server's certSANs with a kubeadm config patch:

```yaml
# kind-config.yml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
kubeadmConfigPatchesJson6902:
- group: kubeadm.k8s.io
  version: v1beta1
  kind: ClusterConfiguration
  patch: |
    - op: add
      path: /apiServer/certSANs/-
      value: host.docker.internal
```
@BenTheElder asked me to create an issue for the above, see our Slack chat: https://kubernetes.slack.com/archives/CEKK1KTN2/p1558816949030500
@Ilyes512 bear in mind that you can connect directly to the cluster API from the new container, i.e. obtain the container/node IP address to avoid using host.docker.internal.
There is a pending patch, https://github.com/kubernetes-sigs/kind/pull/478, that lets you get the internal kubeconfig; once it is merged you can use that instead of replacing localhost with host.docker.internal.
> @Ilyes512 bear in mind that you can connect directly to the cluster API from the new container, i.e. obtain the container/node IP address to avoid using host.docker.internal.

+1, or you can add a new container directly to the control-plane's network stack by passing --network to docker run.
@aojea:
The --internal flag looks good and would indeed work for my purpose.
@tao12345666333:
> you can add a new container directly to the control-plane's network stack by passing --network to docker run

How would I do this? I know I can create a new network and attach both the control-plane and the container to it, so I can reach the control-plane by hostname. I'm not sure how to do it any other way, though.
@Ilyes512 the kubeconfig that you have on your host is the same one that the nodes have, except that the internal IP address and port are replaced by localhost and the forwarded port.
You can obtain it with docker exec kind-control-plane sh -c 'cat /etc/kubernetes/admin.conf' and use it in other containers.
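The suggestion above can be sketched end to end. Assumptions: the control-plane container uses the default name kind-control-plane (a cluster named "kind"), the second container can reach the node's IP over the same Docker network, and bitnami/kubectl is merely one example of an image that ships kubectl:

```shell
# Copy the internal kubeconfig out of the control-plane node. Its
# server: field points at the node's container IP, not localhost.
docker exec kind-control-plane sh -c 'cat /etc/kubernetes/admin.conf' > internal-kubeconfig

# Use it from another container on the same Docker network
# (bitnami/kubectl is just an illustrative image with kubectl in it).
docker run --rm \
  -v "$PWD/internal-kubeconfig:/tmp/kubeconfig:ro" \
  bitnami/kubectl:latest --kubeconfig /tmp/kubeconfig get nodes
```

Since the internal kubeconfig uses the node's certificate-valid name/IP, no certSAN change or --insecure-skip-tls-verify is needed with this approach.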
> How would I do this? I know I can create a new network and attach both the control-plane and the container to it, so I can reach the control-plane by hostname. I'm not sure how to do it any other way, though.
just like:

```
(MoeLove) ➜ ~ docker run --rm -d redis
b6a9a6076bd9e9c70818eb5690dadc9de39fb082d26bcf38cd834e73e8dc6639
(MoeLove) ➜ ~ docker ps -l
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS      NAMES
b6a9a6076bd9   redis   "docker-entrypoint.s…"   5 seconds ago   Up 3 seconds   6379/tcp   practical_blackburn
(MoeLove) ➜ ~ docker run --rm -it --network container:b6a9a6076bd9 redis sh
# redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379>
# hostname
b6a9a6076b d9
#
```
--network container:<Your Control Plane Container ID>
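Applied to kind, the same trick might look like the sketch below (assumptions: the node container's name ends in "control-plane" as with default kind clusters, the API server listens on the node's port 6443, and curlimages/curl is just a convenient image with curl; /version is served to unauthenticated clients under default RBAC):

```shell
# Find the control-plane container for the running kind cluster.
CP_ID="$(docker ps --filter name=control-plane --format '{{.ID}}' | head -n1)"

# A container joined to that network stack sees the API server on
# localhost:6443; -k skips cert verification for this quick check.
docker run --rm --network "container:${CP_ID}" \
  curlimages/curl:latest -sk https://localhost:6443/version
```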
Should I close this? docker exec kind-control-plane sh -c 'cat /etc/kubernetes/admin.conf' will do the trick until the --internal flag is added.
@BenTheElder I think that you fixed this with #573
https://github.com/kubernetes-sigs/kind/pull/478 adds an equivalent to https://github.com/kubernetes-sigs/kind/issues/566#issuecomment-496516602 :sweat_smile: