What happened:
After installing kind and creating a cluster with `kind create cluster --config config.yaml`, using this config:
```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
networking:
  apiServerAddress: "127.0.0.1"
  apiServerPort: 6443
  podSubnet: "10.240.0.0/16"
  serviceSubnet: "10.0.0.0/16"
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraMounts:
  - hostPath: C:\local_deployment_files
    containerPath: /files
  extraPortMappings:
  - containerPort: 30000
    hostPort: 80
    listenAddress: "127.0.0.31"
    protocol: TCP
  - containerPort: 30011
    hostPort: 5001
    listenAddress: "127.0.0.31"
    protocol: TCP
  - containerPort: 30012
    hostPort: 5006
    listenAddress: "127.0.0.31"
    protocol: TCP
```
everything works as expected. However, once I restart the KinD container or restart Docker Desktop, the ports are no longer bound.
What you expected to happen:
The ports will remain bound as configured.
How to reproduce it (as minimally and precisely as possible):
1. Create a new KinD cluster with `extraPortMappings`.
2. Restart Docker Desktop.
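A quick way to confirm the symptom after the restart (a sketch; `kind-control-plane` is the default node container name for a single-node cluster, and the port/address come from the config above):

```shell
# Show the port mappings docker believes exist for the kind node container
docker port kind-control-plane

# Check whether anything is actually listening on one of the mapped host ports
curl --connect-timeout 2 http://127.0.0.31:80/ || echo "host port not bound"
```

If `docker port` still lists the mappings but the `curl` fails, the problem is on the docker side of the binding rather than in the kind config.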
Environment:
docker-desktop: community 2.5.0.1
kind version: kind v0.9.0 go1.15.2 windows/amd64
kubectl version: v1.19.3
docker info: 19.03.13
/etc/os-release: Windows 10 19041

Can you debug more about what is and isn't working here?
Otherwise this just sounds like a bug in docker; kind is only responsible for telling docker to do the mapping, and docker is responsible for actually binding the ports 🤷‍♂️
These are mapped to docker's `-p` flag: https://docs.docker.com/config/containers/container-networking/
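You can see the publish configuration kind handed to docker by inspecting the node container directly (a sketch; `kind-control-plane` is the default node container name):

```shell
# Print the published-port bindings recorded in the container's HostConfig
docker inspect -f '{{json .HostConfig.PortBindings}}' kind-control-plane
```

If the bindings are still present in `HostConfig` after a restart but unreachable from the host, that points at docker's port-forwarding layer rather than at kind.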
This bug does not exist when using docker on other platforms as far as I can tell. I don't know that we can do anything about this here.
It looks like after the docker-desktop restart, the pods' uptime keeps counting as if there was no reset. Does KinD keep them persistent?
After redeploying the pods, everything is working again.
Yes, kind uses persistent storage for pods.
Thanks.
Redeploying my deployment fixed it for me.
I'm closing this issue.
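For anyone hitting the same thing, the redeploy workaround can be done without touching the manifests (a sketch; `my-app` is a placeholder for your deployment name):

```shell
# Recreate the deployment's pods so they come up with fresh networking
kubectl rollout restart deployment/my-app

# Wait until the new pods are ready
kubectl rollout status deployment/my-app
```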
Thanks very much!
I wonder if the deployment is stuck in some other way such as using cached DNS results for the node IPs?
It's still possible something is broken here, but it's not clear _where_.
Yes, agreed. I will continue investigating this and update once I have something.
Thanks