BUG REPORT
kubeadm version (use kubeadm version):
Environment:
Kubernetes version (use kubectl version): v1.17.0-alpha.0.422+41a4d87fb8a048-dirty
Kernel (e.g. uname -a):

What happened?
After applying a custom configuration to CoreDNS, the service reloads but the pods are not ready because there is no endpoint listening and the readiness probe fails.

What you expected to happen?
CoreDNS should reload and work normally.
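For reference, the readiness probe in question is the one on the kubeadm-managed CoreDNS Deployment, which targets the ready plugin's HTTP endpoint roughly like this (abbreviated sketch):

readinessProbe:
  httpGet:
    path: /ready
    port: 8181
    scheme: HTTP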
Before applying the custom configuration, the CoreDNS pods are ready:

kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-d4b9d4d8b-vgsdl 1/1 Running 0 35m
coredns-d4b9d4d8b-x5ccc 1/1 Running 0 35m
Apply a custom CoreDNS configuration:

cat <<EOF | kubectl apply -f -
---
apiVersion: v1
data:
  Corefile: |
    .:53 {
        ready
        errors
        health
        kubernetes cluster.local internal in-addr.arpa ip6.arpa {
           pods insecure
        }
        prometheus :9153
        cache 30
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
---
EOF
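One way to confirm the new Corefile landed in the ConfigMap that the reload plugin watches:

kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'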
The CoreDNS logs show the configuration being reloaded:

2019-08-21T17:13:08.695Z [INFO] plugin/reload: Running configuration MD5 = f1ce4426d7c1de8525926200dd31a2d4
[INFO] Reloading complete
After the reload, however, the pods are no longer ready:

NAME READY STATUS RESTARTS AGE
pod/coredns-d4b9d4d8b-vgsdl 0/1 Running 0 39m
pod/coredns-d4b9d4d8b-x5ccc 0/1 Running 0 39m
Nothing is listening on the ready endpoint (8181 in our case):

root@kind-control-plane:/# curl http://[fd00:10:244::2]:8181/ready
curl: (7) Failed to connect to fd00:10:244::2 port 8181: Connection refused
However, the health endpoint works:
root@kind-control-plane:/# curl http://[fd00:10:244::2]:8080/health
OK
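The failing readiness checks also show up as events on the pods; they can be inspected with the same k8s-app=kube-dns label the pods carry:

kubectl -n kube-system describe pods -l k8s-app=kube-dns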
Deleting the pods works around it; the recreated pods become ready and the ready endpoint answers:

kubectl -n kube-system delete pods -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-d4b9d4d8b-28fwk 1/1 Running 0 9h fd00:10:244:0:1::f kind-worker <none> <none>
coredns-d4b9d4d8b-wlmf4 1/1 Running 0 9h fd00:10:244:0:2::a kind-worker2 <none> <none>
root@kind-worker:/# curl [fd00:10:244:0:1::f]:8181/ready
OK
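Since readiness gates Service endpoint membership, the effect is also visible on the kube-dns Service: its endpoints drop out while the pods are 0/1 and come back once the recreated pods pass the probe. A quick check, assuming the default kube-dns Service name:

kubectl -n kube-system get endpoints kube-dns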
/cc @neolit123
@aojea
Apply a custom CoreDNS configuration: kubectl apply -f custome-coredns.yaml
please share your custome-coredns.yaml.
if i delete my coredns 1.5.0 pods they are recreated correctly, so it feels like this is a problem with the custom config or just a coredns problem and not a kubeadm one. :)
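A quick way to check which CoreDNS image a cluster is actually running (assuming the default coredns Deployment name):

kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.template.spec.containers[0].image}'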
@aojea
i was able to reproduce the problem.
what fix are you proposing in the Corefile of kubeadm?
if this cannot be fixed in kubeadm it should be reported to the coredns maintainers (and this ticket closed).
@chrisohaver @rajansandeep
hi, @aojea discovered a problem with coredns reload ^
should we be using 1.5.2 in kubeadm instead of 1.5.0 for 1.16?
xref: https://github.com/kubernetes-sigs/kind/pull/799#issuecomment-523922175
@neolit123 yes, we intend to include CoreDNS v1.6.2 for k8s 1.16.
I've opened https://github.com/kubernetes/kubernetes/issues/81810 for the gcr.io image.
thanks for the confirmation @rajansandeep !
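For anyone who needs the newer CoreDNS before the kubeadm default changes, the image tag can be pinned at init time through the ClusterConfiguration dns section; a minimal sketch, assuming the v1beta2 config API and that the desired tag is published:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
dns:
  type: CoreDNS
  imageTag: "1.6.2"  # assumed tag; substitute whatever tag is actually available

The file is then passed to kubeadm init --config.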