Kubeadm: [update coredns to 1.6.2] 1.5.0 doesn't restart after configuration reload

Created on 23 Aug 2019 · 6 comments · Source: kubernetes/kubeadm

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version):

Environment:

  • Kubernetes version (use kubectl version): v1.17.0-alpha.0.422+41a4d87fb8a048-dirty
  • Cloud provider or hardware configuration: kind
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Others: CoreDNS-1.5.0

What happened?

After applying a custom configuration to CoreDNS, the service reloads, but the pods never become ready: nothing is listening on the readiness endpoint, so the readiness probe fails.
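For context, the CoreDNS deployment that kubeadm installs gates pod readiness on the ready plugin's HTTP endpoint, which is why the pods flip to 0/1 when that endpoint stops listening. A minimal sketch of the relevant probe, assuming the values match the stock manifest:

# Sketch of the readinessProbe in the kubeadm-managed CoreDNS deployment
# (path/port assumed to match the stock manifest)
readinessProbe:
  httpGet:
    path: /ready
    port: 8181
    scheme: HTTP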

What you expected to happen?

CoreDNS should reload and continue to work normally.
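(For context: the reload plugin in the Corefile is what makes CoreDNS pick up the new ConfigMap; it periodically hashes the Corefile and reloads when the MD5 changes, which matches the "Running configuration MD5" log line below. A minimal sketch using the plugin's documented reload INTERVAL JITTER form:)

.:53 {
    # re-check the Corefile every 10s (with 5s jitter) and reload on change
    reload 10s 5s
    # remaining plugins as in the ConfigMap shown below
}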

How to reproduce it (as minimally and precisely as possible)?

  1. Deploy a kind cluster with a control-plane node and check that everything is running correctly (an example IPv6 kind config is sketched after this list):
kubectl get pods -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-d4b9d4d8b-vgsdl                      1/1     Running   0          35m
coredns-d4b9d4d8b-x5ccc                      1/1     Running   0          35m
  2. Apply a custom CoreDNS configuration:
cat <<EOF | kubectl apply -f -
---
apiVersion: v1
data:
  Corefile: |
    .:53 {
        ready
        errors
        health
        kubernetes cluster.local internal in-addr.arpa ip6.arpa {
           pods insecure
        }
        prometheus :9153
        cache 30
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
---
EOF
2019-08-21T17:13:08.695Z [INFO] plugin/reload: Running configuration MD5 = f1ce4426d7c1de8525926200dd31a2d4
[INFO] Reloading complete
  3. CoreDNS reloads after some time, but the pods do not become ready:
NAME                                             READY   STATUS    RESTARTS   AGE
pod/coredns-d4b9d4d8b-vgsdl                      0/1     Running   0          39m
pod/coredns-d4b9d4d8b-x5ccc                      0/1     Running   0          39m
  4. Check the ready endpoint (port 8181 in our case):
root@kind-control-plane:/# curl http://[fd00:10:244::2]:8181/ready
curl: (7) Failed to connect to fd00:10:244::2 port 8181: Connection refused

However, the health endpoint works:

root@kind-control-plane:/# curl http://[fd00:10:244::2]:8080/health
OK
  5. After deleting the pods, they are recreated and work; this time the 8181 endpoint answers:

kubectl -n kube-system delete pods -l k8s-app=kube-dns

NAME                                         READY   STATUS    RESTARTS   AGE   IP                       NODE                 NOMINATED NODE   READINESS GATES
coredns-d4b9d4d8b-28fwk                      1/1     Running   0          9h    fd00:10:244:0:1::f       kind-worker          <none>           <none>
coredns-d4b9d4d8b-wlmf4                      1/1     Running   0          9h    fd00:10:244:0:2::a       kind-worker2         <none>           <none>
root@kind-worker:/# curl [fd00:10:244:0:1::f]:8181/ready
OK
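
(As referenced in step 1: the fd00: addresses above imply an IPv6 kind cluster. A minimal config sketch for reproducing this, assuming a kind release that supports the ipFamily networking field:)

# kind-ipv6.yaml -- sketch; apiVersion assumes a recent kind release
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: ipv6

kind create cluster --config kind-ipv6.yaml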

Anything else we need to know?

kind/bug priority/important-soon sig/network

All 6 comments

/cc @neolit123

@aojea

Apply a custom CoreDNS configuration kubectl apply -f custome-coredns.yaml

please share your custome-coredns.yaml.

if i delete my coredns 1.5.0 pods they are recreated correctly, so it feels like this is a problem with the custom config or just a coredns problem and not a kubeadm one. :)

@aojea

i was able to reproduce the problem.
what fix are you proposing in the Corefile of kubeadm?

if this cannot be fixed in kubeadm it should be reported to the coredns maintainers (and this ticket closed).

@chrisohaver @rajansandeep

hi, @aojea discovered a problem with coredns reload ^
should we be using 1.5.2 in kubeadm instead of 1.5.0 for 1.16?

xref: https://github.com/kubernetes-sigs/kind/pull/799#issuecomment-523922175

@neolit123 yes, we intend to include CoreDNS v1.6.2 for k8s 1.16.
I've opened https://github.com/kubernetes/kubernetes/issues/81810 for the gcr.io image.
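
(For clusters already running CoreDNS 1.5.0, a workaround sketch is to bump the image of the existing deployment in place; the repository/tag below assumes the gcr.io image referenced above is published:)

# Sketch: point the kubeadm-managed deployment at the newer image
kubectl -n kube-system set image deployment/coredns coredns=k8s.gcr.io/coredns:1.6.2
# Watch the pods roll and become Ready again
kubectl -n kube-system get pods -l k8s-app=kube-dns -w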

thanks for the confirmation @rajansandeep !
