Kubespray: CoreDNS version mismatch for kubespray 2.14.1 and Kubernetes v1.19.2

Created on 1 Oct 2020 · 5 comments · Source: kubernetes-sigs/kubespray

Kubespray v2.14.1 has version 1.6.7 baked into its role defaults for CoreDNS.
This causes any upgrade from v1.18.x to v1.19.x to fail, leaving the coredns pods in CrashLoopBackOff.

  roles/download/defaults/main.yml:coredns_version: "1.6.7"
  roles/download/defaults/main.yml:coredns_image_tag: "{{ coredns_version }}"

Kubeadm v1.19.2 requires CoreDNS 1.7.0 to function, because it introduces the max_concurrent config flag into the coredns configmap.

When using upgrade-cluster.yml, kubeadm deploys the following configmap for CoreDNS, which results in CoreDNS 1.6.7 being unable to start because of the max_concurrent config option:

  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }

The CoreDNS configmap for 1.6.7 looks like this:

  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          prefer_udp
        }
        cache 30
        loop
        reload
        loadbalance
    }

upgrade-cluster.yml and cluster.yml appear to behave differently with regard to kubeadm during an upgrade: upgrading a cluster with the latter succeeds while using CoreDNS 1.6.7.

Either upgrade-cluster.yml and cluster.yml need to behave identically in how they deploy CoreDNS, or the component version needs to be updated for Kubernetes v1.19.x+ (sketched below).
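
One possible shape for the version bump, sketched here under the assumption that keying the default off kube_version is acceptable (this is not the actual upstream patch), would be in roles/download/defaults/main.yml:

  # Sketch: select the CoreDNS release matching the kubeadm-generated Corefile
  coredns_version: "{{ '1.7.0' if kube_version is version('v1.19.0', '>=') else '1.6.7' }}"
  coredns_image_tag: "{{ coredns_version }}"

This would keep 1.18.x clusters on 1.6.7 while giving 1.19.x clusters an image that understands max_concurrent.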

My workaround for this issue when using upgrade-cluster.yml was to override the coredns_version default from ./roles/download/defaults/main.yml inside inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml, as shown below.
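
Concretely, assuming the 1.7.0 image is pullable in your environment, the override is a one-liner:

  # inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
  coredns_version: "1.7.0"

coredns_image_tag does not need to be touched, since it defaults to "{{ coredns_version }}".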

Related release notes for Kubernetes and CoreDNS:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#feature
https://coredns.io/2020/06/15/coredns-1.7.0-release/

kind/bug

Most helpful comment

That is indeed a bit misleading; the hashes may or may not be supported, but they are entirely untested.
E.g. only after tagging 2.14.x did we set the Kubernetes version in master to 1.19.x and start fixing issues with 1.19.

If there isn't anything in the docs about this you are right, we should add something somewhere 👍

I will have a more thorough look through the docs and double-check; at least I didn't see anything in the obvious places.

If I don't find anything related, I'll open a PR proposing a change to the docs.

All 5 comments

@5-sigma I don't really get it.
2.14.1 is not fit for 1.19.x; the target Kubernetes version is 1.18.x and the default is 1.18.9.

If you really want to try a codebase with 1.19, for now you should check master (and CoreDNS is indeed updated there to 1.7.0 because of the issue you mentioned).
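
For anyone who would rather stay on the version the release was tested against, a minimal sketch (assuming the standard inventory layout) is to pin kube_version in group_vars:

  # inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
  kube_version: v1.18.9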

@floryut thank you for taking the time to respond.

I may have been wrong in assuming that the component hashes included in a Kubespray release would also be supported.

I think this assumption held up until Kubespray release 2.12; from there on I start finding component versions higher than what the README.md lists as supported.

If included component hashes should not be taken as an indication of Kubespray support, it might be worth mentioning this either in the download role's defaults.yml or in the README.md to avoid confusion.

That is indeed a bit misleading; the hashes may or may not be supported, but they are entirely untested.
E.g. only after tagging 2.14.x did we set the Kubernetes version in master to 1.19.x and start fixing issues with 1.19.

If there isn't anything in the docs about this you are right, we should add something somewhere 👍

I will have a more thorough look through the docs and double-check; at least I didn't see anything in the obvious places.

If I don't find anything related, I'll open a PR proposing a change to the docs.

That'd be very welcome indeed.
