Kubeadm: isCoreDNSVersionSupported: avoid panics and wait for pending containers

Created on 28 Aug 2020 · 14 comments · Source: kubernetes/kubeadm

Following up from #2078:

[daniel@reef kured ]$ minikube start --vm-driver kvm2 --kubernetes-version 1.19.0
😄  minikube v1.12.3 on Ubuntu 20.04
✨  Using the kvm2 driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🏃  Updating the running kvm2 "minikube" VM ...
🐳  Preparing Kubernetes v1.19.0 on Docker 19.03.12 ...
🤦  Unable to restart cluster, will reset it: addons: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 2
stdout:

stderr:
W0828 09:07:48.306773    5151 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
panic: runtime error: index out of range [0] with length 0

goroutine 1 [running]:
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.isCoreDNSVersionSupported(0x1d5b6a0, 0xc0001491e0, 0xc00060c000, 0xc0005b4000, 0x153)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:413 +0x38f
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createCoreDNSAddon(0xc0005e4000, 0x7fb, 0xb5f, 0xc0008b1680, 0x309, 0x418, 0xc00080f800, 0x20a, 0x3dc, 0x1d5b6a0, ...)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:271 +0x187
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.coreDNSAddon(0xc0004f8da0, 0x1d5b6a0, 0xc0001491e0, 0xc0005ffafc, 0x0, 0x0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:252 +0x505
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.EnsureDNSAddon(0xc0004f8da0, 0x1d5b6a0, 0xc0001491e0, 0x1d5b6a0, 0xc0001491e0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:109 +0x13e
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runCoreDNSAddon(0x1a58420, 0xc00003c9a0, 0xc0004cdca0, 0xd)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/addons.go:92 +0x85
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1(0xc00061f600, 0x0, 0x0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234 +0x1e8
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll(0xc0006bcb40, 0xc0007f3ce0, 0x0, 0x2)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422 +0x6e
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run(0xc0006bcb40, 0xc0003c6fc0, 0x0, 0x2, 0x726420, 0xc000521d48)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207 +0x153
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).BindToCommand.func1.1(0xc0002b1080, 0xc0003c6fc0, 0x0, 0x2, 0x0, 0x0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:348 +0xda
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0002b1080, 0xc0003c6f60, 0x2, 0x2, 0xc0002b1080, 0xc0003c6f60)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:842 +0x47c
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00038a840, 0xc00000e010, 0x1cecec0, 0xc00000e018)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887
k8s.io/kubernetes/cmd/kubeadm/app.Run(0x0, 0x18249a0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50 +0x225
main.main()
    _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25 +0x25

🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
[daniel@reef kured ]$ minikube version
minikube version: v1.12.3
commit: 2243b4b97c131e3244c5f014faedca0d846599f5-dirty
[daniel@reef kured ]$ minikube kubectl -- version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:23:04Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
[daniel@reef kured ]$ 

This is with minikube 1.12.3, k8s 1.19.0 and a manual restart of minikube (because of https://github.com/kubernetes/minikube/issues/2874) as I wanted to test the new kured release.

_Originally posted by @dholbach in https://github.com/kubernetes/kubeadm/issues/2078#issuecomment-682420921_


Related report by @kvaps here:
https://github.com/kubernetes/kubernetes/issues/94286

What happened:

During the upgrade to v1.19, the CoreDNS deployment cannot be upgraded:

# kubeadm init phase addon coredns --config /config/kubeadmcfg.yaml
W0827 17:45:09.167031     251 kubelet.go:200] cannot automatically set CgroupDriver when starting the Kubelet: cannot execute 'docker info -f {{.CgroupDriver}}': executable file not found in $PATH
W0827 17:45:09.167066     251 kubelet.go:213] cannot determine if systemd-resolved is active: no supported init system detected, skipping checking for services
W0827 17:45:09.475045     251 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
error execution phase addon/coredns: unable to match SHA256 digest ID in ""
To see the stack trace of this error execute with --v=5 or higher

What you expected to happen:

CoreDNS successfully upgraded.

How to reproduce it (as minimally and precisely as possible):

Apply my manifests (coredns.txt), then run:

kubeadm init phase addon coredns

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): v1.18.8 --> v1.19.0
  • Cloud provider or hardware configuration: kubefarm
  • OS (e.g: cat /etc/os-release): ubuntu 20.04.1
  • Kernel (e.g. uname -a): 5.4.0-42-generic
  • Install tools: kubeadm
  • Network plugin and version (if this is a network-related bug):
  • Others:

Labels: kind/bug, priority/important-longterm


This suggests that kubeadm is trying to obtain the first container of the CoreDNS Pod, but for some reason there are no containers:
https://github.com/kubernetes/kubernetes/blob/bf94f27e76c541db57098ed69aea47d889703669/cmd/kubeadm/app/phases/addons/dns/dns.go#L413

This should not happen unless something tampered with the CoreDNS deployment.
Any ideas or ways to debug why this is happening on the minikube side?
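For illustration, the defensive pattern being discussed can be sketched in Go. This is not the actual kubeadm code; the `container` type and `coreDNSImageID` helper are hypothetical stand-ins for the corev1 pod-status types that `isCoreDNSVersionSupported` reads. The point is to replace the unchecked `containers[0]` access (the source of `index out of range [0] with length 0` at dns.go:413) with a guard that returns a retryable error:

```go
package main

import (
	"errors"
	"fmt"
)

// container is a hypothetical stand-in for the minimal shape of a pod
// container status that the version check needs.
type container struct {
	Name    string
	ImageID string
}

// coreDNSImageID returns the ImageID of the first container, guarding
// against the empty-slice case. A pod that is still Pending reports no
// containers yet, so instead of panicking the caller gets an error it
// can retry on.
func coreDNSImageID(containers []container) (string, error) {
	if len(containers) == 0 {
		return "", errors.New("coredns pod has no containers yet; it may still be pending")
	}
	return containers[0].ImageID, nil
}

func main() {
	// Pending pod: empty container list yields an error, not a panic.
	if _, err := coreDNSImageID(nil); err != nil {
		fmt.Println("pending:", err)
	}

	// Running pod: the image digest is returned normally.
	id, _ := coreDNSImageID([]container{{Name: "coredns", ImageID: "docker://sha256:abc"}})
	fmt.Println("imageID:", id)
}
```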

Update: After rebooting the main control plane server, and re-running kubeadm upgrade, upgrade was successful.


For what it's worth I'm currently experiencing this exact error on a production (on-premise) cluster that I'm working on upgrading from v1.18.8 to v1.19.0. I'm trying to figure out how to recover from this.

...
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
panic: runtime error: index out of range [0] with length 0

goroutine 1 [running]:
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.isCoreDNSVersionSupported(0x1d5b6a0, 0xc000580000, 0xc0007e4a00, 0xc000260b40, 0x130)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:413 +0x38f
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createCoreDNSAddon(0xc0007f0000, 0x7fb, 0xb5f, 0xc00087c480, 0x309, 0x418, 0xc0007e8c00, 0x20a, 0x3dc, 0x1d5b6a0, ...)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:271 +0x187
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.coreDNSAddon(0xc0001446e0, 0x1d5b6a0, 0xc000580000, 0xc000b790dc, 0x0, 0x0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:252 +0x505
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.EnsureDNSAddon(0xc0001446e0, 0x1d5b6a0, 0xc000580000, 0x8, 0x8)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:109 +0x13e
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.PerformPostUpgradeTasks(0x1d5b6a0, 0xc000580000, 0xc0001446c0, 0x0, 0xc000841290, 0xc0001446c0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/postupgrade.go:133 +0x6a7
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.runApply(0xc00003a680, 0xc000478650, 0x1, 0x1, 0x0, 0x0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/apply.go:170 +0x505
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdApply.func1(0xc000882b00, 0xc000478650, 0x1, 0x1, 0x0, 0x0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/apply.go:77 +0x48
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000882b00, 0xc000478600, 0x1, 0x1, 0xc000882b00, 0xc000478600)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:842 +0x47c
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000589b80, 0xc00000e010, 0x1cecec0, 0xc00000e018)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887
k8s.io/kubernetes/cmd/kubeadm/app.Run(0x0, 0x18249a0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50 +0x225
main.main()
    _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25 +0x25


Full output

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0828 15:35:56.382175   29665 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /etc/resolv.conf
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.19.0"
[upgrade/versions] Cluster version: v1.18.8
[upgrade/versions] kubeadm version: v1.19.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.0"...
Static pod: kube-apiserver-springfield hash: 7e941578df8c60ba024f55eaff9482e2
Static pod: kube-controller-manager-springfield hash: bbaf40983142d95e62d981c91c386acd
Static pod: kube-scheduler-springfield hash: c808ba8a724ff4e00643b5c4f7fc454b
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-springfield hash: 96eb0ccef62afbd6325cf94b6cfcc36e
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-08-28-15-36-37/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-springfield hash: 96eb0ccef62afbd6325cf94b6cfcc36e
Static pod: etcd-springfield hash: 15cd58333c2903224040663cb7ac361a
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests527562153"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-08-28-15-36-37/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-springfield hash: 7e941578df8c60ba024f55eaff9482e2
Static pod: kube-apiserver-springfield hash: 44f2a347608dcfe07f28916e1744cf13
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-08-28-15-36-37/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-springfield hash: bbaf40983142d95e62d981c91c386acd
Static pod: kube-controller-manager-springfield hash: 550f8349a20814efdf3fc91175ab2399
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-08-28-15-36-37/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-springfield hash: c808ba8a724ff4e00643b5c4f7fc454b
Static pod: kube-scheduler-springfield hash: 23d2ea3ba1efa3e09e8932161a572387
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
panic: runtime error: index out of range [0] with length 0

goroutine 1 [running]:
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.isCoreDNSVersionSupported(0x1d5b6a0, 0xc000580000, 0xc0007e4a00, 0xc000260b40, 0x130)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:413 +0x38f
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createCoreDNSAddon(0xc0007f0000, 0x7fb, 0xb5f, 0xc00087c480, 0x309, 0x418, 0xc0007e8c00, 0x20a, 0x3dc, 0x1d5b6a0, ...)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:271 +0x187
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.coreDNSAddon(0xc0001446e0, 0x1d5b6a0, 0xc000580000, 0xc000b790dc, 0x0, 0x0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:252 +0x505
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.EnsureDNSAddon(0xc0001446e0, 0x1d5b6a0, 0xc000580000, 0x8, 0x8)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:109 +0x13e
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.PerformPostUpgradeTasks(0x1d5b6a0, 0xc000580000, 0xc0001446c0, 0x0, 0xc000841290, 0xc0001446c0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/postupgrade.go:133 +0x6a7
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.runApply(0xc00003a680, 0xc000478650, 0x1, 0x1, 0x0, 0x0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/apply.go:170 +0x505
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdApply.func1(0xc000882b00, 0xc000478650, 0x1, 0x1, 0x0, 0x0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/apply.go:77 +0x48
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000882b00, 0xc000478600, 0x1, 0x1, 0xc000882b00, 0xc000478600)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:842 +0x47c
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000589b80, 0xc00000e010, 0x1cecec0, 0xc00000e018)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887
k8s.io/kubernetes/cmd/kubeadm/app.Run(0x0, 0x18249a0)
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50 +0x225
main.main()
    _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25 +0x25

Quick update on my problem: it seems that a hard reboot of the server resolved the issue. I'm not sure why, but my guess is that some service failed to start properly or something along those lines. I needed to reboot anyway because of a kernel update, and when I re-ran kubeadm upgrade the upgrade process completed successfully.

cc @rajansandeep
Any idea why the CoreDNS Pod can end up with no containers during an upgrade?

This is what I did using minikube 1.12.3:

minikube start --vm-driver kvm2 --kubernetes-version 1.19.0
minikube kubectl -- apply -f https://raw.githubusercontent.com/dholbach/kured/k8s-1.19-testing/kured-rbac.yaml
minikube kubectl -- apply -f https://raw.githubusercontent.com/dholbach/kured/k8s-1.19-testing/kured-ds.yaml
# wait
minikube ssh
   sudo touch /var/run/reboot-required
# wait for reboot to happen (should be within 1 minute)
# this will cause minikube to stop (cf /kubernetes/minikube#2874)
minikube start --vm-driver kvm2 --kubernetes-version 1.19.0

During the second bring-up of minikube I saw the failure, but I cannot reproduce it now :thinking:

On a second reboot I could trigger it just now.

I had the same issue: my worker nodes had been cordoned, so there was no place to run CoreDNS. After uncordoning one, everything went smoothly.

My master is running on a Pi, while the workers are x86 (which should not matter), and I was upgrading from 1.18.3 to 1.19.0.

We are discussing how to properly fix this on the linked PR:
https://github.com/kubernetes/kubernetes/pull/94299

This should be backported to older versions too.

Updated this ticket to include the details from https://github.com/kubernetes/kubernetes/issues/94286.

Great work everyone!
