What does kubeadm do when running 'upgrade plan'?
[root@a02-r12-i187-2 bin]# ./kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.8.4
[upgrade/versions] kubeadm version: v1.9.3
[upgrade/versions] Latest stable version: v1.9.3
[upgrade/versions] FATAL: grpc: timed out when dialing
My host couldn't connect to dl.k8s.io, so I set up an nginx server to stand in for dl.k8s.io, creating only a single file, /release/stable.txt, whose content is "v1.9.3". kubeadm then connects to the nginx server and gets the latest stable version, v1.9.3. However, another error appears: grpc: timed out when dialing. What should I do? Must kubeadm be able to connect to dl.k8s.io?
Is there any workaround to solve this?
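For anyone reproducing this workaround, here is a minimal sketch of that kind of mirror; the IP address and web root are examples, not from my actual setup:
# minimal sketch of a local dl.k8s.io mirror (IP and web root are examples)
echo "192.168.0.10 dl.k8s.io" >> /etc/hosts
mkdir -p /usr/share/nginx/html/release
echo "v1.9.3" > /usr/share/nginx/html/release/stable.txt
# sanity check: this is the stable-channel URL kubeadm resolves
curl http://dl.k8s.io/release/stable.txt
# note: depending on the kubeadm version the fetch may go over HTTPS, in which
# case the mirror also needs a certificate the host trusts for dl.k8s.io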
Oh, I see now after reading the kubeadm source code: it's because kubeadm tries to get the etcd version via localhost:2379, but I changed etcd's address to 172...*.
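In case it helps anyone else, this is roughly how the etcd endpoints can be inspected and adjusted in the kubeadm-config ConfigMap; the address below is an example, and the exact field layout depends on the kubeadm API version in the cluster:
# see what 'upgrade plan' reads for etcd
kubectl -n kube-system get cm kubeadm-config -o yaml | grep -A4 etcd
# adjust the endpoints so they match where etcd actually listens, e.g.
kubectl -n kube-system edit cm kubeadm-config
#   etcd:
#     endpoints:
#     - http://172.16.0.10:2379    # example address instead of localhost:2379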
Same problem here. I tried updating the ConfigMap with the new etcd endpoints, but I keep receiving the timeout error.
same here...
same issue, which is also related to #727
/assign @detiber
/cc @liztio
let's chat more about the air-gapped upgrade scenarios
Kubeadm has a fallback code path for when dl.k8s.io cannot be reached; however, it will still print a warning that this happened. Previous issue on this topic is #498.
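For air-gapped clusters it can also help to pass the target version explicitly, so the apply step does not depend on the dl.k8s.io lookup at all; whether plan accepts a version argument depends on the kubeadm release:
# plan/apply against an explicit target version instead of the stable-channel lookup
kubeadm upgrade plan v1.9.3
kubeadm upgrade apply v1.9.3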
@timothysc @detiber @liztio did y'all chat about the air gapped upgrade scenarios?
I have the same issue when running "kubeadm upgrade plan". Does anyone have a possible fix?
@FloMedja if you are seeing the grpc error, then I suspect it is an issue with contacting etcd... what version of kubeadm are you running? If you are running v1.10.x and are using an external etcd cluster, then you will most likely need the fix here: https://github.com/kubernetes/kubernetes/pull/63925
I'm hoping to be able to get that PR merged in time for the v1.10.3 release.
@detiber Thanks for the reply. I am using kubeadm v1.9.3 and my cluster is at v1.9.3 as well. I am trying to upgrade kubeadm to 1.9.6 but hit the same issue. We run etcd as a static pod in our cluster, but we don't install etcd with kubeadm, so I think kubeadm considers it an external cluster.
We do it this way because kubeadm v1.9.3 does not seem to manage TLS when installing etcd. Does kubeadm v1.10 manage TLS with etcd?
Do you know a way to verify the connectivity between kubeadm and etcd?
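One way to check that is to query etcd directly with etcdctl, using the same endpoints (and, if secured, the same client certs) that kubeadm would use; the addresses and cert paths below are examples, so point them at wherever your static-pod etcd keeps its TLS material:
# plaintext etcd
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 endpoint health
# TLS-secured etcd (example paths)
ETCDCTL_API=3 etcdctl --endpoints=https://172.16.0.10:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/client.crt \
  --key=/etc/kubernetes/pki/etcd/client.key \
  endpoint health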
@FloMedja kubeadm 1.10.x initializes clusters that secure etcd with TLS using a separate CA
@stealthybox Thank you. I will change my cluster creation to use kubeadm v1.10 :)
@stealthybox Is the documentation at https://kubernetes.io/docs/setup/independent/high-availability/ up to date, or is there a new way to generate the etcd CA for the cluster with kubeadm?
@FloMedja kubeadm will start a single-node master with etcd in a static pod and generate CAs and certs for everything to communicate, with just a normal kubeadm init.
Multi-master is more involved, but you can still use kubeadm to generate all of the certs.
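Roughly, something like this (the config file name is an example, and the exact set of alpha phase subcommands varies between kubeadm releases):
# generate the full set of CAs and certs from a kubeadm config file
kubeadm alpha phase certs all --config=kubeadm-config.yaml
# the generated material lands under /etc/kubernetes/pki and can be copied to the other masters
ls /etc/kubernetes/pki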
@detiber I updated kubeadm to v1.10.3, which I believe contains your PR (kubernetes/kubernetes/pull/63925). I am getting this when trying to run 'kubeadm upgrade plan' against a secure external etcd cluster:
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/plan] computing upgrade possibilities
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.10.2
[upgrade/versions] kubeadm version: v1.10.3
[upgrade/versions] Latest stable version: v1.10.3
[upgrade/versions] FATAL: context deadline exceeded
Did I miss something?
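One thing worth checking with a secure external etcd: the configuration kubeadm reads needs the client certificate material as well as the endpoints, otherwise the etcd version probe times out exactly like this. A rough sketch of the relevant fragment (the address and paths are examples):
kubectl -n kube-system edit cm kubeadm-config
#   etcd:
#     endpoints:
#     - https://172.16.0.10:2379       # example endpoint
#     caFile: /etc/etcd/pki/ca.crt     # example paths; point these at your etcd client certs
#     certFile: /etc/etcd/pki/client.crt
#     keyFile: /etc/etcd/pki/client.key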
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/close
https://github.com/kubernetes/kubeadm/issues/1041 is the tracking issue for air-gapped