Kubeadm: Cannot join another master: unable to fetch the kubeadm-config ConfigMap

Created on 7 Dec 2018 · 8 comments · Source: kubernetes/kubeadm

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT?

Versions

kubeadm version (use kubeadm version):

kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:02:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

What happened?

I followed this document: https://kubernetes.io/docs/setup/independent/high-availability/
Joining the second master node to my new cluster worked fine, but joining the third one fails:

kubeadm join <api-endpoint>:6443 --token yyy.zzz --discovery-token-ca-cert-hash sha256:xxx --experimental-control-plane -v 8

I get this message:

...
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "<api-endpoint>:6443"
[discovery] Successfully established connection with API Server "<api-endpoint>:6443"
I1207 14:24:23.820759   27024 join.go:608] [join] Retrieving KubeConfig objects
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I1207 14:24:23.821596   27024 round_trippers.go:383] GET https://<api-endpoint>:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config
I1207 14:24:23.821621   27024 round_trippers.go:390] Request Headers:
I1207 14:24:23.821646   27024 round_trippers.go:393]     Accept: application/json, */*
I1207 14:24:23.821662   27024 round_trippers.go:393]     User-Agent: kubeadm/v1.13.0 (linux/amd64) kubernetes/ddf47ac
I1207 14:24:23.821679   27024 round_trippers.go:393]     Authorization: Bearer yyy.zzz
I1207 14:24:23.822613   27024 round_trippers.go:408] Response Status: 401 Unauthorized in 0 milliseconds
I1207 14:24:23.822656   27024 round_trippers.go:411] Response Headers:
I1207 14:24:23.822676   27024 round_trippers.go:414]     Content-Type: application/json
I1207 14:24:23.822695   27024 round_trippers.go:414]     Content-Length: 129
I1207 14:24:23.822711   27024 round_trippers.go:414]     Date: Fri, 07 Dec 2018 13:24:23 GMT
I1207 14:24:23.822751   27024 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized

In the apiserver log, I see this line:

E1207 13:32:02.496984       1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, invalid bearer token]]

kubectl -n kube-system get cm kubeadm-config -oyaml works fine on the same host.

What you expected to happen?

The node should join the other two masters.

How to reproduce it (as minimally and precisely as possible)?

Set up a new cluster: create the first master, join a second master, then join a third master.
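
A minimal sketch of those steps, assuming the stacked-etcd flow from the linked document (the kubeadm-config.yaml file with a controlPlaneEndpoint is an assumption from that guide, and in v1.13 the certificates under /etc/kubernetes/pki still have to be copied to the other masters by hand):

# on the first master
kubeadm init --config=kubeadm-config.yaml

# on each additional master, after copying the certificates over
kubeadm join <api-endpoint>:6443 --token yyy.zzz --discovery-token-ca-cert-hash sha256:xxx --experimental-control-plane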

Does the join token expire?
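
For reference, token lifetimes can be checked on the first master with the command below; if I read the kubeadm docs correctly, an expired token shows <invalid> in its TTL column until the TokenCleaner controller deletes it entirely:

kubeadm token list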

Labels: area/HA, priority/awaiting-more-evidence


All 8 comments

@MartinEmrich sometimes issues related to tokens when joining nodes are due to problems with the kube-controller-manager. Could you check whether everything is working fine on your bootstrap node?

@fabianofranz kube-controller-manager looks healthy... it logs nothing during the join attempt.

TTL for the token should be 24h.
Did you verify that the pods for the second CP were fully created before joining the third?

@neolit123 I guess that's it. I set up the master the day before, around the same time.

I didn't know the token expires so fast. How can I refresh it?

have a look at the kubeadm token commands - e.g. create.
please feel free to re-open this issue if needed.

thanks
/close

@neolit123: Closing this issue.

In response to this:

have a look at the kubeadm token commands - e.g. create.
please feel free to re-open this issue if needed.

thanks
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Indeed, creating a new token with kubeadm token create helped; I now have 3 happy master nodes.

Thanks!
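
For anyone else hitting this, a minimal sketch of the whole refresh, reusing the placeholders from above (<new-token> stands for whatever the first command prints; note that --print-join-command emits a worker join line, so on v1.13 the --experimental-control-plane flag still has to be appended by hand):

# on the first master
kubeadm token create --print-join-command

# on the joining master, with the freshly created token
kubeadm join <api-endpoint>:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:xxx --experimental-control-plane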

TTL for the token should be 24h.
Did you verify that the pods for the second CP were fully created before joining the third?

You saved my life. Thank you :-x
FYI, we can use the option --ttl=0 (ttl=0 means the generated token will never expire); see the sketch below.
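
A sketch of that variant (the flag takes a duration, so 0 disables expiry; be aware this leaves a permanently valid bootstrap token in the cluster, which is a security trade-off):

kubeadm token create --ttl 0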
