BUG REPORT
kubeadm version (use kubeadm version): 1.12.1
Environment:
When attempting to join a worker node that lacks a default route to an existing cluster, using the kubeadm join ... command printed by kubeadm token create <token> --print-join-command, the join fails because kubeadm cannot deduce a valid API server bind address. These lines are emitted on stderr by the kubeadm join ... command:
common.go:168] WARNING: could not obtain a bind address for the API Server: no default routes found in "/proc/net/route" or "/proc/net/ipv6_route"; using: 0.0.0.0
cannot use "0.0.0.0" as the bind address for the API Server
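For context, the warning above comes from kubeadm scanning the kernel routing tables for a default route. A minimal sketch of that IPv4 check is below; hasDefaultRoute and the sample data are illustrative stand-ins, not kubeadm's actual code, and the real implementation also consults /proc/net/ipv6_route as the message says.

```go
package main

import (
	"fmt"
	"strings"
)

// hasDefaultRoute reports whether the given /proc/net/route contents
// contain a default IPv4 route, i.e. a row whose Destination column is
// the all-zero hex value "00000000".
func hasDefaultRoute(routeTable string) bool {
	for i, line := range strings.Split(routeTable, "\n") {
		if i == 0 {
			continue // skip the header line
		}
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[1] == "00000000" {
			return true
		}
	}
	return false
}

func main() {
	// Sample table with a default route via eth0.
	sample := "Iface\tDestination\tGateway\tFlags\n" +
		"eth0\t00000000\t0102A8C0\t0003\n" +
		"eth0\t0002A8C0\t00000000\t0001\n"
	fmt.Println(hasDefaultRoute(sample)) // prints "true"
}
```

On the locked-down worker nodes described in this report, no row has a 00000000 destination, the check fails, and kubeadm falls back to 0.0.0.0, which the subsequent validation rejects.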
The VerifyAPIServerBindAddress(...) check was introduced to NewJoin(...) by 682b1b3.
The kubeadm join ... command for a worker node should succeed because it doesn't need to identify an API server bind address.
At first glance, my expectation is that the VerifyAPIServerBindAddress(...) check in NewJoin(...) should be made conditional on whether internalcfg.ControlPlane == true.
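A minimal sketch of that proposal follows. JoinConfiguration and verifyJoinConfig here are simplified stand-ins for kubeadm's internal types, not the actual implementation; the error text mirrors the message quoted above.

```go
package main

import (
	"fmt"
	"net"
)

// JoinConfiguration is a stripped-down stand-in for kubeadm's internal
// join config; only the fields relevant to this sketch are included.
type JoinConfiguration struct {
	ControlPlane         bool
	APIServerBindAddress string
}

// verifyJoinConfig sketches the proposed behavior: validate the API
// server bind address only when the joining node will host a control
// plane. Worker nodes never serve the API, so the check is skipped.
func verifyJoinConfig(cfg *JoinConfiguration) error {
	if !cfg.ControlPlane {
		return nil
	}
	ip := net.ParseIP(cfg.APIServerBindAddress)
	if ip == nil || ip.IsUnspecified() {
		return fmt.Errorf("cannot use %q as the bind address for the API Server", cfg.APIServerBindAddress)
	}
	return nil
}

func main() {
	// A worker join with no usable bind address should still succeed.
	worker := &JoinConfiguration{ControlPlane: false, APIServerBindAddress: "0.0.0.0"}
	fmt.Println(verifyJoinConfig(worker)) // prints "<nil>"
}
```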
On control plane node:
$ kubeadm token create <token> --print-join-command
On candidate worker node:
$ kubeadm join <apiserver> --token <token> --discovery-token-ca-cert-hash <hash> # output from above 'kubeadm token create' command
/kind bug
@kennethredler are you working on this issue?
I am working on it, @yagonobre, and hope to share soon, but I'm not yet permitted to contribute.
Until then, I would welcome anyone's efforts on this.
I hope to jump in as soon as I can.
@kennethredler @yagonobre
When attempting to join a worker node that lacks a default route
i don't think we should support such nodes. it was discussed with @kad when doing a recent refactor.
@kennethredler i think a good question here is why is the worker node not having a default route?
When attempting to join a worker node that lacks a default route to an existing cluster
digging a bit here, we seem to execute the same logic for both control-plane nodes joining a cluster and regular nodes:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/util/config/joinconfiguration.go#L41
we might have to decouple that and also branch out some of the calls in:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/cmd/join.go#L291
and all the way to here:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/apis/kubeadm/validation/validation.go#L73
cc @fabriziopandini @kad for feedback.
...why is the worker node not having a default route? - @neolit123
Defense in depth. Only routing packets explicitly as intended.
Minimum network routing configuration considered most secure.
/kind feature
/remove-kind bug
in the kubeadm office hours it was decided that this can go in k8s 1.14. contributions are welcome!
(added help-wanted label; instructions here)
I'll work on it
@yagonobre thanks, noted!
my suggestion would be to file the PR later in the 1.13 cycle so that you don't have to rebase multiple times.
/lifecycle active
With the same kubeadm version, I have this question too. Is this the right flannel.yml that I applied? URL:
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
It worked for me not long ago, but when I repeated the steps today it failed.
@lancedang
no it has a bug, use the steps here:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
(search for flannel)
/assign
I've ~not yet~ tested, ~but~ this commit ~may have~ addressed the issue.
@kennethredler @yagonobre
what is the state of this ticket? can we close it?
I'll validate this tonight
I believe this issue was addressed in the commit I linked above.
I've tested kubeadm join for worker nodes on 1.13.0 and 1.13.1, and it works without a default route and without the --apiserver-advertise-address flag.
Thanks to @fabriziopandini for the fix!
Nice! Thanks @kennethredler
/close
@yagonobre: Closing this issue.
In response to this:
Nice! Thanks @kennethredler
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.