BUG REPORT
kubeadm version (use kubeadm version): 1.14+ (8b98e802eddb9f478ff7d991a2f72f60c165388a)
vagrant@master1:~$ sudo kubeadm init phasee
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
...
vagrant@master2:~$ sudo kubeadm join phasee
[discovery.bootstrapToken: Invalid value: "": using token-based discovery without caCertHashes can be unsafe. Set unsafeSkipCAVerification to continue, discovery.bootstrapToken.token: Invalid value: "": the bootstrap token is invalid, discovery.bootstrapToken.apiServerEndpoints: Invalid value: "phasee": address phasee: missing port in address, discovery.tlsBootstrapToken: Invalid value: "": the bootstrap token is invalid]
Expected behavior: error out, since phasee is not a valid command for either join or init.
How to reproduce: build kubeadm from current master and run the commands above.
At first I thought https://github.com/kubernetes/kubernetes/commit/1b0ba920feb4cc34b53f69a1df2a8a5cd34f6fe1#diff-54e72ef55188be02a15b883fb02e238eR312 was the culprit of this behavior, but I tried restoring Args: cobra.NoArgs and I'm still seeing the same issue.
Besides, from what I understand, cobra.NoArgs here would supposedly protect us from things like kubeadm join phase wrong-phase-name (where wrong-phase-name is interpreted as a phase argument), but in reality that case already fails properly as of now:
vagrant@master1:~$ sudo kubeadm init phase wrong-phase-name
use this command to invoke single phase of the init workflow
Usage:
kubeadm init phase [command]
...
vagrant@master1:~$ sudo kubeadm join phase wrong-phase-name
use this command to invoke single phase of the join workflow
Usage:
kubeadm join phase [command]
...
So this certainly needs more investigation.
On second thought, I don't think we can fix sudo kubeadm join phasee in any case, since phasee will be interpreted as the master location in kubeadm join <master>
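A small standard-library sketch (not kubeadm's actual code) illustrates why phasee slips through: it is a syntactically acceptable host name, so argument validation passes, and only the host:port parsing done at discovery time rejects it, producing the "missing port in address" error seen in the report.

```go
package main

import (
	"fmt"
	"net"
)

// splitEndpoint is a hypothetical helper: it just splits an endpoint
// argument into host and port, the way discovery eventually has to.
func splitEndpoint(arg string) (host, port string, err error) {
	return net.SplitHostPort(arg)
}

func main() {
	// "phasee" looks like a host name, but has no port, so it fails here,
	// not at argument-validation time.
	if _, _, err := splitEndpoint("phasee"); err != nil {
		fmt.Println("error:", err) // error: address phasee: missing port in address
	}

	// A proper endpoint splits cleanly.
	h, p, _ := splitEndpoint("1.2.3.4:6443")
	fmt.Println(h, p) // 1.2.3.4 6443
}
```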
/assign
/lifecycle active
I'll investigate and give feedback here.
cc
Commented on the PR as well:
phasee is a valid host name, so args validation passes, but discovery will fail. We should make sure the error message is clear to the user (wrong master address).
cobra.NoArgs currently works for the generated phase subcommands, but this should be changed by differentiating between non-leaf phases (where no args/no flags apply) and leaf phases (where args or flags apply).
The whole trouble was caused by:
kubeadm join 1.2.3.4:6443
it should have been a flag from the start:
kubeadm join --endpoint=1.2.3.4:6443
We can still do this through a deprecation process for GA, by supporting both the flag and the raw argument for a while. The raw handling of this argument is quite bad, IMHO.
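The proposed deprecation path could be sketched like this (flag name and helper are hypothetical, not kubeadm's actual implementation): accept the new --endpoint flag, but keep honoring the raw positional argument for a while, with a deprecation warning when it is used.

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// resolveEndpoint is a hypothetical helper: prefer --endpoint, fall back to
// a single raw positional argument (deprecated), error otherwise.
func resolveEndpoint(args []string) (string, error) {
	fs := flag.NewFlagSet("join", flag.ContinueOnError)
	endpoint := fs.String("endpoint", "", "API server endpoint, host:port")
	if err := fs.Parse(args); err != nil {
		return "", err
	}
	if *endpoint != "" {
		return *endpoint, nil
	}
	if rest := fs.Args(); len(rest) == 1 {
		fmt.Fprintln(os.Stderr, "WARNING: passing the endpoint as a raw argument is deprecated; use --endpoint")
		return rest[0], nil
	}
	return "", fmt.Errorf("expected exactly one API server endpoint")
}

func main() {
	ep, _ := resolveEndpoint([]string{"--endpoint=1.2.3.4:6443"})
	fmt.Println(ep) // 1.2.3.4:6443

	ep, _ = resolveEndpoint([]string{"1.2.3.4:6443"}) // deprecated raw form
	fmt.Println(ep) // 1.2.3.4:6443
}
```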
We are supposed to have a single argument for join. If multiple arguments are provided, it's quite possible there is a typo in the word phase. In that case we can try to turn the warning message here into an error message to reflect that. Not sure what deprecation policies would apply to this, though.
Doing this will solve most of our problems except the trivial kubeadm join phasee-only problem. To solve that, we may just try to connect to the API server as early as possible during join.
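An early connectivity probe along those lines could look like this (a sketch, not kubeadm's code; the helper name is made up): try a quick TCP dial to the endpoint before doing any real join work, so a typo like kubeadm join phasee fails fast instead of deep inside discovery.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeEndpoint is a hypothetical early check: can we open a TCP connection
// to the supposed API server endpoint at all?
func probeEndpoint(endpoint string) error {
	conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
	if err != nil {
		return fmt.Errorf("cannot reach API server at %q: %v", endpoint, err)
	}
	conn.Close()
	return nil
}

func main() {
	// A typo'd endpoint fails immediately (DNS resolution error).
	if err := probeEndpoint("phasee:6443"); err != nil {
		fmt.Println(err)
	}
}
```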
I wrote some thoughts in https://github.com/kubernetes/kubernetes/pull/73420#issuecomment-458639817
For allowing positional args in phases leafs I think we can do something similar to: https://github.com/kubernetes/kubernetes/compare/master...ereslibre:phases-cleanup-proposal
It would basically work on a case-by-case basis, so one phase can specify that it requires exactly N arguments, at most N arguments, etc.
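A minimal sketch of that per-phase policy, in the spirit of cobra's NoArgs/ExactArgs/MaximumNArgs validators (the implementation below is illustrative, not what the linked branch does): non-leaf phase commands reject any positional argument, while each leaf phase declares how many it accepts.

```go
package main

import "fmt"

// argsValidator mirrors the shape of cobra's cobra.PositionalArgs.
type argsValidator func(args []string) error

// noArgs: for non-leaf phase commands, any positional arg is a mistake.
func noArgs(args []string) error {
	if len(args) > 0 {
		return fmt.Errorf("unknown command %q", args[0])
	}
	return nil
}

// exactArgs: a leaf phase that needs exactly n positional args.
func exactArgs(n int) argsValidator {
	return func(args []string) error {
		if len(args) != n {
			return fmt.Errorf("accepts %d arg(s), received %d", n, len(args))
		}
		return nil
	}
}

// maximumNArgs: a leaf phase that tolerates up to n positional args.
func maximumNArgs(n int) argsValidator {
	return func(args []string) error {
		if len(args) > n {
			return fmt.Errorf("accepts at most %d arg(s), received %d", n, len(args))
		}
		return nil
	}
}

func main() {
	fmt.Println(noArgs([]string{"phasee"}))             // non-leaf: rejected
	fmt.Println(exactArgs(1)([]string{"1.2.3.4:6443"})) // leaf: <nil>
}
```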
This issue is not well defined.
Closing for now.