BUG REPORT
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
$ kubeadm alpha phase kubeconfig user --org dev --client-id bob --apiserver-advertise-address=k8sapi.dev --apiserver-bind-port=6443
couldn't use "k8sapi.dev" as "apiserver-advertise-address", must be ipv4 or ipv6 address
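The rejection above is plain IP-literal validation. A minimal Go sketch of the kind of check involved (simplified illustration only, not kubeadm's actual code path):

```go
package main

import (
	"fmt"
	"net"
)

// validateAdvertiseAddress mimics the sort of check behind the error above:
// net.ParseIP accepts only literal IPv4/IPv6 addresses, so a DNS name such
// as "k8sapi.dev" is rejected while "10.0.0.1" passes.
func validateAdvertiseAddress(addr string) error {
	if net.ParseIP(addr) == nil {
		return fmt.Errorf("couldn't use %q as \"apiserver-advertise-address\", must be ipv4 or ipv6 address", addr)
	}
	return nil
}

func main() {
	fmt.Println(validateAdvertiseAddress("k8sapi.dev")) // rejected: not an IP literal
	fmt.Println(validateAdvertiseAddress("10.0.0.1"))   // accepted
}
```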
I expect to be able to pass a string for this, as the apiserver will be behind a load balancer (LB) most of the time.
See above.
couldn't use "k8sapi.dev" as "apiserver-advertise-address", must be ipv4 or ipv6 address
It's just the IPv4/IPv6 verification process; it expects an IP, not a domain.
Wouldn't it be possible for you to pass the IP of the LB?
No it needs to be the fqdn as the kubeconfig it mints needs to point at the fqdn of the lb.
This value is kind of overloaded, unfortunately. Both the apiserver and the controller-manager assume that this value is an IP in their configuration. I'm not entirely sure why, but given Chesterton's fence I'm wary of trying to change that.
What is required here, @mauilion? Do you just want to change the value in the kubeconfig? or is the idea to place apiserver behind an LB for the entire cluster as well?
Some experimentation notes:
If you provide a completely invalid --apiserver-advertise-address, kubeadm init fails entirely. However, if you provide a valid, alternate address (I used 127.1.2.3), the node's eth0 ip is what ends up in the kubeconfig. In fact, the specified IP appears nowhere in any of the configmaps or files in /etc/kubernetes. This makes me think this value is only ever used transiently as part of bootstrap.
Code archaeology: The changelog for k8s 1.10 mentions a --apiserver-advertise-dns-address, but the PR referenced ultimately rejected that flag in favour of a config object change: https://github.com/kubernetes/kubernetes/pull/59288#issuecomment-363522776
The config file works. @mauilion:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  certSANs:
  - ec2-54-205-250-176.compute-1.amazonaws.com
controlPlaneEndpoint: ec2-54-205-250-176.compute-1.amazonaws.com
invoked with:
kubeadm init --config cfg.yaml
Should produce the kubeconfig you expect:
$ grep ec2 /etc/kubernetes/admin.conf
server: https://ec2-54-205-250-176.compute-1.amazonaws.com:6443
I think that resolves the mentioned issue. Please re-open if it doesn't!
I'm trying to figure this out a bit too. The confusion for me comes from the fact that the flag --apiserver-advertise-address maps to InitConfiguration.LocalAPIEndpoint.AdvertiseAddress, which is documented as:
// LocalAPIEndpoint represents the endpoint of the API server instance that's deployed on this control plane node
// In HA setups, this differs from ClusterConfiguration.ControlPlaneEndpoint in the sense that ControlPlaneEndpoint
// is the global endpoint for the cluster, which then loadbalances the requests to each individual API server.
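To make the distinction concrete, here is a simplified Go sketch of the two config types being contrasted. These are field subsets for illustration only; the real types live under cmd/kubeadm/app/apis/kubeadm:

```go
package main

import "fmt"

// APIEndpoint is the address a single API server instance advertises.
type APIEndpoint struct {
	AdvertiseAddress string // must be an IP literal: used in this node's manifests
	BindPort         int32
}

// InitConfiguration carries per-node settings; --apiserver-advertise-address
// populates LocalAPIEndpoint.AdvertiseAddress.
type InitConfiguration struct {
	LocalAPIEndpoint APIEndpoint
}

// ClusterConfiguration carries cluster-wide settings; ControlPlaneEndpoint is
// a free-form "host:port" (an LB DNS name works) that ends up in generated
// kubeconfigs.
type ClusterConfiguration struct {
	ControlPlaneEndpoint string
}

func main() {
	node := InitConfiguration{LocalAPIEndpoint: APIEndpoint{AdvertiseAddress: "10.0.0.5", BindPort: 6443}}
	cluster := ClusterConfiguration{ControlPlaneEndpoint: "k8sapi.dev:6443"}
	fmt.Println("per-node:", node.LocalAPIEndpoint.AdvertiseAddress)
	fmt.Println("cluster-wide:", cluster.ControlPlaneEndpoint)
}
```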
It seems to me that the kubeadm alpha phase kubeconfig user command should be allowing you set a flag for the control plane endpoint since you are concerned about getting a kubeconfig that can talk to the cluster as a whole; you don't care about a specific IP address of an individual node.
So in my opinion the spirit of this ticket is correct: this command should let you specify a string/DNS name. But rather than changing any code flow, I think the flag itself should change: instead of setting --apiserver-advertise-address, there should be a --control-plane-endpoint flag. That avoids complications from Chesterton's fence, and since this is an alpha command I don't think it should be a problem to change the flag. It would function effectively the same but be more useful for everyone.
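A hypothetical sketch of what the proposed flag could look like, using Go's standard flag package purely for illustration (kubeadm's real CLI wiring is different, and the flag name here is the proposal's, not an existing kubeadm flag):

```go
package main

import (
	"flag"
	"fmt"
)

// buildServerURL turns a control-plane endpoint (DNS name or IP, with an
// optional port) into the server URL that would be written into a kubeconfig.
func buildServerURL(endpoint string) string {
	return "https://" + endpoint
}

func main() {
	// Unlike --apiserver-advertise-address, which is validated as an IP
	// literal, this proposed flag would accept any host string.
	endpoint := flag.String("control-plane-endpoint", "", "DNS name or IP of the control plane endpoint (e.g. the LB)")
	flag.Parse()
	fmt.Println("server:", buildServerURL(*endpoint))
}
```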
@chuckha @vincepri
Let's reopen this and put it in the backlog.
I believe this is a historical problem. The kubeadm kubeconfig command existed before multi-control-plane installations were easy. Now that they are, we should support them by allowing users to generate kubeconfigs that point to the LB in front of all the control plane nodes.
As of today the control-plane endpoint can be set via config for the whole init workflow (top-level command and all phases).
If we were adding a flag, IMO we should do this consistently across the same perimeter, not for a single phase.
So you're suggesting adding the config flag here as well, @fabriziopandini? I'd be on board for that rather than trying to add a control plane one everywhere else. 👍
I think we want some way for a user to know if a command takes a config or not. We discussed that kubeadm has two responsibilities the other day, one for managing local configuration and one for managing cluster configuration.
I'm almost starting to think kubeadm should be two commands or have some obvious way for a user to know if the command they are running takes a config or not.
Consistency is key for a good UX here. I was surprised that this phase didn't take a config flag.
100% agreed with Chuck. It would seem kubeadm is blurring the lines of the principle of single responsibility here. Splitting out might be the answer.
@chuckha @rdodev @johnSchnake I'm sorry if my explanation was not clear
This issue, as far as I understand, is for adding a new --control-plane-address flag; what I'm saying is:
@fabriziopandini It does seem like we're talking around each other.
This issue, as far as I understand, is for adding a new --control-plane-address flag; what I'm saying is:
We all can agree to this now. The "problem" gets solved either way but I agree that relying on the --config flag would be more consistent/appropriate.
all the phases already take the --config flag
This command doesn't take the --config flag which surprised @chuckha and led to the discussion of splitting up some kubeadm responsibilities. Am I misunderstanding your point here?
I am less versed in kubeadm in general but it seems like a trivial change to add the --config flag which would solve this problem and then a larger discussion could take place about splitting kubeadm up if that was desirable.
The kubeconfig phase already takes the config flag. Maybe the Go code explains this better than my English:
https://github.com/kubernetes/kubernetes/blob/b5b627d522ed74549e2583b258a7df0e3ffd33ea/cmd/kubeadm/app/cmd/phases/kubeconfig.go#L114
ah, sorry about that. I missed the config in the help first time around. IMO this ticket can actually be closed then.
ok, this was very confusing. There are two commands that are almost redundant.
kubeadm init phase kubeconfig and kubeadm alpha phase kubeconfig user.
The first one takes a config file and the second one does not. They are not exactly the same but they are very close and I think this issue can be closed because the former does take and respect a config file while the latter does not.
/close
@chuckha: Closing this issue.
In response to this:
ok, this was very confusing. There are two commands that are almost redundant.
kubeadm init phase kubeconfig and kubeadm alpha phase kubeconfig user. The first one takes a config file and the second one does not. They are not exactly the same, but they are very close, and I think this issue can be closed because the former does take and respect a config file while the latter does not.
/close