EDIT by neolit123:
help-wanted
k/k issue of migrating flags:
https://github.com/kubernetes/kubernetes/issues/86843
the flags in this file have to be moved to KubeletConfiguration instead.
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/kubelet/flags.go
the kubelet supports some flags as config fields and we should use them instead because the flags are deprecated:
https://godoc.org/k8s.io/kubelet/config/v1beta1#KubeletConfiguration
buildKubeletArgMap needs to include only flags that are unique and are not present in the config.
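a rough sketch of that direction (hypothetical names only, not the actual cmd/kubeadm code): keep in the flag map only what KubeletConfiguration cannot express, and let everything else be written to the kubelet config file.

    // sketch only -- hypothetical names, not the real cmd/kubeadm implementation.
    package kubeletflags

    // hasConfigCounterpart marks flags that map directly to a KubeletConfiguration
    // field (e.g. --resolv-conf -> resolvConf) and therefore should no longer be
    // written to /var/lib/kubelet/kubeadm-flags.env.
    var hasConfigCounterpart = map[string]bool{
        "resolv-conf": true, // KubeletConfiguration.resolvConf
        // ...
    }

    // buildKubeletArgMapSketch keeps only the flags that cannot be expressed in
    // KubeletConfiguration; the rest belong in the kubelet config file.
    func buildKubeletArgMapSketch(candidates map[string]string) map[string]string {
        out := map[string]string{}
        for name, value := range candidates {
            if hasConfigCounterpart[name] {
                continue // write this one to config.yaml instead
            }
            out[name] = value
        }
        return out
    }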
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0-rc.1", GitCommit:"8745ea56e3f1f3ad20050c1762eb6ba6f7786675", GitTreeState:"clean", BuildDate:"2018-06-20T23:58:14Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
kubeadm init bootstrap sets both the --resolv-conf flag in kubeadm-flags.env and the resolvConf entry in config.yaml
And to different entries: the flag to the external resolver file, and the yaml entry to the systemd stub resolver.
This generates a warning in the journal.
Just use the resolvConf in the yaml file and decide which resolver to use :-)
To reproduce: kubeadm init, then stop it during the certificate generation (I would use alpha phase preflight master, but it is still broken with flags and config files).
Using containerd
thanks, @NeilW
@stealthybox @detiber
we handle this special case:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/kubelet/flags.go#L95
but to my understanding we don't have to write /etc/resolv.conf explicitly in the MasterConfig, as that's already the default value in the kubelet. if we stop writing the default value, this would silence the warning until the user adds a value in the config again and the special case triggers too (see the sketch at the end of this comment).
same goes for this (possible warning):
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/kubelet/flags.go#L79
maybe we should do as @NeilW suggests and only write these values to the config?
is there any reason to write them to the ENV file?
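for reference, the special case amounts to roughly this (a paraphrase, not the exact flags.go code: the real implementation checks whether the systemd-resolved service is active via the init system, while this sketch just checks for the upstream file, and resolvConfOverride is a made-up name):

    // paraphrase of the systemd-resolved special case, not the exact flags.go code.
    package kubeletflags

    import "os"

    // resolvConfOverride returns the --resolv-conf value kubeadm would pass on a
    // host running systemd-resolved, and "" when the kubelet default
    // (/etc/resolv.conf) is fine.
    func resolvConfOverride() string {
        // on systemd-resolved hosts /etc/resolv.conf points at the local stub
        // (127.0.0.53), which must not be handed to pods, so the kubelet needs
        // the upstream file instead.
        const upstream = "/run/systemd/resolve/resolv.conf"
        if _, err := os.Stat(upstream); err == nil {
            return upstream
        }
        return ""
    }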
+1 for only writing the value to the config file and not the env file.
The reason we don't rely only on the value in the config file is that this is OS/machine-specific. If I do kubeadm init on Ubuntu 16.04 and join an 18.04 node, DNS resolution would be broken on 18.04, as it has a different resolv-conf path than /etc/resolv.conf because of systemd-resolved.
Does that make sense? Can we close this issue as "works as expected" or is there something funky with the actual implementation?
kubeadm init bootstrap sets both the --resolv-conf flag in kubeadm-flags.env and the resolvConf entry in config.yaml
This generates a warning in the journal.
^ the problem is just the warning here.
the default value for the kubelet is already /etc/resolv.conf, so maybe we can avoid writing it explicitly in the config? this way, if the special case triggers for Ubuntu 18.04, we pass just --resolv-conf.
This generates a warning in the journal.
What warning is that?
so maybe we can avoid writing it explicitly in the config
hmm, I think it's safest to rely on the default (we don't set this explicitly in kubeadm, this is generated from kubelet defaulting) and duplicate...
What warning is that?
i think it's:
Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
@NeilW can confirm.
That's the one, and of course the issue of which resolver the system is actually using.
Is there a reason to use the deprecated flag and not just write the value in the config file?
If the flag is deprecated then at some point it will stop working won't it?
@mtaufen can comment if he's planning to remove the flags in favor of doing config-only for this.
This ties into whether we will (or even technically can) use dynamic kubelet configuration; we still need some small instance-specific config for the cluster not to break.
That's the one, and of course the issue of which resolver the system is actually using.
for now, the kubelet command line flags override the config. this is stated in the docs too.
so it's the value from the ENV file that ends up being used.
Is there a reason to use the deprecated flag and not just write the value in the config file?
we can potentially override values in the config too, i guess, but the user has to be notified along the lines of (see the sketch at the end of this comment):
"hey, i see you have resolv-conf value of X, but that doesn't work on this setup, so we have to use Y instead"
If the flag is deprecated then at some point it will stop working won't it?
that's true. if we sync the kubelet / kubeadm efforts, the end user won't notice that on a new release - e.g. in 1.12. if kubeadm was out-of-tree and not bound to the k8s versioning we might have had a bigger problem.
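a rough sketch of that "override and notify" idea; purely illustrative, the reconcileResolvConf helper below does not exist in kubeadm:

    // illustrative only -- this helper is not part of kubeadm.
    package kubeletflags

    import "log"

    // reconcileResolvConf keeps the user's resolvConf value unless it is known to
    // be broken on this host, in which case it warns and substitutes the detected
    // path (e.g. the systemd-resolved upstream file).
    func reconcileResolvConf(userValue, detectedValue string) string {
        if detectedValue == "" || userValue == detectedValue {
            return userValue
        }
        log.Printf("hey, i see you have a resolv-conf value of %q, but that doesn't work on this setup, so we have to use %q instead", userValue, detectedValue)
        return detectedValue
    }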
/assign @neolit123
This ties into whether we will (or even technically can) use dynamic kubelet configuration; we still need some small instance-specific config for the cluster not to break.
I see what @luxas is getting at -- unless we have an alternate override mechanism, I'm unsure of how we could support this.
With DynamicKubeletConfig, are we still planning to do the versioned, cluster-wide configs?
If so, we'll need to have per-Node or NodeSelector based configs if we plan to use the API to represent overrides.
Similar to flags, we could provide a host-local config file that merges over the dynamic one.
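To illustrate that idea (this is only a sketch of the proposal; no such host-local overlay or merge exists in the kubelet today, and kubeletConfigSubset is a made-up stand-in for the few instance-specific fields):

    // sketch of the proposal only -- no such merge exists in the kubelet today.
    package kubeletoverlay

    // kubeletConfigSubset stands in for the instance-specific
    // KubeletConfiguration fields being discussed here.
    type kubeletConfigSubset struct {
        ResolvConf string
    }

    // mergeLocalOverDynamic applies a host-local overlay on top of the dynamic
    // (cluster-wide) config: any non-empty local field wins.
    func mergeLocalOverDynamic(dynamic, local kubeletConfigSubset) kubeletConfigSubset {
        out := dynamic
        if local.ResolvConf != "" {
            out.ResolvConf = local.ResolvConf
        }
        return out
    }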
I think this also causes problems on Ubuntu 18.04. I just did a clean setup of Kubernetes with kubeadm 1.11.2 and both are set.
My /var/lib/kubelet/config.yaml points to resolvConf: /etc/resolv.conf, while /var/lib/kubelet/kubeadm-flags.env points to /run/systemd/resolve/resolv.conf. I think Kubelet was using /etc/resolv.conf. From the moment I copied my resolv.conf from /run/systemd/resolve/resolv.conf to /etc/resolv.conf my port-forward started working.
@MattiasGees
/etc/resolv.conf is the default value in the kubelet. not sure how that would happen, since the kubelet CLI flags always override the config and what's inside /var/lib/kubelet/kubeadm-flags.env is passed as CLI flags.
What takes precedence in the kubelet in 1.11? A cli flag or the config in /var/lib/kubelet/config.yaml? From my testing it seems to be /var/lib/kubelet/config.yaml.
@MattiasGees
https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/#start-a-kubelet-process-configured-via-the-config-file
Note that command line flags which target the same value as a config file will override that value. This helps ensure backwards compatibility with the command-line API.
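a standalone illustration of that precedence rule (this is not the kubelet's actual code, just the documented "flags win over the config file" behavior in miniature, using only the Go standard library):

    // standalone illustration of "flags override the config file"; not kubelet code.
    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        // value as it would come from /var/lib/kubelet/config.yaml (or the
        // kubelet's own default).
        resolvConf := "/etc/resolv.conf"

        // value as it would come from the flags in kubeadm-flags.env.
        flagValue := flag.String("resolv-conf", "", "resolver configuration file")
        flag.Parse()

        // only a flag that was explicitly set on the command line overrides the
        // config-file value.
        explicitlySet := false
        flag.Visit(func(f *flag.Flag) {
            if f.Name == "resolv-conf" {
                explicitlySet = true
            }
        })
        if explicitlySet {
            resolvConf = *flagValue
        }
        fmt.Println("effective resolv-conf:", resolvConf)
    }

running it with --resolv-conf=/run/systemd/resolve/resolv.conf makes the flag value win, which is effectively what happens when the kubelet's systemd unit sources kubeadm-flags.env and appends its contents to the command line.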
for the next cycle we need to start thinking about deprecating the usage of CLI flags passed to the kubelet, because the flags are going away.
changing milestone
^ changing milestone again. we don't have bandwidth to move away from the deprecated kubelet flags this cycle. also, it was discussed today that these flags might stay in the kubelet for a while before being removed.
/lifecycle frozen
Windows flag handling will overlap with this work.
/close
duplicate of https://github.com/kubernetes/kubeadm/issues/1135
erm actually i'm just going to rename 1135 as it's about --allow-privileged
work has started here https://github.com/kubernetes/kubernetes/issues/86843
so we might have to address this soon.
or at least the areas of kubeadm that are affected.
/cc