Kubeadm: Possible flex volume implementation issue

Created on 28 Oct 2019 · 6 comments · Source: kubernetes/kubeadm

Is this a BUG REPORT or FEATURE REQUEST?

Choose one: BUG REPORT

Versions

kubeadm version (use kubeadm version): 1.15.4

Environment:

  • Kubernetes version (use kubectl version): 1.15.4
  • Cloud provider or hardware configuration: vSphere
  • OS (e.g. from /etc/os-release): Ubuntu 18.04.3 LTS
  • Kernel (e.g. uname -a): 4.15.0-65-generic #74-Ubuntu

What happened?

"kubeadm init" and "kubeadm join" create /usr/libexec/kubernetes/kubelet-plugins/volume/exec directory AFTER generating /etc/kubernetes/manifests/kube-controller-manager.yaml file.

Because of this, the next time someone runs kubeadm init phase control-plane all or kubeadm join phase control-plane-prepare all, there are unexpected changes to the /etc/kubernetes/manifests/kube-controller-manager.yaml file AND the kube-controller-manager pod gets restarted as a result.

What you expected to happen?

I feel that "kubeadm init" and "kubeadm join" should create the /usr/libexec/kubernetes/kubelet-plugins/volume/exec directory BEFORE generating the /etc/kubernetes/manifests/kube-controller-manager.yaml file to avoid this issue.
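As an illustration only (the function names below are hypothetical stand-ins, not the actual kubeadm code), the expected ordering would look roughly like this in Go:

package main

import (
	"fmt"
	"os"
)

const flexVolumeDir = "/usr/libexec/kubernetes/kubelet-plugins/volume/exec"

// writeControlPlaneManifests stands in for the real manifest-generation step.
func writeControlPlaneManifests() error {
	fmt.Println("writing /etc/kubernetes/manifests/kube-controller-manager.yaml ...")
	return nil
}

func main() {
	// Create the flex volume directory first; MkdirAll is a no-op if it
	// already exists, so re-running is safe.
	if err := os.MkdirAll(flexVolumeDir, 0755); err != nil {
		fmt.Println("error:", err)
		return
	}
	// Only then generate the static Pod manifests, so the controller-manager
	// YAML comes out identical on init and on any later phase re-run.
	if err := writeControlPlaneManifests(); err != nil {
		fmt.Println("error:", err)
	}
}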

I would also expect this directory to be removed during a kubeadm reset, but it isn't currently.

How to reproduce it (as minimally and precisely as possible)?

Run kubeadm init and you'll see that /usr/libexec/kubernetes/kubelet-plugins/volume/exec has been created, but the flex volume path does not show up in the newly created /etc/kubernetes/manifests/kube-controller-manager.yaml file.

Then run kubeadm init phase control-plane all and you'll see that the kube-controller-manager.yaml file has changed to include the mount path.

root@kubeadm-test:~# grep plugins /etc/kubernetes/manifests/kube-controller-manager.yaml
root@kubeadm-test:~# kubeadm init phase control-plane all
I1028 14:02:57.947700   10516 version.go:248] remote version is much newer: v1.16.2; falling back to: stable-1.15
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
root@kubeadm-test:~# grep plugins /etc/kubernetes/manifests/kube-controller-manager.yaml
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
Labels: kind/bug, lifecycle/active, priority/backlog


All 6 comments

"kubeadm init" and "kubeadm join" create /usr/libexec/kubernetes/kubelet-plugins/volume/exec directory AFTER generating /etc/kubernetes/manifests/kube-controller-manager.yaml file.

EDIT: the folder is created only after the controller-manager Pod starts.

Because of this, the next time someone runs kubeadm init phase control-plane all or kubeadm join phase control-plane-prepare all, there are unexpected changes to the /etc/kubernetes/manifests/kube-controller-manager.yaml file AND the kube-controller-manager pod gets restarted as a result.

this use case is not clear to me.
if you have called kubeadm init, this already includes the control-plane phase, so you are executing the phase twice. could you clarify the reason for that?

Run kubeadm init and you'll see that /usr/libexec/kubernetes/kubelet-plugins/volume/exec has been created, but the flex volume path does not show up in the newly created /etc/kubernetes/manifests/kube-controller-manager.yaml file.

i just checked locally and calling kubeadm init and then calling kubeadm init phase control-plane does not update any YAML or restart components. the only reason for a static pod restart would be a diff between the old YAML (from init) and the new one (from init phase control-plane).

could you explain what you mean by the following?

but the flex volume path does not show up in the newly created /etc/kubernetes/manifests/kube-controller-manager.yaml file.

kubeadm should always write this to the controller-manager YAML regardless of whether the folder exists:

  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir

I would also expect this directory to be removed during a kubeadm reset, but it isn't currently.

this is a path managed by the kubelet and the controller-manager Pod, not kubeadm, thus kubeadm reset should not remove its contents. at least that's my opinion.

this is a path managed by the kubelet and the controller-manager Pod, not kubeadm, thus kubeadm reset should not remove its contents. at least that's my opinion.

@ereslibre @yastij
WDYT?

ok, i was able to reproduce the problem with help from @krisdock

the following is not true.

kubeadm should always write this to the controller-manager YAML regardless of whether the folder exists:

the code does exactly the opposite, as can be seen here:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/controlplane/volumes.go#L74-L76
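For illustration, the linked code boils down to a conditional along these lines (a simplified Go approximation, not the exact kubeadm source):

package main

import (
	"fmt"
	"os"
)

const flexVolumeDir = "/usr/libexec/kubernetes/kubelet-plugins/volume/exec"

// maybeMountFlexVolumeDir approximates the pre-fix behavior: the hostPath
// volume/volumeMount pair is added to the controller-manager manifest only
// when the directory already exists on the host. On a fresh init the
// directory is absent, so the mount is skipped; once the controller-manager
// Pod creates it, a re-run of the phase adds the mount, hence the diff.
func maybeMountFlexVolumeDir(addHostPathMount func(path string)) {
	if fi, err := os.Stat(flexVolumeDir); err == nil && fi.IsDir() {
		addHostPathMount(flexVolumeDir)
	}
}

func main() {
	maybeMountFlexVolumeDir(func(path string) {
		fmt.Println("mounting", path, "into kube-controller-manager")
	})
}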

IMO we should always create this path if it's missing and mount it [1], because if it's missing the CM creates it regardless, which then creates the weird diff after the user calls phase control-plane after init (which, BTW, is still a weird sequence of commands).

[1] i will send a patch for that.

this is a path managed by the kubelet and the controller-manager Pod, not kubeadm, thus kubeadm reset should not remove its contents. at least that's my opinion.

also we should not clean it because it might contain user plugins.

/lifecycle active
the fix is here: https://github.com/kubernetes/kubernetes/pull/84468

after the user calls phase control-plane after init (which, BTW, is still a weird sequence of commands).

@neolit123 It isn't so weird if you consider the use case. :)
Since this bug was filed on my behalf, I can try to shed some light on how it was discovered. (1) We had already been running a Kubernetes cluster for some time. (2) At some point we got a new requirement to enable kube-apiserver audit logs, which required updating the original kubeadm config file and re-running kubeadm init phase control-plane all or kubeadm join phase control-plane-prepare all. (3) Upon updating the kubeadm config file and re-running the above commands to pick up the changes, we fully expected that ONLY the kube-apiserver.yaml manifest would change and be restarted, BUT noticed that kube-controller-manager.yaml also had changes and was restarted, which was completely unexpected!

P.S. Thanks for the fix in kubernetes/kubernetes#84468.

@alex-vmw ideally we would like to implement alternative methods for modifying a running cluster and its components (perhaps something with better UX), but for now this is a viable use of the control-plane phase.
