K3s: 0.7.0-rc1: Failed to get Plugin from volumeSpec no volume plugin matched

Created on 28 Jun 2019 · 9 comments · Source: k3s-io/k3s

Describe the bug
A pod that mounts a GlusterFS volume is stuck at "ContainerCreating".

K3s agent output on designated amd64 node:

Jun 28 09:30:30 <...omitted...> k3s[2875]: E0628 09:30:30.992477    2875 desired_state_of_world_populator.go:300] Failed to add volume "gluster-db-vol" (specName: "gluster-db-vol") for pod "2e972db1-9972-11e9-84ce-021bec7704fa" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "gluster-db-vol" err=no volume plugin matched

Am I missing some CSI or FlexVolume plugins?

To Reproduce

  • External GlusterFS cluster 6.1
  • All k3s nodes have the GlusterFS 6.1 client
  • All GlusterFS volumes have "ctime" set to off
  • No PV or PVC is used
  • The volume is defined inside the pod spec

Expected behavior
Pod becomes available and the mount path inside the container points to the GlusterFS volume.

Additional context

  • GlusterFS Endpoints object created in k3s, named glusterfs-cluster
  • Matching GlusterFS Service created in k3s
  • volume-test pod spec (see lines below):
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  labels:
    name: volume-test
spec:
  nodeSelector:
    kubernetes.io/arch: amd64
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    ports:
    - name: web
      containerPort: 80
# ######################################################## #
# uncomment the lines below only if PV and PVC spec exists #
# ######################################################## #
#    volumeMounts:
#    - name: gluster-db-vol
#      mountPath: /data
#      readOnly: false
#  volumes:
#  - name: gluster-db-vol
#    persistentVolumeClaim:
#      claimName: gluster-db-pvc
# ###################################################### #
# comment the lines below only if PV and PVC spec exists #
# ###################################################### #
    volumeMounts:
    - mountPath: "/data"
      name: gluster-db-vol
  volumes:
  - name: gluster-db-vol
    glusterfs:
      endpoints: glusterfs-cluster
      path: db
      readOnly: false
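
For reference, the glusterfs-cluster Endpoints and Service mentioned above might look like the following. This is a sketch: the IP addresses are assumptions, so substitute your own GlusterFS node addresses.

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.1.10   # assumption: replace with your GlusterFS node IPs
  - ip: 192.168.1.11
  ports:
  - port: 1            # a port value is required, but not used for the mount itself
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1
```

The Endpoints object must have the same name as the one referenced by `endpoints:` in the pod's glusterfs volume definition.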

Most helpful comment

Faced the same problem.
I can mount GlusterFS on k3s with the following method:

  1. Rebuild the k3s binary with the glusterfs in-tree driver (as commented above).
  2. Add glusterfs-client to the k3s container image.

Some diffs: https://gist.github.com/yaamai/e24153e2d3bc137a975143ff7f5b7a1a

All 9 comments

After looking into the docs, I could find more info on this issue:

kubelet has an argument that defines where to find the external drivers:
--volume-plugin-dir <string>

Looking at the k3s tree, I found $K3S_HOME/agent/kubelet/plugins

  • Use "$K3S_HOME/agent/kubelet/plugins/volume/exec" as the plugin directory
  • Add the kubelet argument via --kubelet-arg="--address 0.0.0.0 --volume-plugin-dir $K3S_HOME/agent/kubelet/plugins/volume/exec"

NOTE: K3S_HOME is your data path, which defaults to /var/lib/rancher/k3s
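
Put together, the agent invocation might look like the following. This is a sketch: the server URL and token are placeholders, it assumes the default data dir, and it passes one --kubelet-arg per kubelet flag, which is the form k3s accepts.

```shell
# Sketch: start the k3s agent with a custom volume plugin directory.
# The server URL and token below are placeholders for your own cluster.
k3s agent \
  --server https://my-k3s-server:6443 \
  --token "${K3S_TOKEN}" \
  --kubelet-arg "address=0.0.0.0" \
  --kubelet-arg "volume-plugin-dir=/var/lib/rancher/k3s/agent/kubelet/plugins/volume/exec"
```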

Next steps

... trying to find, compile and add a glusterFS driver...
... reconfigure K3s...
... deploy driver to all k3s agents...

Any luck with adding the glusterFS driver? I currently face the same problem and am doing the mount via NFS for now.

@giminni the driver appears to be here

https://github.com/kubernetes/kubernetes/tree/master/pkg/volume/glusterfs

but I have no idea how it would be compiled and added to k3s

Faced the same problem.
I can mount GlusterFS on k3s with the following method:

  1. Rebuild the k3s binary with the glusterfs in-tree driver (as commented above).
  2. Add glusterfs-client to the k3s container image.

Some diffs: https://gist.github.com/yaamai/e24153e2d3bc137a975143ff7f5b7a1a
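
For context, the rebuild step amounts to re-registering the glusterfs plugin in the kubelet's probed volume plugins inside k3s's vendored Kubernetes tree, roughly like this. This is a sketch: the exact file path and surrounding lines in k3s's vendor tree are assumptions, modeled on upstream Kubernetes' cmd/kubelet/app/plugins.go.

```diff
--- a/vendor/k8s.io/kubernetes/cmd/kubelet/app/plugins.go
+++ b/vendor/k8s.io/kubernetes/cmd/kubelet/app/plugins.go
@@
 import (
 	"k8s.io/kubernetes/pkg/volume"
+	"k8s.io/kubernetes/pkg/volume/glusterfs"
 )
@@
 func ProbeVolumePlugins() []volume.VolumePlugin {
 	allPlugins := []volume.VolumePlugin{}
+	allPlugins = append(allPlugins, glusterfs.ProbeVolumePlugins()...)
```

The glusterfs-client package is still needed on the node image because the in-tree driver shells out to the mount.glusterfs helper.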

Wow, that is really awesome. Do you think you could explain how to add the vendor dependencies? There is a real need for k3s with some software-based CSIs left in. I'm thinking glusterfs and ceph. Any chance of packaging these up somewhere? I wish the main k3s branch had a few build targets with different levels of leanness.

@TechnoTaff Afaik, those in-tree drivers are being deprecated and replaced by CSI drivers, so I would understand if k3s doesn't ever support them.

It would be nice to have the in-tree drivers bundled in the meantime though.

@SerialVelocity thanks for the info, the current state of CSI/in-tree drivers seems a bit confusing. I guess we're paying the price for k8s being around a while now.

I've switched back to nfs-client auto-provisioning for now! It's a shame as I built a nice Ansible playbook for deploying glusterfs and heketi onto the slave nodes.

Also running into this issue preventing container creation in my setup, would be great if k3s could support glusterfs, even through an alternative build branch as suggested earlier.

Closing due to age.
