We would like to be able to attach a persistent volume (in our case an EFS volume) that is shared across multiple workspaces on Che. This would allow us to share static assets stored on an NFS drive with all workspaces of the same project. We work on large web projects where GBs of media need to be mounted into the workspace.
It would be great to be able to define this in the devfile, for example:
```yaml
volumes:
  - claimName: pvc-name
    containerPath: "/home/user/media"
```
We thought of using the Kubernetes custom resource section of the devfile but this doesn't seem to work.
In summary, it would be amazing if we could just attach an existing Kubernetes PersistentVolumeClaim to all workspaces, specifying in the devfile the subpath within the volume to use.
This would make a huge difference to our developers' productivity, as we currently rely on database-backed media storage to share these assets among workspaces.
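For context, what is being requested maps to a standard Kubernetes volume mount against an existing claim. A minimal sketch of the pod spec fragment Che would need to generate for the workspace pod (the claim name, image, and mount path here are illustrative, mirroring the devfile proposal above; this is not an existing Che feature):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: workspace-pod
spec:
  containers:
    - name: dev
      image: quay.io/eclipse/che-theia:latest   # illustrative workspace image
      volumeMounts:
        - name: media
          mountPath: /home/user/media
          readOnly: true                        # optional, for shared static assets
  volumes:
    - name: media
      persistentVolumeClaim:
        claimName: pvc-name                     # pre-existing PVC backed by EFS/NFS
```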
@davidwindell that sounds like an interesting use case. Would you like to propose a patch?
I wonder whether users may want to reconfigure volume sources for all components, like editor/plugins. If so, which format would be better:
```yaml
components:
  - id: eclipse/che-theia/7.10.0
    volumes: # Here are overrides for https://github.com/eclipse/che-plugin-registry/blob/master/v3/plugins/eclipse/che-theia/7.8.0/meta.yaml#L59
      - name: projects
        containerPath: /projects
        subfolder: /${workspaceName}/projects
        claimName: my-claim # probably must be the same across the devfile for all `projects` volumes
      - name: plugins
        containerPath: /plugins
        subfolder: /${workspaceName}/plugins
        claimName: my-claim
  - id: redhat/java/latest
    volumes:
      - name: projects
        containerPath: /projects
        subfolder: /${workspaceName}/projects
      - name: plugins
        containerPath: /plugins
        subfolder: /${workspaceName}/plugins
  - type: dockerimage
    volumes:
      - name: projects
        containerPath: /projects
        subfolder: /${workspaceName}/projects
```
Or this:

```yaml
components:
  - id: eclipse/che-theia/7.10.0
    type: cheEditor
  - id: redhat/java/latest
    type: chePlugin
  - type: dockerimage
    volumes:
      - name: projects
        containerPath: /projects
volumes: # We know which volumes are used in our workspace and tune them here for all components
  - name: project
    pvcSource:
      pvcName: projects
      subfolder: /${workspaceName}/projects
```
I don't have the Java skills to contribute a patch, but I like either option as long as it's possible to add an additional volume (beyond the WS project volume; we don't want to share that).
I'm also interested in this issue. In my case, I want to use vendor software already installed on the machine, which is several GB in size. I want to share it with the workspace, which contains additional configuration and software included in a docker image.
I was also interested specifically in NFS-backed PVCs. I hope this gets attention.
Is there a way to associate UIDs with Keycloak users so that each user's container runs with the appropriate NFS permissions?
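At the Kubernetes level (independent of any Che/Keycloak integration, which would still need to be wired up), the usual approach is to set the pod's securityContext so the container runs as the UID that owns the files on the NFS export. A sketch, with the UID value purely illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: workspace-pod
spec:
  securityContext:
    runAsUser: 1500    # UID matching the user's files on the NFS export
    runAsGroup: 1500
    fsGroup: 1500      # mounted volumes become accessible to this GID
  containers:
    - name: dev
      image: quay.io/eclipse/che-theia:latest   # illustrative workspace image
```

Mapping a Keycloak user to the right UID per workspace is the part Che would have to implement.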
@skabashnyuk any chance of getting this one a priority?
Thank you, @davidwindell, @skabashnyuk, @amisevsk for adding the request.
I too have a similar requirement: there are several GBs of files to be shared between all workspace containers.
Kindly let us know the priority of this feature request.
I'm not familiar with the PV and PVC concepts, but I shall try to implement the changes needed. Can someone help me?
My suggestion would be that the admin pre-creates the PV and PVC; then all that would be required is something like this, targeting the PVC name:
```yaml
volumes:
  - claimName: pvc-name
    containerPath: "/home/user/media"
```
I would also include an accessMode field in this case, in case the requirement is e.g. sharing a ROX volume.
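For reference, expressing a ROX (ReadOnlyMany) requirement on the pre-created claim is just a standard PVC manifest; the name, storage class, and size below are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-name
spec:
  accessModes:
    - ReadOnlyMany        # ROX: many pods may mount it, read-only
  storageClassName: nfs   # assumed NFS-backed storage class
  resources:
    requests:
      storage: 50Gi
```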
@l0rd adding the devex team for setting the right priority. At the moment the issue is open for devs and contributions are most welcome. cc: @davidwindell @Rucadi
Thank you, @ibuziuk, @skabashnyuk for the updates and for changing the priority to P1.
I'm revisiting this after a long time.
Are any of the experts working on it currently?
(I had a look at the codebase, but didn't have any clue where to make the changes :-))
I agree "NFS" is a good choice, any ideas on using "Local volumes" instead?
If multiple pods try to access the huge file system over NFS, is there a possibility of file access slowing down?
Will "local volume" help in that scenario ?
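For context on the local-volume idea: a local PV is pinned to a single node via nodeAffinity, so every pod mounting it is scheduled onto that node. That gives local-disk read speed and avoids NFS network contention, but removes cross-node sharing. A standard sketch (node name and path are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-local-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/media            # directory on the node's own disk
  nodeAffinity:                 # required for local volumes; pins pods to this node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1      # illustrative node name
```

So a local volume would only help if all workspaces needing the shared media can run on that one node.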