Minikube: Set default location for PV mounts

Created on 8 Nov 2018 · 15 comments · Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? Feature request/query


Minikube version (use minikube version): v0.30.0

  • OS: Arch Linux
  • VM Driver: None

The minikube docs list a number of locations where Minikube can provision persistent volumes, as well as a sample config for a PV, but there doesn't seem to be any way to configure the storage-provisioner to provision volumes anywhere other than /tmp/hostpath-provisioner. Unless I'm misunderstanding the source, the path under /tmp is fixed and unparameterised. Is there some way of providing a patched storage-provisioner to minikube with a path of the user's choice? Our use case involves persisting a lot of large data files, for which a tmpfs-based PV quickly becomes impractical.

area/mount  good first issue  help wanted  kind/feature  priority/backlog  2019q2

All 15 comments

Thank you for filing! This should be doable by parametrizing https://github.com/kubernetes/minikube/blob/v0.30.0/pkg/storage/storage_provisioner.go#L49.
PRs are welcome.
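
For illustration, a rough sketch of that change (simplified, not the actual minikube source): make the hard-coded directory a field on the provisioner instead of a constant, keeping the current path as the default.

// Simplified sketch of pkg/storage/storage_provisioner.go with the base
// directory parameterized; illustrative only, not the real upstream code.
package storage

const defaultPVDir = "/tmp/hostpath-provisioner"

// hostPathProvisioner creates PersistentVolumes as subdirectories of pvDir.
type hostPathProvisioner struct {
    pvDir string // base directory for provisioned volumes
}

// NewHostPathProvisioner takes the base directory as an argument instead of
// hard-coding it, falling back to today's default when none is supplied.
func NewHostPathProvisioner(pvDir string) *hostPathProvisioner {
    if pvDir == "" {
        pvDir = defaultPVDir
    }
    return &hostPathProvisioner{pvDir: pvDir}
}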

Hi, I would be interested in picking this up as my first issue.

I have a few clarifications:

  • As I understand it, even though the minikube docs list the supported directories inside the VM, the provisioner always creates PVs under /tmp/hostpath-provisioner. How do we plan to configure a different path? Should it be inferred from StorageClass parameters (see the sketch after this list)?
  • Should we only allow the paths listed in the docs?
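
For the first question, a hedged sketch of the StorageClass-parameter idea; the "pvDir" parameter name and the options struct below are illustrative, not the real provisioning library's types.

// Illustrative only: an optional StorageClass parameter selects the base
// directory, with a fall-back to the provisioner's configured default.
package storage

// provisionOptions stands in for whatever the provisioning controller passes
// to Provision(); real libraries expose StorageClass parameters in a similar map.
type provisionOptions struct {
    PVName     string
    Parameters map[string]string // parameters: block of the StorageClass
}

// baseDir returns the directory to provision under: an explicit "pvDir"
// StorageClass parameter wins, otherwise the configured default is used.
func baseDir(opts provisionOptions, defaultDir string) string {
    if dir := opts.Parameters["pvDir"]; dir != "" {
        return dir
    }
    return defaultDir
}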

Implementing the parameterization of the provisioner itself is simple, but I found that the provisioner's yaml file mounts the /mnt directory into the pod, and that yaml file cannot be modified dynamically. In addition, the parameterization should probably be driven by the minikube config, which raises the question of how this should be designed. @balopat @ms-choudhary
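
One hedged way around the static yaml, just as a sketch: have the provisioner binary read its base directory from an environment variable at startup, which the addon manifest (or minikube config) could then set. The variable name PROVISIONER_PV_DIR below is a hypothetical example, not an existing minikube setting.

// Hypothetical startup wiring, not actual minikube code.
package main

import (
    "log"
    "os"
)

// pvDir resolves the base directory for provisioned volumes, preferring a
// (hypothetical) environment variable over the current hard-coded default.
func pvDir() string {
    if dir := os.Getenv("PROVISIONER_PV_DIR"); dir != "" {
        return dir
    }
    return "/tmp/hostpath-provisioner"
}

func main() {
    dir := pvDir()
    if err := os.MkdirAll(dir, 0755); err != nil {
        log.Fatalf("creating PV base directory %s: %v", dir, err)
    }
    log.Printf("provisioning PersistentVolumes under %s", dir)
    // ... start the provisioning controller with dir ...
}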

Both /tmp/hostpath-provisioner and /tmp/hostpath_pv are actually stored on the disk /dev/sda1:

| |-/tmp/hostpath_pv                /dev/sda1[/hostpath_pv]                        ext4      rw,relatime,data=ordered
| `-/tmp/hostpath-provisioner       /dev/sda1[/hostpath-provisioner]               ext4      rw,relatime,data=ordered

So only the mount points are kept under /tmp. But it would be nice to be able to configure this...

For large data, it would even be nice to be able to add extra disks. Maybe this could be considered.

e.g. keep the containers on /dev/sda1 and the volumes on /dev/sda2?

Currently we are using /data as the canonical (non-dynamic) host path.

|-/data                             /dev/sda1[/data]                               ext4      rw,relatime,data=ordered
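
For anyone who just needs their data off tmpfs today, a hedged example of a static PV backed by /data; the name pv0001, the 10Gi size, and the /data/pv0001 path are arbitrary examples, and this uses the stock Kubernetes API types rather than any minikube code.

// Example only: a static hostPath PersistentVolume under /data inside the VM.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// dataHostPathPV builds a PV that survives reboots because /data lives on the
// persistent disk rather than on tmpfs.
func dataHostPathPV() *corev1.PersistentVolume {
    return &corev1.PersistentVolume{
        ObjectMeta: metav1.ObjectMeta{Name: "pv0001"},
        Spec: corev1.PersistentVolumeSpec{
            Capacity: corev1.ResourceList{
                corev1.ResourceStorage: resource.MustParse("10Gi"),
            },
            AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
            PersistentVolumeSource: corev1.PersistentVolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: "/data/pv0001"},
            },
        },
    }
}

func main() {
    fmt.Println(dataHostPathPV().Name)
}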

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

/remove-lifecycle rotten

This issue still exists in minikube v1.6 AFAIK.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

/lifecycle frozen

Hi everyone,
Is there any update on this issue?

Thanks,
Marjan

I will pick this up.

/assign

/remove-lifecycle frozen
