Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Please provide the following details:
Environment: Minikube on macOS Mojave (10.14.2)
Minikube version: v0.33.1
What happened: Files present in hostPath volumes are not persisted after restart of minikube
What you expected to happen: Files present in hostPath volumes should be persisted after restart of minikube
How to reproduce it (as minimally and precisely as possible):
1. Create a pod with hostPath volumes and corresponding volume mounts
2. Write files to the mounted path (minikube ssh and see that the files are present in the hostPath directory)
3. minikube stop
4. minikube start --vm-driver="virtualbox"
Output of minikube logs (if applicable):
Anything else we need to know: This was working perfectly fine until recently. I have upgraded macOS, minikube (using brew cask), and docker recently. Not sure which one could be the culprit ¯\_(ツ)_/¯
Do you mind elaborating on your repro instructions? I'd like to see if this problem exists on other platforms, but am not yet familiar with hostPath volumes. Thanks!
Hi @tstromberg: thanks for taking a look. I have the following volume definition in my stateful set:
volumes:
- hostPath:
    path: /mnt/qdata
    type: DirectoryOrCreate
  name: qdata
and one or more containers use that volume and mount it to a path inside the container:
volumeMounts:
- mountPath: /qdata
  name: qdata
After I start a pod with the above, the software running in the pod writes files to the /qdata directory. Any time after this, if minikube restarts (as a result of putting my MacBook to sleep) or I restart it explicitly, the pod is running again and the /qdata directory shows up in the pod (and inside the minikube VM), but the files that were written are no longer present in the directory. A minimal self-contained manifest along these lines is sketched below. Hope this helps.
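For reference, here's the sketch just mentioned, reconstructed from the fragments above (the image, command, and names are placeholders, not my actual workload):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: qdata-demo
spec:
  serviceName: qdata-demo
  replicas: 1
  selector:
    matchLabels:
      app: qdata-demo
  template:
    metadata:
      labels:
        app: qdata-demo
    spec:
      containers:
      - name: writer
        # Placeholder image/command: appends a timestamp to a file in the
        # hostPath-backed directory, so you can check after a minikube
        # restart whether the file survived.
        image: busybox:1.31
        command: ["sh", "-c", "date >> /qdata/restart-test.txt && sleep 3600"]
        volumeMounts:
        - mountPath: /qdata
          name: qdata
      volumes:
      - hostPath:
          path: /mnt/qdata
          type: DirectoryOrCreate
        name: qdata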
@tstromberg - Did you get a chance to look at this? Do you need more info?
Minikube only persists host paths located under /data, not paths elsewhere such as /qdata.
You could of course add your own mount or symlink to move the data over to the /dev/sda1 disk.
The dynamically provisioned volumes are also persisted, but their paths should be considered internal.
The same goes for the other default directories; those are mostly internal to the system or to the runtime...
Here is the findmnt output, excluding the overlay mounts since those are even more "internal":
TARGET                     SOURCE                              FSTYPE  OPTIONS
/mnt/sda1                  /dev/sda1                           ext4    rw,rela
/var/lib/boot2docker       /dev/sda1[/var/lib/boot2docker]     ext4    rw,rela
/var/lib/docker            /dev/sda1[/var/lib/docker]          ext4    rw,rela
/var/lib/containers        /dev/sda1[/var/lib/containers]      ext4    rw,rela
/var/log                   /dev/sda1[/var/log]                 ext4    rw,rela
/var/tmp                   /dev/sda1[/var/tmp]                 ext4    rw,rela
/var/lib/kubelet           /dev/sda1[/var/lib/kubelet]         ext4    rw,rela
/var/lib/cni               /dev/sda1[/var/lib/cni]             ext4    rw,rela
/data                      /dev/sda1[/data]                    ext4    rw,rela
/tmp/hostpath_pv           /dev/sda1[/hostpath_pv]             ext4    rw,rela
/tmp/hostpath-provisioner  /dev/sda1[/hostpath-provisioner]    ext4    rw,rela
/var/lib/rkt               /dev/sda1[/var/lib/rkt]             ext4    rw,rela
/etc/rkt                   /dev/sda1[/var/lib/rkt-etc]         ext4    rw,rela
/var/lib/minikube          /dev/sda1[/var/lib/minikube]        ext4    rw,rela
/var/lib/minishift         /dev/sda1[/var/lib/minishift]       ext4    rw,rela
So the recommended location for persistent hostPath mounts is somewhere under /data.
See https://github.com/kubernetes/minikube/blob/master/docs/persistent_volumes.md
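Applied to the volume definition above, that means pointing the hostPath at a subdirectory of /data; the /data/qdata name below is just an example:

volumes:
- hostPath:
    # Any path under /data lives on the persisted /dev/sda1 disk
    path: /data/qdata
    type: DirectoryOrCreate
  name: qdata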
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
I'm having this issue. After restart I lose data in my Postgres DB. Here's the deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: db
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11.5
        ports:
        - containerPort: 5432
        envFrom:
        - configMapRef:
            name: config-postgres
        env:
        - name: PGDATA
          value: /var/lib/postgresql/data
        volumeMounts:
        - mountPath: /var/lib/postgresql
          name: postgresdb
      volumes:
      - name: postgresdb
        persistentVolumeClaim:
          claimName: pvc-postgres
And the volumes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minikube-pv-postgres
  namespace: db
  labels:
    app: minikube-pv-postgres
spec:
  storageClassName: standard
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  hostPath:
    path: /data/minikube-pv-postgres/
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgres
  namespace: db
  labels:
    app: pvc-postgres
spec:
  volumeName: minikube-pv-postgres
  storageClassName: standard
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
The headache is that I specify a hostPath under the /data directory, but the data is still lost.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.