BUG REPORT
OS: Windows 10
VM Driver: HyperV
ISO Version: 0.23.4
Minikube Version: 0.22.2
I mounted a host folder from the command line:
minikube mount c:\minikubemount2:/mnt/laptop
Note that the command line was an administrator PowerShell session.
I verified, by connecting to the VM in Hyper-V, that the mount was present and that I could inspect the files exposed by the host (Windows) filesystem.
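For reference, the equivalent check from a shell inside the VM looks roughly like this (via minikube ssh; the mount should show up as a 9p filesystem, though the exact output depends on the minikube version):

# open a shell inside the minikube VM
minikube ssh

# confirm the mount is present and look at its ownership/permissions
mount | grep /mnt/laptop
ls -la /mnt/laptop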
I created a persistent volume and claim, and then attempted to create a simple redis (latest) container using this mount.
When the pod attempts to start the container, it fails; looking at the log I see chown errors, e.g.:
chown: changing ownership of './dump.rdb': Input/output error
chown: changing ownership of '.': Input/output error
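To narrow it down, the same chown can be tried directly on the mount from inside the VM; if it fails the same way there, the problem is the mounted filesystem itself rather than Kubernetes or the redis image (uid/gid 999 is just what the official redis image uses, so treat that value as an assumption):

# from a shell inside the VM (minikube ssh)
cd /mnt/laptop
touch testfile                  # plain writes may well succeed...
sudo chown 999:999 testfile     # ...while ownership changes are rejected by the mount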
The spec for the volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foovolume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: manual
  hostPath:
    path: "/mnt/laptop"
and the claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
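As a sanity check that the claim actually bound to foovolume (they match on storageClassName: manual and ReadWriteOnce, and the 1Gi request fits inside the 5Gi volume), both objects can be inspected with kubectl:

# both should report STATUS "Bound", with the claim bound to foovolume
kubectl get pv foovolume
kubectl get pvc task-pv-claim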
And the deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      volumes:
        - name: redis-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: master
          image: redis
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 6379
          volumeMounts:
            - mountPath: /data
              name: redis-storage
              readOnly: false
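For completeness, this is roughly how the manifests were applied and the failing container's log retrieved (the file names are just placeholders for however the specs above are saved):

# apply the volume, claim and deployment (file names are placeholders)
kubectl apply -f redis-pv.yaml -f redis-pvc.yaml -f redis-deployment.yaml

# find the pod created by the deployment and pull its container log
kubectl get pods -l app=redis
kubectl logs <redis-master-pod-name>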
I'm assuming this is something to do with file permissions/ownership getting mangled somehow - is there anything I can do to fix it?
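One workaround I'm considering, though I haven't verified it: the official redis image only chowns /data when it starts as root, so running the container as the redis user should skip that chown entirely (uid 999 is what I believe that image uses, so treat it as an assumption, and the mount may still enforce its own permission limits). The container entry would gain a securityContext along these lines:

      containers:
        - name: master
          image: redis
          # run as the redis user so the image's entrypoint skips its chown of /data
          # (uid 999 is assumed to be the redis user in the official image)
          securityContext:
            runAsUser: 999
          volumeMounts:
            - mountPath: /data
              name: redis-storage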
I'm having this exact same issue. Did you ever find any resolution?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close