Minikube: Directories provisioned by hostPath provisioner are only writeable by root

Created on 20 Sep 2017 · 24 Comments · Source: kubernetes/minikube

Is this a BUG REPORT or FEATURE REQUEST? (choose one): bug report

Please provide the following details:

Environment:

Minikube version (use minikube version): v0.21.0

  • OS (e.g. from /etc/os-release): ubuntu 17.04
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v0.20.0.iso
  • Install tools:
  • Others:

What happened:

  1. I provision a PVC
  2. It is dynamically bound to a hostPath volume by the minikube provisioner
  3. A pod is created that mounts the PVC
  4. The process in the pod is running as uid 1000, with fsgid 1000 too
  5. The process cannot write to the PVC mount, since it is only writeable by root

Since we don't want to allow escalating privileges in the pod, we can't use the PVC mount at all.

What you expected to happen:

Some way of specifying in the PVC what uid / gid the hostPath should be owned by, so we can write to it.

How to reproduce it (as minimally and precisely as possible):

kubectl apply -f the following file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: "0"
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  volumes:
    - name: test 
      persistentVolumeClaim:
        claimName: test
  containers:
  - image: busybox:latest
    name: notebook
    volumeMounts:
    - mountPath: /home/test
      name: test
    command: ["/bin/sh", "-c", "touch /home/test/hi"]
  securityContext:
    fsGroup: 1000
    runAsUser: 1000

It fails with the following output:

touch: /home/test/hi: Permission denied

If you set the fsGroup and runAsUser to 0, it succeeds.

area/mount kind/bug lifecycle/rotten

Most helpful comment

I think so @AkihiroSuda - The only workaround I found was to grant my USER sudo privileges in order to chown the mount at runtime, which pretty much negates the point of using a non-root user.

All 24 comments

Perhaps an annotation for the PVC that sets ownership? Or mode?
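
Purely as a sketch of that idea, it might look like this on the PVC (these annotation keys are hypothetical; nothing reads them today):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
  annotations:
    # Hypothetical keys the provisioner could read before creating
    # the hostPath directory; not implemented anywhere.
    minikube.k8s.io/owner: "1000:1000"
    minikube.k8s.io/mode: "0770"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi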

According to https://github.com/kubernetes/minikube/blob/cc64fb0544fd03e7ad4662e02eb6d08cae296f5f/pkg/localkube/storage_provisioner.go#L72 it looks like the PV directory should be created with 0777 permissions, but in reality:

$ ls -lhsd pvc-d55626b9-9e3b-11e7-a572-08002772c173/
4.0K drwxr-xr-x 2 root root 4.0K Sep 20 19:46 pvc-d55626b9-9e3b-11e7-a572-08002772c173/

I'm now convinced this is because the process runs with the default umask of 022, so the requested 0777 mode is reduced to 0755.

We could drop the umask to 0000 just before this call and then restore it afterwards.

What about doing something like this:
https://github.com/kubernetes/kubernetes/blob/9e223539290b5401a3b912ea429147a01c3fda75/pkg/volume/util/atomic_writer.go#L360

where we'd set the permissions via a call to chmod after creation, rather than setting/resetting the process-level umask?

Thanks for figuring this out, by the way!

That works too, and might be better than fiddling with umask (since umask is process-wide afaik)! I'll amend the patch later today.
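
For reference, a minimal standalone sketch of the chmod-after-create approach (the function and PV name are illustrative, not the actual patch):

package main

import (
	"log"
	"os"
	"path/filepath"
)

// provisionDir sketches the chmod-after-create fix: the base path matches
// minikube's hostpath provisioner, but everything else is illustrative.
func provisionDir(pvName string) (string, error) {
	dir := filepath.Join("/tmp/hostpath-provisioner", pvName)
	// MkdirAll's mode argument is filtered by the process umask
	// (typically 022), so asking for 0777 yields drwxr-xr-x on disk.
	if err := os.MkdirAll(dir, 0777); err != nil {
		return "", err
	}
	// An explicit Chmod is not subject to umask, so this really sets 0777.
	if err := os.Chmod(dir, 0777); err != nil {
		return "", err
	}
	return dir, nil
}

func main() {
	dir, err := provisionDir("pvc-example")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("provisioned %s", dir)
}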

@dlorenc np, and thanks for reviewing the PR so quickly!

This was supposed to fix the fsGroup compatibility, but doesn't seem to.

minikube v0.24.1

With the following securityContext

      securityContext:
        fsGroup: 20003

and the following PVC template

  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi

the host directories are still created with the following mode
drwxr-xr-x 2 root root 4096 Dec 1 16:47 /tmp/hostpath-provisioner/pvc-541614b7-d6b7-11e7-a722-36d29dc40439

I'm seeing the same issue running minikube version: v0.24.1. I'm dynamically creating a couple of PVCs/PVs when launching a StatefulSet. This, in turn, is using the default storage provisioner (k8s.io/minikube-hostpath).

@dlorenc @yuvipanda would it be possible to reopen this issue?

I suspect I've bumped into the same issue seen by @yuvipanda.

The process I followed is slightly different but the end results are the same: a hostPath volume is created in /tmp/hostpath-provisioner with permissions that deny write access to processes in containers that run with a non-root id.

  • Minikube version v0.24.1
  • OS Ubuntu 16.04.2 LTS
  • VM Driver virtualbox
  • ISO version minikube-v0.23.6.iso
  • Helm v2.6.1

What happened:

  1. I started minikube: minikube start
  2. I initialised Helm: helm init
  3. I installed the Redis Helm chart: helm install stable/redis --name=my-release

Ultimately the Redis pod failed to start up. The pod logs contained something like this:

Welcome to the Bitnami redis container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-redis
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-redis/issues
Send us your feedback at [email protected]

nami    INFO  Initializing redis
Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/redis'

The Docker image used by the Redis Helm chart launches the Redis daemon as uid 1001. During its initialisation the pod encounters permission errors while attempting to create files on a persistent volume.

The Redis pod uses a persistent volume that ultimately maps to a directory on the minikube VM that is created with permissions 0755 owned by root:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                      STORAGECLASS   REASON    AGE
pvc-8fd0125d-e04d-11e7-b721-0800271a7cc9   8Gi        RWO            Delete           Bound     default/my-release-redis   standard                 29m
$ minikube ssh "ls -l /tmp/hostpath-provisioner/"
total 4
drwxr-xr-x 2 root root 4096 Dec 13 21:35 pvc-8fd0125d-e04d-11e7-b721-0800271a7cc9
$ 

If I chmod 0777 pvc-8fd0125d-e04d-11e7-b721-0800271a7cc9, the Redis pod starts up properly.
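
For anyone else hitting this before a fix lands, that workaround applied from the host looks like this (assuming the VM user can sudo, which it can on the stock minikube ISO):

$ minikube ssh "sudo chmod 0777 /tmp/hostpath-provisioner/pvc-8fd0125d-e04d-11e7-b721-0800271a7cc9"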

I don't know what the best option for a fix would be - although I'm not sure this is a bug.

There has been a fair amount of debate in other issues (see kubernetes/kubernetes#2630, kubernetes/charts#976, and others) that makes me hesitant to advocate for a umask or chmod type change, since I don't know what implications making a hostPath volume globally readable/writable by all containers would have. If it's safe enough, this seems like a reasonable path of least resistance.

Allowing some customisation of mountOptions when creating a persistentVolume in minikube _could_ help (i.e. create the hostPath with an owner/group id of _n_) - at least, that's what I first tried to do - but it doesn't look like mountOptions are supported by the storage provisioner minikube uses yet.
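
For illustration, this is the kind of StorageClass customisation I had in mind; the uid/gid options are hypothetical for hostPath (they exist for filesystems like CIFS), and since the hostpath provisioner only creates a directory, mountOptions never take effect:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: writable-hostpath
provisioner: k8s.io/minikube-hostpath
# mountOptions is a real StorageClass field, but it is only consumed by
# volume plugins that perform an actual mount; a plain hostPath directory
# ignores it, and uid=/gid= are shown purely as hypothetical examples.
mountOptions:
- uid=1000
- gid=1000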

The issue is that the volume provisioner isn't really responsible for the mounted volume's permissions; the kubelet is. This same problem exists for basically all external volume provisioners that don't have a mounter implementation in core. Local volumes are, I think, the only supported volume type with a provisioner outside of core but a mounter implemented in core.

I don't know what the best option is, but it seems that if local volumes get better support, then perhaps minikube should switch to using the local volume provisioner instead of the hostpath-provisioner, and then that may resolve most of these issues.

No matter what, even if the hostpath provisioner can set proper permissions (777 by default or even by allowing the storageClass to specify the permissions), the user/group of the volume will always be wrong according to fsGroup, which can still break certain things that assume a particular user.

Yup, thank you @chancez. Your summary confirms what I've gleaned from the K8s docs here.

I'm thinking of submitting a PR for the Redis helm chart that would allow consumers to override the runAsUser and fsGroup settings - but that feels like a hack.

I don't have enough experience with this sort of thing to have a feeling for the right approach to this scenario.
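
Roughly this shape, with the caveat that these values keys are hypothetical and not the chart's actual schema:

# values.yaml (hypothetical keys, for illustration)
securityContext:
  runAsUser: 1001
  fsGroup: 1001

# templates/statefulset.yaml excerpt, wiring the values through:
#       securityContext:
#         runAsUser: {{ .Values.securityContext.runAsUser }}
#         fsGroup: {{ .Values.securityContext.fsGroup }}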


I think being able to set those values will help in many cases. I use that to keep Jenkins from failing on minikube when using PVCs, but I also have serverspec tests to validate that Jenkins comes up correctly, and currently, while things _work_, my tests fail in minikube because the owner/group on the files is root, so it's not a silver bullet.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

I hit this same issue on Rancher Kubernetes and found my way here through a Google search while looking for a solution.

In case it helps others, here is the workaround I used. Create an init container that mounts the volume a level above where you want your writable directory. (I want /data/myapp/submission, but I create the volume at /data/myapp; then, in that container's command, I create the submission directory and chown it to the user's numeric UID.) The account and UID do not need to exist in the init container. When the main container(s) come up, the directory you want to write to will have the correct ownership, and you can use it as expected.

initContainers:
- name: init-myapp
  image: registry.hub.docker.com/library/busybox:latest
  command: ['sh', '-c', 'mkdir -p /data/myapp/submission/ && chown 1049 /data/myapp/submission/']
  volumeMounts:
  - name: submission
    mountPath: "/data/myapp/"

Originally I had tried chown-ing the mount itself rather than a directory below it; the behavior in that case was odd: it acted as if it could write files, but they silently disappeared after creation.
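
For completeness, a minimal sketch of how that init container might sit in a full Pod spec; the pod name, claim name, and main container here are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  volumes:
  - name: submission
    persistentVolumeClaim:
      claimName: myapp-data
  initContainers:
  - name: init-myapp
    image: registry.hub.docker.com/library/busybox:latest
    # Runs as root by default, which is what lets the chown succeed.
    command: ['sh', '-c', 'mkdir -p /data/myapp/submission/ && chown 1049 /data/myapp/submission/']
    volumeMounts:
    - name: submission
      mountPath: /data/myapp/
  containers:
  - name: myapp
    image: registry.hub.docker.com/library/busybox:latest
    # The main container runs as the non-root UID and writes below the
    # chown-ed directory, not at the mount point itself.
    securityContext:
      runAsUser: 1049
    command: ['sh', '-c', 'touch /data/myapp/submission/ok && sleep 3600']
    volumeMounts:
    - name: submission
      mountPath: /data/myapp/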

Observed this issue today; there doesn't seem to be any workaround other than init containers.

Also bumped into this "Permission denied" error when mounting a hostPath PersistentVolume into a container that runs as a non-root USER.

This isn't an issue with vanilla Docker and a named volume on my local host: if I chown some_user:some_group in the Dockerfile itself, the permissions/ownership seem to persist even after the volume is mounted at runtime.
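
In Dockerfile terms, a sketch of that approach (some_user/some_group are placeholders; they have to exist in the image, so they are created first):

FROM busybox:latest
# Create the placeholder user/group so chown and USER resolve.
RUN addgroup -g 1001 some_group && adduser -D -u 1001 -G some_group some_user
# Bake ownership into the image layer: when vanilla Docker initialises a
# fresh named volume at this path, it copies the image's content and
# ownership, whereas the minikube hostPath provisioner hands the container
# a bare root-owned directory.
RUN mkdir -p /app/data && chown some_user:some_group /app/data
VOLUME /app/data
USER some_user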

Should this be reopened?

I think so @AkihiroSuda - The only workaround I found was to grant my USER sudo privileges in order to chown the mount at runtime, which pretty much negates the point of using a non-root user.
