Origin: chown operation not permitted

Created on 3 Nov 2017 · 16 comments · Source: openshift/origin

Hi,

I am trying to deploy the official mongodb image from Docker Hub through OpenShift Container Platform. When I mount "/data/db" to a persistent volume claim, I get an error:
"chown: changing ownership of '/data/db': Operation not permitted".

I tried granting the anyuid policy to the service account; however, it doesn't work.

My OpenShift version is 3.5.5.26 and Kubernetes is 1.5.2+.

I also tried the bitnami-docker-mongodb image, and when I tried to persist data I ran into a similar error.

Labels: component/storage, kind/question, lifecycle/rotten, priority/P2

All 16 comments

I have already created the PV. I don't get what you mean.

oadm policy add-scc-to-user anyuid -z default
which allows the containers in this project to run as root.

I tried granting the anyuid policy to the service account; however, it doesn't work.

I did that as I mentioned. It doesn't work.

@akd31 In order to help, could I ask you to post the output of the command oc get pod <pod> -o yaml here?

Also, have you tried https://github.com/sclorg/mongodb-container ? This is a 100% OpenShift-compatible MongoDB Docker image.

Try the following:

oc adm policy add-scc-to-user anyuid $THE_USER
oc adm policy add-scc-to-user privileged $THE_USER

Note that the certified image, i.e. the one from https://github.com/sclorg/mongodb-container, shouldn't require root access, as the user is changed as part of the Dockerfile.

I can't share the YAML file since I am working offline, and the output is quite long.

I fixed my issue by extending the official mongo image and switching the user to mongodb in the Dockerfile. Thanks for your attention, by the way.

@akd31 OK, here is some advice on how to debug it:

1) First, check under which SCC the pod was admitted: execute oc get pod <pod> -o yaml | grep scc. I suspect it will be restricted.

2) How do you create the MongoDB pod? Is it a plain Pod, or are you using StatefulSets/something else? This question is important because, depending on the answer, you need to grant access to a different account (to a user in the case of a plain pod, and to a service account otherwise).

3) Does the pod specification have serviceAccountName? This question is important because, depending on the answer, you need to grant access to a different account (the specified service account, or default if none is set).

4) Do you have privileged: true or runAsUser: 0 in the pod's or the container's security context? This question is important because if the pod doesn't request privileged mode or the root user, it can always be admitted by the most restrictive SCC.
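The checks above can be sketched against a saved pod manifest. This is a minimal sketch; the manifest excerpt and its field values are hypothetical, shaped like typical oc get pod <pod> -o yaml output:

```shell
# Hypothetical pod manifest excerpt (assumed values, for illustration only).
cat > /tmp/pod.yaml <<'EOF'
metadata:
  annotations:
    openshift.io/scc: restricted
spec:
  serviceAccountName: default
  containers:
  - name: mongodb
    securityContext:
      runAsUser: 0
EOF

# 1) Which SCC admitted the pod?
grep 'openshift.io/scc' /tmp/pod.yaml

# 3) Is a service account set on the pod spec?
grep 'serviceAccountName' /tmp/pod.yaml

# 4) Does the pod request privileged mode or the root user?
grep -E 'privileged: true|runAsUser: 0' /tmp/pod.yaml
```

On a live cluster you would pipe oc get pod <pod> -o yaml into the same grep patterns instead of using a saved file.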

  1. It is openshift.io/scc: anyuid
  2. I am using the OpenShift Container Platform GUI to deploy the image and edit the deployment config via YAML. I don't create the pod manually.
  3. Yes, it has both serviceAccount and serviceAccountName.
  4. I don't know how to check this.

It is openshift.io/scc: anyuid

It means the pod was admitted correctly. As far as I can see, in the official mongodb image the user is created in the Dockerfile, and its UID doesn't match the owner of the /data/db directory. The UIDs should match, or the directory should be writable by the root group.
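A quick way to see whether the UIDs line up is to compare the UID the container process runs as with the owner of the data directory. A minimal sketch, using a temp directory here instead of the real /data/db:

```shell
# Compare the process UID with the owner of the data directory.
# Using /tmp/demo-db for illustration; in the container it would be /data/db.
datadir=/tmp/demo-db
mkdir -p "$datadir"

my_uid=$(id -u)
owner_uid=$(stat -c '%u' "$datadir")   # GNU stat; BSD stat uses 'stat -f %u'

echo "process uid: $my_uid, data dir owner: $owner_uid"
if [ "$my_uid" != "$owner_uid" ]; then
    echo "UID mismatch: the entrypoint will try to chown, which fails for non-root users"
fi
```

Here the directory was just created by the current user, so the UIDs match; with a pre-provisioned persistent volume they typically do not.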

As I mentioned, when I switch the user to mongodb in the Dockerfile it works; otherwise the entrypoint checks the UID and, if it is not mongodb (the container runs as root by default), tries to chown the data directory, which causes this error.
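The workaround described above can be sketched as a minimal Dockerfile. The base image tag is an assumption; the mongodb user itself is already created by the official image:

```dockerfile
# Extend the official image and switch to the mongodb user so the
# entrypoint no longer runs as root and skips the chown of /data/db.
FROM mongo:3.4

# 'mongodb' is the user created by the official image's Dockerfile.
USER mongodb
```

Note that the mongodb user's UID must still have write access to the mounted volume for the server to start.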

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
