Moby: Volumes files have root owner when running docker with non-root user.

Created on 8 Dec 2013 · 38 comments · Source: moby/moby

_As a_ non-programmer with a user that is in the docker usergroup
_I want to_ be able to access files created by docker in a volume A that I specified using docker run -v A:B without taking further steps
_So that_ I will not get unexpected behavior


_Context_
In my use-case, I am creating a docker image to help a non-programmer collaborating on a web development project. I want to supply the image as an executable that serves the web app, without the non-programmer having to install lots of stuff. The volume is the directory on the host containing the web project, so I don't want root:root files to appear there.

I may also create a container with git and some scripts doing the only stuff that the collaborator should have to do.

Using docker in this way is new for me, but I think it is a great use case!


This issue is related to https://github.com/dotcloud/docker/issues/2372. However, I think this use-case is much more specific and might have higher priority.

Most helpful comment

I currently experience that files created by the container in a mounted volume are owned by root on the host. I want this to be the same user:group as the user:group that owns the directory. Is this possible?

All 38 comments

mmm, me too - there is an issue somewhere discussing something related.

If you can find the issue, please link it (I couldn't).

yup - sorry, had too long a list of things to track.

generically, #2975 and #2360

darn, I can't find it atm either - I'll continue looking later.

ping @cpuguy83

Volumes will now inherit the permissions of the files in the image, unless they are bind mounted, for example (docker run -v /path/on/host:/path/in/container), and that is expected behavior.

Based on the linked issues, I believe this issue is resolved, so I am closing.
If not, please ping here. Thanks!

I currently experience that files created by the container in a mounted volume are owned by root on the host. I want this to be the same user:group as the user:group that owns the directory. Is this possible?

@JWGmeligMeyling files and folders created in the volume will have the same uid:gid (numeric) as the user creating them in the container. If you add a user inside the container having the same uid:gid as your user outside the container, and run your container as that user, that should be possible.
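For illustration, a minimal sketch of that approach on a Linux host, assuming the host user running the Docker CLI is the one who should own the files (the paths and image are just examples):

# run the container process with your host uid:gid, so files it writes to the
# bind mount end up owned by you on the host
docker run --rm --user "$(id -u):$(id -g)" -v "$(pwd)":/data alpine touch /data/from-container
ls -l from-container   # should show your user, not root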

Thanks for the response, I will try that!

@thaJeztah That solution is not really satisfying as it breaks portability of the container.

If you are mounting files/dirs from the host, this is by definition non-portable.

Well, with docker-compose and the current path it is :wink:

Ok, it's probably something that should be done in docker-compose if it isn't already.

With "non" portable, @cpuguy83 means that you cannot start the container on a "random" host, without first creating the files and folders it needs for the bind-mount. (e.g., you cannot reschedule such a container to a different host in a Swarm cluster)

So this issue kind of stagnated. I only plan on using Docker for local development, currently. That said, I plan on cloning down the git repo, running docker-compose up and having a development environment. Cool beans.

However, my web container does a gulp build, resulting in all of my assets being owned by root and not being accessible. There should be a straightforward way around this.

@chadfurman Not sure I follow.
You are running gulp build as the root user and as such the files are owned by root?

@cpuguy83 I was running "gulp build" inside my container. As such, all files it built were owned by root because my container's default user was "root". There should be an easy way of making the container user the same user as the person who ran, for example, docker-compose up

I ended up running gulp build locally outside of the docker container and sharing the resulting dist/ files with the container

@chadfurman Something like this might work if you are working on Linux and docker is on the same machine. But otherwise it would just not be possible.
You can specify the user you want the container to start with manually.

Docker4Mac does uid/gid translation at the filesystem layer when mounting from the Mac into the container. This is outside of the core of docker, though.

@cpuguy83 lots of developers use Linux and docker on the same machine.

I'm guessing you're talking about https://docs.docker.com/engine/reference/builder/#/user which needs to be built into the image?

Seems like a run-time "run as this user" setting would be helpful. Though, I can respect that the risk-value proposition is not horribly enticing.

@chadfurman docker run supports --user, and I believe compose supports the same option in the yaml format.

Somehow wound up on the wrong issue when adding milestone, labels...

Thanks @cpuguy83, managed to work around that using --user.

For local dev workflow (build system in the container):

docker run --user `id -u` -v `pwd`:/sharedVolume myProject

Hope this helps

this is not working at all! Intermediate folders are still owned by root:

docker run --rm --user www-data -v $(pwd):/toto/kaka debian:8 ls -alh /toto

Gives

drwxr-xr-x  3 root     root     4.0K Jan 24 20:23 .
drwxr-xr-x  1 root     root     4.0K Jan 24 20:23 ..
drwxr-xr-x 39 www-data www-data 2.1K Jan 24 20:23 kaka

whereas /toto should belong to the www-data user as well.

@ebuildy Why would /toto be owned by www-data in this case?
If you want it to be owned as such, you should create it beforehand.
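For example, one hedged way to "create it beforehand" is to bake /toto into the image with the desired owner (the toto-demo tag is made up for this sketch):

# build a throwaway image that pre-creates the intermediate directory,
# so the daemon doesn't have to create it as root at run time
docker build -t toto-demo - <<'EOF'
FROM debian:8
RUN mkdir -p /toto && chown www-data:www-data /toto
EOF
docker run --rm --user www-data -v $(pwd):/toto/kaka toto-demo ls -alh /toto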

I don't understand why and how a folder can be owned by "root", since I specify a user in order to prevent that!

Also, this could even be a security issue, isn't it?

I found this bug/strange behavior because I always run containers like this:

docker run --rm --user tom -v $(pwd):/home/tom/site/www ...

And sometimes my PHP script wants to write in ~/tom/site but cannot, because it is owned by root. The workaround of creating /home/tom/site when building the image is fine, but I don't build all the images I use. (we are a company with different "layers"... sys-admin builds, dev runs ^^)

--user is for setting the uid of the running process.
docker4mac happens to be able to give custom ownership to directories that are being bind-mounted into the container.

This actually doesn't happen at all on a straight-up Linux install.
If the path does not exist in the container it is created with root ownership.

Also no, it's not a security issue to have fewer privileges on files, unless I'm misunderstanding what you are referring to?

In my use case I use

docker run -u `id -u`

so that the uid in the container is the same as the uid outside of the container. I do this so that the files created in the volume will always have the same owner. In this use case I can't pre-make the mount point with the correct permissions because I don't know them ahead of time. The only options I can think of are ugly or insecure (only writing to children of the volume, 777-ing the mount point, or running the container as root, fixing all the things, and then changing to the uid I want to run as).
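For what it's worth, a rough sketch of that last option (start as root, fix ownership, then drop privileges), using the third-party gosu helper as the privilege-dropping step; the LOCAL_UID variable, /data path, and image name are made up for the example:

#!/bin/sh
# entrypoint.sh (set as the image's ENTRYPOINT): runs as root, chowns the
# mount point to the requested uid, then re-execs the real command as that uid
set -e
chown -R "${LOCAL_UID:-1000}" /data
exec gosu "${LOCAL_UID:-1000}" "$@"

# invoked roughly like this:
docker run --rm -e LOCAL_UID=$(id -u) -v $(pwd):/data my-image some-command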

Does it require sudo on Linux? (Ubuntu 16.04)

I'm trying this:

$ docker run --rm --name nginx-html \
    --user `id -u` -p 8888:80 \
    -v `pwd`/src:/usr/share/nginx/html nginx:alpine

and getting this error:

$ docker run --rm --name nginx-html --user `id -u` -p 8888:80 -v `pwd`/src:/usr/share/nginx/html nginx:alpine
2017/07/07 18:36:32 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
2017/07/07 18:36:32 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)

@j127 most likely due to the way that image was designed to work; the nginx image already runs as a custom user, so various directories are created with permissions special to that user (i.e., see https://github.com/nginxinc/docker-nginx/blob/0c1abdff5cc77a9545a9cbeebf026c5cc8d7fc77/mainline/alpine/Dockerfile#L53-L54).

If you look at the file permissions, you'll see that /var/cache/nginx/ is owned by the nginx user (uid 2), and other users don't have permissions;

$ docker run --rm nginx:alpine ls -la /var/cache/nginx/
total 8
drwxr-sr-x    2 nginx    nginx         4096 Apr  6 16:28 .
drwxr-xr-x    1 root     root          4096 Apr  6 16:28 ..

Instead of running the container as your local user, you can also change permissions on the files you're bind-mounting so that the nginx process in the container has read access
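For example, something like this on the host; the ./src path is just illustrative:

# give everyone read access (and directory traversal) on the bind-mounted content
chmod -R a+rX ./src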

Instead of running the container as your local user, you can also change permissions on the files you're bind-mounting so that the nginx process in the container has read access

Thanks, my problem is dockerizing a few applications where users upload files directly to the app's filesystem. If something happens to the container, I don't want to lose those files. I've been watching a lot of Docker videos and reading books, but I haven't been able to find a workflow for that yet. If I can't get in as user 1000 (my user ID on the host), then maybe I just need to chown -R 1000 ./app when I back up the site from the container? It seems convoluted...

Thanks, my problem is dockerizing a few applications where users upload files directly to the app's filesystem.

Running the container as a different user won't solve that; how are those files uploaded? If that's a different container, it should be the responsibility of that container to write with the right permissions so that nginx is able to read those files.

If something happens to the container, I don't want to lose those files.

You can use a named volume instead of a bind-mounted directory; bind-mounts are for giving a container access to a path on your host, volumes are for persisting data originating from a container. Minor difference in behaviour and intent.
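A minimal sketch of that named-volume approach (the volume name, container path, and image are made up):

docker volume create app-uploads
docker run -d -v app-uploads:/var/www/uploads my-app-image
# the uploaded files now live in the volume and survive the container being removed
docker volume inspect app-uploads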

However, let's not have that discussion here, as the GitHub issue tracker is not really intended for that; feel free to ping me on the Docker Community slack if you have questions

I had this problem too. Here's an example:

  volumes:
    - .:/src
    - bundler:/src/.bundle

Here's how I tried to fix it:

mkdir ./.bundle
docker-compose up

That didn't work. I still get "There was an error while trying to write to".
Why? Somehow mounting a volume changes the owner of that mount point.

I had to change to this:

  volumes:
    - .:/src
    - ./.bundle:/src/.bundle

It's almost never a good idea to run your container as root.
The following will work for a lot of applications:

RUN addgroup {your group args} \
  && adduser {your user args}

USER {your-user}

RUN mkdir {directory you want to mount}

VOLUME ["{directory you want to mount}"]

Then you should be able to mount a host directory or a docker volume that you can write to.

Here's a simple alpine example that demonstrates the ability to write to a host directory from a non-root user...

FROM alpine

RUN addgroup -S app \
    && adduser -S -G app -h /home/app -D app

USER app
RUN mkdir /home/app/mount-data
VOLUME ["/home/app/mount-data"]
WORKDIR /home/app
$ docker build --rm -t mount-test .
$ mkdir mount-data
$ ls
-rw-r--r--@ 1 me  staff  171 Jan 18 10:19 Dockerfile
drwxr-xr-x  2 me  staff   64 Jan 18 10:19 mount-data/
$ docker run --rm -ti -v $PWD/mount-data:/home/app/mount-data mount-test /bin/sh
~ $ ls -l
total 0
drwxr-xr-x    2 app      app             64 Jan 18 15:19 mount-data
~ $ echo world > mount-data/hello
~ $ exit
$ cat mount-data/hello
world
$

My problem has another, darker side: when I run docker with the -u option, I cannot install dependencies with yarn, since yarn is installed as the root user in most production-ready images on Docker Hub.
So I should run docker as root to be able to use yarn, but I should run docker commands as a non-root user to be able to delete them from outside of the docker container (inside the volume). This is a serious problem.
My only solution is to chmod, from inside the docker container, each file and folder created by the docker user, and this is a real nightmare.

since yarn is installed as the root user in most production-ready images on Docker Hub.

If these are official images, I suggest opening a ticket

Have you tried echoing $HOME? I think it's not being set automatically with -u.

So I should run docker as root to be able to use yarn, but I should run docker commands as a non-root user

When you say "docker" do you actually mean "container"? When you say "docker commands" do you mean "commands inside container?"

@iamsoorena - if you are indeed trying to perform a global install via yarn on a running container, you're going to have the same issue you'd have on any server where you aren't root (you launched as a different user). You'll need to install non-global packages into a hierarchy where your user has permissions. This is something that I do frequently when I use a container for development against my local OS X disk.

If what you really want to do is to have a container image that has some yarn packages installed, I'd recommend that you just extend the image with your own additions. If the image you're using has already set the USER to something other than root, you can reset the user to root, install your stuff, and set it back.
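As a hedged illustration of that "extend the image" suggestion (the base image, package, and tag below are assumptions, not from this thread):

docker build -t my-node-tools - <<'EOF'
FROM node:lts
# switch to root just long enough to bake the yarn packages into the image
USER root
RUN yarn global add gulp-cli
# drop back to the image's unprivileged user
USER node
EOF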

@bdurrow but the user doesn't exist inside the container; this can be problematic for applications that look up the user from the UID (Apache Spark / Hadoop in my case).

As a workaround now, we share a dynamic /etc/passwd file :/

Why is Docker not changing the /etc/passwd file as it does for the /etc/hosts file?
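A commonly seen form of that workaround, for what it's worth, is bind-mounting the host's passwd/group files read-only so the uid resolves to a name inside the container (the image and mount paths are just an example):

docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v /etc/passwd:/etc/passwd:ro \
  -v /etc/group:/etc/group:ro \
  -v "$(pwd)":/data \
  debian:8 whoami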

@leopoldodonnell

docker run --rm -ti -v $PWD/mount-data:/home/app/mount-data mount-test /bin/sh

Doesn't work:

test $ cat .\Dockerfile
FROM alpine

RUN addgroup -S app \
    && adduser -S -G app -h /home/app -D app

USER app
RUN mkdir /home/app/mount-data
VOLUME ["/home/app/mount-data"]
WORKDIR /home/app
test $ docker build --rm -t mount-test .
[+] Building 1.1s (8/8) FINISHED
 => [internal] load build definition from Dockerfile                                                    0.0s
 => => transferring dockerfile: 32B                                                                     0.0s
 => [internal] load .dockerignore                                                                       0.1s
 => => transferring context: 2B                                                                         0.0s
 => [internal] load metadata for docker.io/library/alpine:latest                                        1.0s
 => [1/4] FROM docker.io/library/alpine@sha256:185518070891758909c9f839cf4ca393ee977ac378609f700f60a77  0.0s
 => CACHED [2/4] RUN addgroup -S app     && adduser -S -G app -h /home/app -D app                       0.0s
 => CACHED [3/4] RUN mkdir /home/app/mount-data                                                         0.0s
 => CACHED [4/4] WORKDIR /home/app                                                                      0.0s
 => exporting to image                                                                                  0.0s
 => => exporting layers                                                                                 0.0s
 => => writing image sha256:6db84b124bcedadbb647da8dc9b17b99271e66c1dbe22e9f97418563c9ef2e24            0.0s
 => => naming to docker.io/library/mount-test                                                           0.0s
test $ mkdir mount-data   


    Directory: C:\Users\zenobius\Desktop\test


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----         10/7/2020   7:39 PM                mount-data


test $ docker run --rm -ti -v $PWD/mount-data:/home/app/mount-data mount-test /bin/sh
~ $ ls -l
total 0
drwxrwxrwx    1 root     root           512 Oct  7 09:09 mount-data
~ $ :(