Moby: Dockerfile support for ENVFILE ./.env?

Created on 19 Nov 2016 · 59 comments · Source: moby/moby

Looked around a bunch but can't find how to do this:

ENVFILE ./env

If that's not possible, is there a workaround to build a docker image that will contain all those environment variables when run?

I see that there's a way to do it when you start a container with docker-run, but I'd like to be able to do it when I build the image.

area/builder kind/feature

Most helpful comment

All the answers that involve workarounds miss the point. We shouldn't need workarounds. We shouldn't settle on workarounds. We should demand full expressive power of key-value pairs, no limitations, no reservations, always, everywhere, no exceptions, no compromises. Passing values from point A to point B in a Dockerfile should be as natural as breathing, rather than a source of constant asphyxiation and a PROBLEM that generates VOLUMES of folk medicine on how to cure it in every last one of the gazilliard corner cases.

I found this issue because I was looking for something like ENVFILE, but for me it would itself be a workaround for the inability to share ENV and ARG variables between different stages of a multistage build. This constant RESISTANCE that Docker puts to any attempts at implementing the DRY principle in dockerfiles is infinitely frustrating. Env is env is env - it is the ONE THING that sits there all the time while MANY THINGS happen. It is SUPPOSED to be shared, that's the VERY IDEA of env.

All 59 comments

There is no cmd to do that today. It's an interesting option though. Do others have a need for this?

With the limitations of the env-file in docker (e.g. no multi-line env vars etc.), why not use something like:

COPY my-env-vars /
RUN export $(cat my-env-vars | xargs) 

@thaJeztah that will only make those env vars available during that RUN cmd; they won't be persisted in the resulting image, or even in subsequent RUN cmds
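This non-persistence is easy to demonstrate outside Docker: each RUN instruction executes in its own shell process, just like separate subshells, so an export in one is invisible to the next. A minimal sketch (plain sh, no Docker required):

```shell
#!/bin/sh
# Each RUN in a Dockerfile runs in its own shell process, so an export made
# in one RUN is gone by the next one -- the same thing happens with subshells:
( export MY_VAR=hello )                       # "RUN export MY_VAR=hello"
result=$(sh -c 'echo "${MY_VAR-<unset>}"')    # the next RUN sees nothing
echo "later RUN sees: $result"                # prints: later RUN sees: <unset>
```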

@duglin ah, you're right

@duglin what about adding this instruction to Dockerfile

RUN echo "VAR123='myvar_123'" >> /etc/profile.d/myenvvars

NOTE: Good workaround even though it doesn't work for all the images.
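The caveat can be shown without Docker: profile.d scripts are only sourced by login shells, so a plain `sh -c` (which is what an arbitrary CMD gets) never sees them. A sketch, simulating the profile sourcing with a throwaway file:

```shell
#!/bin/sh
# /etc/profile.d scripts are only sourced by login shells; an arbitrary CMD
# gets a plain shell that never reads them. Simulated with a temp file:
echo 'export VAR123=myvar_123' > /tmp/myenvvars

# a shell that sources the file (as a login shell would) sees the variable...
with_profile=$(sh -c '. /tmp/myenvvars; echo "${VAR123-<unset>}"')
# ...a plain `sh -c` does not
without_profile=$(sh -c 'echo "${VAR123-<unset>}"')

echo "sourced: $with_profile / plain: $without_profile"
```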

@ripcurld00d yea, I'm sure there are probably work-arounds based on your image, but I'm not sure why we wouldn't support an env file via build when we already support it via run. Seems like a nice level of consistency.

@duglin I agree with keeping them consistent. I once had an env-variable issue where I was trying to import env variables into a container via the Dockerfile, but couldn't use the built-in ENV instruction because I needed dynamic env variables. Fortunately I was using docker-compose, which lets an environment declaration read env variables from the host. I would be happy to see this feature added to the Dockerfile to keep consistency with docker run. If docker build supported env-file, I might not need to depend on docker-compose to pass dynamic env variables.
@thaJeztah Any thoughts on adding this feature ?
If it gets approved, can I claim this issue to work on? I've been working with Docker for a few months and love it and would like to make contributions to this great project.
Thanks!

Yeah, I am running into exactly the same issue; .env file support would be ideal here because I need to define almost 20 ENV vars, which makes the Dockerfile very ugly. So, any chance of supporting .env in the Dockerfile or the build command?

Ran into the same issue.
As a workaround I'm doing something like:

CMD env `cat <my_env_file> | grep -v '^\s*#'` <my_command>

(The grep being there just to ignore comments)

Agree that a .env file or an ENVFILE instruction would be best.

I keep running into this limitation when building images, and would definitely appreciate an ENVFILE directive.

I feel like it would make the docker subcommands more 1:1 with docker-compose

I'd love this. It will allow people to take an image and extend it with customizable options. A great example of this is an image I use for a binary called "sipp". This binary connects to a PBX and tests it. With an ENVFILE option I can easily add the PBX's username and password to the image.

@TheSeanBrady This wouldn't magically mask the environment variables, the data would still leak into the image.
In such a case you can just use --build-arg along with ARG.

Also sounds like you need the username/password at _runtime_, not during build, in which case docker run --env-file would do the trick

Any news?

I am interested in working on this. Would anyone have any pointers on where to start and how long it would take?

@ssdong did you ever implement this? If you are still interested then you should do it!

I'd like to see input from @tonistiigi and @AkihiroSuda first; there's a lot of work being done in BuildKit at this moment (https://github.com/moby/buildkit); also, other than "convenience", interested to hear a bit more about the use-cases for this feature.

I recently encountered a use case for something like this. There's no great generic way to set a large number of environment variables for all commands run in a container. For example, while one can set some environment variables in /etc/profile or the like, that is only relevant to login shells, not arbitrary CMDs.

In my case I have a script that generates a large number of environment variables that should be set for the application. I could hard-code those variables with ENV instructions in the Dockerfile, but that's rather inconvenient because the output of the script can change from version to version of the application.

It would be nice if (and I'm not sure if this makes sense architecturally) I could do something in a Dockerfile like:

RUN myscript.sh > /tmp/environment
ENVFILE /tmp/environment

then the build process would actually read the environment from a file in the image and, line by line, perform the equivalent ENV commands. Having an ENVFILE command that can read a file in the image is nice because it also covers cases like

COPY .envs /tmp/.envs
ENVFILE /tmp/.envs

in that case it becomes a two step process, but it also affords quite a bit of flexibility.

Also, I realize I could achieve the equivalent effect using an ENTRYPOINT, but at least in my case that slows down any command run with that image, since the script that generates the environment is non-trivial.
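For reference, the ENTRYPOINT alternative mentioned above can be sketched in a few lines. The file names (env-entrypoint.sh, the simulated myscript.sh output) are assumptions for illustration, not anything from Docker itself:

```shell
#!/bin/sh
# Sketch of the ENTRYPOINT alternative: source the generated environment
# file, then exec the real command so it inherits the variables.
cat > /tmp/env-entrypoint.sh <<'EOF'
#!/bin/sh
set -a                 # auto-export every variable assigned below
. /tmp/environment     # assumed KEY=value output of myscript.sh
set +a
exec "$@"              # hand off to the container's CMD with the env applied
EOF
chmod +x /tmp/env-entrypoint.sh

# simulate the output of myscript.sh, then run a command through the wrapper
printf 'APP_MODE=prod\nWORKERS=4\n' > /tmp/environment
/tmp/env-entrypoint.sh sh -c 'echo "$APP_MODE/$WORKERS"'   # prints prod/4
```

The downside is exactly what the comment says: the environment script runs on every container start instead of once at build time.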

Any news?

I'd like to see support for this as well, and can add a use case! I'm writing a script to convert a Singularity image to Docker, and Singularity has a basic text file with a bunch of environment settings in it. I either need to parse that manually (annoying but possible!) or I could just specify a path to the file during build.

TLDR :+1: !

Bump! This would be great to have; when dealing with tons of env variables you could support both docker and docker-compose. 😄

☹️

Another use case: Building an application with the application configuration done at build-time.

In my case the build configuration creates files accessible by the build but not by the application (aws-cli, for example). It also passes configuration through to the .env file that is used by the application itself.

If these configurations were made available at run-time (using docker run --env-file ...) the CMD would be something along the lines of:
CMD ["python", "app/configure_application_and_then_run_application.py"]
Where the CMD here would then chain (run) the command that starts the application once it is done building configuration.

Alternatively with a build env file it is simply:
CMD ["python", "app/run_application.py"]
Since all configuration has happened before this point. Also, when running the container later, only run-time variables need to be passed, keeping them separate from the other (build-time) environment variables.

Finally:

docker build --build-arg CONFIG_1=1 --build-arg CONFIG_2=2 --build-arg CONFIG_3=3 .... --build-arg CONFIG_X=X

Gets quite unwieldy and tedious to manage after only a handful of variables. Production vs. staging build commands become complex and potentially differ significantly (rather than simply providing different env files).

I feel like I could keep going on this point, but I hope this is enough in addition to the other valid concerns placed (well) before mine :).

:+1:

UPDATE
Finally figured out a workaround (with help :D). Assuming you can build a .env file into your project, this will work. Otherwise you may be able to pass the .env as a string (--build-arg="`cat .env`") in the build-arg and follow similar logic.

Within your Dockerfile:
ARG BASH_ENV_SETUP="export $(egrep -v \"^#\" ${WORKING_DIRECTORY}/${ENV_FILE} | xargs)"
RUN eval ${BASH_ENV_SETUP}; echo "At this point your env variables are accessible: ${EXAMPLE_ENV_VAR};"

Now run your build like so:
docker build -t app-build-tag --build-arg ENV_FILE=path/to/app/.env .

You will have to append the eval ${BASH_ENV_SETUP} bit to every RUN command that you want to use the environment variables in, but it will work in the meantime :).

WARNING
This does not work if you have spaces in your .env file

Cheers!
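The spaces caveat is inherent to the cat | xargs pattern, not this workaround specifically: the unquoted `$(...)` expansion word-splits the values, so anything after a space becomes a separate (bogus) export. A quick demonstration:

```shell
#!/bin/sh
# A value without spaces survives the grep|xargs round trip...
printf 'A=1\nB=2\n' > /tmp/ok.env
export $(grep -v '^\s*#' /tmp/ok.env | xargs)
echo "A=$A B=$B"        # both variables come through intact

# ...but a value containing a space is word-split by the unquoted $(...)
printf 'MSG=hello world\n' > /tmp/bad.env
export $(grep -v '^\s*#' /tmp/bad.env | xargs)
echo "MSG=$MSG"         # only "hello" survives; "world" became a stray export
```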

You can preserve environment variables from previous build stages with this little trick:

FROM whatever as stage1
ARG ENTRYPOINT="/usr/bin/entry"
ENV STAGE1VAR="test"
RUN { echo '#!/bin/sh' && export -p && echo "exec $ENTRYPOINT "'"$@"'; } > /usr/bin/env-entrypoint \
 && chmod +x /usr/bin/env-entrypoint

FROM somethingelse
COPY --from=stage1 /usr/bin/env-entrypoint /usr/bin/
ENTRYPOINT ["/usr/bin/env-entrypoint"]
CMD ["/usr/bin/mycommand", "arg"]

Being able to specify env files at build time is a must, IMHO.
We would achieve multiple goals:

  • Set large amounts of environment variables
  • Use those env vars for LABELs.
  • Remove the dependency on files stored on a specific build server.
    Taking advantage of docker's feature of sending the build context to the build server ( -H thisserver ),
    you can bundle any required file into the build through env vars:
    $ MYCFG=$( base64 -i stagging.json )
    $ DEPFILE1=$( base64 -i tomcat.xml )
    $ DEPFILE2=$( base64 -i whatever.yaml )
    $ docker build --build-arg CFG=$MYCFG --build-arg DEPFILE1=$DEPFILE1 --build-arg DEPFILE2=$DEPFILE2 apibuilder

By preparing an .env file with whatever files/config you need, you can ask docker to run the build on any server you want, without being tied to your 'build server'.
Also, avoiding the need for a powerful build server by reusing spare CPU cycles on other idle servers (or even workstations) can save you a lot of $$$ in cloud bills.

You could format your envfile like this:

#!/bin/sh

ENTRYPOINT="/usr/bin/your_entrypoint"

export \
   VAR1=value1 \
   VAR2=value2 \
   VAR3=value3

exec $ENTRYPOINT "$@"

Then near the end of your dockerfile add:
COPY ./envfile /envfile
RUN chmod +x /envfile
ENTRYPOINT ["/envfile"]
But you can't access the variables inside the dockerfile.

Hi @huggla
The idea was to avoid workarounds, and as you say you can't access the vars inside the Dockerfile, where I want (for example) to add LABELs depending on those vars.
Also, your solution introduces the requirement that /envfile handle process reaping, since it would be PID 1.

You are absolutely right, except for the process-reaping part. The exec at the end of my script makes whatever command is in $ENTRYPOINT PID 1.

True, exec in a shell replaces the process without forking.
I confused the exec in the script with Dockerfile commands.

No solution for this yet?

I also would like that docker had something like this!

This would be great!

I'd really like to be able to specify an env_file during build.

Hello,
I would also welcome this feature.

We very much need this because there is a pretty big disparity now with Docker Compose regarding this.

Also, the possible workarounds are not very nice.

Hey there,
It would be pretty dope to have this for Docker builds! Looking forward to it 😄

I had a need for something similar for making env variables available to a cron job, and I ended up using this:

ADD .env /code
RUN cat /code/.env >> /etc/environment


Any updates?

How has this been open and untouched for 2.5 years?

@zulrang Apparently no one has thought it important enough to take the time to design and implement it.

As a workaround you can use a docker-compose.yml file to wrap running the container, something like:

version: '2'
services:
  postgres:
    image: postgres:10
    environment:
      - POSTGRES_USER=postgres
    env_file:
      - .env

In the above, we define the environment variables that aren't secrets under environment (the docker-compose file might be shared on GitHub, for example), along with the secrets in .env. Then you'd start:

$ docker-compose up -d

The issue has been open for years because it's evidently not important enough for anyone to look at. What choice do we have, then? We can keep posting here, submit a pull request implementing the suggested change (and hope someone will review it), or just come up with another method to achieve something similar. Not great options, but what else can you do?

It could be prototyped as a custom Dockerfile frontend for BuildKit

any update?

Now that multi-stage builds are a prevalent pattern, it is really important to have configuration options when running the first stages.

If someone were to create a PR for this, would the Docker/Moby team be interested in adding this functionality?

Why isn't this higher in priority?

any update ?

any new ?

any update?

For now, I got rid of my stages and went to a single bloated image.

If anyone is interested in a read only container (or uses HPC) both --env and --env-file are being added to Singularity - you can review the PR here. So - if you have a Dockerfile that builds a scientific container and then you put it on Docker Hub, you could pull into a Singularity container and then use with the external file.

Is a PR for this welcome?

It should be prototyped as a custom LLB frontend first.

@AkihiroSuda Sure. Is there any specific guideline I should follow?

Seems to me that the BuildKit documentation suffices.

@tonistiigi : Does this proposal SGTY?

I ran into the need for this

still nothing after almost 4 years?

_If_ this were to be implemented, we'd need to look carefully at the design. The --env-file option on docker run has many limitations (which should be reviewed to decide whether we want to inherit the "bad parts"), e.g.

  • current implementation has no support for multi-line values
  • current implementation does not support export lines. While this was by design, some users indicated that it limits re-use of the same file for multiple purposes
  • should env-vars be expanded based on the environment on the host (like on docker run), or expanded based on the "build" container/environment?

For example;

$ cat envfile
SOME_VAR
HELLO=world
FOO=$HELLO
EMPTY_VALUE=
NO_SUCH_ENV_IN_ENVIRONMENT

$ export SOME_VAR="I am some var"

$ docker run --rm --env-file=./envfile busybox env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=aa0482d94952
SOME_VAR=I am some var
HELLO=world
FOO=$HELLO
EMPTY_VALUE=
HOME=/root
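Those rules can be emulated in a few lines of shell; here is a sketch (not Docker's actual implementation) of the current docker run --env-file behaviour, to make the semantics concrete:

```shell
#!/bin/sh
# Sketch of `docker run --env-file` parsing: skip blanks and #-comments,
# pass KEY=VALUE lines through verbatim (no $-expansion), resolve a bare
# KEY from the calling environment, and drop it if it isn't set there.
parse_env_file() {
    while IFS= read -r line || [ -n "$line" ]; do
        case "$line" in ''|'#'*) continue ;; esac         # blank or comment
        if [ "${line#*=}" != "$line" ]; then
            printf '%s\n' "$line"                         # KEY=VALUE, verbatim
        elif printenv "$line" >/dev/null 2>&1; then
            printf '%s=%s\n' "$line" "$(printenv "$line")"  # bare KEY, from host
        fi                                                # unset bare KEY: dropped
    done < "$1"
}

printf 'SOME_VAR\nHELLO=world\nFOO=$HELLO\nEMPTY_VALUE=\nNO_SUCH_ENV\n' > /tmp/envfile
export SOME_VAR="I am some var"
parse_env_file /tmp/envfile   # FOO stays literally "$HELLO"; NO_SUCH_ENV is dropped
```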

Having said the above, I think implementing --env-file would only fix _one_ option that can be tedious to type. There are now many options available for docker build (build args, network, dns settings, labels, quota, security options, ...), and I think a docker-compose.yml or docker buildx bake with a bake file would be better options to look into.

There are still improvements that can be made in that area, such as native support for docker-compose files on docker build (currently already supported by docker buildx), and the compose-specification may need improvements to make build more of a first-class citizen (something to be discussed in https://github.com/compose-spec/compose-spec)
