Newer versions of Docker support a -f option for specifying a file other than Dockerfile as the build configuration (https://github.com/docker/docker/pull/9707), so it's reasonable to add this support in fig too. For example:
build: .
buildfile: mydocker.file
Would be nice! :+1:
:+1:
We have an app that uses Dockerfile for production deployment, so I cannot use Fig for development, since it's impossible to point to another build file.
I think it would be nicer to just keep this as build
build: ./mydocker.file
I suspect once that support is added to docker-py, fig should just work.
@seven1m you should aim for a way to make the image work in any environment. You can probably extract any differences to be data volumes, or environment settings.
@dnephin Yeah, I think that would be the right approach too.
Thanks for the feedback. I'll investigate that some more, but seems easier said than done.
@dnephin But there are two separate things: the build file and the build context.
For instance: docker build -f path/to/Dockerfile -t my_image ., where . says to use the current directory as the build context, and -f points to a subpath within it. If only the Dockerfile is specified, then no context would be uploaded, just the Dockerfile.
Oh, I didn't realize they were also being decoupled. I thought the same rules applied but you could name the file within the root of the context. In that case another field works. Thanks @cpuguy83
Yes, they are decoupled, with the restriction that the Dockerfile must be within the build context;
docker build -f $(pwd)/sub/sub/Dockerfile $(pwd)/sub/
is allowed, but
docker build -f $(pwd)/sub/Dockerfile $(pwd)/sub/sub/
is not, because the Dockerfile is outside of /sub/sub/.
(Hope I wrote the examples correctly, the feature can be quite confusing LOL)
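The restriction in the examples above can be sketched as a simple path check. This is a hypothetical helper for illustration, not Docker's actual implementation:

```python
import os

def dockerfile_in_context(dockerfile: str, context: str) -> bool:
    # Hypothetical helper mirroring Docker's rule: the Dockerfile must
    # resolve to a path inside the build context directory.
    df = os.path.realpath(dockerfile)
    ctx = os.path.realpath(context)
    return os.path.commonpath([df, ctx]) == ctx

print(dockerfile_in_context("/sub/sub/Dockerfile", "/sub"))  # True  (allowed)
print(dockerfile_in_context("/sub/Dockerfile", "/sub/sub"))  # False (rejected)
```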
Any chance we can make it so the Dockerfile does _not_ have to be inside the build context? (though... I wonder if that would require a change to Docker rather than fig).
@coding2012 Pretty sure the client tars up the directory to send to the daemon for the build, therefore the Dockerfile can't be outside the build context. This is for security, and I know it has been discussed at great length on the Docker end.
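To illustrate the tarring point: here's a minimal hypothetical sketch (not docker-py's actual code) of how a client might archive a build context with Python's tarfile module. Anything outside the context directory simply never enters the archive, which is why the Dockerfile must live inside it:

```python
import io
import os
import tarfile
import tempfile

def archive_context(context_dir: str) -> tarfile.TarFile:
    """Tar up the build context in memory, roughly as a client might
    before sending it to the daemon (hypothetical sketch)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add(context_dir, arcname=".")
    buf.seek(0)
    return tarfile.open(fileobj=buf)

with tempfile.TemporaryDirectory() as ctx:
    with open(os.path.join(ctx, "Dockerfile"), "w") as f:
        f.write("FROM busybox\n")
    names = archive_context(ctx).getnames()

# Only paths under the context directory made it into the archive.
print(names)
```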
+1
This would be really useful for managing multiple images!
+1
I believe we'll need docker-py support first: https://github.com/docker/docker-py/issues/497
:+1:
Do want. +1
Yep, this'd be great.
Annoyingly, we have to specify the API version when talking to a Docker daemon, and the ability to specify a Dockerfile was added in API version 1.17 (Docker 1.5), so we have to decide what to do: we could default to the latest API version, which would break for anyone on an older Docker whose docker-compose.yml uses a feature that requires it (such as specifying the Dockerfile to use when building). Scary.

Maybe, to avoid compatibility workarounds, it's possible to stick to the first option and just suggest (in the README or somewhere else) using older versions of docker-compose depending on which version of Docker is used?
Along the same lines as 1, some more ideas:

2. Support a DOCKER_API_VERSION environment variable. If we were to default to latest, anyone using an older version can just set the environment variable to their version. This is maybe a little inconvenient, but it beats not being able to use the latest Compose version at all.
3. If docker is available in $PATH, DOCKER_API_VERSION could default to the equivalent of $(docker version | grep 'Client API version' | cut -d: -f2). This assumption would be incorrect for any remote host, but at least it would be consistent with the docker client.

We could do 3 if we were willing to cache the version in a local file.
> the equivalent of $(docker version | grep 'Client API version' | cut -d: -f2)

There are plans to change the output format of docker version, so this might break because of that. Obtaining it via the API itself would probably be possible, though?
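The grep/cut pipeline above can be sketched in Python against captured docker version output. The sample text below is an assumed Docker 1.5-era format, not output from a real daemon:

```python
# Hypothetical sketch of $(docker version | grep 'Client API version' | cut -d: -f2):
# parse the client API version out of captured `docker version` output.
# The sample below is assumed (Docker 1.5-era format), not real daemon output.
sample_output = """\
Client version: 1.5.0
Client API version: 1.17
Server version: 1.5.0
Server API version: 1.17"""

def client_api_version(version_output: str) -> str:
    """Return the value after 'Client API version:', or raise if absent."""
    for line in version_output.splitlines():
        if line.startswith("Client API version"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no 'Client API version' line found")

print(client_api_version(sample_output))  # 1.17
```

As noted above, any parser like this is fragile against output-format changes, which is why querying the API directly would be preferable.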
The idea of an environment variable sounds OK, actually (it should be COMPOSE_ not DOCKER_, though). If someone specifies an older version and tries to pass a buildfile, docker-py will error out with a helpful message (thanks, docker-py!), so it should be fine.
+1
+1
+1
This would simplify development workflows that target Elastic Beanstalk. AWS requires you to rename Dockerfile to Dockerfile.local when using their Preconfigured Docker Containers. It's annoying that Compose can't deal with this.
It's precisely when you compose multiple containers that you need to be able to specify decoupled Dockerfiles and build contexts for each container !
+1 for the same flexibility in docker-compose as in docker.
@aanand is this only awaiting a PR or is there more discussion that needs to be had?
@jakehow there's already a PR: https://github.com/docker/compose/pull/1075
I've been using a slightly modified version for a while now and it works quite well.
Looks like #1075 was merged and this can be closed now. cc @dnephin
Testing this from master now, but one thing immediately springs to mind: if we can now pass a Dockerfile, it would be nice to be able to pass in distinct .dockerignore files as well.
Use case: I have a Django/Rails/etc app. The backend generates a bunch of static files (images, CSS), which I want to post-process and then serve with nginx. My Dockerfile.static doesn't need the full application, only the static files. Conversely, Dockerfile.app needs everything _but_ the generated static files.
I can open a new issue if that's preferred.
@pikeas for that to work, docker itself should support that first. There's an existing proposal for that here; https://github.com/docker/docker/issues/12886
+1 i need that function
+1
+500000000000000
:+1:
:+1:
:+1:
This feature has been implemented in Compose since 1.3.0.
For people coming across this now when using version 2 of the docker-compose file, it now exists in the build option:
myService:
  build:
    context: .
    dockerfile: Dockerfile.prod
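For reference, in a version 2 file that service definition sits under the top-level services key; a complete (hypothetical) docker-compose.yml might look like:

```yaml
version: '2'
services:
  myService:
    build:
      context: .           # build context sent to the daemon
      dockerfile: Dockerfile.prod  # Dockerfile, resolved relative to the context
```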