I use the following file structure:
project
|-environments
|-|-environment_name
|-|-|-docker-compose.yml
|-|-|-Dockerfile
It gives me the (theoretical) ability to easily run different environments (devel, test, stage, prod).
When running docker-compose -f environments/production/docker-compose.yml up,
the Dockerfile's context is still the same directory where the file is placed, not the call directory.
This is a problem because it makes it impossible to properly add files to the container.
Adding this to the compose file:
web:
  build: ../../
  dockerfile: environments/production/Dockerfile
doesn't help.
Really, in this case the Dockerfile's context should be the call directory.
Paths in a docker-compose.yml are always relative to the file (or they should be). We actually used to resolve them relative to $PWD, but there were many requests to change it to relative to the file, so that it matches the behaviour of other tools.
Have you tried removing dockerfile:? Since paths are relative to the docker-compose.yml, I think that might work.
@dnephin No, this issue is about the context of the Dockerfile, not the docker-compose.yml.
The problem is that the _Dockerfile_ used by docker-compose has its own directory as its context, and not the context specified in docker-compose (build: ../../).
So when I use COPY ./ /usr/share/nginx/html
in the Dockerfile, it copies the contents of its own directory (e.g. the docker-compose.yml and Dockerfile themselves), and not the build directory.
There is also an issue about the relativity of paths to the dockerfile in docker-compose.yml: #1890
I was wrong: the dockerfile path is relative to the build context, not the docker-compose.yml. It's not clear to me how this issue is different from #1890. The path you specify in build: is the build context.
So your example should work, and does work for me:
(py27)$ tree
.
└── environments
    └── prod
        ├── docker-compose.yml
        └── Dockerfile

2 directories, 2 files
(py27)$ docker-compose -f environments/prod/docker-compose.yml build
Building web...
Step 0 : FROM alpine:edge
---> 5e704a9ae9ac
Step 1 : RUN apk add -U tree
---> Using cache
---> 3c9dcc47e999
Step 2 : ADD . /code
---> Using cache
---> 2422cc102b32
Step 3 : CMD tree /code
---> Using cache
---> 966fe0ca49d4
Successfully built 966fe0ca49d4
(py27)$ docker-compose -f environments/prod/docker-compose.yml up
Starting prod_web_1...
Attaching to prod_web_1
web_1 | /code
web_1 | └── environments
web_1 |     └── prod
web_1 |         ├── Dockerfile
web_1 |         └── docker-compose.yml
web_1 |
web_1 | 2 directories, 2 files
prod_web_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
(py27)$
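The resolution rule described above (build: taken relative to the compose file, and dockerfile: relative to the resulting context) can be sketched with a small Python model. This is purely illustrative; the function name and the modelling are my own assumptions, not Compose's actual code:

```python
import os.path

def resolve_build_paths(compose_file, build, dockerfile=None):
    # `build:` is resolved relative to the directory containing the
    # compose file; the result is the build context.
    compose_dir = os.path.dirname(os.path.abspath(compose_file))
    context = os.path.normpath(os.path.join(compose_dir, build))
    # `dockerfile:` is then resolved relative to the build context.
    df = os.path.normpath(os.path.join(context, dockerfile or "Dockerfile"))
    return context, df

ctx, df = resolve_build_paths(
    "/project/environments/prod/docker-compose.yml",
    "../../",
    "environments/prod/Dockerfile",
)
print(ctx)  # /project
print(df)   # /project/environments/prod/Dockerfile
```

With this reading, `ADD . /code` copies the project root (the context), which matches the tree shown in the container above.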
Ok, I now see that it is a cache issue.
Let's go through the entire process.
The machine was used earlier for experiments with different file-structure configurations.
Current project structure and contents:
MBP-Terion:test terion$ tree .
.
├── environments
│   └── production
│       ├── Dockerfile
│       └── docker-compose.yml
├── i_am_in_root_of_project.txt
└── index.html
2 directories, 4 files
MBP-Terion:test terion$ cat environments/production/docker-compose.yml
web:
  build: ../../
  dockerfile: environments/production/Dockerfile
  environment:
    APP_ENV: production
    APP_DEBUG: false
    DB_CONNECTION:
    DB_HOST:
    DB_PORT:
    DB_DATABASE:
    DB_USERNAME:
MBP-Terion:test terion$ cat environments/production/Dockerfile
FROM nginx
RUN rm -rf /usr/share/nginx/html/
RUN mkdir /usr/share/nginx/html/
ADD ./ /usr/share/nginx/html/
RUN ls /usr/share/nginx/html/
Now, on VM:
$ tree
.
|-- environments
| `-- production
| |-- docker-compose.yml
| `-- Dockerfile
|-- i_am_in_root_of_project.txt
`-- index.html
$ docker-compose -f ./environments/production/docker-compose.yml build
Build image
Building web...
Step 0 : FROM nginx
---> 6886fb5a9b8d
Step 1 : RUN rm -rf /usr/share/nginx/html/
---> Using cache
---> af9c7fcef88c
Step 2 : RUN mkdir /usr/share/nginx/html/
---> Using cache
---> 90ea5c5adbb2
Step 3 : ADD ./ /usr/share/nginx/html/
---> ce7debfcba5f
Removing intermediate container 31dfd03743ac
Step 4 : RUN ls /usr/share/nginx/html/
---> Running in 90aad48683cd
environments
i_am_in_root_of_project.txt
index.html
---> e49a76a878b2
Removing intermediate container 90aad48683cd
Successfully built e49a76a878b2
Seems to be correct, and ls in the build step shows the correct list, BUT:
$ docker-compose -f ./environments/production/docker-compose.yml -p masterproduction up -d
> Recreating productionmaster_web_1...
$ docker exec productionmaster_web_1 ls /usr/share/nginx/html
> 50x.html
> Dockerfile
> docker-compose.yml
> index.html
WHAT?
Ok, rerun:
$ docker-compose -f ./environments/production/docker-compose.yml -p productionmaster stop
Stopping productionmaster_web_1... done
$ docker-compose -f ./environments/production/docker-compose.yml -p productionmaster up -d
Starting productionmaster_web_1...
$ docker exec productionmaster_web_1 ls /usr/share/nginx/html
50x.html
Dockerfile
docker-compose.yml
index.html
Ooook, killing the whole VM (thanks to DigitalOcean it's very fast), redeploying from scratch, running the whole process again, and now:
$ docker exec productionmaster_web_1 ls /usr/share/nginx/html
environments
i_am_in_root_of_project.txt
index.html
So it seems to be a caching issue.
And docker-compose up has no --no-cache option...
PS: Each deploy is made in a different directory, and containers start in the context of these directories.
Oh, I've found that the build command has a --no-cache option.
Well, in the end this was indeed a caching problem: files were changing their location and context, but the cache didn't take this into account. Not a very widespread use case, but should this work differently?
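A toy model of why a moved context should bust the build cache. This is an assumption for illustration, not Docker's real implementation: suppose an ADD layer is keyed on the instruction text plus a checksum of the files it would copy; then changing which files the context contains changes the key, so the old layer cannot legitimately be reused:

```python
import hashlib

def layer_key(instruction, files):
    # Key an ADD/COPY layer on the instruction text plus a checksum
    # of the files it would copy from the build context.
    h = hashlib.sha256(instruction.encode())
    for name, content in sorted(files.items()):
        h.update(name.encode())
        h.update(content)
    return h.hexdigest()

# Old setup: the compose directory itself was the context, so the
# layer was built from the compose files.
old_context = {"Dockerfile": b"FROM nginx ...", "docker-compose.yml": b"web: ..."}
# New setup: the project root is the context, with the real site files.
new_context = {"index.html": b"<h1>hi</h1>", "i_am_in_root_of_project.txt": b""}

print(layer_key("ADD ./ /usr/share/nginx/html/", old_context) ==
      layer_key("ADD ./ /usr/share/nginx/html/", new_context))  # False
```

The stale content observed above came not from a bad layer key but from `up` reusing an already-built image and container.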
I think this is by design. I'm not sure about the caching issues, but it sounds like this is working now.
My problem, which I spent an inordinate amount of time on, was also a caching issue. It is not apparent from the Docker documentation that, once a container has been created, each subsequent 'docker-compose up -d' uses the cached container. This made it a confusing problem that was hard to debug and research when I changed my Dockerfile in the subfolder I was trying to build from.
For future reference, one has to rebuild it first using:
docker-compose build --no-cache
docker-compose up -d
Yes, we need to address the problem of docker-compose up only building on the first run. See #12 and #693
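The "up only builds on the first run" behaviour can be sketched as a deliberately simplified model (my own illustration, not Compose's actual logic):

```python
# Toy model: `build` always produces a fresh image, while `up` only
# builds when no image for the service exists yet; otherwise it
# reuses the existing one, which is how stale content survives.
images = {}

def build(service, context):
    images[service] = context
    return images[service]

def up(service, context):
    if service not in images:
        build(service, context)
    return images[service]

assert up("web", "old files") == "old files"  # first run builds
assert up("web", "new files") == "old files"  # later runs reuse the stale image
build("web", "new files")                     # explicit rebuild picks up changes
assert up("web", "new files") == "new files"
```

This is why the workaround above (build --no-cache, then up) works: the explicit build replaces the image before up reuses it.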
+1 for this. I have been spending a lot of time removing containers/images; the two steps above will save me a lot of time, but if we could detect the dependencies like the mvn plugin does, that would be the desired behavior.
I think the main issue is that the problem (which seems like it should be happening more often than it does) is not documented anywhere. Even with all the reading I did, the only reason I found the problem was by trying experiments, working backwards, and then stumbling upon this thread.
In short: more obvious documentation, and perhaps the ability to do
docker-compose up -d --no-cache
with a clear explanation somewhere of why this is needed.