When building containers for my project I noticed that, even when building from the same Dockerfile, fig and docker produce different image IDs in the end. I also tried using the same tag, but the images were still different.
Built with fig:
IMAGE CREATED CREATED BY SIZE
c9d754ad4f35 5 minutes ago /bin/sh -c #(nop) ADD dir:ce7a82aadebe53a9cff 16.92 MB
7601cc84a236 9 minutes ago /bin/sh -c /usr/local/bin/composer install -- 34.13 MB
712b4a974fed 10 minutes ago /bin/sh -c #(nop) ADD file:bbb37f3a1cb51d4326 92.3 kB
87d612d4111e 10 minutes ago /bin/sh -c #(nop) ADD file:afe6fb0fc00bfc90cb 2.734 kB
84d2d9e64dc3 10 minutes ago /bin/sh -c chmod 0400 /root/.ssh/id_rsa-deplo 1.808 kB
16e392fd78c7 10 minutes ago /bin/sh -c #(nop) ADD file:45af06b02553fb4264 81 B
48f0aa118a70 10 minutes ago /bin/sh -c #(nop) ADD file:07f9f2dc4a310c8a29 129 B
ec26eedffc59 10 minutes ago /bin/sh -c #(nop) ADD file:f140ceae508309fece 1.679 kB
--- identical from here down; the layers above come from the project's Dockerfile ---
1f260e5c4f2f 11 days ago /bin/sh -c sed -i.bak 's/user = www-data/user 44.63 kB
5ec1e62830a7 11 days ago /bin/sh -c #(nop) ENV YII_DEBUG=true 0 B
e0a170b5adbc 11 days ago /bin/sh -c #(nop) ENV YII_ENV=dev 0 B
[...]
Built with docker:
a25b0e21a23b About a minute ago /bin/sh -c #(nop) ADD dir:48b6469d85aa5dbac5d 16.92 MB
6683ef055846 35 minutes ago /bin/sh -c /usr/local/bin/composer install -- 34.13 MB
47a678036ce5 36 minutes ago /bin/sh -c #(nop) ADD file:cd1a4d0926e8ab11d5 92.3 kB
3b6cd9a1b4e3 36 minutes ago /bin/sh -c #(nop) ADD file:21c0f8485a3f156686 2.734 kB
b59ee084978d 11 days ago /bin/sh -c chmod 0400 /root/.ssh/id_rsa-deplo 1.808 kB
12557dc808c9 11 days ago /bin/sh -c #(nop) ADD file:9654f92cebea52cf16 81 B
d5e3a48a5168 11 days ago /bin/sh -c #(nop) ADD file:1bf9bcdfdde8c6ee82 129 B
8fecc44750ee 11 days ago /bin/sh -c #(nop) ADD file:49570adcc24714e845 1.679 kB
--- identical from here down; the layers above come from the project's Dockerfile ---
1f260e5c4f2f 11 days ago /bin/sh -c sed -i.bak 's/user = www-data/user 44.63 kB
5ec1e62830a7 11 days ago /bin/sh -c #(nop) ENV YII_DEBUG=true 0 B
e0a170b5adbc 11 days ago /bin/sh -c #(nop) ENV YII_ENV=dev 0 B
[...]
ec26eedffc59 and 8fecc44750ee add the same file, but the resulting layer IDs differ.
Is there anything I've overlooked or is this a bug?
Using fig 1.0.1 and docker 1.3.2.
I suspect this is the same issue as #651
You could try with master, or the latest docker-compose==1.1.0-rc1
Still an issue with docker-compose
...
$ docker-compose build api
Building api...
---> 2eda223cc599
Step 1 : ENV APP_NAME API
---> Using cache
---> 25b9711fc6cc
Step 2 : RUN apt-get update && apt-get install -y mysql-client-5.5 php-apc && rm -rf /var/lib/apt/lists/*
---> Using cache
---> 5da6573d00eb
Step 3 : ADD ./composer.lock /app/composer.lock
---> Using cache
---> 6a226039fba3
Step 4 : ADD ./composer.json /app/composer.json
---> Using cache
---> 6646ab99d704
Step 5 : RUN /usr/local/bin/composer install --prefer-dist
---> Using cache
---> 10d806d11372
Step 6 : ADD . /app
---> Using cache
---> cf6e8311cd6c
Successfully built cf6e8311cd6c
Weird ... same until 5da6573d00eb ?!
$ docker build API/
Sending build context to Docker daemon 17.46 MB
Sending build context to Docker daemon
Step 0 : FROM phundament/app:development
---> 2eda223cc599
Step 1 : ENV APP_NAME API
---> Using cache
---> 25b9711fc6cc
Step 2 : RUN apt-get update && apt-get install -y mysql-client-5.5 php-apc && rm -rf /var/lib/apt/lists/*
---> Using cache
---> 5da6573d00eb
Step 3 : ADD ./composer.lock /app/composer.lock
---> Using cache
---> 08079ce2a068
Step 4 : ADD ./composer.json /app/composer.json
---> Using cache
---> 492dc4c03150
Step 5 : RUN /usr/local/bin/composer install --prefer-dist
---> Using cache
---> a9df431b1a1d
Step 6 : ADD . /app
---> Using cache
---> 475bfb4cd2af
Successfully built 475bfb4cd2af
Could it make a difference that compose is using a different version of the API than the docker client?
Do you make use of a .dockerignore file?
Do you make use of a .dockerignore file?
Yes.
How can I check the version of both?
Docker version 1.3.2, build 39fa2fa
Boot2Docker-cli version: v1.3.2
docker-compose 1.1.0-rc1
btw: Testing is based on these docs...
http://phundament.com/docs/51-fig.md
http://phundament.com/docs/51-docker.md
This is the .dockerignore:
https://github.com/phundament/app/blob/master/.dockerignore
How can I check the version of both?
Compose uses the same _docker_ version, but pins the API version to an older one (e.g. /v1.14/) to stay compatible with older versions of Docker. Sometimes that can result in slightly different behaviour than using the docker client directly, which uses the latest version of the API.
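For illustration, here's a minimal docker-py sketch (not Compose source; client construction and fields are taken from the Python SDK as I understand it) that shows a client pinned to an older API version next to the newest version the daemon advertises:

import docker

# Illustrative only: pin the low-level client to an older API version.
client = docker.APIClient(base_url="unix://var/run/docker.sock", version="1.24")
daemon = client.version()                           # GET /version on the daemon
print("client pinned to API:", client.api_version)  # -> "1.24"
print("daemon supports API :", daemon.get("ApiVersion"))

Requests from the pinned client go to /v1.24/... style endpoints, while the plain docker client talks to the newest API it knows about.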
Thanks for the extra info!
I'm also experiencing the same thing: fig build vs docker build yields different image IDs. The issue seems to be with COPY. While copying the same file, the file checksum is not the same, resulting in different image IDs from that step on:
Fig:
...
ebbad877334f 19 minutes ago /bin/sh -c #(nop) COPY file:e7d77d5cda6e8675a 118 B
11fcad719d2b 19 minutes ago /bin/sh -c #(nop) COPY file:99d496eda601574a0 207 B
2b7fff05cbd0 47 hours ago /bin/sh -c DIR=$(mktemp -d) && cd ${DIR} && 5.792 MB
c2498f2f20bd 47 hours ago /bin/sh -c DIR=$(mktemp -d) && cd ${DIR} && 14.01 MB
6ecb5424a834 47 hours ago /bin/sh -c yum install -y postgresql$POSTGRES 32.91 MB
...
Docker:
...
8f15b1592500 41 seconds ago /bin/sh -c #(nop) COPY file:d987c6f1382dcd4c9 118 B
6e59caf9fd95 51 seconds ago /bin/sh -c #(nop) COPY file:207e82b2fbc48d649 207 B
2b7fff05cbd0 2 days ago /bin/sh -c DIR=$(mktemp -d) && cd ${DIR} && 5.792 MB
c2498f2f20bd 2 days ago /bin/sh -c DIR=$(mktemp -d) && cd ${DIR} && 14.01 MB
6ecb5424a834 2 days ago /bin/sh -c yum install -y postgresql$POSTGRES 32.91 MB
...
Docker version 1.3.2, build 39fa2fa/1.3.2
fig 1.0.1
Also checked what the difference is; an empty file, a Dockerfile and a fig.yml:
touch testfile
cat << EOF > Dockerfile
FROM scratch
ADD ./testfile /a-file
COPY ./testfile /b-file
EOF
cat << EOF > fig.yml
app:
  build: .
EOF
Build with Fig and check the docker logs;
$ fig build
INFO[524009] POST /v1.12/build?q=False&rm=True&t=figtest_app&nocache=False
INFO[524009] +job build()
INFO[524009] -job build() = OK (0)
And with docker-1.3.3 (just the client, the daemon is 1.5.0-rc4)
./docker-1.3.3 build -t footest_app .
INFO[525891] POST /v1.15/build?rm=1&t=footest_app
INFO[525891] +job build()
INFO[525893] -job build() = OK (0)
So the differences are: fig sends q=False and nocache=False, and encodes booleans as True/False where docker uses 1/0 (which shouldn't make a difference). Not sure if those would matter? Are the files sent by Fig different?
I believe the tarball of the files is created by the client (docker-py vs. the docker CLI). The implementations may do something differently, which causes the hash to be different. I thought this was fixed in docker-py 0.6, but maybe not.
Different file ordering in the tarball maybe?
I tested this using Fig 1.0.1, not yet with Compose, so it might have been fixed.
My example above should almost be copy-pasta-ble, so easy to check
@aanand; but isn't the hash calculated per-file? Does order in the tar matter?
@thaJeztah correct, my bad.
I've investigated your example a little, using fig checked out at the 1.0.1 tag, with service.py patched to set rm=False in build().
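(Roughly, what that patch amounts to — an illustrative sketch, not fig's actual service.py, assuming fig hands the build off to docker-py's build() call; the tag name is made up:)

import docker

# docker.APIClient is the current name; fig 1.0.1 used the older docker.Client.
client = docker.APIClient(base_url="unix://var/run/docker.sock")

# rm=False keeps the intermediate containers around, so each layer and its
# file checksum can be inspected afterwards with `docker ps -a --no-trunc`.
for chunk in client.build(path=".", tag="figtest_app", rm=False, nocache=False):
    print(chunk)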
My process:
$ docker rm -f `docker ps -qa`; docker rmi `docker images -q`
EITHER: $ fig build
OR: $ docker build --rm=false .
to see file checksum:
$ docker ps -a --no-trunc
to see image id:
$ docker images
Two things I've noticed:
fig build and docker build --rm=false . produce different, but reliable, file checksums:
fig: d5e6af1506176dc48677bf682984366aa60583dceb082d1961f3c9b44fe59ec4
docker: 0d716611907d9f1e34a35eff615b94f218575436ecda12b0d46cd2b9052dd115
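One plausible explanation for an identical (even empty) file checksumming differently is that the checksum covers more than the file bytes: as far as I can tell, Docker's tarsum of the build context mixes selected tar header fields into the per-file checksum, and two client implementations can fill those header fields (mode, owner, timestamps) differently. A small sketch, purely illustrative and not the actual fig/docker code path, with made-up header values:

import hashlib, io, tarfile

def entry_digest(mtime, mode, uname):
    # Archive a single empty "testfile" with the given header metadata and
    # hash the resulting bytes; the file *content* is identical in both cases.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name="testfile")
        info.size = 0
        info.mtime = mtime
        info.mode = mode
        info.uname = uname
        tar.addfile(info, io.BytesIO(b""))
    return hashlib.sha256(buf.getvalue()).hexdigest()

# Same (empty) file, different tar headers -> different digests.
print(entry_digest(0, 0o644, ""))
print(entry_digest(1423000000, 0o600, "root"))

If docker-py and the docker CLI build their context tarballs with different defaults for those fields, the same file would checksum differently, which matches what we're seeing.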
@aanand hm, you may be right wrt the image-ids, I'll need to check. However, what's strange is that (not behind my computer now) they didn't use the same cache.
I.e., building twice with Fig, the second _Fig_ build would use the cache, but doing a build with Docker after that _didn't_ use the cache.
At the moment I see it as merely an "inconvenience", but would this become a problem once image signing/verification is taken into account? Would it lead to a different result?
In the end, the same Dockerfile + build-context should lead to _exactly_ the same image, regardless if Fig (Compose) or Docker was used to build that image. If not, that may be a problem.
I'm not entirely sure I'm on the right path here wrt the signing process, so maybe @dmcgowan could shed some light. I'll have a look tomorrow if I can come up with some test scenario as well (Perhaps I'm overlooking something, just intrigued what would lead to the difference)
@thaJeztah shouldn't have any effect on signing/verification. Right now the signatures occur on push and verification on pull, and the content is frozen in between. The sign/verify content and new registry is still considered beta, but I don't see any potential problems there. Also worth noting that since these IDs are not content hashes, they will never generate the same ID unless it's using the cache, even when the content is exactly the same.
Thanks for kicking in, Derek. No reason to worry then! (and, yes, I'm fully aware signing is still a tech-preview).
Still curious why Fig and Docker don't share each other's layer-cache in this case. Possibly because of a different TAR implementation? Will do some experimenting tomorrow to satisfy my curiosity :)
Good to know - so hopefully all it's going to result in is slower builds in an edge case - but it's still bizarre and concerning that a file (of zero length!) is hashing differently between fig and docker.
ah! I'm glad I found this thread.
I'm seeing this, as we are using docker build to produce deployment artifacts, and docker-compose to run the tests using those artifacts (it makes managing multiple services easier and more consistent between dev and test envs). However, when it got to the docker-compose build phase, the cache was busted the moment it hit a simple COPY|ADD statement in the project Dockerfile for a file that never changes. (The docker-compose.yml file uses build: ., but I was still expecting it to use the same image created by docker build.)
This slows the test process by many minutes, but it also produced different artifacts... so what was being tested was not the same as what was being deployed. As @thaJeztah noted, this is more of an inconvenience, but one that took a while to track down and then explain to folks that yes, they are different, but they are also the same, so everything is OK...
And another related oddity: if I take the image from docker-compose, run docker save $(docker_compose_image_id) > base.tar and then run docker load -i base.tar; docker-compose build app ... the cache is busted again at the COPY command! (This build; save; load; build flow is common on CircleCI to work around the fact that they can't yet cache images/containers between tests.)
At the end of the day, I agree with @thaJeztah; docker-compose build and docker build should produce the exact same artifacts on the same unchanged directory.
With docker-py==1.6.0 I am still affected by this inconsistency.
I think this is "expected", see https://github.com/docker/compose/issues/883#issuecomment-73323371.
@dnephin I get the point about IDs not being the content hash, but why do we fail to get cache reuse when docker-py is just passing a tar stream? See: https://github.com/docker/docker/issues/18611#issuecomment-164829443
Tar implementations can vary, which may be what is causing cache misses when different clients are used.
Docker 1.10 is supposed to be introducing content addressable layers, which may resolve this issue.
docker will be using content addressable ids in 1.10 which should resolve this issue.
This is still an issue. I'm running:
Docker version 1.11.0, build 4dc5990
docker-compose version 1.7.0, build 0d7bf73
I have replicated this with a single copy call of an empty file here:
https://gist.github.com/akatz/d5f4d40a66e6da4477dff6447273c218
Looks like the changes in 1.10 did not fix this.
I have the same problem as well.
Docker version 1.12.0, build 8eab29e
docker-compose version 1.8.0, build f3628c7
Isn't it the same as https://github.com/docker/compose/issues/3148?
one could say that #3148 is a dupe of this :p
Hi guys!
Docker version 18.01.0-ce, build 03596f51b1
docker-compose version 1.18.0, build unknown
From ArchLinux official repositories.
Still the exact same issue. Any updates on this?
Underlying issue is docker/docker-py#998 (PR docker/docker-py#1582) ; resolution is currently blocked by the changes in moby/moby#33935
resolution is currently blocked by the changes in moby/moby#33935
@shin-, https://github.com/moby/moby/pull/33935#issuecomment-407433893 is suggesting that Docker Compose should switch to the BuildKit filesync API to fix this issue instead, which would mean this issue isn't blocked on moby/moby any more. Is that something Docker Compose would want to use?
Buildkit is still in experimental mode only and subject to change. Once it's stable and enabled for everyone we can take a closer look for sure.
Given that the docker-compose vs docker cache issue was merged into this one, and by comparison fig is kinda old, could this ticket perhaps be renamed?
@dimaqq done 👍
With Docker 18.09 release, Buildkit is out of experimental and into an optional feature toggle, could we take another look at adding support to compose? What will it take; where shall we look to contribute if we wanted to help out?
For those of us who use docker-compose build and have seen the improvements made in BuildKit, we are really interested in seeing docker-compose get support for it.
buildkit is not out of experimental.
cache-from is still not supported when using buildkit as a backend, so you might not be able to use the same docker-compose file anyway.
@dreyks where can I read more about using cache-from in buildkit? Thanks
@FernandoMiguel there's no documentation that i could find but here's a comment from a relevant issue https://github.com/moby/buildkit/issues/723#issuecomment-440490685
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Does anyone know if the issue has been solved?
This issue has been automatically marked as not stale anymore due to the recent activity.
I am experiencing this issue. Here are my versions:
docker-compose version 1.25.0-rc4, build 8f3c9c58
Docker version 19.03.4, build 9013bf5
Note: 1.25 will come with a COMPOSE_DOCKER_CLI_BUILD variable to delegate the build to the docker CLI. Delegating to the CLI will, in the future, prevent such side effects from implementation glitches in the client library.
@ndeloof I just tried setting the environment variable you speak of, and now I can build the same image with THREE different hashes:
docker build .
docker-compose up --build
COMPOSE_DOCKER_CLI_BUILD=true docker-compose up --build
This results in 3 fully functional docker images, all with different hashes.
I think it has something to do with using a COPY command on a directory. I get cache hits for all of my Dockerfile steps until the first COPY command that targets a directory (COPY commands for individual files work and hit the cache correctly).
I see the same result as @JeremyLoy
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has had no recent activity during the stale period.
This issue should be left open
This is still an issue
I created a new issue as this one is being ignored: https://github.com/docker/compose/issues/7905