When a build is implicitly triggered by docker-compose run, it does not take the environment variables COMPOSE_DOCKER_CLI_BUILD and DOCKER_BUILDKIT into account. If the project relies on BuildKit, this results in a build failure.
Output of docker-compose version
docker-compose version 1.25.4, build 8d51620a
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019
Output of docker version
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b7f0
 Built:             Wed Mar 11 01:25:46 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b7f0
  Built:            Wed Mar 11 01:24:19 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
To reproduce, add # syntax = docker/dockerfile:experimental to the Dockerfile, use a BuildKit-only feature such as RUN --mount=type=cache,target=/tmp/foo, and start the service with
docker-compose run test bash
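A minimal reproduction sketch (the file contents here are assumptions for illustration, not taken from the original report; the service name test matches the transcript below):

cat > Dockerfile <<'EOF'
# syntax = docker/dockerfile:experimental
FROM ubuntu:18.04
# BuildKit-only flag; the classic builder fails with "Unknown flag: mount"
RUN --mount=type=cache,target=/tmp/foo true
EOF

cat > docker-compose.yml <<'EOF'
version: "3.7"
services:
  test:
    build: .
EOF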
$ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build test
WARNING: Native build is an experimental feature and could change at any time
Building test
# The build succeeds
$ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose run test bash
test@46be78641925:~$ # the run command is successful
$ docker-compose down --rmi local
Removing test_test_run_46be78641925 ... done
Removing network test_default
Removing image test_test
$ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose run test bash
Building test
ERROR: Dockerfile parse error line 16: Unknown flag: mount
Expected behavior: the build succeeds and the container is started.
The latest release (1.25.4) and release candidate (1.26.0-rc3) exhibit the same behavior.
Hi gergelyerdelyi,
It seems that this is the expected behaviour, although a nicer error message would certainly help. As
https://docs.docker.com/storage/bind-mounts/
puts it, "you can't use Docker CLI directly to manage bind mounts".
I would suggest using volumes instead.
Hi,
> It seems that this is the expected behaviour, although a nicer error message would certainly help.
I think these two are unrelated. The bug I reported triggers at build time, where volumes are not available at all (AFAIK).
The problem is that docker-compose run behaves a bit differently from docker-compose up when each needs to do an implicit build step due to a missing or outdated image. If docker-compose run passed on (or took into account) the environment variables, it would behave the same way.
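For what it's worth, a workaround that follows from the transcript above is to always run the build step explicitly, so the variables are honored, before running the service:

DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build test
DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose run test bash

Since the image now exists and is current, run skips its implicit build and starts the container.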
I just ran into this as well, with similar reproduction steps.
We have a Dockerfile that uses RUN --mount and therefore must be built with BuildKit.
It builds fine like this:
DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build ruby-test
... but not like this:
DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose run --rm ruby-test bash
Building ruby-test
ERROR: Dockerfile parse error line 13: Unknown flag: mount
I'm running into a similar issue.
We are using BuildKit for performance reasons; it is much faster.
If we run
DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build api-test
it uses BuildKit. If we then remove the image created by docker-compose and run
DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose run api-test bash
it falls back to the old build system, since there is no image and run has to build one.
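One way to confirm which builder a given command actually used (a diagnostic sketch, not from the original comment) is to look at the first lines of its build output: the classic builder prints steps like "Step 1/N : ...", while BuildKit prints numbered stages like "#1 [internal] load build definition from Dockerfile".

DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose run api-test true 2>&1 | head -n 5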
I've noticed the same problem.
This is an experimental feature, but I think the implicit build steps of docker-compose run and docker-compose up should behave as similarly as possible.
COMPOSE_DOCKER_CLI_BUILD=1 is not only for using BuildKit from docker-compose; it is also a workaround for the cache problem caused by the differences between the docker CLI and docker-py (#883, moby/moby#18611).
I'm going to send a PR to fix this problem. At first sight it doesn't seem difficult to pass the environment variables on to the internal build steps, provided there are no other issues.
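Until such a fix lands, exporting the variables once per shell session (a convenience sketch, not part of the planned PR) at least keeps the explicit-build workaround short:

export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
# Per this issue, the implicit build triggered by `run` still ignores
# the variables, so an explicit build first remains necessary.
docker-compose build test
docker-compose run test bash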