I use docker-compose inside a running container. My Dockerfile looks like this:

```Dockerfile
FROM node:4
COPY . /srv/deploy_service
WORKDIR /srv/deploy_service
RUN npm install && \
    apt-get update && \
    apt-get install -y python python-dev python-distribute python-pip curl && \
    curl -sSL https://get.docker.com/ | sh && \
    pip install docker-compose
EXPOSE 3000
CMD bash -c "node deploy_service.js"
```
After building the image I run `docker-compose up -d deployment-service`, and after that `docker-compose exec deployment-service`. Now I am inside the deployment-service container. This is my docker-compose.yml:
```yaml
version: "2"
services:
  nginx:
    image: jwilder/nginx-proxy
    restart: always
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/vhost.d:/etc/nginx/vhost.d:ro
  deployment-service:
    build: .
    restart: always
    container_name: deployment_service
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    expose:
      - "3000"
    depends_on:
      - nginx
  clix-core-production:
    image: ${REGISTRY_URL}/${REGISTRY_USER}/clix-core:production
    environment:
      - NODE_ENV=production
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASS=${MYSQL_PASSWORD}
      - MONGO_USER=${MONGO_USER}
      - MONGO_PASS=${MONGO_PASSWORD}
    expose:
      - "8001"
    links:
      - mysql:mysql
      - mongo:mongo
      - redis:redis
    depends_on:
      - nginx
```
Inside the container I run `docker-compose up -d clix-core-staging`, but even though nginx is already up, it recreates nginx.
I suspect this issue is related to the directory name: outside the container, nginx was started from /home/david/deploy_service, but inside the container my docker-compose file is in /srv/deploy_service.
Thanks.
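One way around this kind of path mismatch is to mount the host project directory at the identical absolute path inside the deployment container, so relative bind mounts such as `./nginx/nginx.conf` resolve to the same host location whether compose runs inside or outside. A minimal sketch, assuming the project lives in `/home/david/deploy_service` on the host:

```yaml
deployment-service:
  build: .
  restart: always
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    # Same absolute path inside and outside the container, so
    # ./nginx/... resolves identically for the host's docker engine.
    - /home/david/deploy_service:/home/david/deploy_service
  working_dir: /home/david/deploy_service
```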
UPDATE 1.
After `docker-compose up -d nginx` inside the container, this error is returned:

```console
ERROR: for nginx  Cannot start service nginx: oci runtime error: rootfs_linux.go:53: mounting "/var/lib/docker/devicemapper/mnt/d73fe939a2d6071087d11b49e444ac7a453a7ea50447f1d888715c5b9a536b44/rootfs/etc/nginx/nginx.conf" to rootfs "/var/lib/docker/devicemapper/mnt/d73fe939a2d6071087d11b49e444ac7a453a7ea50447f1d888715c5b9a536b44/rootfs" caused "not a directory"
ERROR: Encountered errors while bringing up the project.
```

Any thoughts on this one?
I have a similar problem. I've followed the _docker-in-docker_ solution here: https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
and bind mounted the docker socket. However, docker-compose does not work. I guess it's because the docker-compose network is not available inside the container, or something with the directory structure.
Does anyone have any advice on this?
What I'm trying to achieve is to be able to start and stop a container that is part of a docker-compose setup from inside one of those containers...
@timofurrer I'm not sure what I'm doing is 💯 the same as you, but I'm using docker-in-docker the `-v /var/run/docker.sock` way and had a similar issue with my networks. I passed `--net host` into my `docker run` and it _seems_ to be OK.
@timofurrer - I was able to do this by `:ro`-mounting the host directory where my compose files are, and then either using `-f docker-compose.yml` or `cd`-ing to the respective directory prior to running the `docker-compose` command.
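In docker-compose terms, that approach might look like the following sketch (the host path `/home/user/project` and the service name `deployer` are hypothetical):

```yaml
services:
  deployer:
    build: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # Read-only mount of the host directory that holds the compose files
      - /home/user/project:/project:ro
    # Either cd to /project before running docker-compose inside the
    # container, or pass -f /project/docker-compose.yml explicitly.
    working_dir: /project
```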
I personally use the following when I want to use docker and docker-compose inside a dev image:

```Dockerfile
FROM alpine
COPY --from=library/docker:latest /usr/local/bin/docker /usr/bin/docker
COPY --from=docker/compose:latest /usr/local/bin/docker-compose /usr/bin/docker-compose
```

It works.
@jancajthaml FYI docker/compose defies convention and does not make use of the `latest` tag. As of this writing, you can use the `1.22.0` tag.
There is a suggestion to adopt the convention of tagging with `latest`.
@wyckster agreed that it's better to use a fixed version, but for the topic at hand `latest` would do the trick.
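With pinned tags, the dev-image snippet above might look like this (a sketch; the version tags are examples from around the time of this thread and may need adjusting):

```Dockerfile
FROM alpine

# Pin explicit tags rather than relying on :latest, which the
# docker/compose image did not publish at the time.
COPY --from=library/docker:18.06 /usr/local/bin/docker /usr/bin/docker
COPY --from=docker/compose:1.22.0 /usr/local/bin/docker-compose /usr/bin/docker-compose
```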
Hey,
I am trying to build my own container which, among other things, needs to run docker-compose. I followed @jancajthaml's instructions above, but when I run docker-compose (or `docker-compose version`) I get this error message:

```console
/bin/sh: docker-compose: not found
```

Here is my Dockerfile (simplified to the point where it is the same, except for the presence of Python, which my app will use):
```Dockerfile
FROM python:3-alpine
COPY --from=library/docker:latest /usr/local/bin/docker /usr/bin/docker
COPY --from=docker/compose:1.23.2 /usr/local/bin/docker-compose /usr/bin/docker-compose
ENTRYPOINT [ "/bin/sh" ]
```
I guessed I was missing something, so I ran ldd on docker-compose:
```console
/ # ldd /usr/bin/docker-compose
	/lib64/ld-linux-x86-64.so.2 (0x7fef661d5000)
	libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7fef661d5000)
	libz.so.1 => /lib/libz.so.1 (0x7fef661bb000)
	libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7fef661d5000)
Error relocating /usr/bin/docker-compose: __strcat_chk: symbol not found
Error relocating /usr/bin/docker-compose: __snprintf_chk: symbol not found
Error relocating /usr/bin/docker-compose: __vfprintf_chk: symbol not found
Error relocating /usr/bin/docker-compose: __stpcpy_chk: symbol not found
Error relocating /usr/bin/docker-compose: __vsnprintf_chk: symbol not found
Error relocating /usr/bin/docker-compose: __strncpy_chk: symbol not found
Error relocating /usr/bin/docker-compose: __strcpy_chk: symbol not found
Error relocating /usr/bin/docker-compose: __fprintf_chk: symbol not found
Error relocating /usr/bin/docker-compose: __strncat_chk: symbol not found
```
When running natively on the same host where the container is running (so the exact same kernel), ldd shows that the kernel injected `linux-vdso.so.1`:

```console
ldd /usr/local/bin/docker-compose
	linux-vdso.so.1 =>  (0x00007ffdf5f83000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f9023aaf000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f9023895000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f90234cb000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f9023cb3000)
```

but otherwise the same libraries are used.
Any idea what is going on?
Found the solution (actually, someone on Slack did): don't use Alpine, use Ubuntu instead. docker-compose is compiled against glibc, not musl.
I got docker and docker-compose working in my container, but docker-compose craps out when trying to do `docker-compose up`. My docker-compose file just contains a WireMock container for now (just trying to get that working first), which looks like this:

```yaml
version: "3.5"
services:
  wiremock:
    container_name: wiremock
    image: jre:8-jre
    ports:
      - "8089:8080"
    volumes:
      - ./resources/wiremock:/wiremock
    command: java -jar /app/wiremock/wiremock.jar --root-dir /app/wiremock --verbose
```
where wiremock.jar is available on the host machine at ./resources/wiremock/wiremock.jar.
Inside the container (not docker-compose, just docker), I'm copying all the jars/files/etc. into the container using `COPY`, so there are no volumes mounted between the host machine and the (outer) docker container. The respective Dockerfile looks like this:
```Dockerfile
FROM ubuntu:latest
COPY --from=library/docker:latest /usr/local/bin/docker /usr/local/bin/docker
COPY --from=docker/compose:1.23.2 /usr/local/bin/docker-compose /usr/local/bin/docker-compose
COPY . /app
USER root
RUN /bin/bash -c 'apt-get update && apt-get install -y curl vim'
WORKDIR /app
```
I can create this container fine, can shell into it, and can run simple commands like `docker ps -a` or `docker-compose version`. But when I try to run `docker-compose up`, I get the following error:

```console
wiremock    | Error: Unable to access jarfile /wiremock/wiremock.jar
wiremock exited with code 1
```

This is strange because the relative paths are correct, and running this docker-compose file on the host machine works perfectly (spins up the WireMock service like a charm). But whenever I try to run it within the container, I see this error.
Please help me resolve this. Shooting in the dark here, but does this have something to do with GIDs/UIDs? Or is docker-compose getting confused and failing to map the volume paths correctly from the host container to the internal docker-compose container?
You are doing:

HOST -> docker -> docker-compose

so you must mount ./resources into the docker container you described, and then mount it via docker-compose from that container. docker-compose is now running inside a container, not on your host, so it has access to the container's directories and volumes, not the host's.

Approach 1

1) Mount the working directory of the host to /opt/workdir of the container described by the Dockerfile.
2) Use /opt/workdir instead of ./ in the docker-compose run from said container.
Approach 2

You can try to experiment with `volumes_from` in your docker-compose file if you don't want to remount folders. That would actually be a good approach:

```yaml
volumes_from:
  - container_that_hosts_docker_compose_name
```
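Put together in a compose file, `volumes_from` referencing an external container might look like the sketch below (service and container names are hypothetical). Note that in the v2 file format an external container is referenced with the `container:` prefix, and that `volumes_from` was removed in the v3 format:

```yaml
version: "2"
services:
  wiremock:
    image: jre:8-jre
    # Reuse the volumes of the (external) container that holds the files,
    # instead of bind-mounting host paths again.
    volumes_from:
      - container:deploy_tools
    command: java -jar /wiremock/wiremock.jar
```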
Yeah, that's what was happening. If provided an absolute path it works, but relative paths won't. I haven't tried the volumes approach, but I assume it should work better than the absolute path. Thanks for the quick reply.
Using volumes works; just confirmed with a new setup using `volumes_from` in docker-compose, and it works!!! Thanks!
I had a setup where I wanted to use docker-compose from a container to start the services on the host system, and here's my solution:

```shell
docker run --rm -it \
  -v $PWD:$PWD \
  -w $PWD \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/compose:1.24.0 up
```

- `-v $PWD:$PWD` mounts the project directory at the same absolute path inside the container
- `-w $PWD` makes that path the working directory
- `-v /var/run/docker.sock:/var/run/docker.sock` gives the container access to the host's docker daemon

This solves the `not a directory` error when mounting files in the same directory.
@mdawar does this work fine without step 1, assuming the `docker-compose.yml` file is already _inside_ the container?
I ask because I want to do the same thing: control the host docker daemon from within a container using compose. But I also want to bind mount directories from the host into whatever containers compose spawns. My underlying question is more along the lines of: will compose attempt to bind mount directories from within its own container, or will it just ignore the fact that it's running in a container and let the docker daemon mount from the host?
Thanks!
The only thing that is needed is step 3. As long as docker-compose, inside the container, has access to the yml file, you are good. It will send the commands to the docker engine running on the host, outside the container. Everything must be relative to that engine; this includes volumes, etc. There is no difference, from the point of view of the definitions in the yml file, whether docker-compose is running in a container or not.
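As a concrete sketch of that point: since bind-mount paths are resolved by the host's engine, they must name host locations even when compose itself runs in a container (the paths below are illustrative):

```yaml
services:
  web:
    image: jwilder/nginx-proxy
    volumes:
      # Resolved by the docker engine on the host, so this must be a
      # path that exists on the host, not inside the compose container.
      - /home/david/deploy_service/nginx/vhost.d:/etc/nginx/vhost.d:ro
```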
@Southclaws as I remember, yes, it bind mounts the directories from the container and not from the host. @clauderobi are you sure? Because I remember I had errors when I was mounting directories from the host: as I recall, docker-compose was trying to bind these directories from the container.
I am positive. My yml file has 3 containers for which I mount/bind various directories, using a combination of simple and bind mounts. In all cases the paths are relative to the host and visible on the host.
> Everything must be relative to that engine; this includes volumes, etc.

This is what I expected.
Maybe the confusion arises when you use relative directory paths and compose automatically converts them to absolute paths using the filesystem in the container? I don't do this anyway, so it's not going to be a problem for me, but maybe that's why @mdawar ran into issues.
Anyway, thanks both of you, I will give this a try soon!
I had a similar problem: running `docker-compose up` inside a container recreated all the containers that had previously been brought up by running `docker-compose up` on the host. I was not expecting these containers to be recreated, because their config had not changed. It turned out I was running two different versions of docker-compose, one on the host and a different one inside the container. When you send docker-compose commands over the docker socket, it presumably detects the version mismatch and decides to recreate. I solved it by making both the host and the container use the same version.