I have a scenario where a container should only be "kept running" in development. In production, it just compiles some files and exits. Is there a way for me to make a container wait for another to exit before initializing?
docker-compose run service && docker-compose up
Actually, I need this to be internal to docker-compose. I have some services that, in production mode, build some files and then exit; in dev mode, they are actual servers, which is why they're separate containers. So, in production, I need to wait for some containers to finish before starting others.
That very much sounds like an anti-pattern, but regardless, the line I suggested earlier should still work.
This sounds like a job for environment variables!
But seriously folks, without knowing about the environment, it sounds like you need to grab (or make) configs/data with container 1, and then use/run with that in container 2.
In which case, you should have your application restart itself gracefully until it can find said files from container 1 (I am assuming they are volumed in, probably from volumes_from).
But without more detail, it's tough to guess how it can be done better.
Fair enough, guys. A concrete example: https://github.com/italomaia/flask-vue-semantic-docker
In this repository, I have a docker-compose setup which creates an environment with flask (web application), nginx (HTTP server), postgres, semantic-ui (HTML styles) and ux (a VueJS responsive SPA). The thing is that the semantic-ui CSS files have to be built before being imported by my ux container, because the ux container "merges" the output from semantic-ui into its own files. On the other hand, my nginx HTTP server has to wait for my ux container to build its own files before serving them. After building the files, my ux AND styles containers exit, because building the files was all they had to do.
You might ask: "why keep them in separate containers if all they do is build files?" Well, in my local machine, while I'm developing the project, they actually serve the files they "build" on-the-fly, so I can change the files and see the change without restarting or "build-waiting" anything.
If my ux container could wait for my styles container to build the files, that would give me a guarantee that my ux files are perfect.
I get what you're saying, but I feel you are looking in the wrong place for the answer. Permit me to explain how I solved my issue.
I fully get the separate containers, I do the same. I also heeded this warning: https://docs.docker.com/compose/startup-order/
In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, your application should attempt to re-establish a connection to the database after a failure. If the application retries the connection, it should eventually be able to connect to the database.
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.
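A minimal sketch of that retry advice, as a generic shell helper: keep attempting an operation until it succeeds, with a capped number of tries. This is my own illustration, not from the docs; in a real setup the command passed to `retry` would be whatever opens your database connection.

```shell
# Generic retry loop: run a command until it succeeds or we hit the cap.
retry() {
  max=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    echo "attempt $n failed, retrying..." >&2
    n=$((n + 1))
    sleep 1
  done
}

# 'true' stands in for a real connection check here.
retry 5 true && echo 'connected'
```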
This gave me an idea. I needed my confd to download my config. Couldn't start up nginx without a website config, amirite?
Long story short (too late), after my confd container finishes downloading, I touch /ready.confd. Then, I changed the docker-compose.yml command for my nginx service to this:
/bin/sh -c "while [ ! -f '/ready.confd' ]; do echo 'waiting for container to become ready...'; sleep 1; done; nginx"
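A standalone sketch of that ready-file handshake, assuming both containers share a volume mounted at the same path (here /tmp/ready.confd stands in for /ready.confd so it can be tried outside a container):

```shell
READY=/tmp/ready.confd
rm -f "$READY"

# "confd" side: do the work, then signal completion.
( sleep 1; touch "$READY" ) &   # sleep stands in for the config download

# "nginx" side: block until the signal file appears, then start serving.
while [ ! -f "$READY" ]; do
  echo 'waiting for container to become ready...'
  sleep 1
done
echo 'ready file present, starting nginx'
```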
If you control the entrypoint.sh, which from the looks of your code, you do, it's EASY. If not, then it's harder.
It all boils down to being resilient. By waiting for the domino effect of containers, you are not resilient. You just need to throw in a few safeguards and then you can start them OUT of order and still work. The chaos monkey will not wait for your containers to start up in the right order, you shouldn't either. :)
I am NOT minimizing your request, I just want to help.
Very interesting. I'll try both approaches and see how it goes.
Hello @italomaia,
I just solved this by creating a link and pinging it until it exited:
Something along the lines of:
services:
  container1:
    ...
  container2:
    depends_on:
      - "container1"
    links:
      - "container1:container1"
    command: bash -c "while ping -c1 container1 &>/dev/null; do sleep 1; done; echo 'Container 1 finished!' && ./my_application.sh"
I know my question doesn't fully relate to the original question, but I write it here, because this dilemma made me end up at this issue.
I can't find a clear answer on how to do development using locally mounted volumes right:
I have some code that I mount as a volume from the local machine for development. Something like:
services:
  app:
    volumes:
      - .:/app
    ...
  db:
    ...
It's a TypeScript Node.js project, so I need a few steps to get the dev environment working:
and then run things in parallel to watch:
I thought I could do all these together with a nice docker-compose file, but I can't find a nice way to wait until the compilation and the init steps finish. So I ended up with a solution where both the Dockerfile and the docker-compose file are almost empty, with no commands at all; I only have the containers configured with the necessary env vars. For development I run the database, the app code, the compiler and the tests in separate tabs with commands like:
tab1: docker-compose run --service-ports --use-aliases db
tab2: docker-compose run app yarn tsc:watch
tab3: docker-compose run app yarn test:watch
tab4: docker-compose run app yarn start:debug
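Those four tabs can be collapsed into one script with a small background-job harness. This is a sketch of my own, not something docker-compose provides; the commented-out lines show where the docker-compose commands above would go, and plain `true` stands in for them so the harness itself can be exercised without Docker.

```shell
# Run each long-lived command in the background; tear them all down on Ctrl-C.
run_all() {
  pids=""
  for cmd in "$@"; do
    sh -c "$cmd" &
    pids="$pids $!"
  done
  trap 'kill $pids 2>/dev/null' INT TERM
  wait
}

# Real usage would be:
# run_all \
#   'docker-compose run --service-ports --use-aliases db' \
#   'docker-compose run app yarn tsc:watch' \
#   'docker-compose run app yarn test:watch' \
#   'docker-compose run app yarn start:debug'

run_all 'true' 'true'
echo 'all jobs finished'
```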
I also managed to get a different solution working where I defined the watching services in the compose file, something like:
services:
  init:
    command: >
      sh -c '
        yarn &&
        yarn tsc &&
        yarn db:create
      '
    restart: "no"
    depends_on:
      - db
    ...
  app:
    volumes:
      - .:/app:z
    command: yarn start:debug
    depends_on:
      - init
    restart: on-failure
    ...
  test:
    volumes:
      - .:/app:z
    command: yarn test:watch
    depends_on:
      - init
    restart: on-failure
    ...
  db:
    ...
But then I get about 300 error/warning messages before the whole system gets up and running. If the app and test containers could wait for the init command to finish, it would work smoothly without errors.
Are there any better/recommended ways to support development mode, where one just starts docker-compose and doesn't need to go through these steps manually?
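For what it's worth, later Compose versions (those implementing long-form depends_on from the Compose Specification) added a condition that addresses exactly this wait-for-init problem. A sketch, untested against this particular setup:

```yaml
# Requires a Compose release that supports long-form depends_on.
services:
  init:
    command: sh -c 'yarn && yarn tsc && yarn db:create'
    depends_on:
      - db
  app:
    command: yarn start:debug
    depends_on:
      init:
        condition: service_completed_successfully
  db:
    ...
```

With `service_completed_successfully`, Compose only starts app after init has exited with code 0, so the restart: on-failure churn (and the error spam) goes away.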
I have a different use-case.
I have an extra job being run in a container, and it waits for the relevant containers to start up -- no problem there. And other containers are usually still starting up long after this extra job is completed.
But in the CI deploy, I want the CI job to wait until the extra container has completed successfully before CI declares that the deploy is green. It isn't a 'healthcheck': it does a deeper analysis than is sensible in a healthcheck, plus extra tasks like a db backup.
I am aware of how to solve this, but AFAIK there is no nice solution provided by docker-compose.
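One Compose-level workaround for the CI case is `docker-compose up --exit-code-from SERVICE` (which implies --abort-on-container-exit): Compose exits with the named container's exit code, which CI can inspect. A sketch, where "checks" is a hypothetical name for the extra job; the docker-compose call is commented out so the verdict helper can be exercised without Docker.

```shell
# Turn the propagated exit code into a CI verdict.
report_status() {
  if [ "$1" -eq 0 ]; then
    echo 'deploy is green'
  else
    echo "deploy failed: checks exited with code $1"
  fi
}

# In CI this would be:
# docker-compose up --exit-code-from checks
# report_status "$?"

report_status 0
```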