Compose: Execute a command after run

Created on 5 Aug 2015 · 131 comments · Source: docker/compose

Hi,

It would be very helpful to have something like an "onrun" key in the YAML, to be able to run commands after the container starts. Similar to https://github.com/docker/docker/issues/8860

mongodb:
    image: mongo:3.0.2
    hostname: myhostname
    domainname: domain.lan
    volumes:
        - /data/mongodb:/data
    ports:
        - "27017:27017" 
    onrun:
        - mongodump --host db2dump.domain.lan --port 27017 --out /data/mongodb/dumps/latest
        - mongorestore -d database /data/mongodb/dumps/latest/database

After mongodb starts, it will dump db2dump.domain.lan and restore it.

When I stop and then start the container, the onrun part will not be executed again, to preserve idempotency.

EDIT 15 June 2020

Five years later, Compose wants to "standardize" the specification;
please check https://github.com/compose-spec/compose-spec/issues/84


All 131 comments

I think these should be steps in the Dockerfile

FROM mongo:3.0.2
ADD data/mongodb/dumps/latest /data/mongodb/dumps/latest
RUN mongorestore -d database /data/mongodb/dumps/latest/database

That way you also get it cached when you rebuild.

Thanks @dnephin.
Of course I can write a Dockerfile and use build instead of image, or I can use docker exec.
MongoDB is just an example; you could have the same situation with MySQL and account creation, or with RabbitMQ and queue creation, etc.

onrun would permit flexibility in Compose orchestration: Compose would read the onrun list and run docker exec on each item.

The point is that putting commands to docker exec in docker-compose.yml is unnecessary when you can either do it in the Dockerfile or in the container's startup script, both of which will also make your container more useful when _not_ being run with Compose.

Alternatively, start your app with a shell script or Makefile that runs the appropriate docker and docker-compose commands.

The functionality isn't worth adding to Compose unless it would add significant value over doing either of those, and I don't think it would for the use cases you've cited.

So, to manage my Docker setup, you suggest I use a script or a Makefile. Then why was Compose created? We can already manage, scale, etc. containers with a script || a Dockerfile.

OK, take this example: it's what I use to deploy my application's testing environment in the CI process.

rabbitmq:
    image: rabbitmq:3.5.1-management
    environment:
        RABBITMQ_NODENAME: rabbit
    hostname: rabbitmq
    domainname: domain.lan
    volumes:
        - /data/rabbitmq/db:/var/lib/rabbitmq
    ports:
        - "5672:5672" 
        - "15672:15672"
        - "25672:25672"
        - "4369:4369"

mongodb:
    image: mongo:3.0.2
    hostname: mongo
    domainname: domain.lan
    volumes:
        - /data/mongodb:/data
    ports:
        - "27017:27017" 

appmaster:
    image: appmaster
    hostname: master
    domainname: domain.lan
    environment:
        ...
    ports:
        - "80:80" 
        - "8080:8080"
    links:
        - mongodb
        - rabbitmq

celery:
    image: celery
    hostname: celery
    domainname: domain.lan
    environment:
        ...
    links:
        - rabbitmq

After the containers start, I must provision mongodb and manage queues and accounts in rabbitmq.

What I'm doing today is a script:

#!/bin/bash
PROJECT=appmaster
docker-compose -f appmaster.yml -p "$PROJECT" up -d
docker exec appmaster_rabbitmq_1 rabbitmqctl add_user user password
docker exec appmaster_rabbitmq_1 rabbitmqctl add_vhost rabbitmq.domain.lan
docker exec appmaster_rabbitmq_1 rabbitmqctl set_permissions -p rabbitmq.domain.lan user ".*" ".*" ".*"
docker exec appmaster_mongodb_1 mongodump --host mongo-prd.domain.lan --port 27017 --out /data/mongodb/dumps/latest
docker exec appmaster_mongodb_1 mongorestore -d database /data/mongodb/dumps/latest/database

With an onrun instruction, I could directly run docker-compose -f appmaster.yml -p appmaster up -d,
and the yml file becomes more readable:

rabbitmq:
    ...
    onrun:
        - rabbitmqctl add_user user password
        - rabbitmqctl add_vhost rabbitmq.domain.lan
        - rabbitmqctl set_permissions -p rabbitmq.domain.lan password ".*" ".*" ".*"

mongodb:
    ...
    onrun:
        - mongodump --host mongo-prd.domain.lan --port 27017 --out /data/mongodb/dumps/latest
        - mongorestore -d database /data/mongodb/dumps/latest/database

This would be rather useful and solves a use case.

:+1:

It would make using docker-compose more viable for gated tests as part of a CD pipeline.

:+1:

This is a duplicate of #877, #1341, #468 (and a few others).

I think the right way to support this is #1510 and allow external tools to perform operations when you hit the event you want.

Closing as a duplicate

This would be very useful. I don't understand the argument of "oh you could do this with a bash script". Of course we could do it with a bash script. I could also do everything that Docker-compose does with a bash script. But the point is that there is one single YAML file that controls your test environment and it can be spun up with a simple docker-compose up command.

It is not the remit of Compose to do _everything_ that could be done with a shell script or Makefile - we have to draw a line somewhere to strike a balance between usefulness and avoiding bloat.

Furthermore, one important property of the Compose file is that it's pretty portable across machines - even Mac, Linux and Windows machines. If we enable people to put arbitrary shell commands in the Compose file, they're going to get a lot less portable.

@aanand To be fair, being able to execute a docker exec does not automatically imply x-plat incompatibility.

Apologies - I misread this issue as being about executing commands on the host machine. Still, my first point stands.

I understand your point @aanand. It doesn't seem out of scope to me, since already docker-compose does a lot of the same things that the regular docker engine already does, like command, expose, ports, build, etc. Adding the exec functionality would add more power to docker-compose to make it a true one stop shop for setting up dev environments.

@aanand the main problem for many devs and CI pipelines is to have data very close to the production env, like a dump from a DB. I created this ticket a year ago and nothing has moved in Docker Compose.

So you suggest a Makefile or a bash script just to run some exec commands: https://github.com/docker/compose/issues/1809#issuecomment-128073224

What I originally suggested is onrun (or oncreate), which keeps idempotency: it runs only at the first start. If the container is stopped or paused, a subsequent start will not run onrun (or oncreate) again.

So in the end, my git repository will have a compose file, a Dockerfile, and a Makefile with idempotency management (maybe the Makefile could create a state file). Genius!
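The state-file idea mentioned above can be sketched as a small wrapper script. This is a hypothetical sketch: the marker path and the provisioning commands are placeholders, not part of any proposed Compose feature.

```shell
#!/bin/sh
# Idempotent provisioning sketch: run the one-time commands only on the
# first start, recording completion in a state file (hypothetical path).
STATE_FILE="./.provisioned"

provision() {
    # Placeholder for the real one-time work, e.g.:
    #   docker exec appmaster_mongodb_1 mongorestore -d database /data/mongodb/dumps/latest/database
    echo "provisioning"
}

if [ ! -f "$STATE_FILE" ]; then
    provision && touch "$STATE_FILE"
else
    echo "already provisioned, skipping"
fi
```

Running the script a second time hits the else branch and does nothing, which is exactly the onrun/oncreate semantics being asked for.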

There's a big difference between command, expose, etc. and exec. The first group are container options; exec is a command/API endpoint. It's a separate function, not an option to the create-container function.

There are already a couple of ways to accomplish this with Compose (https://github.com/docker/compose/issues/1809#issuecomment-128059030). onrun already exists. It is command.

Regarding the specific problem of dumping or loading data from a database, those are more "workflow" or "build automation" type tasks, that are generally done in a Makefile. I've been prototyping a tool for exactly those use-cases called dobi, which runs all tasks in containers. It also integrates very well with Compose. You might be interested in trying it out if you aren't happy with Makefiles. I'm working on an example of a database init/load use case.

@dnephin onrun is not a simple command, because you're missing the idempotency.

Let's imagine: create runs on container creation and will never be executed again (dump & restore).

exec:
    create:
        - echo baby
    destroy:
        - echo keny
    start:
        - echo start
    stop:
        - echo bye

If you need more examples:

Thanks for dobi, but if you need to create a tool to enhance Compose, Compose is bad and it's better to use a more powerful tool.

but if you need to create a tool to enhance compose, compose is bad and it's better to use a more powerful tool.

That's like saying "if you need applications to enhance your operating system, your OS is bad". No one tool should do everything. The unix philosophy is do one thing, and do it well. That is what we're doing here. Compose does its one thing "orchestrate containers for running an application". It is not a build automation tool.

That's like saying "if you need applications to enhance your operating system, your OS is bad". No one tool should do everything. The unix philosophy is do one thing, and do it well. That is what we're doing here.

Wow, I think we've reached peak bad faith.

Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. The standard container manifesto was removed. We should stop talking about Docker containers, and start talking about the Docker Platform. It is not becoming the simple composable building block we had envisioned.

So can you guarantee that we will never see "docker compose" written in Go inside the docker monolithic binary, to keep the Unix philosophy? https://www.orchardup.com/blog/orchard-is-joining-docker

To continue towards that original goal, we’re joining Docker. Among other things, we’re going to keep working on making Docker the best development experience you’ve ever seen – both with Fig, and by incorporating the best parts of Fig into Docker itself.

So in short there is no way to do things like loading fixtures with Compose? I have to say I'm surprised.
The official way is to add fixture loading to my production container? Or to write a shell script around my compose file? In the latter case I could also just execute docker run as I did before.

@discordianfish If, somehow, someone would wake up to the fact that CI/CD engineers need to be able to handle lifecycle events and orchestration at least at a very basic level, then who knows, docker/docker-compose may actually make its way out of local development pipelines and testing infrastructure and find a place in more production environments. I'm hopeful whoever is working on the stacks will address these issues, but I won't hold my breath.

After all, what needs to be done at build time may differ from what is needed at runtime, and what is needed at runtime often varies by deployment environment...

It is kind of annoying to make my external scripts aware of whether an up is going to create or start containers...

And those are things some lifecycle hooks + commands + environment variables could help with.

You see it in service management frameworks and other orchestration tools... why not in docker-compose?

You might be interested in https://github.com/dnephin/dobi , which is a tool I've been working on that was designed for those workflows.

@dnephin stop spamming this issue with your tools. We saw your comment before and the answer is the same. A Makefile/bash script is probably better than the nth "my tool enhances docker".

Thank you for your constructive comment. I didn't realize that I had already mentioned dobi on this thread 8 months ago.

If you're happy with Makefile/bash that's great! I'm glad your problem has been solved.

Added a comment related to this topic here: https://github.com/docker/compose/issues/1341#issuecomment-295300246

@dnephin for this one, my comment can be applied:

So sad that this issue has been closed because of some resistance to evolution :disappointed:

The greatest value of having docker compose is standardization

That's the point. If we could "just" write a .sh file or whatever to do the job without using Docker Compose, why does Docker Compose exist? :confused:

We can understand that it is a big job, as @shin- said:

it's unfortunately too much of a burden to support at that stage of the project

:heart:

But you can't just say "make a script", which means "hey, that's too hard, we're not going to make it".

If it's hard to do, just say "your idea is interesting, and it fills some needs, but it's really difficult to do and we don't have the resources to do it at this time... maybe you could develop it and open a pull request" or something like that :bulb:

In #1341, I "only" see a way to write in docker-compose.yml commands like npm install that would be run before or after some events (like container creation), as you would do with docker exec <container id> npm install for example.

Use case

I have a custom NodeJS image and I want to run npm install in the container created from it, with a docker-compose up --build.

My problem is: the application code is not added to the container, it's mounted into it with a volume, defined in docker-compose.yml:

custom-node:
    build: ../my_app-node/
    tty: true
    #command: bash -c "npm install && node"
    volumes:
     - /var/www/my_app:/usr/share/nginx/html/my_app

so I can't run npm install in the Dockerfile because it needs the application code to check dependencies. I described the behavior here: http://stackoverflow.com/questions/43498098/what-is-the-order-of-events-in-docker-compose

To run npm install, I have to use a workaround, the command statement:

command: bash -c "npm install && node"

which is not really clean :disappointed: and which I can't run on Alpine versions (they don't have Bash installed).

I thought that Docker Compose would provide a way to run exec commands on containers, e.g.:

custom-node:
    build: ../my_app-node/
    tty: true
    command: node
    volumes:
     - /var/www/my_app:/usr/share/nginx/html/my_app
    exec:
     - npm install

But it's not, and I think it's really missing!

I expected Compose to be designed for testing, but I'm probably wrong and it's intended more for local development etc. I ran into several other rough edges, like orphaned containers, the unclear relation between project name and path and how it's used to identify ownership, what happens if you have multiple compose files in the same directory, etc. So all in all it doesn't seem like a good fit for CI.
Instead I'm planning to reuse my production k8s manifests in CI by running kubelet standalone. This will also require lots of glue, but at least this way I can use the same declarations for dev, test and prod.

@lucile-sticky you can use sh -c in Alpine.
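Concretely, the earlier workaround works on Alpine images by swapping bash for sh (a sketch of the same hypothetical service):

```yaml
custom-node:
    build: ../my_app-node/
    tty: true
    command: sh -c "npm install && node"
    volumes:
     - /var/www/my_app:/usr/share/nginx/html/my_app
```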

It sounds like what you want is "build automation", which is not the role of docker-compose. Have you looked at dobi?

Two questions:

  • Why is this not the role of Docker Compose?
  • If the point is to have only one tool to rule them all, why would I use another tool to complete a task that Docker Compose is not able to do?

This feature is highly needed!

@lucile-sticky

Why is this not the role of Docker Compose?

Because the role of Compose is clearly defined and does not include those functions.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration

If the point is to have only one tool to rule them all, why would I use an other tool to complete a task that Docker Compose is not able to do?

We don't want to be the one tool to rule them all. We follow UNIX philosophy and believe in "mak[ing] each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features."
It's okay to disagree with that philosophy, but that's how we at Docker develop software.

I created this issue in August 2015; each year someone adds a comment and we loop on the same questions with the same answers (and for sure you will see @dnephin making an ad for his tool).

@shin-

You can't separate "build" and "provision" in orchestration tools.

For example, you may know one of these:

When you configure a service, you have to provision it. If I deploy a Tomcat, I have to provision it with a WAR; if I create a DB, I have to inject data; etc. It doesn't matter how the container is started (let the image maintainer manage that). The main purpose of a "provisioner" in the Compose case is to avoid confusion between "what starts my container" and "what provisions it".

As your quote from the Compose doc says: "With Compose, you use a Compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration."

Unix philosophy? Let me laugh. I point you to the same answer I gave earlier in this issue: https://github.com/docker/compose/issues/1809#issuecomment-237195021 .
Let's see how "moby" evolves in the Unix philosophy.

@shin- docker-compose doesn't adhere to the Unix Philosophy by any stretch of the imagination. If docker-compose adhered to the Unix Philosophy there would be discrete commands for each of build, up, rm, start, stop, etc., and they would each have a usable stdin, stdout, and stderr that behaved consistently. (Says the unix sysadmin with over 20 years of experience including System V, HP-UX, AIX, Solaris, and Linux.)

Let's go back to the overview for compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration.

Ultimately, docker-compose is an orchestration tool for managing a group of services based on containers created from Docker images. Its primary functions are to 'create', 'start', 'stop', 'scale', and 'remove' services defined in a docker-compose.yml file.

Many services require additional commands to be run during each of these lifecycle transitions. Scaling database clusters often requires joining or removing members from a cluster. Scaling web applications often requires notifying a load balancer that you have added or removed a member. Some paranoid sysadmins like to forcibly flush their database logs and create checkpoints when shutting down their databases.

Taking action on state transitions is necessary for most orchestration tools. You'll find it in AWS's tools, Google's tools, foreman, chef, etc. Most of the things that live in this orchestration space have some sort of lifecycle hook.

I think this is firmly in the purview of docker-compose, given that it is an orchestration tool and it is aware of the state changes. I don't feel events or external scripts fit the use case: they're not idempotent, and it's much harder to launch a 'second' service next to Compose just to follow the events. Whether the hooks run inside the container or outside the container is an implementation detail.

At the end of the day there is a real need being expressed by users of docker-compose, and @aanand, @dnephin, and @shin- seem to be dismissing it. It would be nice to see this included on a roadmap.

This type of functionality is currently blocking my adoption of Docker in my testing and production deployments. I would really like to see this get addressed in some fashion rather than dismissed.

I think this will be very useful!

For me the problem is this: app container A, running service 'a', depends on db container B running service 'b'. Container A fails unless b is set up.
I would prefer to use Docker Hub images instead of writing my own Dockerfiles. But this means A fails and no container is created. The only options otherwise are to:

  1. Use B as a base image and create my own Dockerfile.
  2. Let A fail, configure b in a script, and restart A.

I have exactly the same use case as @lucile-sticky.

@lekhnath for my case, I solved it by editing the command option in my docker-compose.yml:

command: bash -c "npm install && node"

But it's soooo ugly T-T

@lucile-sticky It should be noted that this overrides any command set in the Dockerfile of the container, though. I worked around this by mounting a custom shell script using volumes, making the command in my Docker Compose file run that script, and including in it the original CMD from the Dockerfile.
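A minimal sketch of that workaround, assuming the image's original CMD is node; the script path and name are hypothetical:

```yaml
custom-node:
    build: ../my_app-node/
    volumes:
     - /var/www/my_app:/usr/share/nginx/html/my_app
     - ./init.sh:/init.sh        # hypothetical wrapper script
    # init.sh does the one-time setup, then hands off to the original CMD:
    #   npm install
    #   exec node
    command: sh /init.sh
```

Because the script ends with exec, the original process still ends up running as PID 1.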

Why is this issue closed? _write a bash script_ or _use this tool I wrote_ is not a valid reason to close this issue.

This is a very helpful and important feature that is required in a lot of uses case where compose is used.

@dnephin Do you think running init scripts is outside the scope of container-based application deployments? After all, Compose is about "defining and running multi-container applications with Docker".

Has somebody looked at dobi? If you haven't, please do so :)

Guessing nothing ever happened with this. I'd love to see some sort of functionality within the docker-compose file where we could specify when a command should be executed, such as the example @ahmet2mir gave.

Very sad to see this feature not being implemented.

Implement this feature please. I need to automatically install files after docker-compose up, as the folders where the files must be copied are only created after the containers initialize.
Thanks

It is incredible that this feature is not implemented yet!

This is very poor form @dnephin. You have inhibited the implementation of such a highly sought-after feature for what seems mostly self-promotion, and you're not even willing to continue the conversation.

I am sorry, I couldn't think of milder language to put it in: the lack of this feature has added friction to our workflow, as it has for many other developers and teams, and you have been a hindrance to solving this problem.

Oh, let's make it the Unix way then.
_Just_ (multiplex, then) pipe docker-compose up stdin to each container's CMD?
So that such a yaml file

services:
  node:
    command: sh -

would make this work: cat provision.sh | docker-compose up
Containers are for executing things; I don't see a better use of stdin than passing commands along.

An alternative could be:

services:
  node:
    localscript: provision.sh

Although a bit shell-centric, that would solve 99% of provisioning use cases.

Even though there are valid use cases, and plenty of upvotes on this... it's still apparently been denied. Shame as I, like many others here, would find this extremely useful.

Adding my +1 to the large stack of existing +'s

...another +1 here!

I think that if there is such demand for this feature it should be implemented; tools are here to help us reach our objectives, and we should mould them to help us, not to make our lives harder.
I understand the philosophy to which someone adheres, but adding some kind of "hook commands" should not be a problem.

+1 +1

While I wait for this feature, I use the following script to perform a similar task:

docker-start.sh

#!/usr/bin/env bash

set -e
set -x

docker-compose up -d
sleep 5

# #Fix1: Fix "iptable service restart" error

echo 'Fix "iptable service restart" error'
echo 'https://github.com/moby/moby/issues/16137#issuecomment-160505686'

for container_id in $(docker ps --filter='ancestor=reduardo7/my-image' -q); do
  docker exec "$container_id" sh -c 'iptables-save > /etc/sysconfig/iptables'
done

# End #Fix1

echo Done

@reduardo7 Then you might as well drop docker-compose altogether; that way you have one less dependency.

@omeid , you are right! It's a workaround to perform a similar task, sorry!

@reduardo7 No need to apologize, what you have posted is probably going to be useful to some people.
I was just pointing out that original issue still stands and shouldn't have been closed. :)

I understand @dnephin's stance; the functions mentioned here can be replaced with sufficiently different features.

However, if such patterns are used frequently, how about presenting a guide (or some tests) so that others can easily use them?

There seems to be no disagreement that this pattern can be used frequently.

@MaybeS The only disagreement is that @dnephin would rather see his dopey tool promoted than help make docker-compose a better product.

@omeid yes indeed.

Today's example of wanting a way for Compose to do some form of onrun:

version: "3.3"
services:
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    restart: always
    hostname: 'gitlab'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        # NOTE: this URL needs to be right both for users, and for the runner to be able to resolve :() - as its the repo URL that is used for the ci-job, and the pull url for users.
        external_url 'http://gitlab:9090'
        gitlab_rails['gitlab_shell_ssh_port'] = 2224
    ports:
      - '9090:9090'
      - '2224:22'
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    volumes:
    - /var/run/docker.sock:/var/run/docker.sock

and of course, the runner isn't registered - and to do that, we need to

  1. pull the token out of the database in gitlab
  2. run register in the runner container

so instead of defining the deployment of my multi-container application in just docker-compose, I need to use some secondary means - in this case... docs?

export GL_TOKEN=$(docker-compose exec -u gitlab-psql gitlab sh -c 'psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production -t -A -c "SELECT runners_registration_token FROM application_settings ORDER BY id DESC LIMIT 1"')
docker-compose exec gitlab-runner gitlab-runner register -n \
  --url http://gitlab:9090/ \
  --registration-token ${GL_TOKEN} \
  --executor docker \
  --description "Docker Runner" \
  --docker-image "docker:latest" \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock \
  --docker-network-mode  "network-based-on-dirname-ew_default"

mmm, I might be able to hack up something, whereby I have another container that has the docker socket, and docker exec's

what's to bet there is a way ....

for example, I can add:

  gitlab-initializer:
    image: docker/compose:1.18.0
    restart: "no"
    volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ./gitlab-compose.yml:/docker-compose.yml
    entrypoint: bash
    command: -c "sleep 200 && export GL_TOKEN=$(docker-compose -p sima-austral-deployment exec -T -u gitlab-psql gitlab sh -c 'psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production -t -A -c \"SELECT runners_registration_token FROM application_settings ORDER BY id DESC LIMIT 1\"') && docker-compose exec gitlab-runner gitlab-runner register -n --url http://gitlab:9090/ --registration-token ${GL_TOKEN} --executor docker --description \"Docker Runner\" --docker-image \"docker:latest\" --docker-volumes /var/run/docker.sock:/var/run/docker.sock --docker-network-mode  \"simaaustraldeployment_default\""

to my compose file - though I need some kind of loop/wait, as gitlab isn't ready straight away - sleep 200 might not be enough.

so - you __can__ hack some kind of pattern like this directly in a docker-compose.yml - but personally, I'd much rather some cleaner support than this :)

@SvenDowideit onrun already exists, it's entrypoint or cmd.

The entrypoint script for this image even provides a hook for you. $GITLAB_POST_RECONFIGURE_SCRIPT can be set to the path of a script that it will run after all the setup is complete (see /assets/wrapper in the image). Set the env variable to the path of your script that does the psql+register and you're all set.

Even if the image didn't provide this hook, it is something that can be added fairly easily by extending the image.

though I need some kind of loop/wait, as gitlab isn't ready straight away - sleep 200 might not be enough.

This would be necessary even with an "exec-after-start" option. Since the entrypoint script actually provides a hook I think it's probably not necessary with that solution.

nope, I (think) you've missed a part of the problem I'm showing:

in my case, I need access into both containers, not just one - so entrypoint / command does _not_ give me this.

GL_TOKEN comes from the gitlab container, and is then used in the gitlab-runner container to register.

so the hack I'm doing is using the docker/compose image to add a third container - this is not something you can solve by modifying one container's config/entrypoint/settings, and is a (trivial) example of multi-container co-ordination that needs more.

I've been working on things to make them a little more magical - which basically means my initialisation container has some sleep loops, as it takes some time for gitlab to init itself.

TBH, I'm starting to feel that using a script, running in an init-container that uses the compose file itself and the docker/compose image, _is_ the right way to hide this kind of complexity - for the non-production "try me out, and it'll just work" situations like this.

_IF_ I were to consider some weird syntactical sugar to help, perhaps I'd go for something like:

gitlab-initializer:
    image: docker/compose:1.18.0
    restart: "no"
    volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ./gitlab-compose.yml:/docker-compose.yml
    entrypoint: ['/bin/sh']
    command: ['/init-gitlab.sh']
    file:
      path: /init-gitlab.sh
      content: |
            for i in $(seq 1 10); do
                GL_TOKEN=$(docker-compose -f gitlab-compose.yml -p sima-austral-deployment exec -T -u gitlab-psql gitlab sh -c 'psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production -t -A -c "SELECT runners_registration_token FROM application_settings ORDER BY id DESC LIMIT 1"')
                ERR=$?
                export GL_TOKEN
                echo "$i: token($ERR) == $GL_TOKEN"

                if [[ "${#GL_TOKEN}" == "20" ]]; then
                    break
                fi
                sleep 10
            done
            echo "GOT IT: token($ERR) == $GL_TOKEN"

            for i in $(seq 1 10); do
                if  docker-compose -f gitlab-compose.yml  -p sima-austral-deployment exec -T gitlab-runner \
                    gitlab-runner register -n \
                    --url http://gitlab:9090/ \
                    --registration-token ${GL_TOKEN} \
                    --executor docker \
                    --description "Docker Runner" \
                    --docker-image "docker:latest" \
                    --docker-volumes '/var/run/docker.sock:/var/run/docker.sock' \
                    --docker-network-mode  "simaaustraldeployment_default" ; then
                        echo "YAY"
                        break
                fi
                sleep 10
            done

ie, like cloud-init: http://cloudinit.readthedocs.io/en/latest/topics/examples.html#writing-out-arbitrary-files

but when it comes down to it - we _have_ a solution to co-ordinating complicated multi-container things from inside a docker-compose-yml.

If you're able to set a predefined token, you could do it from an entrypoint script in gitlab-runner. Is there no way to set that ahead of time?

@dnephin The moment you mention script, you're off the mark by a light year and then some.

onrun is not the same as entrypoint or cmd.

The entrypoint/cmd is for configuring the executable that will run as the container's init/PID 1.

The idea mentioned in this and many related issues is about init scripts: not init in the sense of booting, but application init scripts, such as database setup.

@dnephin it'd probably be more useful if you focused on the general problem-set, rather than trying to work around a specific container-set's issues.

From what I've seen though, no, it's a generated secret - but in reality, this is not the only multi-container co-ordination requirement even this small play system is likely to have - it's just the fastest one for me to prototype in public.

How is it possible that we have been able to override entrypoint and command in a compose file since v1 (https://docs.docker.com/compose/compose-file/compose-file-v1/#entrypoint) and still don't have a directive such as onrun to run a command when the containers are up?

TBH, I don't really think onrun is plausible - Docker, or the orchestrator, doesn't know what "containers are all up" means - in one of my cases, the HEALTHCHECK will fail until after I do some extra "stuff" where I get info from one container, and use it to kick off some other things in other containers.

And _if_ I grok right, this means I'm basically needing an Operator container, which contains code that detects when some parts of the multi-container system are ready enough for it to do some of its job, (rinse and repeat), until it has either completed its job and exits, or perhaps even monitors things and fixes them.

And this feels to me like a job that is best solved (in docker-compose) by a docker-compose container with code.

I'm probably going to play with how to then convert this operator into something that can deal with docker swarm stacks (due to other project needs).

I'm not entirely sure there is much syntactic sugar that could be added to docker-compose, unless it's something like marking a container as "this is an operator, give it magic abilities".

It's clear that the developers don't want to listen to users.. I'll look at some other tool... docker-compose is a big pain.. I do not understand why you can't understand that the only useful thing that comes from docker-compose is a build tool... I spent a lot of time searching for HOW I can run a SIMPLE command to add permissions inside of a container for the active user..

It seems that docker-compose is simply in a NOT DONE state...

I too want something that will onrun in my compose file

__BUT__, neither containers, nor compose have a way to know what onrun means. This is why the operator pattern exists, and why I made the examples in https://github.com/docker/compose/issues/1809#issuecomment-362126930

it __is__ possible to do this today - in essence, you add an onrun service that waits until whatever other services are actually ready to interact with (in gitlab's case, that takes quite a bit of time), and then do whatever you need to do to co-ordinate things.

If there _is_ something that doesn't work with that, please tell us, and we'll see if we can figure out something!

I too want something that will onrun in my compose file

BUT, neither containers, nor compose have a way to know what onrun means.

As I see it, onrun per service means running when the container's first process starts. In a large number of cases, the container is only running one process anyway, as this is the recommended way of running containers.

The issue of cross-platform support was solved earlier, as the command can be completely OS agnostic through docker exec, in the same way that RUN does not have to mean a linux command in Dockerfile.
https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/manage-windows-dockerfile

Still waiting for onrun feature

I need this onrun feature too; I thought it was in this tool. Because of this missing feature I now need to maintain 2 scripts, man.

Guys, what if I made a wrapper around this docker-compose and allow this onrun feature? Would you guys use it?

@wongjiahau may be something like this? https://github.com/docker/compose/issues/1809#issuecomment-348497289

@reduardo7 Yes, I thought of wrapping it inside a script called docker-composei, and with the docker-composei.yml which contain the onrun attribute.
Btw, docker-composei means docker-compose improved.

The real solution is probably to build a 'Orchestrator' image that runs and manages (via bash scripts) the 'App Images' (possibly using docker) internally. Otherwise we will always be asking for more features for a tool that "isn't meant to do what we want it to do".

So we should even consider Docker within Docker...

just to add my support for this proposed feature. onrun does make sense, but to broaden the potential utility and future proof it a bit, perhaps someone needs to look at a more broader 'onevent' architecture, one of which would be onrun.

Given the prevailing direction for containers to be self-contained, one service per container, the container must be self-sufficient in terms of its operating context awareness. What flows from that is that the compose file should be the medium for defining that, not bolt-on scripts. Hard to argue against that, unless you are some self-absorbed zealot.

In my case my redis containers load lua scripts after the redis server has started. In a normal non-container environment I get systemd to run a post-startup script. Simple and consistent with systemd architecture. A similar principle should exist for compose, given its role in setting up the context for the containers to run.
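The redis case above can be sketched as a wrapper entrypoint playing the role of systemd's post-startup hook. This is only a sketch with runnable stand-ins: with redis you would substitute `redis-server`, an `until redis-cli ping` readiness loop, and `redis-cli SCRIPT LOAD` for each lua file.

```shell
#!/bin/sh
# Start the long-running process in the background, wait until it is ready,
# run the post-startup hook, then block on the server to keep the container up.
set -e

READY_FILE="$(mktemp -u)"

# stand-in "server": becomes ready after 1s, then keeps running for a bit
( sleep 1; touch "$READY_FILE"; sleep 1 ) &
SERVER_PID=$!

# stand-in readiness probe (real version: until redis-cli ping; do sleep 1; done)
until [ -e "$READY_FILE" ]; do sleep 1; done

# post-startup hook (real version: redis-cli SCRIPT LOAD "$(cat /scripts/foo.lua)")
HOOK_RAN=1
echo "post-startup hook ran"

# keep the script alive as long as the server runs
wait "$SERVER_PID"
```

The obvious caveat, raised elsewhere in this thread, is that the server is then no longer the container's PID 1, so signal handling needs extra care.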

As a general advice to the maintainers, please focus on proven operating principles not personal preferences.

so the solution (after reading all this thread) is to use a bash script to do the job... in that case i'll remove docker-compose (we can do everything with the docker cmd...)

thx devs for listening to the people who are using your things :)

Seeing the amount of messages containing arguments and counterarguments fighting simple propositions (such as having an onrun event), my first honest impression is that Github Issues has turned into a place where _owners_ (project developers) showcase their egos and smartness by using their knowledge and technical jargon to oppose intelligent contributions from the users.

Please, let's make Open Source truly _open_.

any updates on this feature? what is the problem?

@v0lume I'm guessing you didn't bother to actually read the responses throughout this thread

There still doesn't seem to be a solution... I'd like to share a hacky workaround though.
By specifying version "2.1" in the docker-compose.yml you can abuse the healthcheck test to run additional code inside the image when it is started. Here is an example:

version: '2.1'
services:
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:5.4.3
        healthcheck:
            test: |
                curl -X PUT elasticsearch:9200/scheduled_actions -H "Content-Type: application/json" -d '{"settings":{"index":{"number_of_shards":'1',"number_of_replicas":'0'}}}' &&
                curl --silent --fail localhost:9200/_cat/health ||
                exit 1
            interval: 11s 
            timeout: 10s 
            retries: 3
        environment:
            - discovery.type=single-node
            - ES_JAVA_OPTS=-Xms1g -Xmx1g
            - xpack.security.enabled=false
    main:
        image: alpine
        depends_on:
            elasticsearch:
                condition: service_healthy

If the healthcheck-test script you write exits with code >=1 it might get executed multiple times.
The healthcheck of a service will only be executed if another service depends on it and specifies the service_healthy condition as seen in the example.

I like @T-vK's approach and have used it successfully before. But I'd like to share another ... hack:

# Run Docker container here

until echo | nc --send-only 127.0.0.1 <PORT_EXPOSED_BY_DOCKER>; do
  echo "Waiting for <YOUR_DOCKER> to start..."
  sleep 1
done

# Do your docker exec stuff here

+1
I totally agree on this because the feature is needed and it is already implemented by other docker orchestrators like kubernetes. It already has lifecycle hooks for containers and is documented here.

But let me contribute a use case that you cannot resolve with Dockerfiles.

Let's say you need to mount a volume at runtime and create a symbolic link from your container to the volume without previously knowing the exact name of the directory. I had a case where the dir name was dynamic depending on the environment I was deploying on, and I was passing it as a variable.

Sure I found a workaround to solve this and there is more than one. On the other hand hooks would give me the flexibility and a better approach to dynamically make changes without the urge to hack things and replace the Dockerfile.
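To make that use case concrete, a hypothetical hook body for the symlink case could look like the sketch below. The directory names are made up, and the demo uses temp dirs so it can run anywhere; in a real entrypoint you would call the helper and then exec the main process.

```shell
# Link a stable path to a dynamically named directory inside a mounted volume.
set -eu

link_dynamic_dir() {
    volume_root="$1"; data_dir="$2"; stable_path="$3"
    # -s symbolic, -f replace an existing link, -n do not follow it
    ln -sfn "${volume_root}/${data_dir}" "${stable_path}"
}

# demo with temp dirs so the sketch runs anywhere
root="$(mktemp -d)"
mkdir -p "${root}/region-eu-2020"                    # the dynamic directory name
link_dynamic_dir "${root}" "region-eu-2020" "${root}/current"
readlink "${root}/current"
```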

I'm glad to have found this issue. I have been toying around with Docker and Docker compose for a couple of years. Now I was seriously hoping to use it as a tool to start scaling a system. I will check back every year or two, but based on the attitude of the project maintainers, I will simply get by using either scripts or some other tool. Glad to not have invested much time and to have found this one out early on.

Pro tip: if someone who's just starting to move their workflow across to this type of tool is already in need of what's described here, it might be worth re-thinking 'why' you're building this. Yes, you're successful, but it's because people used the thing in the first place, and you were probably super open to giving them what they needed.

All the best.

I'm able to give you whatever you want (except my girlfriend) if this feature is implemented, and I will be the happiest person in the whole universe :)

just to add my support for this proposed feature. onrun does make sense, but to broaden the potential utility and future proof it a bit, perhaps someone needs to look at a more broader 'onevent' architecture, one of which would be onrun.

That'd be nice.

To add to this, given the following:

services:
    web:
        image: node:8-alpine
        depends_on:
            - db
    db:
        image: postgres:alpine
        onrun: "echo hi"

would it be too much to add cross-event scripts?

    web:
        events:
            db_onrun: "connectAndMigrate.sh"

In my opinion, adding this to docker-compose is straightforward not only for you, who wrote the compose file and compose stack, but also for the other developers on your team.

  • Using separate containers - everyone should know that they should run them.
  • Write a custom Dockerfile - we have around 20 services, and I would have to override the Dockerfile for every service just to run some command.

We need to install and configure mkcert, for instance, on every environment to have trusted certificates. It is not a part of the container or Dockerfile, as it is not needed on stage/production. What is the proper approach here to install the tool so that everybody who is using the compose file doesn't even need to know what is going on behind the scenes?

Adding another use case:

Needed a wordpress instance. Wrote my docker-compose.yaml. docker-compose up – Oops! Need to set the file permissions of the plugins directory... Can't find any other way to make it work, gotta set the permissions after the container is running because I'm binding some files from the host and it seems the only way to fix the fs permissions is by doing chown -Rf www-data.www-data /var/www/wp-content from inside the container. Write my own Dockerfile and build, just for this? That seems stupid to me.

Fortunately for me, the healthcheck hack provided above allowed me to implement this. I see other pages on the web talking about the issue of settings permissions on docker volumes, but the suggested solutions didn't work.
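For what it's worth, the same ownership fix can also live entirely in the compose file by overriding the entrypoint. This is only a sketch: it assumes the official wordpress image's docker-entrypoint.sh and apache2-foreground, so check the entrypoint of your actual image tag before relying on it.

```yaml
services:
    wordpress:
        image: wordpress:latest
        volumes:
            - ./wp-content:/var/www/wp-content
        # fix bind-mount ownership first, then hand off to the image's own
        # entrypoint so the web server still ends up as PID 1
        entrypoint: >
            sh -c 'chown -R www-data:www-data /var/www/wp-content
            && exec docker-entrypoint.sh apache2-foreground'
```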

Glad to see that these gatekeepers, @dnephin, @aanand, @shin-, are getting a ton of heat for this. It really speaks volumes when an entire community screams as loudly as possible, and the core developers just sit back, hold their ground, and refuse to listen. So typical too. Let us count not just the number of thumbs up, but also the 34 users who replied to say that this is needed:
01) sshishov
02) fescobar
03) sandor11
04) web-ted
05) v0lume
06) webpolis
07) Skull0ne
08) usergoodvery
09) wongjiahau
10) MFQ
11) yosefrow
12) bagermen
13) daqSam
14) omeid
15) dantebarba
16) willyyang
17) SharpEdgeMarshall
18) lost-carrier
19) ghost
20) rodrigorodriguescosta
21) datatypevoid
22) dextermb
23) lekhnath
24) lucile-sticky
25) rav84
26) dopry
27) ahmet2mir
28) montera82
29) discordianfish
30) jasonrhaas
31) fferraris
32) hypergig
33) sunsided
34) sthulb

And the number who said no? A whopping 3:
01) dnephin
02) aanand
03) shin-

Hmmm... 34 to 3...

@rm-rf-etc good analytics... I don't even think @dnephin or @aanand are working on docker-compose anymore. With luck, Docker is planning to deprecate compose in favor of stacks and there won't be a team left here to complain about and we'll start seeing forward progress on the product again.

Adding another use case:

Needed a wordpress instance. Wrote my docker-compose.yaml. docker-compose up – Oops! Need to set the file permissions of the plugins directory... Can't find any other way to make it work, gotta set the permissions after the container is running because I'm binding some files from the host and it seems the only way to fix the fs permissions is by doing chown -Rf www-data.www-data /var/www/wp-content from inside the container.

In this case, you could also set the user property in your Compose file

Write my own Dockerfile and build, just for this? That seems stupid to me.

Seems like you've formed a strong opinion ; but realistically, there'd be nothing "stupid" about writing a Dockerfile to modify a base image to fit your needs. That's the original intent of all base images.

Fortunately for me, the healthcheck hack provided above allowed me to implement this. I see other pages on the web talking about the issue of settings permissions on docker volumes, but the suggested solutions didn't work.

Glad to see that these gatekeepers, @dnephin, @aanand, @shin-, are getting a ton of heat for this.

Yeah, good attitude mate. :D


@rm-rf-etc good analytics... I don't even think @dnephin or @aanand are working on docker-compose anymore.

Yeah, it's been a few years now - no need to keep pinging them on old issues.

With luck, Docker is planning to deprecate compose in favor of stacks and there won't be a team left here to complain about and we'll start seeing forward progress on the product again.

🙄

@shin- but you just pinged it with that response

I recently ran into this issue again, and even though it can be done as seen in my workaround, this only works if you specify version 2.1, which stinks imo.

It's just mind-boggling to me that the official stance seems to be that you should create your own docker images for everything.
To me this is literally like saying "If you want to change a setting in any program, you have to modify the source code and recompile it.".
Every time you add a new service or you want to upgrade to a newer version of, for example, the MongoDB or MySQL Docker image, you'd have to make a new Dockerfile, build it and potentially push it into your registry.
This is a massive waste of time and resources compared to how it would be if you could just change image: mongo:3.0.2 to image: mongo:3.0.3 in your docker-compose.yml.
I'm not ranting about long build times, I'm ranting about the fact that you have to bother with Dockerfiles and docker build when all you want is to update or change a parameter of a service that is potentially not even meant to be used as a base image.

And the argument that every application should do one thing and one thing only really stinks too. This is not even about implementing a completely new feature; this is just about passing another parameter through to docker. It also begs the question why docker run, docker build, docker exec, docker pull etc. are all part of the same application. The argument sounds kind of hypocritical now, doesn't it?

@shin-, I followed your link and I don't see how the user property is relevant to setting the owner of a bind mounted directory. Seems to be related to ports.

Re: attitude: Looks like people agree with me, so take it as strong feedback. Sorry if you don't like how I'm expressing this, but it just really seems that the user demands are being ignored, so what else do you expect?

I came here hoping for functionality such as the suggested onrun:, as I am only two days into using compose, and to me a tool like this should have this functionality.

Going back to my docker files to update each with a separate script for the features seems redundant. I merely want to inject a token from another container into an environment variable; where my Dockerfile was flexible before, it is now tightly coupled to the docker-compose.yml for a simple purpose.

Damn, I read the entire thread hoping to find the answer "ok guys, we finally realized that this is cool and we will implement it". Sad to see this didn't move forward.
+1 to onrun!

@fabiomolinar, There is one sort of solution, that we use extensively in our production swarms, but it's not quite as nice as having an event.

We use the following anchor

#### configure a service to run only a single instance until success
x-task: &task
  # for docker stack (not supported by compose)
  deploy:
    restart_policy:
      condition: on-failure
    replicas: 1
  # for compose (not supported by stack)
  restart: on-failure

to repeat tasks until they're successful. We create containers for migrations and setup tasks that have idempotent results and run them like this in our local compose and in our stacks.

The service which depends on the task needs to fail somewhat gracefully if the configuration work isn't complete. In most cases as long as you're okay with a few errors banging out to end users, this gives you an eventual consistency that will work well in most environments.

It also assumes your service containers can work with both pre and post task completion states. In use-cases like database migrations, dependent services should be able to work with both pre-and post-migration schemas.. obviously some thought must be put into development and deployment coordination, but that is a general fact of life for anyone who is doing rolling updates of services.

@fabiomolinar, here is an example of how we use this approach in our compose services...

#### configure a service to run only a single instance until success
x-task: &task
  # for docker stack (not supported by compose)
  deploy:
    restart_policy:
      condition: on-failure
    replicas: 1
  # for compose (not supported by stack)
  restart: on-failure

#### configure a service to always restart
x-service: &service
  # for docker stack (not supported by compose)
  deploy:
    restart_policy:
      condition: any
  # for compose (not supported by stack)
  restart: always

services: 
  accounts: &accounts
    <<: *service
    image: internal/django
    ports:
      - "9000"
    networks:
      - service
    environment:
      DATABASE_URL: "postgres://postgres-master:5432/accounts"
      REDIS_URL: "hiredis://redis:6379/"

  accounts-migrate:
    <<: *accounts
    <<: *task
    command: ./manage.py migrate --noinput

Thanks for pointing that out @dopry. But my case was somewhat simpler. I needed to get my server running and then, only after it's up and running, I needed to do some deployment tasks. Today I found a way to do that by doing some small process management within one single CMD line. Imagine that the server and deploy processes are called server and deploy, respectively. I then used:

CMD set -m; server & deploy && fg server

The line above turns on bash's monitor mode, then starts the server process in the background, then runs the deploy process, and finally brings the server process back to the foreground to avoid having Docker kill the container.
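A runnable stand-in for that CMD pattern, with sleep/echo in place of the real server and deploy processes (`wait` is used below; it blocks like `fg`, just without re-foregrounding the job):

```shell
# "server" runs in the background while "deploy" proceeds; afterwards we block
# on the server so the script (and hence the container) stays alive until it exits.
set -e

MARKER="$(mktemp -u)"
( sleep 1; touch "$MARKER" ) &      # stand-in "server"
SERVER_PID=$!

echo "deploy runs while the server is up"   # stand-in "deploy"

wait "$SERVER_PID"                  # the real CMD uses fg to re-foreground it
```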

While we discuss this, does anyone have any tips on how to run a command in a container or on the host upon running docker-compose up?

I understand that running any command on the host would compromise the layers of security, but I just would like to rm a directory prior to or during startup of a container. The directory is accessible on both the host and the container. I don't want to make a custom Docker image or have a script that first does the rm and then runs docker-compose.

Thanks!

@fabiomolinar, the approach you propose violates a few 12-factor app principles. If you're containerizing your infrastructure, I'd strongly recommend adhering closely to them.

Some problems that could arise from your approach

  1. slow container start-up.
  2. when scaling a service with the container, deploy will run once for every instance, potentially leading to some interesting concurrency problems.
  3. harder to sort logs from the 'task' and service for management and debugging.

I did find the approach I am recommending counter-intuitive at first. It has worked well in practice in our local development environments under docker-compose, docker swarms, and mesos/marathon clusters. It's also effectively worked around the lack of 'onrun'.

The approach I have used is indeed very ugly. I used it for a while just to get my dev environment running. But I have changed that already to use entrypoint scripts and the at command to run scripts after the server is up and running. Now my container is running with the correct process as the PID 1 and responding to all signals properly.

We still need this. I can't find a way to execute my database roll-ups after a successfully started container without making a bunch of Makefiles.

@victor-perov create another container for the roll-up task and execute it as a separate service

Here are some snippets from one of our projects to show a task service to run a database migration.

x-task: &task
  # run once deploy policy for tasks
  deploy:
    restart_policy: 
      condition: none
    replicas: 1

service:
  automata-auth-migrate:
    <<: *automata-auth
    <<: *task
    # without the sleep it can't look up the host postgres. maybe the command is run before the network setup is complete.
    command: sleep 5 && python /code/manage.py migrate --noinput

Well, this is the fourth year this discussion has been stretched to. So let me add my +1 to this use case of a need for onrun. P.S.: I should've bought popcorn for the whole thread.

I, too, would think onrun or equivalent (post-run?) is a must. Adding a wrapper script and doing docker exec into the container is just... ugly.

IMO docker compose was a great container orchestration MVP to convince people that managing containers can be easy. Maybe we, the community, should consider it to be in "maintenance mode" as production-ready orchestration solutions (i.e. kubernetes) have proliferated. When you have advanced features like container dependencies, combined with absent features such as "exec this thing after the container is up", it seems to fit the narrative that the pace of development has simply plateaued. At the very least, it is not obvious that this feature _should be_ considered out of scope.

You cannot do everything easily with Dockerfile. Let's say you want to add your own script to a container.
For example take the mysql container and try to add a simple script to call an API in case of some event.
You can do it either by:

  • Changing the Dockerfile of mysql and add your own script to the container before the entrypoint. You cannot add a CMD in the Dockerfile, since it would be an argument to the ENTRYPOINT.
  • You can run the container and then copy your script to the running container and run it [docker cp, docker exec].

So that's why I also think a feature like onrun is beneficial since changing the Dockerfile is not always enough.
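For the MySQL example specifically there is a middle ground worth noting: the official mysql image executes any *.sh/*.sql files mounted into /docker-entrypoint-initdb.d when the data directory is first initialized, so a script can be attached straight from the compose file without rebuilding anything. The script name below is made up, and the hook fires only on first init, not on every start:

```yaml
services:
    db:
        image: mysql:8.0
        environment:
            MYSQL_ROOT_PASSWORD: example
        volumes:
            # runs once, when the data directory is first initialized
            - ./call-api.sh:/docker-entrypoint-initdb.d/10-call-api.sh:ro
```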

Damn, why is this closed? Consider the situation when you are using an official docker image, like Cassandra, and you need to load a schema after it's started... You have to implement your own bash script solution for this... ugh, this is ugly

@somebi looks like compose is closed...

Just my two cents: I landed here because I am currently having to enable Apache modules manually every time I start the container (SSL isn't enabled by default in the Docker Hub wordpress image). Not the end of the world but was hoping to run a couple of commands whenever it goes up so I can just seamlessly take the containers up and down without having to bash in.

Just my two cents: I landed here because I am currently having to enable Apache modules manually every time I start the container (SSL isn't enabled by default in the Docker Hub wordpress image). Not the end of the world but was hoping to run a couple of commands whenever it goes up so I can just seamlessly take the containers up and down without having to bash in.

Well, this could be easily resolved if you build a new image based on the wordpress image that has the modules you need enabled. Then use that instead, e.g. with a Dockerfile:

FROM wordpress:php7.1
RUN a2enmod ssl

Another solution would be to download the wordpress Dockerfile and add the module activation in it, then produce a new image for yourself using docker build. E.g. this is the Dockerfile for wordpress 5.2 with php 7.1:

wordpress dockerfile

you may enable more modules in line 63 or run ssl generation.

All this is not what I think we are discussing here. The problem is creating dynamic hooks in the container lifecycle, like when it starts, ends, etc.

This would be a nice addition to docker-compose !

Answers like the ones on this thread are the reason Kubernetes is keeping "all" the money Docker (technology) is producing, and it's not a bad thing. Hopefully someone will buy Docker (company) soon and change the way community proposals/requests are welcomed/analysed...

Answers like the ones on this thread are the reason Kubernetes is keeping _"all"_ the money Docker (technology) is producing, and it's not a bad thing. Hopefully someone will buy Docker (company) soon and change the way community proposals/requests are welcomed/analysed...

I wrote a similar critique, without any offensive statement (it was along the lines of _open source projects which are not entirely open source, whose maintainers defiantly ignore arguments without any other reason than showing how much technical jargon they possess_), it got plenty of support, and the message was removed.

That shows what kind of arrogant persons are behind this.

When your community demands something for 4 years and you (Docker) close your eyes it shows that you're not looking in the same direction as them :/

And now docker gave up and sold out.
Because they could not listen... they lost.

Shame - but hey ho.

It's a real shame that something like this doesn't exist. I would've loved to have been able to create onFailure hooks, which could take place when the health checks fail.

i.e.

services:
  app:
    image: myapp:latest
    hooks:
      onFailure:
        - # Call a monitoring service (from the host machine) to tell it that the service is offline.

This would be useful for times where the application doesn't bind to a socket/port. Kubernetes is probably the way to go, here, but this is a fairly large infrastructure change and overkill for a very small environment.

Edit:
To get around this, I ended up updating the entrypoint of my container to "wrap" the monitoring functionality. i.e.

#!/bin/bash
# /app/bin/run_with_monitor
set -eE

updateMonitoringSystem() {
 # do something here... This is run from the container, though, unfortunately.
 if [[ $? -ne 0 ]]; then
  : # Failed!
 else
  : # All is good!
 fi
}

trap 'updateMonitoringSystem' EXIT

"$@"
# Dockerfile
....
CMD ["/app/bin/run_with_monitor", "./my-app"]

Still, it'd be nice to do this _without_ having to modify the image.

:man_shrugging: Came looking for this basic functionality, that the competitor (Kubernetes) has, and instead I found a dumpster fire.

It's a real shame, now I have to maintain separate docker images for testing locally.

Happy new year :roll_eyes:


@LukeStonehm same here. Needed to do ONE command after the container was stood up but instead was treated with hot garbage. I really don't feel like managing my own images and docker files when an official image gets me 90% or more of the way there.

A significant amount of programs rely on certain services to exist on startup. For example a MySQL or MongoDB database.

Therefore there is no sane way to use docker-compose in these cases.

Instead users are expected to:

  • Learn how to write Dockerfiles (and programming)
  • Learn how to build Docker images
  • Create Dockerfiles inheriting from the original images, adding code to make sure the containers wait for each other
  • Regularly check for security updates of the base images
  • Regularly modify the Dockerfiles to apply the updates
  • Regularly build Docker images from those Dockerfiles

And this sucks because:

  • You waste massive amounts of time learning stuff you may not even need otherwise
  • You regularly waste hardware resources on building and storing Docker images yourself or even on uploading/downloading (pulling/pushing) them
  • You regularly waste time on writing those Dockerfiles, building them, testing them, fixing them etc...
  • You potentially compromise the security of your images because you don't know what you are doing
  • You lose the ability to run only officially verified/signed Docker images

If we had a startup check, all of this wouldn't be necessary and we could simply change image: mysql:8.0.18 to image: mysql:8.0.19 whenever we want and be done!

Realistically this is what's currently happening in the real world:

  • People create their own Dockerfiles making changes so that they work with docker-compose
  • They build their images once
  • And don't patch them regularly
  • Hackers get happy

And you can't say that docker-compose is only supposed "to do one thing" because it already does pretty much everything. Including pulling and building images even more importantly, specifying dependencies using the depends_on property. This is not even about implementing a completely new feature this is just about passing another parameter through to docker.

@binman-docker @crosbymichael @dmcgowan @ebriney @ehazlett @eunomie @guillaumerose @jeanlaurent @justincormack @lorenrh @manishtomar @olegburov @routelastresort @spencerhcheng @StefanScherer @thaJeztah @tonistiigi @ulyssessouza @aiordache @chris-crone @ndeloof
Please reconsider this feature or let's at least have a proper discussion about this.

The task service technique works pretty well for me at this juncture, but it does have its idiosyncrasies. We've applied the pattern extensively in our compose files for migrations and application initialization, but I do agree that a better 'depends_on' that waited on a successful health check or a successful exit/task completion would make many tasks easier and more reliable.
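For reference, a condition-based depends_on of the kind described here does exist in the v2.1 compose file format and was later restored in the Compose Specification (it was dropped in the v3 format). A minimal sketch, with illustrative service names and health check command:

```yaml
services:
  app:
    image: myapp            # illustrative
    depends_on:
      db:
        condition: service_healthy   # wait for db's healthcheck to pass
  db:
    image: mysql:8.0
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1"]
      interval: 5s
      timeout: 3s
      retries: 10
```

The spec also defines `condition: service_completed_successfully` for one-shot task containers such as migrations.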

This would really be a helpful addition.

I think it's worth emphasizing that Kubernetes has this functionality through lifecycle postStart.
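For comparison, this is what the equivalent looks like in a Kubernetes pod spec (names are illustrative); the `postStart` hook runs inside the container immediately after it is created:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo      # illustrative
spec:
  containers:
    - name: app
      image: nginx
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "echo started > /tmp/started"]
```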

k8s != docker-compose. Wrong channel

Sorry for not being clear, but my point was: Kubernetes supports this, and because Kubernetes and Docker compose have many of the same use cases/purposes, that would be an argument for having it in compose. Sorry if I was unclear.

Good news!!

I think docker has heard us, (on this issue and a few others). https://www.docker.com/blog/announcing-the-compose-specification/

Let's try to work on the specification there to fulfill the community needs. We can try to make this an open and friendly community with this restart.

Has anyone suggested this change yet? Mailing list isn't available yet so I think the next best place is here: https://github.com/compose-spec/compose-spec

I don't see an issue that describes this problem but not sure if that's the right place...

Edit: I opened an issue at https://github.com/compose-spec/compose-spec/issues/84. Please upvote it to show your support for the feature!

You can use the HEALTHCHECK to do something else like following example:

Code

Dockerfile

FROM ubuntu

COPY healthcheck.sh /healthcheck.sh
RUN chmod a+x /healthcheck.sh

HEALTHCHECK --interval=5s CMD /healthcheck.sh

CMD bash -c 'set -x; set +e; while true; do cat /test.txt; sleep 3; done'

healthcheck.sh

#!/usr/bin/env bash

set -e

FIRST_READY_STATUS_FLAG='/tmp/.FIRST_READY_STATUS_FLAG'

# Health check

echo 'Run command to validate the container status HERE'

# On success
if [ ! -f "${FIRST_READY_STATUS_FLAG}" ]; then
  # On first success...
  touch "${FIRST_READY_STATUS_FLAG}"

  # Run ON_RUN on first health check ok
  if [ ! -z "${DOCKER_ON_RUN}" ]; then
    eval "${DOCKER_ON_RUN}"
  fi
fi
  1. Run the _health check_.

    • If it fails, the script exits with code 1.

    • If the _health check_ succeeds, the script continues.

  2. If this is the first successful _health check_ and the DOCKER_ON_RUN environment variable is set, execute it.

Example

docker-compose.yml

version: "3.7"

services:
  test:
    build:
      context: .
    image: test/on-run
    environment:
      DOCKER_ON_RUN: echo x >> /test.txt

You can use DOCKER_ON_RUN environment variable to pass a custom command to execute after run.

Execution result

docker-compose build
docker-compose up

Output:

Creating network "tmp_default" with the default driver
Creating tmp_test_1 ... done
Attaching to tmp_test_1
test_1  | + set +e
test_1  | + true
test_1  | + cat /test.txt
test_1  | cat: /test.txt: No such file or directory
test_1  | + sleep 3
test_1  | + true
test_1  | + cat /test.txt
test_1  | cat: /test.txt: No such file or directory
test_1  | + sleep 3
test_1  | + true
test_1  | + cat /test.txt
test_1  | x
test_1  | + sleep 3
test_1  | + true
test_1  | + cat /test.txt
test_1  | x
test_1  | + sleep 3
test_1  | + true
test_1  | + cat /test.txt
test_1  | x
test_1  | + sleep 3
  • You can see the error cat: /test.txt: No such file or directory until the _health check_ is ready.
  • You can see only one x inside /test.txt after run.

Hope this can help someone.

Edit 1

If you don't need a _health check_, you can use the rest of the script.

@reduardo7
Thanks for your workaround.
Just want to add: if you need to run a command only once (e.g. for user creation), you can mount a volume for the file created by touch "${FIRST_READY_STATUS_FLAG}" so the flag survives container recreation.
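For instance (an assumption layered on the example above, with an illustrative host path), mounting the directory that holds the flag file keeps the "first run" state across container recreation:

```yaml
services:
  test:
    build:
      context: .
    image: test/on-run
    environment:
      DOCKER_ON_RUN: echo x >> /test.txt
    volumes:
      - ./state:/tmp   # FIRST_READY_STATUS_FLAG lives in /tmp, so it persists here
```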

Many of these solutions are valid workarounds for this problem. For example, an entrypoint script could also resolve this:
ENTRYPOINT ["./entrypoint.sh"]

which can include more complex logic before running the actual service or process.
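A minimal sketch of such an entrypoint script (the path and setup step are illustrative): it runs one-time initialization, then `exec`s the real command so the service becomes the container's main process and receives signals directly.

```shell
# Write a minimal entrypoint.sh (path and setup step are illustrative).
cat > /tmp/entrypoint.sh <<'EOF'
#!/usr/bin/env bash
set -e

# One-time setup before the actual service starts.
echo "pre-start initialization"

# Replace the shell with the container's main process (CMD / compose `command`)
# so it runs as PID 1 and receives signals like SIGTERM directly.
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh

# Simulate a container start: CMD arguments are passed through to the entrypoint.
/tmp/entrypoint.sh echo "service started"
```

Using `exec "$@"` rather than plain `"$@"` matters for clean shutdown: without it, the shell stays as PID 1 and `docker stop` signals the shell instead of the service.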
This is still not a hook, though, that would let us inject logic into the container lifecycle:

  • before creating
  • before starting
  • after starting
  • before destroying
  • even after destroying
  • etc ...

I know that not all the above are meaningful but I hope that you get the picture because this is the point.
This could also be included in docker-compose with a directive like:

lifecycle:
    before_start: "./beforeStartHook.sh"
    after_destroy: "./afterDestroyHook.sh"

or even like that:

hooks:
    before_destroy: "./beforeDestroyHook.sh"
    before_create: "./fixFsRights.sh"

I am unable to overwrite a file that requires root permission using the hook-script or bootstrap-script approach, since we start the container as a non-root user.

Wow, such a basic functionality and still not implemented.
