Compose: Proposal: docker-compose events

Created on 4 Jun 2015 · 68 comments · Source: docker/compose

There have been a few requests for supporting some form of "hooks" system within compose (#74, #1341).

A feature which runs commands on the host would add a lot of complexity to compose and the compose configuration. Another option is to support these features by providing a way for external tools to run the command triggered by an event.

docker events provides an event stream of all docker events, but would still require filtering.

docker-compose events could provide a similar interface by filtering the events returned by /events and returning a stream of only the events related to the active compose project. The event stream could also include a new field "service": <service_name> in addition to the fields already provided by /events.

kind/feature

Most helpful comment

Speaking as a maintainer, I welcome +1s as a way of showing demand for a particular feature - it helps us decide what to work on. It'd be nicer and less email-heavy to have a mechanism other than comments for doing so, but that's what we've got right now.

All 68 comments

Hm fancy!

If accompanied by some clear examples, this would be a great feature.

Given that compose doesn't have a service/daemon, how would this work? (just wondering). Also: will subscribers listen to events for all projects, or receive only events for a specific project?

I think it should be just the specific project, otherwise it's not really more useful than the existing /events endpoint from docker engine.

Given that compose doesn't have a service/daemon, how would this work?

I think it would work similarly to docker-compose logs: streaming from the docker daemon and filtering out events that aren't related to the project. To consume the events you would pipe stdout to another application.
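
To make that concrete, here is a rough sketch of what a consumer could look like once such a command exists. Everything in it is an assumption for illustration: the --json flag, the per-event "service"/"action" fields read with jq, and the action you trigger would all depend on the final design.

#!/bin/bash
# Hypothetical consumer of a docker-compose event stream (sketch only).
# Assumes a JSON-lines output mode and per-event "service"/"action" fields.
docker-compose events --json | while read -r event; do
    service=$(echo "$event" | jq -r '.service')
    action=$(echo "$event" | jq -r '.action')
    echo "service=$service action=$action"
    # ...run whatever host-side command you need for this event here...
done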

Ah, yes. Makes sense. Count me interested :)

+1

+1

yeah actually, I prefer this concept to the plain "allow to run scripts before/after" nonsense
+1

Example usage: OnExit (of my web server in a Docker container) run a script to close port 80 on the firewall.

I started to look into this, but I think to do it properly the docker remote api needs to support filtering by labels.

@dnephin with filtering, you mean filtering _events_ based on labels?

@thaJeztah Exactly, https://docs.docker.com/reference/commandline/events/ only supports filtering by image id, container id, or event type

@dnephin feature requests are welcome :+1: sounds like a nice feature to work on for contributors as well (clear goal)

Really in need of this feature
+1

:+1:

:+1:

+1

+1

+1

+1

:+1:

:+1:

+1

+1

+1

+1

+1

That's 13 "+1"s and five thumbs so far. </irony>

However I am just curious why people make +1 posts.
Is there any reason besides spamming?

Or are there indeed people who think "_oh, if only exactly 23 people would write +1 then I would publish my secret implementation of this feature_"?

I don't see any use in creating long issues full of +1s. Of course the thread gets bigger and bigger, and developers who would like to work on these issues have to search and scroll to find the posts that are actually constructive. So people adding +1s are actually making the process of contributing more difficult by worsening the overall readability.

I'm really sorry that I added even one more post here that isn't constructive to the issue and I hope I wasn't offensive in any way.

Speaking as a maintainer, I welcome +1s as a way of showing demand for a particular feature - it helps us decide what to work on. It'd be nicer and less email-heavy to have a mechanism other than comments for doing so, but that's what we've got right now.

Okay, in that case thank you for your insight @aanand. :wink:

But I'm still not sure about this.
If one issue has, let's say, 42 +1s and another 52, it's still not easily visible where the demand is higher without counting them (manually or with an extension, as long as GitHub doesn't support "native" +1s).

Especially in the case of issues, +1s aren't helping much (which is why I personally agree with the guidelines from Docker).

Perhaps it's just a matter of taste. :wink:

+2

Events are being added in #2392

If you're interested in trying out that branch, feedback is always appreciated.

:+1:

I've been playing around with these events for a few days and here are my impressions:

My use case is setting up an Apache Cassandra schema after my cassandra containers start. Since there is no way to load a schema until after cassandra starts (e.g. during docker build or in a start script wrapper), I have to fire off a docker exec with a script to load the schema. I'm also doing the same with topologies for Apache Storm, which likewise requires things to be loaded only after the software has started. I'd love to have a Dockerfile that just built me an image with the schemas and topologies pre-loaded, but that isn't possible.

I used the event script that was posted in the pull request - it works well, but obviously it has to be running in the background or in another terminal for it to work. And it means there needs to be another piece of software on your system, rather than just docker, in order to get your cluster up and running. I'm not sure this is actually better than some of the other suggested solutions, like having post-start commands in the docker compose file itself. That would be nicely self-contained and so awesome, but I guess it doesn't fit with the docker paradigm of keeping things really simple.

One thing I'm going to try next is having a container with my event-listening script in it to do the necessary work (hopefully docker-compose runs in a container volume-mapped to the host's sock file, ... right?). The only issue with that is the container would have to be started before docker-compose up was called and couldn't be part of the compose app (since there are no guarantees of start order, and the events-listening container would miss events if it started after the other containers).

I'd love to have a Dockerfile that just built me an image with the schemas and topologies pre-loaded, but that isn't possible.

It should be possible, and it's what I would suggest as well. What you need is a build step that does something like this:

  1. Start the service (cassandra) in the background
  2. Wait for it to be available
  3. Load schemas and fixture data from files
  4. Gracefully shutdown the service running in the background

I've had a lot of success doing that for mysql, postgres, and elasticsearch. It should be possible to do with other data storage services as well.
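
For example, here is a minimal sketch of such a build step for mysql. The script name, paths, and schema files are illustrative, and the official images have their own init hooks you may prefer; this is just the shape of the idea.

#!/bin/sh
# init-db.sh: run from a RUN step in the Dockerfile (sketch; paths are made up)
set -e

# 1. Start the service in the background
mysqld_safe --skip-networking &

# 2. Wait for it to be available
until mysqladmin ping --silent; do
    sleep 1
done

# 3. Load schemas and fixture data from files
mysql < /build/schema.sql
mysql < /build/fixtures.sql

# 4. Gracefully shut down the background service so the data files are flushed
mysqladmin shutdown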

The only issue with that is the container would have to be started before docker-compose up was called and couldn't be part of the compose app

You might be able to include it as part of the compose file if you use depends_on to have everything else depend on it (or even just the "first" service depend on it).
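
A rough sketch of that arrangement, with made-up image names (the docker.sock mount is what lets the listener container see the daemon's event stream):

version: "2"
services:
  event-listener:
    image: my-event-listener   # hypothetical image containing the event-listening script
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  cassandra:
    image: cassandra:latest
    depends_on:
      - event-listener         # start the listener before the services it watches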

+1

+1

+1

+1

Start the service (cassandra) in the background
Wait for it to be available
Load schemas and fixture data from files
Gracefully shutdown the service running in the background

Does anyone have an example of how to do this? Schema initialization in Cassandra would be great 😀

Same question: I have a mysql container and I need to load a dump into it before I can proceed.

I have an example of this with postgresql: https://github.com/dnephin/dobi/tree/master/examples/init-db-with-rails

I've done the very same thing with both mysql and elasticsearch. I would like to work on an example using cassandra sometime.

If you have any questions about this example, please feel free to open an issue

Ok, at the moment you can start and provision your container as usual from a twisted bash script, and then add an external_links section to your docker-compose.yml.
Or you can create another container that depends on the mysql container and runs all your db initialization code. Which is quite ugly, though.

@alikor I don't have an example that creates Cassandra keyspace/table structures at container image build time, but I do have an example which starts a Cassandra container and then creates a keyspace. The hard part, of course, is knowing when the Cassandra node within the container is ready to handle a CREATE KEYSPACE, but as it turns out, I don't actually need to know:

echo "Starting Cassandra container"
docker run --name cassandra --detach cassandra:latest

echo "Trying to create Cassandra keyspace:"
until docker run --link cassandra --rm cassandra:latest \
    sh -c 'exec cqlsh -e "CREATE KEYSPACE foo WITH replication = {'"'"'class'"'"': '"'"'SimpleStrategy'"'"', '"'"'replication_factor'"'"': '"'"'1'"'"'} AND durable_writes = true;"'
do
    echo "Trying again to create Cassandra keyspace:"
    sleep 2
done
echo "Created keyspace."

As you can see, I don't need any events from Docker or Compose; I can simply brute-force it: the moment C* is ready, my query succeeds, and I can then start other containers which use the C* container, resting assured that it is ready to be used.

You might want to extend this a bit, maybe adding some "try it 10 times and then give up" logic, but this approach has been used within our production rollout pipeline for several weeks now and, as far as I can see, hasn't failed even once.
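
For what it's worth, a sketch of that "try N times and then give up" extension to the loop above (the counter and limit are additions for illustration, not part of the original script):

attempts=0
max_attempts=10
until docker run --link cassandra --rm cassandra:latest \
    cqlsh cassandra -e "CREATE KEYSPACE foo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1} AND durable_writes = true;"
do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge "$max_attempts" ]; then
        echo "Giving up after $max_attempts attempts." >&2
        exit 1
    fi
    echo "Trying again to create Cassandra keyspace ($attempts/$max_attempts):"
    sleep 2
done
echo "Created keyspace."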

@alikor @manuelkiessling I've coded something similar in the JHipster project.
See the scripts here and the documentation.

The scripts run in a second container:

  • the auto-migrate.sh script pings the Cassandra container until it is ready.
  • then all the cql scripts (create schema, tables, etc.) not already executed are run by the execute-cql.sh script.

That's similar to what tools like FlywayDB/Liquibase offer for a SQL database.

I have to put my +1 and a donation of 2c....

I find that with many containers in docker compose on ubuntu, the Linux connection tracking can get in the way.
After restarting containers with down then up, the IP addresses may not be exactly the same, and the connection tracking tables in the kernel get confused. (This isn't a problem with TCP port forwards, only UDP.)

Thus, an on-start or pre-start hook to execute 'conntrack -F' is a must for me.

For now, to ensure ops get this right, I have to provide a start script and ask them to avoid running docker compose directly.
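
A minimal sketch of such a wrapper start script (assuming conntrack-tools is installed and the script runs with enough privileges to flush the table; the script name and compose flags are illustrative):

#!/bin/bash
# start.sh: wrapper the ops team runs instead of calling docker-compose directly
set -e

# Flush the kernel connection-tracking table so stale UDP entries
# don't keep pointing at the old container IPs
conntrack -F

exec docker-compose up -d "$@"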

The biggest value of compose is that it is self-contained; if I need to run other tools to set up a deployment, I might as well use a tool that covers everything.

I agree that a "post-run" mechanism for running provisioning steps would be amazing and would solve a great deal of deployment issues. While it's nice to say "just build it into your Dockerfile", what if I didn't write the Dockerfile? What if I'm using a provided container with a set entrypoint and I don't want to have to edit or wrap the upstream Dockerfile? The ability to arbitrarily fire off commands post-entrypoint seems like a basic piece of functionality to me.

I must admit, I get a little tired of seeing threads like this on github where a whole host of users are telling the developers how useful a basic feature would be only to be met with "do it this way instead." We know our use cases, we understand our needs, and we're pleading with you to provide a simple and highly sought after piece of functionality to simplify our deployments.

Why is there so much resistance to this? I get that just because the feature can be considered simple, the implementation of it may not be simple, but come on man. This is something that a great deal of docker-compose users obviously have a real need for.

This has already been solved and closed years ago. There isn't resistance; there's always a cleverer, better way to do what you're thinking of doing.

Also, please don't tell the maintainers of a project or repo that you're sick of seeing requests for a simple solution. If it was so simple, you should be able to do it yourself. To extend that note: not every suggestion or feature fits every project; it may be a singular need that can and should be solved other ways, or that breaks configuration or conformity, or any of a myriad other reasons that aren't even specifically technical in nature.

You could also just write a bash script with spawn and expect if it's that big of a deal, but I still feel like you'd be doing something wrong.

Remember containers are not your VM...

@relicmelex You need to go through #1809 to get a better idea of what is going on.

@relicmelex I understand all of that, and I get that a feature that seems simple may, in fact, be very complicated to implement, and may not fit a project, but I commonly see developers arguing against something that dozens and dozens of users are requesting for nebulous reasons. I apologize if I came off as demanding, it is not my intention to make demands of busy developers, though I did intend to express my frustrations about a trend I see among some of the tools I consume.

What is the solution? Because I'm still looking for it. If you could point me in the direction of a best practices way to do this, it would be greatly appreciated; maybe it's someplace obvious, but I haven't come across it yet. I have a whole bunch of stuff to build, and guess what, I expect the tools I consume to be able to handle some of these features without taking the time to implement it myself since they often have whole development teams committed to them while I'm here on my own just doing the best I can with what little time I have.

If using spawn and expect is doing something wrong, what is the _right_ way to run an arbitrary command on a container after it's running? I'm absolutely amenable to using whatever the correct solution is, if it already exists; it may be that my frustration is simply a lack of google-fu skills (google searches led me to issue #1809 which in turn led me here) or because I'm not reading some section of important documentation somewhere. I'd definitely appreciate any help you can provide since you seem to be aware of the solution. As I gain a better understanding of these tools, I'm thinking I just need to wrap the source docker container in a dockerfile that includes the final provisioning steps at build time; does that sound correct? If so, I may have been being silly to get so frustrated in the first place.

@TalosThoren Can you try and lay out what you're trying to accomplish as an end objective and then the steps you're currently taking? Because usually you can just write a script to execute as a step in the container. Maybe as part of the independent Dockerfile(s), or a bash script to run after build... maybe mount the volume on start-up and have it run a script as the CMD option? Let's explore.

@omeid I've been through all of that, and I stick by what I said... Notice it's been over two years since my original post here about it, as this issue came up for me in a different annoying way. And instead of breaking pattern, I started using docker-compose in a more structured way and linked some containers to achieve what I was trying to do. It (whatever it is) can be done without that feature, I'm sure of it.

Side note... @systemmonkey42 you may want to use env vars in docker-compose; if containers are linked, the hostname of a container is its name in the docker-compose file. Maybe that will solve your cross-container issues?

@relicmelex Every feature that compose has can be done without it. The argument that you can hack your way around any missing feature is pointless. I still think that #1809 was closed unreasonably; @dnephin really wants to promote his tool, dopey or whatever it is.

And on the original issue, I will just reiterate my question; feel free to answer it @relicmelex.

@dnephin Do you think running init scripts is outside the scope of container-based application deployments?
After all, compose is about "define and run multi-container applications with Docker".

@omeid You're correct, compose can be done without, and docker can be done without; even computers can be done without... I think you missed my point. I never suggested any hack of any kind; I'm suggesting you use the correct tool for the job.

Instead of antagonizing and talking about the problem, try to find a solution. This is just pointless banter now.

@relicmelex Thanks for following up. My use case, in this instance, is to simply create a table in a cratedb database upon initial deployment for use with the crate_adapter for persisting prometheus metrics. The cratedb service needs to be running already, and I'm pretty sure the nature of cratedb means I only need to do it on the first container to stand up in the cluster. The intention is to write a script that checks if a table exists, after allowing some time for the container to join the cluster using its built in service discovery, and create the table if it does not exist.

I may be able to check whether the container has been elected master as the sentinel, or as an additional sentinel for table creation as well, but I haven't got that far yet; I'm mainly doing manual lab work to ensure I understand the deployment steps presently. I will have to write a Dockerfile for the crate_adapter, as they don't presently supply a docker image, but that will be simple. I actually wonder if it would be appropriate to install the crash command line tool on the crate_adapter container and have it handle creation of the table upon connecting to the db, but that seems like it might introduce some dumb problems.

I've run into many situations where running an init script of some kind after deployment of a container would be desirable, as well. I think I agree with @omeid that this clearly falls within scope of container deployment and orchestration, but I also see your point that there are probably best-practices ways to implement this kind of thing without incorporating a "run-after" or some such capacity in docker-compose.

I think I see both sides of this argument, and I know which one I lean towards, though I may begin to feel differently once I've learned more about implementing this kind of build.

@TalosThoren Thanks for being so polite, you make me want to help you.

I imagine you also want to check to see if that table is already available so you don't accidentally destroy data or just have a failed step? Then create the table, then maybe even seed some data? ( Say it's a credentials table and you need a 'system' type credential so you can always log into a platform )

I'm doing this right now with dynamodb-local & Elasticsearch, then hooking services to them in a docker-compose environment, so I'm certain it can be done.

My approach is to create multiple docker containers and point to those in my docker-compose file instead of just the default docker container. It takes a little more work, but it really allows you to customize your environment and its ability to communicate across containers.

docker-compose

  • elastic_search
  • dynamodb-local
  • auth_service (custom Dockerfile)
    • link: dynamodb-local
  • resources_service (custom Dockerfile)
    • link: elastic_search
  • gateway (custom Dockerfile)
    • link: auth_service & resources_service

In the custom Dockerfiles I use the normal Dockerfile instructions to run the commands to build the environment I'm looking to build, then insert / build the db as one of the steps.

If this gets unruly, I then turn it into a bash script or many bash scripts with dedicated purposes, so that the build can use caching if you want to make smaller changes.
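
A rough compose-file sketch of that layout (image names and build paths are made up for illustration; the environment setup and db seeding steps live in the custom Dockerfiles, as described above):

version: "2"
services:
  elastic_search:
    image: elasticsearch:5
  dynamodb-local:
    image: amazon/dynamodb-local
  auth_service:
    build: ./auth_service        # custom Dockerfile (env setup / seeding steps live here)
    links:
      - dynamodb-local
  resources_service:
    build: ./resources_service   # custom Dockerfile (env setup / seeding steps live here)
    links:
      - elastic_search
  gateway:
    build: ./gateway             # custom Dockerfile
    links:
      - auth_service
      - resources_service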

Thanks again @relicmelex, I'll have to think through that, and I may come back with some questions, but that approach gives me a lot to think about. I really appreciate you sharing your expertise with this.

@relicmelex I wanted to follow up and let you, and anyone else who stumbles their way here via a Google search, know my results.

Using your method it proved trivial to create a short-lived container that simply runs a bash script to perform the necessary bootstrapping operations.

I simply wrote a script that awaits availability of the containerized service (which happens to be a database) that I need to run bootstrapping operations against before querying the database for the table in question. It logs what it finds, creates what it needs to if it's missing, and exits gracefully.
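
For anyone landing here later, a minimal sketch of such a bootstrap script (the hostname, port, and table definition are made up; it uses CrateDB's HTTP _sql endpoint, though the crash CLI would work just as well):

#!/bin/sh
# bootstrap.sh: sketch of the short-lived bootstrap container's entrypoint
set -e

# Wait until the database answers HTTP requests
until curl -sSf http://crate:4200/ > /dev/null; do
    echo "Waiting for CrateDB..."
    sleep 2
done

# Create the table only if it is missing, then exit gracefully
curl -sS -H 'Content-Type: application/json' -X POST http://crate:4200/_sql \
    -d '{"stmt": "CREATE TABLE IF NOT EXISTS metrics (ts TIMESTAMP, value DOUBLE)"}'
echo "Bootstrap complete."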

Thanks again for assisting in a long closed issue, it took some outside perspective to get a better grasp on how I should be thinking about containerized code execution.

@TalosThoren You could come up with a hundred kinds of hacks to implement this feature, but a hack is still a hack: you have to explain it to the people who use your project, instead of being able to expect it as part of understanding Docker Compose. That is the major difference.

When I use docker-compose, I expect my colleagues and collaborators to know or learn Compose, and docker-compose is well documented. This means I don't have to document my hack on every project, nor use some promoware like @dnephin's dopy or whatever it is, which may or may not be documented properly and could be gone any moment without much of a community to keep track of it.

You could argue against every single feature, up to and including the entirety of docker compose, with "use a bash script", and that is as meaningful to the conversation as mentioning the colour of my socks: not much at all.

@omeid Just because you don't understand it doesn't make it a hack... end of conversation.

@relicmelex That is a childish reply. I have deployed a very similar hack multiple times and that is the exact reason why I need the ON START feature.

@omeid, hey man, I'm on your side. I think this needs to be a feature in the docker-compose files, but @relicmelex gave me a solution that I think is quite robust and that will serve me well into the future as I implement the work I need done today. I can't wait around for the development team to decide they want to implement something that I'm happy about; I've got stuff to build.

I'm not convinced this closed thread is the right place to get the development team's attention regarding this feature request, so I don't think it's very productive to continue to argue for it in this particular thread, even though I agree that post-service-launch provisioning should probably be a thing docker-compose supports. I'm less convinced it's critical to prioritize it, though, than I was at the beginning of this conversation, but I still think it's a long overdue feature that has been summarily dismissed for poorly argued reasons.

I absolutely agree with your sentiment that "use a bash script" is a bit of a cop-out argument. The fact of the matter is that should we see support for post-service-launch provisioning find its way into docker-compose, we'll be supplying bash scripts as the provisioners anyway. It could be said that we're simply asking for a more built-in way to deliver and execute those bash scripts in this thread. I definitely consider what I ended up implementing a workaround for missing functionality, but it works well, and it's a solid standard for the time being.

+1

+1

What about taking advantage of an alias? Still hackish, but it solves the issue now.

add an alias like this
alias docker-compose='docker-compose-hooked'

place this script in your path somewhere and make it executable (chmod 755 docker-compose-hooked)
docker-compose-hooked

#!/bin/bash

# Run the pre-hook script if one exists in the current directory
if [ -f .docker-compose-pre ]
then
    sh .docker-compose-pre
fi

# Pass all arguments through to the real docker-compose
docker-compose "$@"

You can then do a normal docker-compose build and it will run your pre-hook (e.g. copying your SSH key) first.
This checks whether you have a file called .docker-compose-pre in the same directory as your docker-compose.yml file (really just the current directory) and runs it before calling the real docker-compose.

In my particular case, something like this would work.

But I must point out that, at least for me, the whole idea of having a hook inside the Docker Compose file is precisely to avoid another step that every developer in my team would need to take.

Let's assume I create this alias and my problem is solved. Then my developer doesn't follow along and I'm back to square one.

If I were able to add a hook inside docker-compose.override.yml and commit it to my Git repository, that would pretty much solve the issue and I'd never have to second-guess whether my team complies with a step-by-step "set up your development environment" guide...

Anyhow, that is my motivation for adding a plea for this feature. I also need to run stuff _on the host machine_ before/after docker-compose runs.

From @TalosThoren https://github.com/docker/compose/issues/1510#issuecomment-352272123 above:

Using your method it proved trivial to create a short-lived container that simply runs a bash script to perform the necessary bootstrapping operations.

I found this good enough for my use case; just leaving it here as I didn't find an immediate example. I needed to set up an initial Solr directory with a specific config schema for an older Solr image that needed a mount, so this is what I ended up doing:

version: "3"

services:
  setup:
    image: alpine:latest
    volumes:
      - ./:/mnt/setup
    command: >
      ash -c "mkdir -p /mnt/setup/.local/solr/data &&
               cp -R /mnt/setup/sites/all/modules/search_api_solr/solr-conf/3.x /mnt/setup/.local/solr/conf"
  solr:
    image: geerlingguy/solr:3.6.2
    depends_on:
      - setup
    ports:
      - "8900:8983"
    restart: always
    volumes:
      - ./.local/solr:/opt/solr/example/solr:cached
    command: >
      bash -c "cd /opt/solr/example &&
               java -jar start.jar"

@hanoii Uuuhhh!!! I love that. Thank you :)
