I think it would be useful if compose could allow you to specify containers that may already exist, e.g.:
```yaml
shareddata:
  container_name: shareddata
  allow_existing: true
  image: some/data_only_container
```
If the container 'shareddata' did not exist, it would be created as usual.
If, however, that container already existed, the `allow_existing: true` setting would keep compose from complaining about a duplicate container, instead just skipping creation (and perhaps it would try to bring the container up if it were stopped?).
I haven't python-ed in a long time, but I might be able to create a PR for this feature if someone wanted to give me a little guidance on where the best place to start looking into the code would be.
Compose already does this if it created the container. Why would you need to create the container outside of compose?
@dnephin I'm trying to share some containers between microservices. So I have multiple docker-compose.yml files in multiple projects. I would love for them to use existing containers that matched up (name, image) but they don't. There's a failure if I try to do that. (Unless something has changed very recently and I missed it.)
http://docs.docker.com/compose/yml/#external-links was added to support that idea.
I think either a container is managed by compose (part of a service) and is linked with `links`, or it's external and linked to with `external_links`, but it should never be somewhere in the middle.
There is also #318
@dnephin I can use external_links to link to the containers, but I currently have to set those containers up in a shell script that I run before compose because I want the containers shared across services.
I suppose #318 could solve the problem, but it requires a shared config file to live somewhere that all the projects know about.
Allowing for existing containers (and erroring if anything is different or conflicting) would actually be more straightforward for my use case.
If there are technical reasons that the distinction between links and external_links must be very clear, I think another way to implement this would be something like:
```yaml
shareddata:
  container_name: shareddata
  external: true
  image: some/data_only_container
```
Any 'external' container, if it does not exist, will be started with the given settings. Anyone referring to that container would need to use external_links. That would also solve my use case in a readable way.
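For completeness, a consumer in another project would then refer to it with `external_links`, roughly like this (the `web` service name, image, and alias are only illustrative):

```yaml
web:
  image: some/web_app            # placeholder image
  external_links:
    # link by the fixed container_name, aliased as "shareddata" inside this service
    - shareddata:shareddata
```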
I can see an argument for supporting `external: true` as a way to replace `external_links` (it would be more consistent with how we handle volumes and networks in compose format v2).
However, I think it would be a mistake to have compose attempt to start any containers that are marked `external`. A service should be one or the other: if it's external it must be started externally; if it's internal, compose will start or recreate it.
I have to say that I agree with @andyburke, and with the current knowledge that I have, I would be in favour of this feature.
We have multiple projects at our company, and one of those projects is shared between _all_ other projects. It only has to run _once_ to work for all projects, but ideally it would be included in the `docker-compose.yml` of each individual project, and started if it's not running already.
This way, each project is self-contained, but _can_ use an already running instance of the shared project.
I don't understand why you'd want to share a single instance of a project with all other projects. Why wouldn't you want to start a new instance of it for each (like #318)?
Having a service be "external unless it's not running" just doesn't make sense semantically. A service is either externally managed, or it's locally managed, it can't be both. I really think the missing feature here is what's described in #318. A way to include projects from the local one. So you are sharing configuration, but not running containers.
@dnephin for example, I'd like to run https://traefik.io, but only one instance of this router (obviously), but I don't want to have a separate start script to start this if it isn't running already. I want to add its dependency to all our projects docker-compose files, and only start the service if one isn't already running (with the given configuration).
Does that make sense?
If you're using it for a dev environment, I would run a single copy for every project, so it would just be a regular service in the Compose file. What's wrong with that setup?
Traefik is used to provide cross-service/project routing.
So we can do things like company.dev/project1
and company.dev/project2
. For that to work only one has to be running, and it has to be cross-service.
That's (one of) the use-case that would be nice to solve if Docker Compose had the option to start a service, unless it's already started some other way, as described above.
I have the same use case - the shared service is a load balancer. I'm using https://github.com/jwilder/nginx-proxy which is very similar to Traefik - there is one instance of the LB running on the host, and it automagically connects to all the containers, with container-level config (labels) specifying the hostnames. This works great and the only sticking point is that there isn't a good, semantic way to do it within Compose.
Running one instance of the load balancer per project is not a viable option because the load balancer needs to bind to port 80/443 of the host.
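For anyone unfamiliar with that setup, here's a rough sketch (this assumes jwilder/nginx-proxy's documented `VIRTUAL_HOST` convention and docker-socket mount; the image names and hostnames are illustrative):

```yaml
# the single shared load balancer, started once on the host, outside any app project
nginx-proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
    - "443:443"
  volumes:
    # the proxy watches the Docker API to discover which containers to route to
    - /var/run/docker.sock:/tmp/docker.sock:ro

# in each app project's docker-compose.yml, only container-level config is needed
web:
  image: some/web_app                  # placeholder image
  environment:
    # hostname the proxy should route to this container
    - VIRTUAL_HOST=project1.company.dev
```

(With v2-format projects each getting their own network, the proxy and the apps also need a shared network, which is what the external-network approach discussed further down addresses.)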
Activity on this issue seems to have stalled out, but as the issue is still listed as open I thought it appropriate to continue the discussion here.
I, too, would very much like this feature. My use case, similar to @thaeli's, involves nginx. I am using a docker container running nginx to reverse proxy connections made to a host with one IP address. This host runs multiple websites which themselves are powered by docker-compose or just docker. This is done so that multiple websites can all be accessible via port 80 at a single IP address.
My current workflow is to start the single nginx container and then `docker-compose up` all of the relevant services that use it. Most of the benefit of docker-compose is that everything can be spun up all at once with clear inter-service relationships defined; needing to start a required service separately very much contrasts with this advantageous design pattern.
It is not possible for each service to have its own instance of the external service because only one can be bound to port 80.
Is there an existing pattern to accomplish this using only docker-compose and no external scripting? If not, does this feature have a possibility of being added to the roadmap?
Thanks
As we start to move to a more distributed pattern (i.e. swarm), wouldn't a flag something like `create_if_missing` be beneficial? Then you could use said flag to create the container if it is not running, and if it is, just use said container. This would also allow you to build or pull if you would like. Another thing to consider is that any `down`, `stop`, or `rm` commands should leave these services alone, which could result in dangling services.
Does this go against any predefined standards or processes?
A possible use-case for this is to ensure any shared docker networks are created.
Let's say I have 3 different compose files, A, B & C, which each have a few services within them. A & B both reference mynet1, and B & C both reference mynet2.
I need mynet1 to be created and running for the services, but I also need to be able to compose-up or down each of A, B & C separately.
Ideally, compose up of A or B would ensure that mynet1 was running and bring it up if not, etc.
I have the same use case with load balancers as @JeanMertz @thaeli @srwareham
Did you guys find any solution to this?
This would be very useful when, e.g., you want a single shared cache container across all of your apps, or you have one Postgres DB container with a couple of schemas for your apps...
Echoing what others are saying. As @ajbisoft is saying, very useful for cache and database containers where you normally just want to have one running. At this point, we have to put all the container settings in one file, but docker-compose will destroy and spin up new containers when rerunning it, which is not what you want to do with cache and database containers.
> Ideally, compose up of A or B would ensure that mynet1 was running and bring it up if not, etc.
This is what I'm envisioning for networks shared by things like jwilder's proxy too. I opted to create the network manually so docker-compose treats it as external and won't `down` the network. I feel it is cleaner than having any sort of strange dependencies between unrelated stacks. Outside of docker-compose I did `docker network create discovery` and then inside each compose stack added:
```yaml
networks:
  # Discovery is manually created to avoid forcing any order of docker-compose stack creation
  discovery:
    external: true

# each proxied app also gets
networks:
  - discovery

# jwilder/nginx-proxy gets default since I have other apps in its stack
networks:
  - default
  - discovery
```
Currently the docs say this about `external`:
> If set to true, specifies that this network has been created outside of Compose. `docker-compose up` will not attempt to create it, and will raise an error if it doesn't exist.
So the intention of `external` is obviously to not manage them... however, I think what both @mobiuscog and I are thinking is that there'd be some sort of `external_create_only: true` option in docker-compose, which creates them but doesn't attempt to manage / destroy them after the creation. A simple create-and-forget function, so we don't worry about `docker-compose down` throwing errors about 'in use network' all the time (which I used to see before switching to this outside-docker-compose network style).
The alternative is: it isn't terribly hard to write a little up.sh shell script that runs the `docker network create` command before upping a stack, but I really prefer to try to teach anyone else using my stacks the native docker-compose commands rather than obscure the awesome features they could be using.
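For reference, a minimal sketch of such an up.sh (assuming the `discovery` network name from the snippet above):

```sh
#!/bin/sh
# up.sh: create the shared network if it doesn't exist yet, then bring this stack up
docker network inspect discovery >/dev/null 2>&1 || docker network create discovery
exec docker-compose up -d "$@"
```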
Led here by https://github.com/jwilder/nginx-proxy/issues/552
Having the exact same need as @JeanMertz (traefik usage in multiple projects).
I need something like this to use mailcatcher with all of my docker projects. I don't want to boot it manually; I want my docker-compose to create it, or use it directly if the service is already running, because as a developer it's part of the project.
Or maybe my approach is bad and you have some better practices for this case? :)
Any idea?
> I don't want to boot it manually
@vschoener, if you're OK with some inelegance in boot order / error messages you can do this... You need one master project that defines the network as non-external (and thus creates it), and then all your other projects can use that network as external.
If that master project isn't up yet, the other ones will not be able to find the external network, however, creating an inter-project dependency which a simple `docker network create shared_mailcatcher` could replace, like I suggested above (my favored practice). Keeping the network fully external from all docker-compose projects also prevents docker-compose from trying to destroy that in-use network every time you docker-compose down, so there won't be an error.
A good README.md would say in the requirements to just run `docker network create shared_servicename`, or if you have bootstrap scripts in each project using the shared network, those would do it for you.
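To make that concrete, a rough sketch (compose file format 3.5+ assumed so the network gets a fixed `name`; images are placeholders). The "master" project declares the network and therefore creates it:

```yaml
version: "3.5"
services:
  mailcatcher:
    image: schickling/mailcatcher      # placeholder image
    ports:
      - "1080:1080"
    networks:
      - shared_mailcatcher
networks:
  shared_mailcatcher:
    name: shared_mailcatcher           # fixed name so other projects can find it
```

while every other project only references it:

```yaml
version: "3.5"
services:
  app:
    image: some/app                    # placeholder image
    networks:
      - shared_mailcatcher
networks:
  shared_mailcatcher:
    # fails at "up" time if neither the master project nor a manual
    # "docker network create shared_mailcatcher" has created it yet
    external: true
```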
@diginc Thanks for your advice :) I will take a look and try something according to your example :)
We had this problem with the nginx-proxy. The workaround I use for our Development Environment is putting the shared container in every service/docker-compose.yml file where it is needed with the same service and container name.
To avoid the container name conflict error this would normally give you, I created a folder structure in every project where the docker-compose.yml is in a folder named "container", because the name of the folder the docker-compose.yml file is in is used (indirectly) to determine naming conflicts. I think I read somewhere that the folder name is used as a prefix for the container name in some way.
Anyhow, it results in docker recreating the nginx-proxy container, which is fine for our purposes.
I am using a script to check for the network and proxy container before I start any docker-compose project: http://www.rent-a-hero.de/wp/2017/06/09/use-j-wilders-nginx-proxy-for-multiple-docker-compose-projects/
Here's how I've tackled the problem of sharing containers (compose configurations) across projects: https://github.com/wheniwork/harpoon
My use case is that I have 20+ different "apps", all of which could use any of a set of shared services/databases. I want my testers to be able to "docker-compose up" on the set of apps they care about at the moment. I would like each compose file to fully specify all dependencies of each app. The shared services must be single instance because my testers need to be able to complete a work flow between 4 to 5 apps while maintaining the state from the previous steps.
@burtonrodman the same for me. Some services, especially database and monitoring, are shared across apps and we should have more flexibility to manage them within or outside of compose.
In agreement with @burtonrodman @spaquet @ajbisoft @atedja @mobiuscog @davidbnk @JeanMertz @andyburke @thaeli and @midnightconman re: the need for this with a similar use case as described above.
My company has many microservices. We want developers to be able to stand up sets of microservices separately without spinning up the entire cluster on their development machine. Furthermore, we'd prefer not to spin up a separate database container for each microservice... we'd prefer to share a database instance between microservices.
With that said, our current solution is less than ideal when it comes to user experience. We essentially write a wrapper script around `docker-compose` and duplicate it across all of our repositories. Something to the effect of:
```sh
create_docker_volume_if_not_exists $SHARED_VOLUME
create_docker_container_if_not_exists $SHARED_CONTAINER
docker-compose "$@"
```
Which is used by our developers with: `./scripts/docker-compose.sh up [...]`.
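For what it's worth, a minimal sketch of what those helpers end up looking like (`some/shared_image` is a placeholder; `$SHARED_VOLUME` and `$SHARED_CONTAINER` are whatever the project already defines):

```sh
#!/bin/sh
# scripts/docker-compose.sh: create shared resources if missing, then hand everything to docker-compose

create_docker_volume_if_not_exists() {
  docker volume inspect "$1" >/dev/null 2>&1 || docker volume create "$1"
}

create_docker_container_if_not_exists() {
  # $1 = container name, $2 = image; note this only creates, it won't restart a stopped container
  docker container inspect "$1" >/dev/null 2>&1 || docker run -d --name "$1" "$2"
}

create_docker_volume_if_not_exists "$SHARED_VOLUME"
create_docker_container_if_not_exists "$SHARED_CONTAINER" some/shared_image   # placeholder image
exec docker-compose "$@"
```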
This is unfortunate. And sadly, this is just one of many minor issues with docker-compose that are all adding up to make it very frustrating to work with. It feels as if we're fighting docker-compose every step of the way. Maybe that means we're using it improperly and it's not meant for our use case or maybe it means there's room for improvement. I feel it's the latter.
If any core devs are willing to merge this, then please let me know what needs to be taken into consideration. With that information I will try to create a PR satisfying this use-case.
@denzel-morris, check out https://github.com/wheniwork/harpoon. It's designed to assist with your particular use case.
I have the same problem. If I use only one nginx reverse proxy on the host, it can't be defined in the individual docker-compose files. That way I don't have explicitly defined services for the individual apps and can't reproduce the exact stack in development (only manually).
+1 lack of this feature has caused a significant amount of extra work for us. Would really like to see this.
You can actually get this behavior as long as you ensure that every project brings the shared services up from the same compose file, under the same project name (`-p`), with the same `container_name`s. Then docker-compose will detect that the container already exists and reuse it, starting it if needed. For example:
```
$ docker-compose -p SOME_PROJECT -f shared-services.yaml up -d
shared_vault is up-to-date
shared_postgres is up-to-date
shared_dynamodb is up-to-date
shared_redis is up-to-date
```
You might want to make sure that your containers are on the same network using the `networks` directive. Then they will be able to reach each other via their `container_name`.
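A sketch of what such a `shared-services.yaml` could look like, abbreviated to two of the services from the output above (image tags and the `shared` network name are illustrative; format 3.5+ assumed for the fixed network name):

```yaml
version: "3.5"
services:
  postgres:
    image: postgres:11                # placeholder tag
    container_name: shared_postgres
    networks:
      - shared
  redis:
    image: redis:5                    # placeholder tag
    container_name: shared_redis
    networks:
      - shared
networks:
  shared:
    name: shared                      # fixed name so app projects can attach to it as an external network
```

Each app project runs the same `docker-compose -p SOME_PROJECT -f shared-services.yaml up -d` before its own `up`; repeated invocations just report the containers as up-to-date, as shown above.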
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
bump
Sorry, but just because this is a 'stale' issue, it doesn't mean it would not be useful. I've had to build systems to avoid this issue, but I'd love to simplify and delete them if this feature were ever implemented.
If this is never going to happen, then just close this and state that, but I don't believe this should just be closed because it's "stale".
This issue has been automatically marked as not stale anymore due to the recent activity.
@andyburke a reasonable way to get this feature has been proposed: https://github.com/docker/compose/issues/2075#issuecomment-382605262
The `stale` label mostly demonstrates low activity on this issue, so no obvious desire by maintainers to follow this direction, or by the community to request it being implemented.
We can keep it open waiting for more feedback, but this bot has been introduced to ensure we can invest more time on actually active issues.
Does #2705 handle the 'down' condition though? If using docker-compose to stop a service that also uses one of the shared services, it seems likely it will also stop that shared service, which isn't what would be wanted - the shared services should persist until they are no longer referenced by any other compositions.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity during the stale period.
+1. Use case is a single postgresql database that must always be running for ANY of the other services to work. The services may all be upped and downed infrequently, but they ALL require that central postgresql service to be running. When I up a service, it should know that if postgresql is down - well, bring it up.
Seems an obvious use case.... is there a best-practice on this?