I'd like to use fig for acceptance testing of services.

In the case of a single service with a single level of dependencies this is easy (e.g. a webapp with a database). As soon as the dependency tree grows to depth > 1, it gets more complicated (e.g. service-a requires service-b, which requires a database). Currently I'd have to specify the whole service dependency tree in each `fig.yml`. This is not ideal: as the tree grows we end up with a lot of duplication, and having to update several different `fig.yml`s in different projects when a service adds or changes a dependency is not great.
Support an `include` section in the `fig.yml` which contains URLs or paths to other `fig.yml` files. These files would be included in a `Project` so that the top-level `fig.yml` can refer to their services as `<project>_<service>`.
```yaml
include:
  # link using http
  servicea:
    http: http://localhost:8080/projects/servicea/fig.yml
  # local paths
  serviceb:
    path: ../projects/serviceb/fig.yml

webapp:
  ...
  links:
    - servicea_webapp
    - serviceb_webapp
```
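To make the naming concrete, an included project's file would just be an ordinary fig.yml. A sketch, using the `serviceb` path from the example above (the service contents are illustrative):

```yaml
# ../projects/serviceb/fig.yml (contents illustrative)
webapp:
  image: example/serviceb
  links:
    - db
db:
  image: postgres
```

Under the proposal, the including project would then see these services as `serviceb_webapp` and `serviceb_db`.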
I'm looking to implement this feature myself, but I would like to upstream it eventually, so I'm interested to see how you feel about this idea. I've looked over the code, and it seems to be relatively easy to make this work.
Some unresolved issues are:

- Services that use `image` should be easy enough, but services with `build` would need to ensure that the naming remains consistent, and that the service was already built (and possibly pushed to a registry). See the next two issues.
- Possibly add a `project_name` key to the config to keep these consistent (this is not really critical).
- For fetching the external configs, I was going to use `requests` since it's already a dependency, but I was also considering supporting git as another method, possibly using `dulwich`.
I have a working prototype with a single integration test in https://github.com/dnephin/fig/compare/include_remote_config
It still needs some cleanup and more tests, but I thought I'd link to the current progress.
Nice!
@dnephin @bfirsh +1 for the new feature.
I'm looking for a way to reuse a `fig.yml` from another project. Before something like `include` is available, I'll have to fall back to copy/paste/edit ;(
That is awesome! I'd love to get this feature as well. Any updates on merging this on master @dnephin ?
Cool, glad you are interested! I haven't had much time to finish this off yet. I still plan on doing it, but it might still be a week or two out. I've been thinking it probably requires #457 to deal with including any services that use `build` instead of `image` (because you won't be able to build them without the whole project).

The idea would be to use one of the tags as the image, and assume the other project has properly pushed the images and they are available to be pulled (or already cached locally).
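A sketch of what that might look like, assuming the `tags` option proposed in #457 (the field name, paths, and registry below are illustrative):

```yaml
# servicea/fig.yml in the other project
webapp:
  build: .
  tags:
    - registry.example.com/servicea-webapp:latest
```

When included, `servicea_webapp` would use `registry.example.com/servicea-webapp:latest` as its image, assuming the other project has already built and pushed that tag.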
Nice!
For a remote or standalone fig.yml file your solution of using tags seems pretty reasonable.
What are your thoughts if the fig.yml file is available locally inside its own project (meaning everything would be there)? Could the build be triggered? It seems to make sense.
I haven't thought much about using it with a locally available `fig.yml`. In that kind of setup, would you have different services in subdirectories and want to include them so that each individual file is smaller?

I think defaulting to the `build` if no tags are provided is probably reasonable. It might need to do something with relative paths (append the path given in the `include` section to any build paths in the included `fig.yml`).
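For example, the relative-path handling described here could work like this (hypothetical behavior, not implemented):

```yaml
# In ../projects/serviceb/fig.yml (the included file):
webapp:
  build: ./web

# As seen from the including project, the build path would be
# rewritten relative to the include path, as if it were:
#   build: ../projects/serviceb/web
```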
https://github.com/dnephin/fig/compare/include_remote_config is mostly ready. There are two TODOs to resolve:

- resolve `build` services to use the first tag
- support `hdfs` and `s3` URLs for remotes (it currently supports http/https and local files)

+1 - absolutely necessary feature
It should be possible to extend the imports. For instance, an included service could get `volumes_from` a data volume container. That way you can configure smaller orchestrations close to the services (e.g. gitlabService = gitlab + redis + postgres) and combine those smaller orchestrations into a bigger one (whole orchestration = gitlabService + liferayService + jenkinsService). In the final fig.yml you can connect the imports to each other.
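A sketch of how that combination could look under the proposed `include` syntax. Note that overriding an included service from the top-level file is the suggested extension, not something that exists; all names and paths are illustrative:

```yaml
include:
  gitlabservice:
    path: ../gitlab/fig.yml

datastore:
  image: busybox
  volumes:
    - /var/opt/gitlab

# hypothetical override: attach the included service to the
# local data volume container
gitlabservice_gitlab:
  volumes_from:
    - datastore
```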
+1 for more reusable configuration
This would be extremely useful for my team and I :+1:
:+1: for include
+1
extremely useful +1 as well over here.
Playing with this, it seems to solve the initial problem, but not the exact problem I'm facing. My issue is wanting to fire up app X, then at a later time link app Y (with its own fig.yml) to it. With this proposed solution, the second `fig up` will bring down the first app and restart it.

That fits the initial problem, but ideally this could check whether a set of containers from the included fig.yml was already up. In that case it could just do the link, /etc/hosts, and ENV var changes to link to the already-running containers.
@rseymour Thanks for trying it out!
I wonder if `fig up --no-recreate` will accomplish this?
:trophy: `--no-recreate` totally worked @dnephin!
PR is up at #758 if anyone is interested in giving some feedback.
@dnephin Reviewed this patch; it looks pretty solid. It needs a rebase, which will probably be some work. There are a couple of comments in the review. I have also asked a good friend who is a python wizard, @zaneb, to have a look over it.
it seems like this proposal may be what I was thinking of in https://github.com/docker/compose/issues/1647
I agree, I think they're trying to solve the same problem
I've implemented this as an external pre-processing tool at https://github.com/dnephin/compose-addons#dcao-include
If anyone is interested, please check out the docs. Any feedback can be contributed by opening an issue on that repo (and would be appreciated).
@dnephin I'd like to give your branch a try but I cannot seem to find it here: https://github.com/dnephin/compose/branches

Unfortunately I cannot switch to a different tool to orchestrate my containers, so if you let me know how I can use your changes I can give it a try and get you some feedback.
The branch is so old now I don't think it will merge.
I would suggest giving https://github.com/dnephin/compose-addons#dcao-include a try. It's a pre-processing step, but you still use `docker-compose` for the orchestration. I've used `compose-addons` for a few repos and it's worked for me.

If the issue is the lack of a binary install, please do open an issue on the GitHub repo for that project and I can help you out.
Ok. Let me see if I can give it a try ;)
Did anything ever happen with this? If not, will something happen? We are having to do some ugly things with multiple "-f" and relative path stuff.
I built some things external to Compose in https://github.com/dnephin/compose-addons that act as pre/post processors for a Compose file. They probably need to be updated for the v2 format.
+1
@dnephin I like the idea of dcao-include; do you think there is any chance of getting it into Compose itself?
We have not made any progress integrating the ideas into Compose
Is there any way we can help doing so? or shall we use the add-ons?
Simplified example of a situation I've run into at multiple companies that I wish docker-compose could support, and could with includes:

```
my_top_level_project_dir/docker-compose.yml
db/docker-compose.yml
middle-tier/docker-compose.yml
front-end/docker-compose.yml
another-front-end/docker-compose.yml
```

Each team/developer responsible for an area should be able to change to their appropriate subdir, type `docker-compose up`, and be done with it. The middle tier would include the db layer, and each front end would include the middle-tier definitions. The top-level directory would start everything, by including each of the front ends. We have a more complex setup with more components, but hopefully the idea gets across.

In this way each team can work [far] more efficiently by only using/defining the components they actually need. By allowing linking/depends_on for includes you can have a DRY system where each component can be effectively managed/versioned without the headache of managing multiple files or multiple `-f` options on the command line.
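Under the hypothetical `include` syntax proposed earlier in this thread, the middle tier's file might look something like this (paths, service names, and the mixing of `include` with the v2 `services` key are all illustrative):

```yaml
# middle-tier/docker-compose.yml (hypothetical syntax)
include:
  db:
    path: ../db/docker-compose.yml

services:
  api:
    build: .
    depends_on:
      - db_postgres
```

Each front end would include this file the same way, and the top-level file would include the front ends.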
+1 for Include
Any news about this?
I really like the way the syntax works with `dcao-include`. Sadly, it does not work with the v3 format.
Has anyone suggested just using the C preprocessor to parse a yml file, just like a C header file? It would allow simple (left-justified) includes. The ability to use its macro capabilities might be useful too.
Also looking for a solution to use external docker-compose.yml by http/s url.
Hello!
Any updates on this?
This feature would really help us, since we use `docker-compose -f docker-compose-common.yml -f docker-compose-local.yml` a lot and it sucks to write it out all the time.
As @dosmanak mentioned, this would be a very appreciated feature.
This is easily fixed by creating single-line shell scripts.
@Papipo: Everything can be fixed by shell scripts. We have Windows and Linux developer machines, so we would have to maintain two different scripts for each project, instead of including one simple line in the docker-compose file.
@ludekvodicka that's a good point :)
I also want this for dependency reasons. I want to be able to have a service depend on a recursive service compound.
I have a reverse proxy that needs to be started after a few others, but not necessarily last. All the services it depends on are defined in docker-compose files in subdirectories. It would be cool if I could do it like this:
```yaml
services:
  reverse-proxy:
    # more fields here
    depends_on:
      - logger
      - ./my-recursive-service:service1
  logger:
    # more here
```
So when `depends_on` gets a path, it waits for `service1` defined in `./my-recursive-service/docker-compose.yaml`. This `service1` may have other dependencies that get started first (e.g. mysql in the case of wordpress).

When I have time I could try to implement that.
@jvanbruegge Be aware that `depends_on` does not wait for mysql to be "ready", and thus should not be used to control startup order.
Having to type out `docker-compose -f docker-compose-common.yml -f docker-compose-local.yml <some command>` 500 times a day is just the worst...

Why in the world is this _still_ not implemented? ("include" is fundamentally different from YAML merges/anchors, so those are not a substitute.)
Here is my workaround, in case anyone finds it useful or can suggest a better way. I have a top-level `docker-compose.yml` file with the base config, which is good for local development and deployment. I create subdirectories for the other deployment types: testing, prod, etc. E.g. I have a `digitalocean` type to deploy on a DO host. The `digitalocean` directory has a `docker-compose-custom.yml` file which contains the overrides I need:
```
+- docker-compose.yml
+- digitalocean/
   +- docker-compose-custom.yml
```
E.g. `digitalocean/docker-compose-custom.yml` has

```yaml
version: '3.3'
services:
  frontend:
    ports:
      - "80:3000"
    environment:
      - RHUB_BUILDER_EXTERNAL_URL=http://...
```

to specify the IP of the DO host, and the port forwarding to the container.
I wrote a simple script called `dc` that wraps `docker-compose` and sets up the `COMPOSE_FILE` environment variable based on the current working directory. I use `dc` instead of `docker-compose` everywhere. E.g. if `dc` is invoked from within `digitalocean`, it makes sure that both the base `docker-compose.yml` and the `docker-compose-custom.yml` file are used.

Here is the `dc` script:
```bash
#!/bin/bash
here=$(basename "$(pwd)")
# TODO: could do better here, e.g. find the root automagically
if [ -f docker-compose.yml ]; then
    root="."
else
    root=".."
fi
export COMPOSE_FILE="docker-compose.yml"
if [ -f docker-compose-custom.yml ]; then
    export COMPOSE_PATH_SEPARATOR=":"
    export COMPOSE_FILE="${COMPOSE_FILE}:${here}/docker-compose-custom.yml"
fi
(
    cd "${root}" || exit 1
    docker-compose "$@"
)
```
Btw. `docker-compose config` does not seem to use `$COMPOSE_FILE`, so unfortunately one cannot use that to double-check the config. The script could be improved, but at least I now have a reasonable way to "import" a compose file into another one.
You could also define something like this inside your `.bashrc` or `.bash_profile`:

```bash
alias docker-compose='docker-compose -f docker-compose.yml $(for file in include/compose/*; do echo "-f $file"; done)'
```

(Note that the glob must not be quoted, or it won't expand.) This assumes that your other Compose includes are in `include/compose` (they can be in any directory) and that any parent services (i.e. any services that are `extend`ed from by other services, possibly defined by a manifest within the `include/compose` directory) are defined in `docker-compose.yml`.
I think this issue can be closed @dnephin, since it's developed as an add-on (https://github.com/dnephin/compose-addons#dcao-include) and won't be included in the main project.
I disagree. Many templating languages support including sub-templates to encourage DRY and better organization. I think Compose should as well.
@carlosonunez I would love to see this included, but after 4 years it looks like it's not going to happen anytime soon. 🤷‍♂️ And this issue has been followed up in https://github.com/docker/compose/pull/758.

I mean just to keep issues clean and concise.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
bump
This issue has been automatically marked as not stale anymore due to the recent activity.
I recently discovered the `COMPOSE_FILE` env var, in combination with the `env_file: .env` syntax. What I do now is have a main `docker-compose.yml`, and extensions to it like `docker-compose.local.yml`, `docker-compose.prod.yml`, etc. Then, inside the `.env` file (which will be different for every environment), I write something like

```
COMPOSE_FILE=docker-compose.yml:docker-compose.local.yml
```

Now, all you need to do is have parallel env files (which you probably had anyway), and switch them out based on the environment you want, e.g. `cp local.env .env`.

That seems like it exactly implements the need for including docker-compose files. Thank you.
The link to docs: https://docs.docker.com/compose/reference/envvars/
Including YAML bits in general would be useful. For instance, say we have a port mapping

```yaml
ports:
  - "8080:80"
```

and want to turn it off in a CI build. There doesn't seem to be a way to do it. If we could write something like (inspired by Ansible playbooks)

```
ports: {{ my_ports.yml }}
```

where `my_ports.yml` contained a YAML array, it would be possible.
@reitzig it's worth mentioning you could specify these in a `.env` file, which is an OK workaround IME.

```yaml
# docker-compose.yml
ports:
  - ${MY_PORT}:${MY_PORT}
```

```
# .env
MY_PORT=4000
```

@sarink And what exactly do I put into `.env` so that there's no error but no port exposed?
I'd still love to have this feature!
This feature sure would be helpful. How many years now?
I find that using multiple, "layered" docker-compose.yml files 80/20-solves this for me. It's not ideal, as I end up writing wrapper scripts around docker-compose (itself a wrapper), but oh well.
Hello, how can I include a compose file via an HTTP URL? Something like `docker-compose -f http://test.fr/docker-compose.yml up`?

That's a needed feature! We can already do `docker-compose -f mycompose.yaml`. Is it possible to add at least support for the HTTP file protocol?
> @sarink And what exactly do I put into `.env` so that there's no error but no port exposed?
I haven't tried it, but you could probably do the following:
```yaml
# docker-compose.yml
ports:
  - "${MY_PORT}"
```

```
# dev.env
MY_PORT=4000:4000
```

```
# prod.env
MY_PORT=4000
```
It's not great, but the docs (https://docs.docker.com/compose/compose-file/#ports) don't indicate there is a special value that maps the host port randomly; that only works in the absence of a host port, by the looks of it.

EDIT: You may actually be able to do it with the long syntax (https://docs.docker.com/compose/compose-file/#long-syntax-1), by omitting the `published` line entirely in the prod version.
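For reference, the long-syntax version of that idea might look like this in a prod override file, assuming that omitting `published` behaves as hoped, i.e. no fixed host port is bound (the file and service names are illustrative):

```yaml
# docker-compose.prod.yml (override, long port syntax)
services:
  frontend:
    ports:
      - target: 4000   # no 'published' key: no fixed host port
        protocol: tcp
```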
> Hello, how can I include a compose file via HTTP? `docker-compose up -f http://test.fr/docker-compose.yml`? That's a needed feature! We can actually do `docker-compose -f mycompose.yaml`. Is it possible to add at least support for the HTTP file protocol?
Why not `curl http://test.fr/docker-compose.yml -O && docker-compose -f ./docker-compose.yml up`?