We're missing the ability to copy a file or directory into a container using docker-compose. I would find this really useful.
Please check the many +1s in the prematurely closed https://github.com/docker/compose/issues/2105
What's the use case? Most of the suggested usages I've seen were antipatterns.
You can see some of the many use cases by clicking the link provided. As you can see, many of the subscribers consider it a really useful feature rather than an "antipattern".
Oops, now I see "something" happened to issue #2105, as there are no comments at all anymore...
Perhaps I provided the wrong link...
So, I find it really useful to copy configuration/initialization files into a container: for example, *.sql files for db containers, html/js/css content for apache/nginx containers, or even a jar file for a java container. This would make it available/runnable "globally", not only on the machine where it was composed, as is the case with mounted volume(s). Mostly this would be some combination of host-local and container-contained files. In fact, almost any container is useless without some configuration or initialization.
This is the correct link: https://github.com/docker/compose/issues/1664
+1
This would make it available/runnable "globally", not only on the machine where it was composed, as is the case with mounted volume(s)
The problem with this is that it is incredibly short-sighted (hence the term "anti-pattern"), as it will force you to repeat the operation every time the containers are recreated. Not to mention the fact that it scales very poorly (what if you have 10 containers? 20? 100?)
The actual solution to your issue is to include those necessary files in your build (Dockerfile) and rebuild when an update is needed.
Of course, if all the "shared" content is baked into the image, scaling to 10, 20, 100 containers becomes much easier: all you need is to pull the image from the repository and mount (yes, in this case mount) only the node-specific config. Even better, you don't need to run docker-compose on each node.
Sure, we can use docker-compose in combination with build: and a Dockerfile, but things become a little more complex, and the yaml configuration in docker-compose is much more "elegant" :o)
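For reference, a minimal sketch of that build:-plus-Dockerfile combination (paths and image are illustrative, not from the issue):

```
services:
  web:
    build: ./web
    # where ./web/Dockerfile would contain, e.g.:
    #   FROM nginx:alpine
    #   COPY nginx.conf /etc/nginx/nginx.conf
    #   COPY html/ /usr/share/nginx/html/
```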
I'm running into an issue where copy would come in handy (at least as an override). I mostly develop on Mac, so I almost never see an issue with commands running as root in the container and exporting to a mounted volume. However, recently using the same workflow on CentOS has caused some major pain, because files owned by the root user are being added to the host via the mounted volume. In these cases I would like to just be able to copy the host files to the container instead of mounting them.
The related issue: #1532
I think in my case I can get away with using COPY in the Dockerfile and having multiple docker-compose files, one of which uses a volume mount; see the sketch below.
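A sketch of that multi-file approach (service name and paths hypothetical); docker-compose automatically layers docker-compose.override.yml over the base file:

```
# docker-compose.yml -- image content comes from COPY in the Dockerfile
services:
  app:
    build: .
---
# docker-compose.override.yml -- picked up automatically for local dev;
# bind-mounts the live sources over the baked-in copy
services:
  app:
    volumes:
      - ./src:/app/src
```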
Use-case:
I want to use a directory from a read-only file system inside the container. The application creates new files in that directory, but because the filesystem is read only, this causes errors.
I can't use a rw volume, because the filesystem is read only.
I can't use a ro volume, because the effect would be the same.
It would be awesome to allow writes that persist only while the container runs. I can make a wrapper image (https://stackoverflow.com/questions/36362233/can-a-dockerfile-extend-another-one) just to COPY the files, but doing this in compose, similar to volumes, would be better.
Use case: starting multiple docker containers simultaneously from .gitlab-ci.yml which need to write into the git repository's directory.
If the process inside a container fails, or if the CI job is cancelled before the container has cleaned up after itself, the remaining files can't be deleted by gitlab-runner due to lack of permissions. I could copy the files within the container out of the volume into another directory, but that would be an antipattern, wouldn't it?
Is this different from volumes: - ./folder_on_host/:/folder_in_container/ ?
I am able to copy files from host to container (equivalent of COPY) this way in my compose file
@harpratap you are right, but the drawback is that /folder_in_container must not exist or must be empty, or else it will be overwritten. If you have a bash script as your entry point, you could circumvent this by symlinking your files into the originally intended directory after you create a volume at /some_empty_location
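A minimal sketch of that entrypoint trick (all paths hypothetical):

```
#!/bin/sh
# entrypoint.sh: the host files are mounted at /seed (an otherwise-empty
# location); link them into the directory the application actually reads,
# then hand off to the container's real command.
ln -sfn /seed/app.conf /etc/myapp/app.conf
exec "$@"
```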
+1 for having a COPY functionality. Our use case is for rapidly standing up local development environments and copying in configs for the dev settings.
+1 for COPY. This would really be a helpful feature.
Use case: in swarm mode, I have a service using the mysql image. I need to copy my initialization scripts into /docker-entrypoint-initdb.d/ so that MySQL can execute them.
Though it is possible to create an image on top of mysql and copy the files into it, or to connect to the mysql task in swarm and run the scripts manually, it's kinda unnecessary in my opinion.
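In swarm mode specifically, the configs mechanism can already place such a file. A sketch, assuming an init.sql next to the stack file (this works with docker stack deploy, not plain docker-compose):

```
version: "3.3"
services:
  mysql:
    image: mysql:5.7
    configs:
      - source: init_sql
        target: /docker-entrypoint-initdb.d/init.sql

configs:
  init_sql:
    file: ./init.sql
```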
+1 for COPY/ADD,
Use Case:
Fluentd requires the configuration files to be moved into the container at run time. These config files are created at run time by our Jenkins engine, and without a COPY/ADD in docker-compose it simply fails.
+1 for COPY
Suppose one has a shared config file across a number of docker machines, with their Dockerfiles in respective subdirectories under the docker-compose directory. How do you copy that shared config into each image? I can't symbolically link to ../ from the Dockerfile context without getting COPY failed: Forbidden path outside the build context
In this instance when running docker-compose build, I'd like to copy the config files from the docker-compose context prior to running the docker build steps.
I'm happy if someone can suggest a clean workaround of course.
This would be a nice feature to have!
Please don't comment with just +1 - it's a waste of everyone's time. If you have additional information to provide, please do so; otherwise, just add a thumbs up to the original issue.
What is the use of dogmatically insisting it is an antipattern, just because in _some_ cases it could _eventually_ cause problems? This definitely has good uses: you could add one line to an existing file, instead of having to create an extra folder and file and then move the file to be added there. This pointless, bureaucratic creation of tiny files is the real antipattern, preventing users from creating simple and easy-to-maintain docker-compose files.
If users want to do harmful things with Docker, they will find a way no matter what you do. Refusing to add legitimate features just because someone may misuse them one day is foolish.
I think what you are doing is actually the right way to go about it, in this instance.
The issue that was raised here was more like: suppose the mongo.conf file is shared between three docker images which are orchestrated by one docker-compose file. How do you ensure that it is the same in each docker build subdirectory?
If you use symbolic links, for instance, docker complains that the file is external to the build environment, i.e. the docker build lacks a sense of reproducibility, as modifications outside that directory could alter the build.
So the only way to orchestrate this is with a file copy, which one currently needs to do with a Makefile or shell script prior to running docker-compose. So it seemed like an idea to discuss whether this was a feature that docker-compose could provide, as surely it's a common use case.
The issue you are raising seems to be more about runtime (launch-time) injection of a local file modification.
I think you're actually fine in what you're doing; what you've said above is just how it's done. A docker image can always be constructed to accept environment variables to answer questions such as "where is the config directory?", and that config directory can be "injected" using a volume at runtime - but that is up to the design of the docker image, leveraging environment variables and volume mappings (which are the features docker supports for runtime config modification).
I hope I haven't misinterpreted your comment, and that my reply is helpful.
@jpz - I somehow deleted my original comment - yikes - sorry! Thank you - yes, that's helpful.
My original comment was along the lines of:
My use case is that I want to declare a service using mongo without having to create my own custom image just to copy over a configuration file like /etc/mongod.conf.
UPDATE: I used volumes. A year or two ago I thought I had tried this with a bad experience... but it seems fine.
+1 for COPY
I created a quick gist for this. It assumes the docker-compose service is named phpfpm, but you can change it to whatever you wish. Feel free to modify.
https://gist.github.com/markoshust/15efb29aa5eebf8adae402af18b2e674
Hello, I would like to know what the progress on this issue is. I'm using Windows 10 Home with docker-toolbox, and it mostly errors when I try to bind mount a file as a volume into a container. It would be nice to have COPY capabilities in docker-compose.
COPY/ADD would definitely be a welcome feature.
A use case: running a Graylog instance in Docker for dev purposes. In order to launch an input automatically, a JSON spec has to be put in /usr/share/graylog/data/contentpacks
With the COPY/ADD feature, it would be as easy as a single line in the YML.
To get it working now (on Oct 16, 2018), you need to mount a volume to that point AND copy the original content of that folder into the persistent volume. Which is quite inconvenient.
I would benefit from this: I have a set of tools that import a database seed into a container, and then I run the devtools database importer based on that file. I don't want to have to do:
docker cp "${seed_file}" $(docker-compose ps -q devtools):/tmp/seed_file
to be able to import my seed. And no, I will not compile my dev images with a fixed schema; that goes against web development patterns at the very least. Containers should be for app portability, not data.
It would make way more sense to do:
docker-compose cp "${seed_file}" devtools:/tmp/seed_file
All in all, it is just a shorthand that basically does the same thing, but it looks better to leverage docker-compose everywhere than to mix tools...
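Until such a command exists, a shell wrapper can approximate the proposed syntax; a sketch (the function name is made up):

```
# dc-cp SRC SERVICE:DEST -- approximates the proposed `docker-compose cp`
# by resolving the service's container ID via `docker-compose ps -q`.
dc-cp() {
  src="$1"
  service="${2%%:*}"
  dest="${2#*:}"
  docker cp "$src" "$(docker-compose ps -q "$service"):$dest"
}

# usage: dc-cp "${seed_file}" devtools:/tmp/seed_file
```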
1) this seems to be a duplicate of #3593
2) I agree with @shin- that the elaborated use-cases are following an anti-pattern
3) but wrapping up Docker's cp command makes sense, imo
@funkyfuture If you think that these use-cases follow an antipattern, then please suggest a solution that does not.
What about a k8s-like "data" section?
For example:
```
services:
  service1:
    image: image.name
    data:
      filename1.ini: |
        [foo]
        var1=val1
        [bar]
        var2=val2
      filename2.yml: |
        foo:
          bar: val1
```
or perhaps the same but for the volumes: section
```
volumes:
  service_config:
    data:
      filename1.ini: |
        [foo]
        var1=val1
        [bar]
        var2=val2

services:
  service1:
    image: image.name
    volumes:
      - service_config:/service/config
```
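For comparison, the Kubernetes prior art this mimics is the ConfigMap data section, whose keys are later mounted into pods as files (names here are illustrative):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: service-config
data:
  filename1.ini: |
    [foo]
    var1=val1
    [bar]
    var2=val2
```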
@shin-
The problem with this is that it is incredibly short-sighted (hence the term "anti-pattern"), as it will force you to repeat the operation every time the containers are recreated. Not to mention the fact that it scales very poorly (what if you have 10 containers? 20? 100?)
The actual problem here is that some people are too quick to diss requested features because it conflicts with their limited vision of actual use case scenarios.
Here I am looking for a way to copy my configuration file into a container which I just got from Docker Hub. I don't have access to the original Dockerfile, and it would be a great convenience to have this feature (instead of trying to build another layer on top, which would work but is inconvenient; I don't want to rebuild when I change something).
Use case:
I run a database in an integration test environment and want the data to be reset on each iteration, when the containers are started. Embedding the data in a custom image would work, but mounting a volume is cumbersome - because the data on the host must be reset.
We maintain the data independently and it would be most convenient to just use the standard database image - copying data to it before it starts running. Currently this does not seem to be possible with docker-compose.
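One workaround under current docker-compose, sketched here with hypothetical names: create the containers without starting them, inject the seed with plain docker cp, then start everything.

```
# create the containers but don't start them yet
docker-compose up --no-start
# inject the fresh dump into the created (stopped) database container;
# the service name "db" and the paths are hypothetical
docker cp ./seed/dump.sql "$(docker-compose ps -q db)":/docker-entrypoint-initdb.d/dump.sql
# then start everything
docker-compose start
```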
I have a use case in mind. I want to base my image on an off-the-shelf image, such as a generic Apache server, and copy my html in during image creation. That way I can update my base image whenever I want, and the copy directive will ensure my content is included in the new image.
BTW I currently use Dockerfiles and a build directive in my docker-compose.yaml to do this. It would be nice if I didn't need the Dockerfiles.
@tvedtorama -
Use case:
I run a database in an integration test environment and want the data to be reset on each iteration, when the containers are started. Embedding the data in a custom image would work, but mounting a volume is cumbersome - because the data on the host must be reset.
We maintain the data independently and it would be most convenient to just use the standard database image - copying data to it before it starts running. Currently this does not seem to be possible with docker-compose.
This issue discusses the desire to copy files at image build time, not at runtime. I would suggest raising a separate ticket to discuss the merits of that. It may confuse this discussion to digress into runtime file injection (which is how I interpret what you are talking about).
@c0ze -
What about k8s-like "data section" ?
For example:...
I'm not fully up to speed with what that config does, but yes, it looks like it would be a solution. Fundamentally, when you have secrets (e.g. the login username/pwd/port for the database), how do I inject them into my docker images - clients and servers - without writing a load of code?
Something like the kubernetes data section could work, as it would be a single source of truth. Otherwise one may find the same secrets maintained multiple times across multiple docker images.
There's also prior art there, which helps to move the conversation along to whether this is actually a good idea worth adopting or not.
For me, this all started with wanting to share an invariant config file across containers, and realising there was no way to do it without scripting externally to docker-compose, and writing the config from a single-source-of-truth into each of the Docker folders beneath the docker-compose folder. Of course I get the immutability argument for Docker (e.g. the Dockerfile directory fully and completely describes how to build the image) so asking for automation to copy things into that directory looks like it slightly flies in the face of those principles.
I guess the discussion is how intrusive is docker-compose allowed to be? Is this a common enough use-case to justify such automation? If it is not, then we appear to burden the environment variable passing mechanisms with the responsibilities for injecting secrets from outside in from a single source of truth, late (e.g. at runtime.) I hope my points are coherent enough here.
This is not of great import to me, but I think the use-case is worth discussing.
It would be extremely useful to me. At work, the antivirus software blocks the ability for Windows 10 to share volumes with containers. It is a huge org, and it's a non-starter to get them to change a policy set on another continent.
Hello, my use case: I'm using an open source Prometheus docker-compose setup (the repo is maintained by other people). It has configs that are mounted into containers. ISSUE: I can't do docker-compose up on a remote machine (like an aws docker-machine, or inside a CI/CD runner) because it can't mount the configs properly. In this case I'd like to copy/embed them. For RW data there are volumes; for RO - ?
Having RO volumes with the possibility of setting initial data would be the other option.
Current solution: connect to the docker host via ssh, clone/update the repo, and run docker-compose up. This works for the manual case, but it's a pain for automation :(
+1
Use-case: I have a development docker machine that runs a database, and whenever I set it up I need a recent dump of the database to be installed. Effectively that means:
Now the big problem is that step 2 will always be different for each developer, because there are many different dump versions of that database. The easiest would be if each developer had their own compose file with their specific dump location/version, and docker then assembled the image with that specific file location while composing; that could then also be changed on the fly when a different version is required.
My use case is simple. I don't want volumes nor do I want to roll my own image. I just want to put a simple defensive copy of a config file in a container after it's created and before it's started.
Is this still an issue?
I have a django application with a very long settings file. For me it would be way easier to create a docker image and copy a single configuration file into each container.
Passing all the settings as ENV is, for me, the antipattern: it takes a lot of code, is difficult to maintain, and could be solved with a single copy command.
I opened #6643 and would love feedback on how it would be considered an anti-pattern. Especially, in an environment where numerous configuration files could have a need to be added/modified on-the-fly.
@shin-
The problem with this is that it is incredibly short-sighted (hence the term "anti-pattern"), as it will force you to repeat the operation every time the containers are recreated. Not to mention the fact that it scales very poorly (what if you have 10 containers? 20? 100?)
How does docker-compose exec work with multiple containers?
--index=index index of the container if there are multiple instances of a service [default: 1]
Shouldn't we try to get the same behavior with cp?
IMHO exec is just as ephemeral as cp would be. But I consider both to be "development" commands anyway; development environments should be ephemeral, shouldn't they?
I hadn't seen the comment saying that a lot of the devs here are being short-sighted by requesting this feature as a quick fix. I think this is a little harsh and condescending. If there is one thing I've learned from my years of development, it is the following:
It's not what your software does, it's what the user does with it that counts
Obviously, I understand that you have a role to play in preventing things from going crazy, but it's not because someone uses a tool incorrectly, relative to your vision, that everyone will start to do it that way and all hell will break loose.
All of the special cases I've seen here are very appropriate most of the time. And most of these special cases shouldn't and wouldn't happen on a production system. They are, like the case I explained a while ago, about customizing a development environment and running special files in a container that cannot use a volume mapping. Most examples say clearly that they don't want to bake in schemas, data, or config files and cannot use volume mapping, so I don't see how this justifies the term "short-sighted".
I think you should carefully weigh your words when saying things like that...
Let's bring it back. Honest technical question here. With docker stack we have a "configs" option. That's a native docker feature but it's for services, not containers. What's the viability of getting something like that working at the container level rather than the service level? How does docker stack implement config provisioning? Can that implementation be replicated for docker-compose specifically?
At least half the use cases mentioned here are about configs, so many people would be satisfied if just that itch were scratched.
Another simple use case is things like Google's domain validation. If you use the wordpress image, you can't add the file that Google will check for; you need to make a whole new image to do it.
Also, these comments saying things are "anti-pattern" barely make sense; it reeks of elitism.
EDIT: yikes, read more; thank god he isn't the maintainer anymore
So you're telling me that if I want to copy a tiny config file into a prebuilt image (say, nginx or mariadb), I now need to manage my own image build setup and duplicate the disk space used (original image and configured image)?
This ought to be a feature.
duplicate the disk space used
you're not, when you're using Docker: image layers are shared between the original image and one built on top of it.
I like how you nitpick the one most minor thing out of everything he said. This should be a feature. This issue will just grow and grow as docker grows, because it is a common use case, and people will expect it to exist out of common sense - something the maintainers here, ex and current, seem to lack.
I like how you nitpick one thing out of what he said which is the most minor thing in all of it.
an invalid argument should be noted as such.
I think the thing here is that the "anti-pattern" argument can be valid given a certain business strategy (see @washtubs' point). We may not agree with this strategy, but that doesn't justify personal attacks. In the end, it's @shin-'s past efforts with docker-py that would allow you to implement an alternative to docker-compose.
What "anti-pattern" argument? There is no argument made. It's just a "no, because anti-pattern" without any logic behind it, just saying it without anything backing it up. It's like the people saying it thought of the worst case scenario on their head, decided that scenario was an anti-pattern and then dismissed everything as such without even writing about their so called anti-pattern scenario.
It's just elitism. Many comments here have been over how ridiculous the reasoning for not adding this is and they are all ignored.
Common sense and logic doesn't care about your feelings or elitism. Or your made up anti-patterns.
Yeah, @robclancy, please keep it civil FFS. I want this feature, but if all you're gonna do is talk shit at the maintainers, go vent on reddit please. @funkyfuture 's earlier correction is completely warranted.
in the end it's @shin-'s past efforts with docker-py that would allow you to implement an alternative to docker-compose.
I obviously don't want a fork of docker-compose, if that's what you're suggesting, especially for such a minute enhancement. That's the only other way this is going to happen, and that would be bad for the community.
If someone submitted a PR, would it actually be considered? Or is this something the docker-compose team has just firmly decided they won't accept? Would something along the lines of adding a config section that's compatible with docker stack configs be something you will consider?
This has gone off the rails... 'anti-pattern' without explanation turns 'anti-pattern' into a definition so broad that it is impossible to argue against. There is also no clear indication of which side the 'anti-pattern' sits on: docker or docker-compose.
A clear definition of the anti-pattern responses would be fantastic and much appreciated.
The community is going to continue to grow, so an established set of definitions needs to exist.
I want to use it to copy artifacts generated by a jenkins pipeline running on a docker-compose stack. And since the container name can be random, I can't use docker cp.
Today I must use
docker cp $(docker-compose -f docker-compose.development.ci.yml ps -q test):/app/tests_output ./tests_output
Is this different from volumes: - ./folder_on_host/:/folder_in_container/ ?
I am able to copy files from host to container (equivalent of COPY) this way in my compose file
I am trying to do the same. I have a folder with a csv file and I would like to supply it to logstash.
How can I do that, and which folder in the container should I use?
At the moment I have something like this:
./path/to/storage:/usr/share/logstash/data:ro
Any suggestions would be helpful.
@shin- This ticket is now 1.5 years old. When 160 people tell you you're wrong - you probably are.
What else do you need to convince you that this should be implemented?
@isapir, companies that don't listen to their customers tend to go out of business rather soon. So I guess we should see some production-ready docker alternatives in the near future.
@shin- This ticket is now 1.5 years old. When 160 people tell you you're wrong - you probably are.
😆 🤣 💯 🥇 😲 😮
I'm not a maintainer anymore. Please stop @-ing me on things I no longer have any control over.
@sfuerte There is a little project named Kubernetes that has already replaced Docker-Compose. I wonder if that would have happened had the attitude towards user feedback been more positive.
We need a buzzword to counter their buzzwords. It's all they can deal with.
This feature would totally be pro-pattern. That should do it. The difference is that even though I made that stupid term up, there are many comments in this issue showing the advantages of this feature in ways that are clearly common use cases. And there isn't a single demonstrated instance of an anti-pattern.
@shin- you get tagged in this because you started this bullshit antipattern crap with no basis in reality. So stop crying about something that you caused.
k have fun
My case is:
I think the easiest way to solve this is to have 1 compose file for dev and 1 compose file for production.
The problem here is that I can specify "volumes" in the compose file, but I can't specify "copy" in the compose file.
Is anybody in the same situation as me? Am I missing something?
@shin- is this an anti-pattern? how would you go about solving this issue?
@hems, in a perfect world, you want your application to be deployed as a standalone docker image. So if you're writing an application, the source code that you intend to deploy should probably be part of the Dockerfile build, so the image contains your entire application. So in the Dockerfile, if you wanted your source in /var/www, you would put
COPY my-app-src /var/www
Your source isn't environment specific, so it just belongs in the docker image. Easy.
Most of us want to include an environment specific config file into the containers that makes an existing image work well with a particular docker-compose configuration. And we want to be able to do this without making a volume for a small file, or rolling a new image.
Can someone from the docker-compose team please just take a serious, impartial look at this and draw a final verdict (hopefully one that ignores all the immature people)? This issue's been open forever. The result is important, but personally I'm tired of getting notifications.
COPY my-app-src /var/www
That's what I'm saying: in development I want to use my docker-compose file to mount VOLUMES into the images, and during the production build I want to COPY files into the images. Hence why I think we should be able to both COPY and mount VOLUMES using the docker-compose file, so I can have 1 compose file for dev and 1 for the production build.
I work on the team that maintains Compose and am happy to jump into this discussion. To start I'll outline how we see the responsibilities of Dockerfiles and Compose files.
Dockerfiles are the recipe for building images and should add all the binaries/other files you need to make your service work. There are a couple of exceptions to this: secrets (i.e.: credentials), configs (i.e.: configuration files), and application state data (e.g.: your database data). Note that secrets and configs are read only.
Compose files are used to describe how a set of services are deployed and interact. The Compose format is used not only for a single engine (i.e.: docker-compose) but also for orchestrated environments like Swarm and Kubernetes. The goal of the Compose format is to make it easy to write an application and test it locally, then deploy it to an orchestrated environment with little or no changes. This goal limits what we can change in the format because of fundamental differences like how each environment handles volumes and data storage.
Cutting up the responsibilities of the Dockerfile and Compose file like this gives us a good separation of concerns: What's in each container image (Dockerfile), how the services are deployed and interact (Compose file).
I'll now run through each of the exceptions to what you store in an image. For secrets, you do not want these baked into images as they could be stolen and because they may change over time. Docker Secrets are used to solve for this. These work slightly differently depending on which environment you deploy to, but essentially the idea is that you can store credentials in a file that will be mounted read only to a tmpfs directory in the container at runtime. Note that this directory will always be /run/secrets/ and the file will be the name of the secret. Secrets are supported on Swarm, engine only (docker-compose), and Kubernetes.
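As an illustration, a file-based secret in the v3 format looks roughly like this (a sketch; the official postgres image does honor *_FILE variables, while the secret file name here is assumed):

```
version: "3.7"
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./db_password.txt
```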
For configuration files or bootstrapping data, there is Docker Configs. These work similarly to secrets but can be mounted anywhere. These are supported by Swarm and Kubernetes, but not by docker-compose. I believe that we should add support for these and it would help with some of the use cases listed in this issue.
Finally there is application state data which needs to be stored externally. I won't dive into this as it's not related to this issue.
With that framing, I can answer a couple of questions:

- Will we add a copy field to the Compose format? No, I don't think we will, as it doesn't make sense in orchestrated environments.
- Will we add configs support to docker-compose? Yes, I think that we should.
- Will we add a docker-compose cp? Maybe, I'm not sure about this yet. It would essentially be an alias for a docker container cp.

Given that, there are a couple of tools that can be used here:
I _think_ those tools solve all the problems raised in this thread.
This thread is quite heated. Please remember that there is a real live person behind each GitHub handle and that they're probably trying to do their best (even if their frustration is showing). We're all passionate about Compose and want the project to continue thriving.
Will we add a docker-compose cp? Maybe, I'm not sure about this yet.

I'd find that a helpful convenience, like docker-compose exec.
@chris-crone Amazing response, thank you!
I know I don't speak for everyone, but I get the impression that configs support satisfies the vast majority of the interest in here. Shall an issue be opened for this?
And thanks for offering some alternative approaches. I didn't know about multi-stage builds until now.
I get the impression that configs support satisfies the vast majority of the interest in here.

I doubt this, as I suspect that the majority here is not using Swarm, and afaik the config functionality requires that.
Yes, currently Swarm is required, but from @chris-crone's comment ...
These are supported by Swarm, and Kubernetes, but not by docker-compose. I believe that we should add support for these and it would help with some of the use cases listed in this issue.
... I'm reading that this can be implemented in docker-compose (sans Swarm)
The goal of the Compose format is to make it easy to write an application and test it locally, then deploy it to an orchestrated environment with little or no changes.
In complex apps we may have quite a few configuration files that need tweaking on the fly. Right now the most efficient (time- and cost-wise) way of doing that is to fill up the volumes key (because no sane person is going to create a different image while testing multiple configurations... unless they have a boss that just loves spending money on dev hours).
Swarm and configs are not really going to answer several of the use cases listed. "Separation of concerns" is also not applicable, as compose already does what you can do in docker, but simplifies it. A wrapper isn't separation... we're just asking you to extend it a bit more...
https://github.com/docker/compose/issues/6643
Get hacky with it... extend volume functionality so that every file under the new key is dynamically linked to a singular volume and mapped to its respective internal path...
I think there are two scenarios here that are perfectly valid. One is about development environments: people create flexible environments with source code mounted into their images. The source code evolves as development occurs, and you cannot rebuild the image constantly or you just waste enormous amounts of time. That's my scenario exactly, and I can see that it applies to a lot of other people.
The second one is about production images, where you bake your source code (in case you are working with non-compiled scripts) into your image (and then again, I wasn't; I was still mounting it on my side), or you just compile your application and copy it into the final image. At that point, the application becomes extremely portable.
I think everyone understands that! The question is, did the docker-compose devs take the time to read the cases and understand the needs? There are no anti-patterns here in theory, just devs that have a need and would like to be respected.
We love docker, docker-compose and the whole ecosystem; we use it because we love it, and because we use it, you have jobs (at least some of you are paid for it, I hope).
Something I learned over the last few years that I like to bring up here and there is the following, and it applies very well to this scenario:
It's not what your software does that matters, it's what your users do with it that matters
Cheers and happy continuity!
I want to spin up a docker Tomcat environment to run my app from a .war which is not named ROOT.war. To do this, I have to copy it into Tomcat's webapps dir and rename it to ROOT so that it will run on the currently bound ports 8005/9. Anything else fails due to port rebinding issues, with errors about 'illegal access'. These are ephemeral test builds, so it can't go in the Dockerfile. This is why I want it in docker-compose.
@washtubs
I know I don't speak for everyone, but I get the impression that configs support satisfies the vast majority of the interest in here. Shall an issue be opened for this?
If there isn't an issue already for this please create one and link it here. I've added something in our private team tracker.
@washtubs @funkyfuture
... I'm reading that this can be implemented in docker-compose (sans Swarm)
We already have rudimentary secret support and configs could be implemented in a similar way.
Definitely a missing feature. The only "antipattern" here is having to work around the fact that this is hard to do by other means, like changing the entry point script of the dockerfile, or bind mounting files into the container.
What you want is a container that is built once (preferably officially) and configurable for the use case at the point of use, i.e. docker-compose.
As far as I can see, what the docker folks fail to realise is that the "Dockerfile" is the biggest antipattern in the whole docker concept, particularly since the whole thing is utterly unreadable and unmaintainable. It really makes me laugh when anyone connected with docker throws out the word "antipattern" like they would know!
The Dockerfile actually prevents the normal debugging and tidying up that would be available if you used a build script, or something actually designed for building stuff, like... a package manager, or make.
For myself, I use the same Dockerfile for all use-cases (making it a pattern!); suggesting that I go and change my Dockerfile for every different usage really is the anti-pattern.
And no, "configs support" doesn't cut it at all, imposing structure where it just isn't needed.
The fundamental problem is that if you bind mount to, say, /etc/nginx, it has to be rw to allow scripts to run that adjust the configuration (e.g. envsubst). And this then makes changes to the input configuration (which needs to remain immutable)... You don't get much more antipattern than a container writing all over its own configuration, so an option for copying files into the container at (re-)creation time is the necessary solution.
In other words, it would be a bind-mounted directory that is rw in the container, but ro on the host. Seriously, would it kill you to allow this?
Something like this:
```
svc:
  copy:
    - './source/filename:/path/filename:ro:www-data'
    - './source/dir:/path/dir:ro:www-data'

# or

svc:
  copy:
    - source: './source/file'
      destination: '/destination'
      permission: ro
      owner: owner
      group: group
    - source: './source/directory'
      destination: '/destination'
      permission: ro
      owner: owner
      group: group
```
Use case: We have an unorchestrated container solution where we keep our application's docker-compose files, incl. SSL certs etc., inside a Git repository and pull it onto a VM. Then we spin up the service and want to move e.g. the SSL certs and config files into the container's volume. This is currently not possible without an accompanying Dockerfile featuring a COPY command. We don't want to mess around with the files inside the cloned git repo: if the application altered the files, we would have to clean up the repo every time.
@MartinMajewski then you can mount the directory with certificates as a volume and point to it in your application config.
Use case (and how-to question at once):
I have the postgres image, with one single environment variable to be set at start: POSTGRES_PASSWORD. I want to set it via a Docker Secret. All I need to do is put in my own entrypoint.sh that exports the attached Secret as an env var of the running container. I need to add this entrypoint into my container at launch somehow. Without a two-line Dockerfile, I cannot. A copy of one single file cannot be done.
PS postgres is just an example. Assume it doesn't support _FILE env vars.
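The two-line wrapper being described would look something like this (a sketch; the secret name is assumed, and docker-entrypoint.sh is the official postgres image's stock entrypoint):

```
#!/bin/sh
# read the mounted secret, export it as the env var the image expects,
# then hand off to the image's original entrypoint
export POSTGRES_PASSWORD="$(cat /run/secrets/postgres_password)"
exec docker-entrypoint.sh "$@"
```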
Internal tracking issue https://docker.atlassian.net/browse/COMPOSE-89
Use case: Karaf
Using a karaf base image that I do not want to rebuild every time I build my project, I want to be able to deploy my app quickly and rebuild the container for every build. However, I need to copy a _features.xml_ and _jar_ into the deploy directory when starting up the container.
My solution until now has been to use the karaf image as a base image in yet another Dockerfile (relying on overlayfs, which runs out of overlays eventually, forcing a manual deletion of the image) and avast/gradle-docker-compose-plugin. While the init commands can surely be passed as an environment variable, the contents of the features.xml cannot: it must be stored as a file in a specific location in the container. Right now, I can only use a volume bind mount to do this. But how do I get stuff into that volume on a remote machine? I need yet more logic in my build script (e.g. org.hidetake.groovy.ssh, which also complicates the build script with secret password/key logic). If a docker-compose cp were available, I could just add the necessary copy command to the docker-compose.yml. avast/gradle-docker-compose-plugin would handle building the container and copying the files from my build output directly into the container, without any extra remote-filesystem access logic.
This Dockerfile is added to the build portion of my docker-compose.yml. If anything, this is the antipattern, because it just adds overlays to the upstream docker image with each build (until I am forced to manually delete the image, which makes builds much slower).
```
FROM myregistry:443/docker/image/karaf-el7:latest
COPY karafinitcommands /usr/local/karaf/etc/
COPY features.xml \
     *.jar \
     /usr/local/karaf/deploy/
```
I find it frustrating that docker cp works fine for runtime copying, but docker-compose has no equivalent mechanism.
I thought the idea was to bind mount a local directory to /usr/local/karaf/deploy and drop your files in there. I would not expect to have to rebuild the image or use a Dockerfile to achieve this.
I thought the idea was to bind mount a local directory to /usr/local/karaf/deploy and drop your files in there. I would not expect to have to rebuild the image or use a Dockerfile to achieve this.
It is certainly achievable that way. Reread and notice that this is purely a convenience issue: the container gets rebuilt by the gradle build, and the next logical step is: how do I move the new build files into the "local directory" mounted at /usr/local/karaf/deploy? In my case, the "local directory" is more accurately a "host directory" where the host is a remote host. So I have to add rsync or something else to my build script just to get the files there, make sure old ones are replaced, and extra ones are removed. It would be unnecessary if docker-compose cp were available: I could utilize my existing docker-client-to-docker-daemon connection, which I have set up over port forwarding.
Docker volumes can be removed with each build. Bind mount volumes cannot: they are repopulated only if they are empty (a persistence protection mechanism). Of course, emptying a bind mount on a remote machine requires certain permissions and access logic that could all be avoided with a docker-compose cp.
Again, a copy into a runtime environment can be achieved with docker cp. That is the frustrating part.
Ah, ok, I'm too used to my own setup. I use http://github.com/keithy/groan, a bash script that self-deploys the bits and pieces to remote servers; then we invoke docker.
Use case: google cloud build and building artifacts
Artifact needed: web client (auto-generated) react graphql bindings. You need the server running to create the files needed for client compilation. The client image has the tools to create the bindings, given a server address. So you start the server image in the background, and now need to run the client container pointing to the server. Now how do you get the generated files out of the container and into the "workspace" host directory? Mounting directories is not allowed, since you're already in a mounted directory in a docker container. Being able to docker-compose cp would alleviate the extra painful step of getting the container id.
Relying on $(docker-compose ps -q SERVICE) to target the right container makes it possible to use the plain docker cli for such container-centric operations. Introducing a new command would for sure make it simpler for the few use-cases that ask for it, but it is not required. To avoid more code duplication between compose and the docker CLI, I think this issue should be closed.
There is an open issue where the build cache between compose and plain docker differs, due to the version of the docker daemon compose is using, meaning that you need to use pure compose to avoid breaking caches in CI environments (https://github.com/docker/compose/issues/883). So until those issues are resolved, mixing plain docker commands with compose commands breaks caches. The compose config also specifies all kinds of baked-in config, alleviating the need to manually specify duplicate configuration with plain docker commands.
Relying on $(docker-compose ps -q SERVICE) to target the right container makes it possible to use the plain docker cli for such container-centric operations. Introducing a new command would for sure make it simpler for the few use-cases that ask for it, but it is not required. To avoid more code duplication between compose and the docker CLI, I think this issue should be closed.
This goes much deeper than the "few use cases mentioned", because those scenarios are fairly common, and the modify, build image, modify again, build image, etc. cycle is a time sink versus being able to handle those things through docker-compose. The argument "you can do it in the docker cli, so just do it there" pretty much nullifies numerous other things that have been added to docker-compose.
This one issue has been open for almost a year, and there are numerous other discussions about it outside of this issue. It most definitely should not be closed unless it's actually resolved.
@dionjwa #883 really needs to be investigated (if still relevant), as docker-compose should be aligned with the docker CLI.
@jadon1979 I'm not trying to block this feature request; I just noticed it was opened more than 1 year ago, and none of the core maintainers has considered it important enough to introduce a new command, nor has a contributor proposed a PR for it.
I'm just saying that, given the feedback on this feature request and the lack of development effort to offer a "better way", the proposed workaround of using a combination of docker-compose and the docker cli, which you can easily alias in your environment to keep it simple to use, is a reasonable one.
Now, if someone opens a PR to offer a new cp command, I'd be happy to review it.
No one contributed because everyone was told that every use case was an anti-pattern. And every few days we have new use cases posted, none of them anti-patterns.
My use case isn't copying things _into_ a container, it's copying them _out_ of the container after it has run. This can be done from the CLI using a clunky workaround that produces arguably degraded functionality. Full details below.
I'm a DevOps engineer, and I rely heavily on containers as an alternative to the dependency hell of bare-metal build agents. When my CI system tests a repo, it starts by building from a Dockerfile within that same repo, and running all the checks (bundle exec rspec, npm test, etc.) _inside the container_. If there are build artifacts created, like documentation or test results, I simply copy them out of the container with docker cp.
For integration tests, we've started to use docker-compose to provide service dependencies (e.g. a database server) to the container running the tests. Unfortunately, the "docker CLI workaround" is less useful in this case for copying files out.
Consider this config: docker-compose-minimal.yml
version: "3"
services:
artifact-generator:
image: busybox
I'm going to create the container, run a command in that container, get the container ID, and try to extract the file using docker cp:
```
$ # Prepare the images and (stopped) containers. In this case there is only one.
$ docker-compose --file docker-compose-minimal.yml up --no-start
Creating network "docker-compose-cp-test_default" with the default driver
Creating docker-compose-cp-test_artifact-generator_1 ... done
$ # Determine the ID of the container we will want to extract the file from
$ docker-compose --file docker-compose-minimal.yml ps -q artifact-generator
050753da4b0a4007d2bd3514a3b56a08235921880a2274dd6fa0ee1ed315ff88
$ # Generate the artifact in the container
$ docker-compose --file docker-compose-minimal.yml run artifact-generator touch hello.txt
$ # Check that container ID again, just to be sure
$ docker-compose --file docker-compose-minimal.yml ps -q artifact-generator
050753da4b0a4007d2bd3514a3b56a08235921880a2274dd6fa0ee1ed315ff88
$ # OK, that looks like the only answer we're going to get. Can we use that to copy files?
$ docker cp $(docker-compose --file docker-compose-minimal.yml ps -q artifact-generator):hello.txt ./hello-artifact.txt
Error: No such container:path: 050753da4b0a4007d2bd3514a3b56a08235921880a2274dd6fa0ee1ed315ff88:hello.txt
$ # Nope. Let's take a look at why this is
$ docker container ls -a
CONTAINER ID   IMAGE     COMMAND             CREATED              STATUS                          PORTS   NAMES
9e2cb5d38ba0   busybox   "touch hello.txt"   About a minute ago   Exited (0) About a minute ago           docker-compose-cp-test_artifact-generator_run_dd548ee686eb
050753da4b0a   busybox   "sh"                2 minutes ago        Created                                 docker-compose-cp-test_artifact-generator_1
```
As you can see, docker-compose ps really has no knowledge of the updated container ID. This is unfortunate. It wouldn't be so bad if there was a way for me to know that run_dd548ee686eb was somehow related to the docker-compose run I executed, but I see no way to achieve that.
There is a clunky workaround for this workaround, which is to add --name to the run command:
```
$ docker-compose --file docker-compose-minimal.yml run --name blarg artifact-generator touch hello.txt
$ docker cp blarg:hello.txt ./hello-artifact.txt
$ ls
docker-compose-minimal.yml  hello-artifact.txt
```
Success! ...kinda
The problem here is that if I have multiple builds running in parallel, I need to go to the trouble of making the --names globally unique. Otherwise, I'll get noisy collisions in the best case and corrupted results (no error, but the wrong file extracted) in the worst case. So this is clunky because I now have to reinvent container ID generation rather than just using the one that Docker already created.
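A sketch of uniquifying the --name from a CI variable (BUILD_TAG is illustrative; $$ is a fallback to the shell's PID):

```
$ name="artifact-generator-${BUILD_TAG:-$$}"
$ docker-compose --file docker-compose-minimal.yml run --name "$name" artifact-generator touch hello.txt
$ docker cp "$name":hello.txt ./hello-artifact.txt
$ docker rm "$name"
```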
At a bare minimum, I'd like some way to know the ID of the container that results from the docker-compose run command.
@ndeloof
Relying on $(docker-compose ps -q SERVICE) to target the right container makes it possible to use the plain docker cli for such container-centric operations.
False, see demonstration in previous comment.
We will have new use cases for years in here. Wait, I mean new anti-patterns...
Who can we mention to reach the maintainers? This issue is pointless until they start to talk to us. The answer might simply be "it cannot be done due to the current software architecture", whatever. But leaving such an issue inert isn't something you would like to see from a highly popular solution like Docker...
Our deployment builds the Docker image with bazel, uploads it to our Docker Registry, then uses Terraform docker_container resources with upload stanzas to copy config files to the container. I need to migrate this deployment process to use docker-compose instead of Terraform. I am surprised that docker-compose provides no function for per-container configuration.
This issue has been open for 2 years. Is this why Kubernetes is outpacing Docker in popularity? Kubernetes provides config and secrets functions. Docker team, please at least add config functionality.
tbf docker-compose isn't exactly comparable to k8s, and isn't recommended for production use. It's meant for development and quick testing. docker swarm is the thing to compare to k8s, and although it is also very simplistic, it does have features like configs and secrets.
If it's meant just for development, then that's even more reason this should work. The crappy "anti-pattern" rules shouldn't even be that important (I say crappy because it's clear from the abundance of normal use cases that this isn't anything resembling an anti-pattern).
Another "anti-pattern":
I use docker-compose for container orchestration during local development, and k8s for production.
Per Docker's own advice, I've implemented the wait-for-it.sh script in order to manage service startup/shutdown order.
As it stands, unless I want to mount a volume in each service for just this one file, this requires a copy of the script in each service's Dockerfile-containing directory.
Instead, I'd like to maintain a single copy of the wait-for-it script in a top-level directory that docker-compose then copies into each container when running locally. Such concerns are otherwise managed in k8s, so I don't want them polluting my services' Dockerfiles.
As Emerson once wrote: "A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines."
Perhaps it's time to listen to your users...
@Phylodome can't you use container health checks and docker-compose depends_on? That's how I ensure healthy container startup dependencies.
My understanding is that wait-for-it.sh is really a hack, since your services themselves should be resilient to dependencies coming and going. Startup is just an individual case of that.
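For reference, a minimal sketch of that approach - the database image and healthcheck command are placeholders, and the depends_on condition syntax requires the 2.x compose file format in classic docker-compose:

version: "2.1"
services:
  db:
    image: postgres:13   # placeholder image
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy   # app starts only once db reports healthy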
@ianfixes Is "your services" meant to refer to the docker-compose services themselves, or "our" services, as in, the services written by us who uses docker-compose? I don't know if you are writing in the role of a "user" or a docker-compose developer.
Is "your services" meant to refer to the docker-compose services themselves, or "our" services, as in, the services written by us who uses docker-compose?
The services you build as a developer should be resilient. This is according to these docs: https://docs.docker.com/compose/startup-order/
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason. However, if you don’t need this level of resilience, you can work around the problem with a wrapper script:
And it goes on to mention various wait-for scripts.
I could do a number of things. But because this is just for local development, and because I have other strategies for handling production service checks in k8s, I would prefer the simplest and least obtrusive local implementation, not generic advice from people who don't know the details of why I'd like to do this (e.g. issues w/ volume-mounting in order to perform UI dev via Webpack's dev server).
In any case, it's just another in the long list of use cases for this would-be-feature that should be left to the user's discretion.
I'm hearing anger directed toward me, and I understand why it would be frustrating to hear unsolicited "advice" for your approach. But I'm not even sure how to apologize; I quoted the text from the URL that you yourself referred to as "Docker's own advice", which says _explicitly_ that the wait-for script is a way to "work around the problem". For what it's worth, I'm sorry anyway.
You're not hearing anger. You're hearing the exasperated tone of someone who, upon googling for what should be a fairly obvious feature, stumbled upon a hundred-comment thread in which a set of maintainers continuously patronized and rejected the community's pleas for an entirely valid feature.
I didn't share my experience here b/c I wanted an apology. I shared it simply to add to the long list of evidence that Docker users would like additional flexibility when using compose
.
Of course, like any tool, that flexibility comes alongside the potential for abuse. But that same potential, if not worse potentials, exist when your users must find workarounds to solve for their specific use cases that could be solved far more simply by just adding this feature.
Additionally, apologetically gaslighting your users is a bad look.
I am neither a maintainer of nor a contributor to this project, and I apologize for any confusion there. It sounds like what little assistance I thought I could offer was unwanted and unhelpful, and I'm sorry for wasting your time.
I want this feature for a Go container which is part of my distributed application. Since the .env file needs to be included in the root of the Go application, I'll need to create a separate .env for it... Whereas, if I had this feature, I could have my top-level .env file and copy that into the Go container when I build. It would mean less stuff I need to keep track of...
My workaround could be to create this file via the Go container's Dockerfile or just make an .env file for that container. But still, anytime I add a new env var, I'll need to update it in, possibly, two places. Good use case here. Or I could just use a shell script to cp the file for me...
+1 for COPY feature
We already achieve this in Kubernetes with sidecars, and there are MANY use cases. This is NOT an anti-pattern; its absence is just one of the things holding docker-compose back.
Maybe I am missing something, but right now when we are building our app for 5 minutes, all that time the build folder is "in flux", and the app will not start due to inconsistency.
I would prefer to _copy_ a build folder into the container, so that when it is time to start the container it takes over from the internal one. That way the app is only offline for a second or so when stopping/starting the container.
How is this an anti-pattern when docker already supports it? It would make sense for docker-compose to follow docker's usability as closely as possible - not doing so is in itself an anti-pattern.
The problem with this is that it is incredibly short-sighted (hence the term "anti-pattern"), as it will force you to repeat the operation every time the containers to be recreated. Not to mention the fact that it scales very poorly (what if you have 10 containers? 20? 100?)
I think that is up to the developer. Simply copying a single local configuration file has insignificant overhead. Don't blame the knife.
P.S. My use case: I want to add a config to an Nginx container in a project without Dockerfiles.
Who even knows anymore.
I needed to set up a new project and looked for new tools; Lando is so much better than this it's crazy. Wish I had used it sooner. It's faster, easier to understand, has better out-of-the-box support, and doesn't have condescending (ex)maintainers/contributors.
@chris-crone regarding your comment...
For configuration files or bootstrapping data, there is Docker Configs. These work similarly to secrets but can be mounted anywhere. These are supported by Swarm, and Kubernetes, but not by docker-compose. I believe that we should add support for these and it would help with some of the use cases listed in this issue.
Is docker-compose interested in implementing config support for parity with swarm configs?
If there is a ticket for this (or if I need to make one that's fine too), I would like to subscribe to that and unsubscribe from this trash fire. Personally I would close this and link to that, but that's just me.
@harpratap you are right, but the drawback is that /folder_in_container must not exist or must be empty or else it will be overwritten. If you have a bash script as your entry point, you could circumvent this by symlinking your files into the originally intended directory after you create a volume at /some_empty_location
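A sketch of one possible reading of that trick, with hypothetical paths (the image ships its files in /opt/defaults, the volume is mounted at /some_empty_location, and the app is pointed there):

#!/bin/sh
# entrypoint.sh: link the image's baked-in files into the volume-backed
# directory on every start, then hand off to the real command
set -e
for f in /opt/defaults/*; do
  ln -sf "$f" /some_empty_location/
done
exec "$@"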
+1 for having a COPY functionality. Our use case is for rapidly standing up local development environments and copying in configs for the dev settings.
Exactly. We don't all scale the same way. My company uses SALT to build the required .conf files for a variety of apps. One build - with the basics - then docker-compose to create the individual instances based on their unique parts: MAC address, IP, port, licenses, modules, etc. It COULD be done from a command line - but it's much easier and less error prone from docker-compose.
I have a use case. We have a test build that requires SSL to be set up. The certs are generated by a service in the docker-compose... I then need to add those certs to the client containers... if I mount, I lose the existing certs, and I can't put them in the docker build because they don't exist yet.
Consequently I have to run docker-compose twice - once to fire up the services that create the certs, and again to build the services and run the tests. Messy.
I've seen a lot of issues here where users have suggested plenty of use cases for a feature, but they're shot down because a maintainer thinks it's an anti-pattern, or that people would not use it, or some other story.
While it might seem like an anti-pattern to one person, I'm sure the 1000 people requesting the feature, who think otherwise, need to be heard as well. If some help is needed developing the feature, I think many people can lend a hand.
My use case: in addition to the configs, I have some libraries (RPMs) that I need installed in 5 of my Rails application containers (Debian). Different Ruby/Rails versions mean I can't use the same base image, so I should be able to store the files in a single location and copy them into a container when building, because I don't want to download 1.5GB of data while building.
@gauravmanchanda
My use case: in addition to the configs, I have some libraries (RPMs) that I need installed in 5 of my Rails application containers (Debian). Different Ruby/Rails versions mean I can't use the same base image, so I should be able to store the files in a single location and copy them into a container when building, because I don't want to download 1.5GB of data while building.
Have you looked at multistage builds for this? I think it would be a more robust solution.
Multistage builds allow you to use the same Dockerfile for multiple images. This lets you factor them and only include the bits that you need in each image.
A good example is the one we use to build Docker Compose. This builds using either Debian or Alpine but allows us to factor common code.
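For illustration, a minimal multistage sketch (the toolchain and paths are made up, not from this thread):

# build stage: compile/bundle once
FROM node:18 AS build
WORKDIR /usr/src/app
COPY . .
RUN npm ci && npm run build   # hypothetical build commands

# runtime stage: ships only the artifacts, none of the toolchain
FROM nginx:alpine
COPY --from=build /usr/src/app/dist /usr/share/nginx/html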
In our setup, we ramp up about a dozen simulators with docker-compose. The simulators are otherwise the same, but one init file is different for each target and this file is consumed on startup (gets deleted when server is up and running). Are you really suggesting that we should create about a dozen almost identical images just because one file differs? That does not make sense IMO.
With docker, the --copy-service flag can be used to achieve this. Is there any alternatives we can use with docker-compose?
Hi @megaeater,
we ramp up about a dozen simulators with docker-compose. The simulators are otherwise the same, but one init file is different for each target and this file is consumed on startup (gets deleted when server is up and running).
This is an interesting use case; some questions: Are these simulators (or parts of them) ever run in production (i.e.: not on the developer's machine or a CI)? If the code is open (or a similar system is) could you please link me to it so that I can take a look?
It would also be interesting to know why you would want a copy instead of bind mounting or a volume for these files?
Are you really suggesting that we should create about a dozen almost identical images just because one file differs? That does not make sense IMO.
Images are layer based for exactly this reason: all the images will reference the same layers except for the layer that includes the different files.
The issue with things like a copy on container create is that it makes taking the same code and running it in production difficult (i.e.: requiring major logic rewrite) because the pattern will be fragile or impossible in orchestrated environments.
This is not to say that we should never implement something like this in Compose. Rather that when a change means that users will not be able to reuse something that works locally in production, we like to pause and see if there is a more robust way of achieving the same goal.
Thank you for the comment @chris-crone
We are not running docker in production, it is just for development purposes. The problem with using volume (if I understand it correctly) is that the simulator (3rd party) has this startup script which deletes the file on startup. Script execution stops if the deletion fails, so we would need to mount it as rw. And if the file deletion is allowed, we would need to have a mechanism to create a temporary directory for supplying these files so that the originals would not get deleted. So we would need to have some kind of extraneous scripts to ramp up the composition on top of docker-compose.
@chris-crone Thank you for the links. I'll take a look and see if it works for us 👍
Hey @chris-crone I did try using multistage builds, and it did help us keep the libraries/config in one location and copy them around, but now there are issues with .dockerignore being ignored, no matter where I place it.
It works when I'm just using Docker with the new DOCKER_BUILDKIT option, but doesn't work when using docker-compose; I tried COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build, but it still didn't work. Any ideas?
I was wondering if there was an option to specify where to look for the .dockerignore in compose when I stumbled upon this issue https://github.com/docker/compose/issues/6022, which again was closed because one contributor thinks it's not useful.
This is pretty frustrating, if I'm being honest!!
This is critical on MacOS, where getting your development cycle as close to production as possible is of paramount importance for proper Continuous Delivery practices: e.g. build the container, but then bind mount the new version of the software you're currently working on into the container to save on build cycle times. Unfortunately, bind mounts there are extremely costly, being 3 to 5 times slower.
As an example, startup time of tomcat is about 3s for my app in a container. Add a bind mount of ~/.bash_history and it's 4s. Add a bind mount of my app and it's usually about 18-20s. In Linux bind mount performance is like that of a local file system, but not in MacOS. Scale that to 100 times per day and that's significant.
That's not including the slowness that continues when accessing the app for the first time; until the code files are cached. For me, that means 3 minutes, including lag over the internet connecting to the monolithic oracle db to change a small phrase to something else, and see if it's still looking alright. Damn covid-19, lol.
Ideally, I'd like to be able to just run docker-compose again and "update" my app in the running container, and ask tomcat to reload. I could use the tomcat manager to upload the change, but we also have a back-end app that doesn't use a managed container of any kind, so we'd then have to use a different solution to that.
It'd be nice if docker-compose was geared towards development too, not just a production deploy.
This use case is relevant to the discussion: https://github.com/docker/compose/issues/3593#issuecomment-637634435
@chris-crone
@gauravmanchanda
My use case: in addition to the configs, I have some libraries (RPMs) that I need installed in 5 of my Rails application containers (Debian). Different Ruby/Rails versions mean I can't use the same base image, so I should be able to store the files in a single location and copy them into a container when building, because I don't want to download 1.5GB of data while building.
Have you looked at multistage builds for this? I think it would be a more robust solution.
Multistage builds allow you to use the same Dockerfile for multiple images. This lets you factor them and only include the bits that you need in each image.
A good example is the one we use to build Docker Compose. This builds using either Debian or Alpine but allows us to factor common code.
Multistage builds are cool, but they suffer from their own issues; for one, you have to run all stages within the same context, which is not always possible. Also, as far as I know, you cannot easily use COPY --from with images defined in another Dockerfile and built with docker-compose build (I assume you could do so by building and tagging them manually).
COPY in itself is very limited in that you can only import from your build context. docker cp can copy from anywhere to anywhere, except it cannot copy between image and container (sort of like COPY --from).
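(For completeness, the manual tagging route mentioned above would look roughly like this; tag and path names are hypothetical:

# build and tag the producer image out-of-band:
#   docker build --target build -t app1-build app1/
# any other Dockerfile can then consume it by tag:
FROM nginx:alpine
COPY --from=app1-build /usr/src/app/dist /var/www/app1

This works because COPY --from also accepts an arbitrary image reference, not just a stage name.)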
My own use case is a bit different (apart from copying read-only config files, local volume mounts are not the best idea when you deploy to another machine), and I would say that what I'm doing right now is an antipattern... I have potentially several different images that on build generate compiled and minified JS + HTML + assets bundles (think small angular apps), and a single nginx server that serves all of them (n.b. built from a custom image because of plugins).
Currently, what I have to do is to copy the "deploy" packages from the "build" images on startup. Ideally, this should be done either on container create, or on build, but the latter would require creating another image on top of the "modded nginx".
Imagine the following project layout (subprojects may live in separate repositories and not know about each other):
app1/
    src/
        ...
    Dockerfile
app2/
    src/
        ...
    Dockerfile
app3/
    src/
        ...
    Dockerfile
nginx/
    ...
    Dockerfile
docker-compose.yml
Each of the files app{1,2,3}/Dockerfile contains a target/stage build that builds the app into /usr/src/app/dist. nginx/Dockerfile has one stage only and builds an image similar to nginx/nginx, but with all the required plugins (no configs).
docker-compose.yml:

version: '3.8'
services:
  app1-build:
    build:
      context: app1/
      target: build
    image: app1-build
    entrypoint: ["/bin/sh", "-c"]
    command:
      - |
        rm -vfr /dist-volume/app1 \
        && cp -vr /usr/src/app/dist /dist-volume/app1 \
        && echo "Publishing successful"
    volumes:
      - 'dist:/dist-volume'

  app2-build:
    build:
      context: app2/
      target: build
    image: app2-build
    entrypoint: ["/bin/sh", "-c"]
    command:
      - |
        rm -vfr /dist-volume/app2 \
        && cp -vr /usr/src/app/dist /dist-volume/app2 \
        && echo "Publishing successful"
    volumes:
      - 'dist:/dist-volume'

  #... same thing for app3-build

  nginx:
    build:
      context: nginx/
    image: my-nginx
    volumes:
      - 'dist:/var/www'
      - # ... (config files etc)

volumes:
  dist:
Now, this is obviously non-ideal: each app-building image is unnecessarily run and finishes quickly, and the deployed files reside on a shared volume (I'm assuming this has a negative performance impact, but I couldn't verify it yet). If a copy or copy_from were a docker-compose option, the same could be written as:
version: '3.8'
services:
  # assuming the images have a default entrypoint and cmd combination that immediately returns with success.
  app1-build:
    build:
      context: app1/
      target: build
    image: app1-build

  #... same thing for app2-build and app3-build

  nginx:
    build:
      context: nginx/
    image: my-nginx
    copy:
      # "from" as image or service; both have their pros and cons,
      # service would mean an image associated with this service
      - from: app1-build
        source: /usr/src/app/dist
        destination: /var/www/app1
      - from: app2-build
        source: /usr/src/app/dist
        destination: /var/www/app2
      - from: app3-build
        source: /usr/src/app/dist
        destination: /var/www/app3
    volumes:
      - # ... (config files etc)
My use case is not in the build step or the startup step. I'm fetching files generated inside a container or all the containers of a service, and these containers are executed on a remote Docker Engine. So far I find myself doing something like docker-compose ps -qa <service> | xargs -i docker cp {}:<there> <here>. I just wish I could stick to docker-compose exclusively in my script.
@chris-crone
It would also be interesting to know why you would want a copy instead of bind mounting or a volume for these files?
Do you enjoy self flagellation? If so, I recommend running an application using a bind mount on MacOS. 🤣 See my previous post for the details.
This is not to say that we should never implement something like this in Compose. Rather that when a change means that users will not be able to reuse something that works locally in production, we like to pause and see if there is a more robust way of achieving the same goal.
@chris-crone I think this is a great sentiment, because all too often people get into implementing anti-patterns for docker, such as not managing configuration and data in an ephemeral way.
I wonder if we could somehow get docker and Apple to work together on fixing performance problems with bind mounts. For me at least, I'd have no more need for a docker compose cp option, because I'd be using bind mounts for development. Right now though it's just waaaay too painful to use bind mounts. I may switch to a virtual machine with Linux, cause my Mac just bytes.
@megaeater
We are not running docker in production, it is just for development purposes. The problem with using volume (if I understand it correctly) is that the simulator (3rd party) has this startup script which deletes the file on startup. Script execution stops if the deletion fails, so we would need to mount it as rw. And if the file deletion is allowed, we would need to have a mechanism to create a temporary directory for supplying these files so that the originals would not get deleted. So we would need to have some kind of extraneous scripts to ramp up the composition on top of docker-compose.
Hmm.. If you could engage with the simulator vendor, I think that is the best way of fixing this issue. You could maybe work around this with an entrypoint script for the simulator that moves the files as required; granted this would be messy.
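A sketch of that (admittedly messy) entrypoint workaround, with hypothetical paths - the init files are bind mounted read-only at a staging location and copied to where the simulator expects them, so its startup script is free to delete the copies:

#!/bin/sh
# entrypoint.sh
set -e
cp -r /seed/. /sim/init/        # /seed is the read-only bind mount
exec /sim/start.sh "$@"         # hand off to the simulator's own startup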
@gauravmanchanda

it did help us keep the libraries/config in one location and copy them around, but now there are issues with .dockerignore being ignored, no matter where I place it.
It works when I'm just using Docker with the new DOCKER_BUILDKIT option, but doesn't work when using docker-compose; I tried COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build, but it still didn't work. Any ideas?

Glad multistage builds helped! What version of Docker and of docker-compose are you using? I would try with the latest and see if the issue is still there. It should respect the .dockerignore file.
@Marandil, it sounds like docker build isn't handling your project structure (i.e.: directory structure), which is the issue. You might be able to use something like docker buildx bake (https://github.com/docker/buildx) to solve this use case. Note that buildx is being worked on so isn't super stable yet, but it aims to solve some of what you're hitting.
@itscaro, thanks for your input! What we do internally to generate things in containers is use docker build to output the result from a FROM scratch image. This only works in cases where you need a single container's output.
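That pattern looks roughly like this with BuildKit (the generation step is a placeholder):

# Dockerfile
FROM alpine AS generate
RUN mkdir /out && echo "generated" > /out/artifact.txt

FROM scratch
COPY --from=generate /out/ /

Building with DOCKER_BUILDKIT=1 docker build --output type=local,dest=./out . then writes the final stage's filesystem to ./out on the host instead of producing an image.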
@TrentonAdams we have been working on improving filesystem performance for Docker Desktop but it is tricky. The underlying issue is traversing the VM boundary. The file sharing bits have recently been rewritten (you can enable the new experience using the "Use gRPC FUSE for file sharing" toggle in preferences) and this should solve some of the high CPU usage issues that people had been seeing. We have some documentation on performance tuning here and here.
@chris-crone

@Marandil, it sounds like docker build isn't handling your project structure (i.e.: directory structure), which is the issue. You might be able to use something like docker buildx bake (https://github.com/docker/buildx) to solve this use case. Note that buildx is being worked on so isn't super stable yet, but it aims to solve some of what you're hitting.

Thanks, I'll look into docker buildx bake. It looks promising, but I couldn't find any good reference or documentation for it, and the pages on docs.docker.com are rather bare (cf. https://docs.docker.com/engine/reference/commandline/buildx_bake/). So far I found https://twitter.com/tonistiigi/status/1290379204194758657 referencing a couple of examples (https://github.com/tonistiigi/fsutil/blob/master/docker-bake.hcl, https://github.com/tonistiigi/binfmt/blob/master/docker-bake.hcl), which may be a good starting point, but hardly a good reference.
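For anyone else digging, a minimal docker-bake.hcl along the lines of the layout discussed above (target and tag names are assumptions):

# docker-bake.hcl
group "default" {
  targets = ["app1-build", "nginx"]
}

target "app1-build" {
  context = "app1/"
  target  = "build"
  tags    = ["app1-build"]
}

target "nginx" {
  context = "nginx/"
  tags    = ["my-nginx"]
}

Running docker buildx bake then builds every target in the default group in one go.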
@TrentonAdams we have been working on improving filesystem performance for Docker Desktop but it is tricky. The underlying issue is traversing the VM boundary. The file sharing bits have recently been rewritten (you can enable the new experience using the "Use gRPC FUSE for file sharing" toggle in preferences) and this should solve some of the high CPU usage issues that people had been seeing. We have some documentation on performance tuning here and here.
@chris-crone Hell yes, thanks so much! There is a 3-4s improvement with the new option, and using "cached" gives me the same performance as running outside of the container, so this is HUGE for me. I'm seeing times as low as 2800ms startup time for our app, so that's not 11-18s anymore. YAY! I don't need anything other than cached, because I'm just re-creating the containers every time anyhow.
@chris-crone Is there a place I should post performance findings to help with the performance tuning and feedback on MacOS? I'm wondering why a freshly started container with a bind mount would be slow when not using cached. There must be some weird thing where it's going back and forth checking every file on startup to see if they are in sync, even when it's brand new?
Use-case: I run a container and it modifies a file (specifically, Keycloak modifies its configuration file based on environment variables etc). I want a copy of that file on my local disk so I can check the outcome of that modification, and track my progress over time as I modify the container scripts. Currently, I need to find the new container ID each time so I can use docker cp.
Use-case: developing in Docker. I need to back-propagate my lock file to the host machine or it gets overwritten when the container mounts the project folder.
Use case: I need to copy a file containing a secret key. The app that runs inside the container reads that file into memory and deletes it from disk.
Use case: I am running c++ unit tests in a docker container. I want to simply copy over the code to an existing image each run.
1) Doing this with a separate Dockerfile COPY means the code gets written to a new, unnecessary image, and I need to delete that image to ensure the next run creates a new image with the latest code.
2) Doing this with docker-compose volumes yaml config means Docker chowns the source code as root:root (totally killing my IDE from making edits till I chown it back!)
@shin- am I following an anti-pattern by running unit tests in a container? What's the non-anti-pattern way you would solve this?
.... I am sticking with option 1 as it is the least pain. But I see docker-compose supporting a copy config being such an awesome enhancement, at least for this workflow!
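One way to soften the root:root problem in option 2, in case it helps anyone: run the test service as the host user via compose variable substitution (a sketch; it assumes you export UID and GID in the invoking shell, and the 1000 fallbacks are just the common default):

version: "3.8"
services:
  tests:
    build: .
    user: "${UID:-1000}:${GID:-1000}"   # files written to the mount stay owned by the host user
    volumes:
      - ./src:/usr/src/app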
@soulseekah Isn't using secrets in compose better for that use case?
I found a workaround that works for me:

In the Dockerfile:
COPY a_filename .

Build and tag the image:
docker build -t myproject:1.0 .

Then reference the tagged image in docker-compose.yml:
version: "3.7"
services:
  app:
    image: myproject:1.0
    ports:
      - 3000:3000
    networks:
      - mynetwork
      - internal
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: not_so_secret_password # don't do this
      # https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/
      MYSQL_DB: appdb
    deploy:
      resources:
        limits:
          cpus: '0.75'
          memory: 100M
Not a perfect workaround, but it works in my use case.
@soulseekah Isn't using secrets in compose better for that use case?
Unfortunately that requires swarm last time I tried :(
@soulseekah Maybe use the workaround that I use (above you)?
@ChristophorusReyhan the problem with that workaround is indicated in @zoombinis' comment:

Doing this with a separate Dockerfile COPY means the code gets written to a new, unnecessary image and I need to delete that image to ensure the next run creates a new image with the latest code.

While a working solution, it can lead to some unwanted maintenance. For instance, to clean up the unwanted image _while also preserving any images you care about_:
docker-compose up && docker-compose down --rmi local
But make sure all images you care about have a custom tag and the test/dummy image does not.
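i.e. roughly like this, since --rmi local only removes images without a custom image: tag (service names are made up):

version: "3.7"
services:
  app:
    image: myproject:1.0   # custom tag: survives `down --rmi local`
  tests:
    build: .               # no custom tag: cleaned up by `down --rmi local`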
What is the use of dogmatically insisting it is an antipattern, just because in _some_ cases it could _eventually_ cause problems? This definitely has a good use: you could add one line to an existing file, instead of having to create an extra folder and file, then move the file to be added there. This pointless, bureaucratic creation of tiny files is the real antipattern, preventing users from creating simple and easy-to-maintain docker-compose files.
If users want to do harmful things with Docker, they will find a way no matter what you do. Refusing to add legitimate features just because someone may misuse them one day is foolish.