Compose could watch your code and automatically kick off builds when something changes. This could either be an option to `up` (`docker-compose up --watch`), an option in the config file specifying directories to watch/ignore, or perhaps a separate command (`docker-compose watch`).
Thanks to @samalba for the suggestion!
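For illustration, a minimal sketch of what such a watcher could look like, using only a stdlib polling loop (a real implementation would more likely use inotify or the watchdog library). The file extensions and the `docker-compose up -d --build` reaction are illustrative assumptions, not settled design:

```python
import os
import subprocess
import time

WATCHED = (".py", ".css", ".coffee")  # assumed patterns, per the suggestions in this thread

def snapshot(root, exts=WATCHED):
    """Map every watched file under `root` to its last-modified time."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                state[path] = os.stat(path).st_mtime
    return state

def watch(root, on_change, interval=1.0):
    """Poll `root`; call `on_change` whenever a watched file is
    added, removed, or modified."""
    before = snapshot(root)
    while True:
        time.sleep(interval)
        after = snapshot(root)
        if after != before:
            on_change()
            before = after

def rebuild():
    """One plausible reaction: rebuild images and recreate containers."""
    subprocess.call(["docker-compose", "up", "-d", "--build"])

# watch(".", rebuild)  # loops forever, rebuilding on every change
```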
:+1:
:+1:
+1. If this is to be part of _fig up_, how about an opt in flag?
web:
  watch:
    - "*.py"
    - "*.css"
    - "*.coffee"
:thumbsup:
Wherever this gets documented, it's probably worth noting the following pattern, which helps avoid unnecessary package manager (pip/npm/bower/et al.) action:
# adding your language deps before adding the rest of your source....
ADD requirements.txt /requirements.txt
RUN pip install -r requirements.txt
# ... will prevent changes to source from triggering a re-install of language deps
ADD . /src
:100:
+1
:+1:
:+1:
It's been a few months, and I couldn't tell from the merge referenced above, was a "watch" ability integrated?
No, `fig watch` has not been implemented.
It's still on the table though - see ROADMAP.md
An idea from @fxdgear: instead of rebuilding, it could also be useful to just restart the container when the code changes. E.g. when you've mounted your code with a volume.
Now that we have a `--no-build` flag, perhaps the logic should be:
$ fig up # current behaviour
$ fig up --watch # watch and rebuild + restart
$ fig up --watch --no-build # watch and restart
That said, if we go down the #693 path, we'll have to revisit this.
:+1:
For this to be really awesome, there's got to be a way for docker build to accept or pass through some type of credentials without storing it in the image. I'm thinking private github modules but it really applies to any environment.
The following is a NodeJS-based example using my current version of "fig up --watch", which is just a nodemon-based script:
I run `fig up --watch` and the first time `npm install` runs because the `package.json` hasn't been cached yet. This needs to fetch private modules from GitHub, so it passes those through (maybe this is set up in my ~/.figconfig or something similar?). My .dockerignore has `node_modules` in it, so those are never included in the build context.
Now, I still want to do things like `npm link`, where I can change a common module and `fig up --watch` knows to reload. I currently do this by adding a service to my `fig.yml` and mounting it to my common "code" volume. I can't think of a better way to do this at the moment.
What about the right tool for the right job? Wouldn't `guard` fit perfectly here?
guard :shell do
  watch(/Dockerfile$/) { `fig build` }
  watch(/fig\.yml$/)   { `fig up -d` }
end
Anyway, it would be a nice addition.
@amjibaly:
For this to be really awesome, there's got to be a way for docker build to accept or pass through some type of credentials without storing it in the image. I'm thinking private github modules but it really applies to any environment.
...
I run fig up --watch and the first time npm install runs because the package.json hasn't been cached yet. This needs to fetch private modules from github so it passes those through (maybe this is setup in my ~/.figconfig or something similar?)
This sounds out of scope for fig or `docker build` to me. In this example, if you're referencing private Git repos as npm dependencies, you could use the `[email protected]:me/myrepo.git` style of remote, and then Git would just defer to SSH for key authorization without npm, `docker build`, or fig needing to know anything about it. Most SCMs work similarly with SSH; at worst, other underlying tools should likely check an environment variable or config file of their own. But maybe there's something else you had in mind that I'm not considering?
@ches I didn't realize `docker build` already supports this. I just gave it a shot and it doesn't seem to work; can you point me to the documentation? Here's the error I see:
npm ERR! git clone [email protected]:myorg/myproj.git Cloning into bare repository '/root/.npm/_git-remotes/git-github-com-myorg-myproj-git-9e8969c5'...
npm ERR! git clone [email protected]:myorg/myproj.git error: cannot run ssh: No such file or directory
npm ERR! git clone [email protected]:myorg/myproj.git fatal: unable to fork
In `package.json` I have:
"dependencies": { "myproj": "git+ssh://[email protected]:myorg/myproj.git#master" }
@amjibaly looks like you need to install ssh in your container
@docteurklein I tend to agree; there are existing tools which would handle this pretty easily (the Python watchdog package includes a CLI called `watchmedo` which would also work).
@dnephin I installed ssh, now I'm getting this error:
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
Running `npm install` locally works, of course.
@amjibaly you're totally off-topic, no?
@docteurklein yeah, my bad. FYI @ches, it turns out that `docker build` does not support forwarding SSH creds, so my original point still stands: https://github.com/docker/docker/issues/6396
Wouldn't it be too slow for an everyday programming workflow? Rebuilds take time, and so do restarts.
Not entirely sure this should be something Docker Compose does! I for one have several other build steps between noticing a file change and ultimately restarting the application, such as compiling, transpiling, concatenation, etc.
Restarting the application when a file changes is something that should be handled by developer tools inside the container. I'd rather see better support for multiple Docker Compose environment configurations, to better support a developer configuration that differs from a production configuration and accommodates restart-on-file-change.
I've actually done an implementation of a feature like this at one point (in figgo), and I'm -1 on `fig watch`. Even with a tiny little app it's just slow enough to be annoying; I don't think I would use it. It'd be easy enough to write a little program that wraps fig and incorporates this functionality, so I don't think we'd stand much to gain by supporting it.
+1
+1
+1
+1
I am really with @Starefossen. Same opinion here. (:
I'd rather see better support for multiple Docker Compose environment configurations in order to better support a developer configuration which are different from a prod configuration to accommodate for restart on file-change.
Hm, I'm not sure if this would solve what you're after, but take note that you can specify alternative configuration files using the `-f` option for `docker-compose`. I have one project where I have a `docker-compose.yml` and a `docker-compose-prod.yml`, for instance, to change the DB password and a few other things.
@nathanleclaire I think the extends option added in 1.2.0 now supports that model even better. For example, both the dev and production compose files are based off the same common file, but the dev compose file also mounts volumes and watches for code changes on the components.
FWIW I'd agree with @Starefossen, I'm not sure that's something Compose itself should do but rather the containers themselves can watch for changes.
-1. IMHO this would go against a clear separation of responsibilities. If Compose is supposed to watch code changes, it would then need to run unit tests before deploying, then maybe add triggering, etc. Soon it'll end up doing everything a CI tool does, and when one tool tries to do too much, it generally does not do any of it in the best way. In short: too much coupling; we need easily pluggable atomic services.
-1 agree with @jp-gouigoux
- Volumes are no longer needed in development
- Production server can be used in development (e.g. gunicorn can be used to serve Django apps in development)
I don't think this feature would actually accomplish these two (of three) goals. A build + restart is often way too slow for interactive development. Using a volume is still going to offer much better results.
Part of development is debugging with an interactive debugger, which isn't going to be possible running inside a production server, so there is still a need to run a different entrypoint to the app.
I'm still very much in favor of #693, which would make this feature difficult (maybe impossible as they are conflicting ideas).
I could see a feature where `watch` tailed the event stream (#1510) and, when a new image event is triggered, ran `docker-compose up -d` to recreate any containers that have new images.
Then a separate build tool could watch files and rebuild the image when an image has changed. The rebuild of an image would trigger the recreate of a container, but they would be separate from each other, so that any build system would be supported.
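A rough sketch of that split, under two assumptions: the JSON output shape of the modern `docker events --format '{{json .}}'` CLI (which postdates this comment), and that recreating is simply delegated to `docker-compose up -d` so that any build system can trigger it:

```python
import json
import subprocess

def image_update(line):
    """Given one JSON line from `docker events`, return the image name
    if the line describes a newly tagged image, else None."""
    try:
        event = json.loads(line)
    except ValueError:
        return None
    if event.get("Type") == "image" and event.get("Action") == "tag":
        return event.get("Actor", {}).get("Attributes", {}).get("name")
    return None

def follow_events():
    """Tail the daemon's image events; whenever an image is (re)tagged,
    let Compose recreate any containers whose image changed."""
    proc = subprocess.Popen(
        ["docker", "events", "--format", "{{json .}}", "--filter", "type=image"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        if image_update(line):
            subprocess.call(["docker-compose", "up", "-d"])

# follow_events()  # runs until the docker daemon goes away
```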
I'm no Docker expert, but couldn't you mount your own drive and then run a normal gulp task for dev environments, and do something different for production envs?
Here's how I've done it: https://github.com/pakastin/docker-boilerplate
How do I implement the same feature in Marathon? I.e., while deploying the Docker image, how do I force Marathon to pull the latest code changes into its Docker deployment?
@bfirsh Are you a maintainer on this project? Is this at the suggestion point, or at the "this is something we actually want" phase? I could possibly implement something built in to compose if that's the direction we want to take.
A workaround: a docker-compose watch (a.k.a. docker-compose reload) one-liner
Btw, PM2 can nowadays handle Docker and watch files like magic. I definitely recommend checking it out:
http://pm2.keymetrics.io/docs/usage/docker-pm2-nodejs/
😉
I would argue against this, considering Docker Swarm utilizes Docker Compose files as well. If you need that functionality, use volumes.
@edward-of-clt I agree. But what if you want to ignore some folders? `.dockerignore` doesn't work for that. Also, I think the docker-compose volume workarounds are kinda ugly.
What about keeping files in sync instead of rebuilding the image? I mean, if I change a file which is supposed to be inside the Docker container, Docker would have an updated version of this file.
I agree with folks saying that rebuilding is not a responsibility of Docker itself; your app should have that responsibility. Docker would only keep files up to date.
(I'm a newbie in the Docker environment; I don't know if this feature exists or not.)
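For what it's worth, that sync idea can be sketched with nothing but mtime tracking and `docker cp`. The container name and the source/destination paths below are placeholders:

```python
import os
import subprocess

def changed_since(root, last_seen):
    """Return files under `root` whose mtime differs from what `last_seen`
    recorded, updating the record in place."""
    changed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtime = os.stat(path).st_mtime
            if last_seen.get(path) != mtime:
                last_seen[path] = mtime
                changed.append(path)
    return changed

def sync_into(container, files, src_root, dest_root):
    """Push each changed file into the running container via `docker cp`."""
    for path in files:
        rel = os.path.relpath(path, src_root)
        subprocess.call(
            ["docker", "cp", path, container + ":" + os.path.join(dest_root, rel)])

# seen = {}
# sync_into("myapp_web_1", changed_since("./src", seen), "./src", "/src")
```

As the following comment points out, syncing files only helps when the app inside the container knows how to reload itself.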
@eduardomoroni - keeping in sync is easy for static files, e.g. some .html file has changed and you can sync it easily (by just serving the newer version of the file). In many cases, however, where some custom logic is involved, a simple file change can totally change the way a service works. That's why a full restart of such a service is usually required.
Anyway, it's been 4 years since this issue was created. It would be so cool to have it.
This will more than likely not be done, solely because it seems to be outside the scope of Docker Compose.
Your best bet would be to implement a service such as Watchman and have it build within the container. (https://facebook.github.io/watchman/)
If worst comes to worst, quite frankly, you could have the Docker CLI in your container, map `/var/run/docker.sock`, and, if need be, force a service update with `docker service update --force <service_id>`.
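That last workaround might look something like the following sketch, where the service name and the two-second poll interval are assumptions; hashing the whole build context is a crude but simple way to notice any change:

```python
import hashlib
import os
import subprocess
import time

def context_digest(root):
    """Hash file names and contents under `root`, so that any change to
    the build context produces a different digest."""
    digest = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # make traversal order deterministic
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            digest.update(path.encode())
            with open(path, "rb") as f:
                digest.update(f.read())
    return digest.hexdigest()

def force_updates(root, service, interval=2.0):
    """Poll the build context; on any change, force the swarm service to
    restart its tasks (requires access to /var/run/docker.sock)."""
    last = context_digest(root)
    while True:
        time.sleep(interval)
        current = context_digest(root)
        if current != last:
            subprocess.call(["docker", "service", "update", "--force", service])
            last = current

# force_updates(".", "mystack_web")  # service name is a placeholder
```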
@pie6k Thanks for your suggestions. I achieved the desired behaviour using a lib that rebuilds files when anything changes (nodemon), and then using a volume so the changed files are visible to the container.
https://github.com/project-australia/Sydney/blob/master/docker-compose.yml
Yup, I know nodemon is great for that. But it only works if you use Node.
I am using a webhook written in Go which listens for any `git push` to trigger my app rebuild/redeploy process ... https://github.com/adnanh/webhook ... I have this running inside a container which rebuilds and then does a `docker-compose up` ... love seeing my webapp get code updates with zero manual devops intervention.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I'd say it's not stale
This issue has been automatically marked as not stale anymore due to the recent activity.
Compose now offers an option to run builds using the Docker CLI builder, which can be configured to use BuildKit. As the latter doesn't require sending the whole Docker context to the daemon before a build starts, and offers advanced caching capabilities, it seems a good candidate for rebuilding service images on a regular basis.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
spam for stale bot
This issue has been automatically marked as not stale anymore due to the recent activity.
I followed this thread from 2014 to 2020 to find a clue on how to auto-build microservice (Go) binaries in dev when a file changes. I'm getting tired of running too many terminals, so I'm looking into Docker to see how it can help my life.
Also looking at https://github.com/cortesi/modd
6 years into the request and I still cannot easily rebuild my Docker image without creating freaking volumes.
What is this.