With v2 I had the following setup: my source code is baked into the code image, which exposes it with the VOLUME keyword, and the php-fpm service mounts it via volumes_from: code. If I change my source code I rebuild the code image, push and pull it, and recreate my docker-compose services:
version: '2'
services:
  code:
    image: custom/backend:latest
  php-fpm:
    image: php-fpm:latest
    volumes_from:
      - code
This is the workflow described here: https://docs.docker.com/engine/tutorials/dockervolumes/#/creating-and-mounting-a-data-volume-container
With v3, since volumes_from is gone, I am unable to recreate this setup. What are my options? It was very easy to deploy new code by simply baking it into a new container. I have no idea how to solve this problem with v3.
I'm also a bit confused about the new syntax that comes with v3. Since the volumes_from option is gone you have to create shared "master" volumes. You could try it like this:
version: '3'
services:
  code:
    image: custom/backend:latest
    volumes:
      - codevolume:/var/www/src
  php-fpm:
    image: php-fpm:latest
    volumes:
      - codevolume:/var/www/src
volumes:
  codevolume:
If I understand everything correctly, the .yml above should do the trick.
Feedback is greatly appreciated (:
Regards
This is what I've found out so far.
But how do you specify the contents of codevolume? If I use codevolume:/var/www/src, the code I'd like to share between containers, which lives in /var/www/src, gets overlaid and is not available anymore.
You're looking for the mountpoint on your host? You can locate it as follows:
docker volume ls
docker volume inspect ${NAME}
First you get the volume name via docker volume ls, then you inspect it with docker volume inspect ${NAME} to find the mountpoint.
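If you only need the mountpoint, docker volume inspect also takes a --format flag, e.g. (the project prefix on the volume name is just an assumption):

docker volume inspect --format '{{ .Mountpoint }}' myproject_codevolume
# -> /var/lib/docker/volumes/myproject_codevolume/_data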
Hope that helps.
In this case I'll get the created volumes, this is correct. But I need to access the code inside my container. I don't know if this is still technically called a "volume".
I build my code container with a Dockerfile like this one:
FROM baseimage
ADD ./all-my-source /app
VOLUME /app
CMD ["/bin/true"]
This container registers the contents under /app as a volume. With v2 I could now use volumes_from: code to mount the /app directory of this container in any other container and share the source code this way. This is what I'm trying to replicate.
With the global volumes declaration in v3 I cannot reuse a volume from one container in another. I can only create a globally shared volume. What I need is to use the volume declared and populated during build from one container and make it available to all the others.
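For reference, the closest I've gotten with v3 named volumes is a sketch like the one below (paths and image names reused from my example above). It only "works" because Docker copies the image's /app content into an empty named volume the first time it is mounted, and the volume then goes stale after an image rebuild until it is removed - so it's not a real replacement for volumes_from:

version: '3'
services:
  code:
    image: custom/backend:latest   # image that bakes the source into /app
    volumes:
      - codevolume:/app            # empty volume gets pre-populated from the image on first mount
  php-fpm:
    image: php-fpm:latest
    volumes:
      - codevolume:/app
volumes:
  codevolume: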
Regarding volumes_from being removed from v3, I ran into a problem while using the jrcs/letsencrypt-nginx-proxy-companion image. Its entrypoint script, as you can see here:
https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion/blob/master/app/entrypoint.sh#L28
relies on volumes_from to get the container id of the nginx-proxy. How can I circumvent this on v3? Any ideas? I can probably submit a pull request to https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion
@joantune It doesn't look smart to use volumes_from just to find out the proxy container id.
I believe the better and proper way is to use Labels. Here is a little fix that should help you; I didn't test a container with this fix, only the query string: gist diff
You have to start the proxy container with some label and then catch this container within your entrypoint by that label:
$ docker run -d -p 80:80 -p 443:443 \
--name nginx-proxy \
--label com.example.mylabel=nginx-proxy \
-v /path/to/certs:/etc/nginx/certs:ro \
-v /etc/nginx/vhost.d \
-v /usr/share/nginx/html \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
So, you can find this container later by its label:
cid=$(docker_api "/containers/json" | jq -r '.[] | select(.Labels["com.example.mylabel"] == "nginx-proxy") | .Id')
I have this issue too.
I have an assets image that exposes a volume, and I was using volumes_from to import the assets image's volume into another container.
With the v3 format it does not seem possible to do this kind of composition.
e.g.:
services:
  assets:
    image: any_asset_image
    volumes:
      - "/public/assets"
  proxy:
    image: nginx
    volumes_from:
      - assets
How can I achieve this?

How can I achieve this?
In your case it's easier, not exactly the same as my problem, but you can declare something like:
services:
  assets:
    image: any_asset_image
    volumes:
      - assets:"/public/assets"
  proxy:
    image: nginx
    volumes:
      - assets
volumes:
  assets:
See: https://docs.docker.com/compose/compose-file/#volume-configuration-reference
Ok so you mean that the content of /public/assets within the assets image will be available from the proxy container at the /public/assets path?
I thought that

volumes:
  - assets

would create an empty assets data volume, since it is not an existing host volume, and

volumes:
  - assets:"/public/assets"

would mount that empty volume within the assets container at the /public/assets path. At least that is what I understood from reading the docs.
Yet I use the assets image to ship some source files to the proxy container. I'll play with this example to see how it behaves.
Yes, it seems to work, thanks @joantune.
Yet I feel like the documentation is not totally helpful. I am wondering what I did not understand.
Actually @Electron-libre, when you asked whether doing X would get you Y, I was doubting that it did. It's not explicit in the documentation that the mount directory will be the same as the one specified on the first assets:/somedir. The documentation could state that explicitly. Also, what if the second container had an assets:/someotherdir? I would expect it to override the /somedir mount point with /someotherdir, but I'm not sure. Yeah, Docker's documentation is a bit hard. Also, having docker-engine, docker-machine, docker-swarm and docker-compose without trying to make the options similar/the same across them makes, IMO, for a harder and steeper learning curve, and perhaps needlessly. Maybe it makes all the sense in the world to break compatibility and raise the learning bar because of fundamental engineering/design choices, but I feel it should be in the back of the mind of the engineers developing this that the closer the options are, the better.
Oh, and proper thanks to @nodekra for your suggestion, I'll probably do a PR with that on https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion. Thanks again & Cheers.
Guys, on a side note: I'm the occasional devops guy (I'm mostly a developer these days, but I started out sysadmining in my early teens). I have used and heard of several tools for devops: Docker [and all the docker-ish projects] / Terraform / Ansible / Kubernetes / Chef & Puppet [older (?) days], etc.
I kind of liked the way Ansible tries to have a kind of Docker Hub for its 'recipes'/cookbooks for common tasks.
I spent some hours doing a deployment using several docker containers, and in the back of my mind I'm thinking that I'm reinventing the wheel for sure. Is there any global devops-recipe kind of thing for this? What I was trying to achieve was quite straightforward, and although it could be done with Ansible alone, I think a platform-neutral space where one can share how to achieve things would be useful, even across different technologies.
Is there anything like that out there?
Indeed I had to tweak your example a bit; here is a basic example:
.
├── assets.Dockerfile
├── docker-compose.yml
└── my_assets_file
#./my_assets_file
my assets content
#./assets.Dockerfile
FROM alpine
RUN mkdir /public
ADD my_assets_file /public/
#./docker-compose.yml
version: "3"
services:
  assets:
    image: my_assets
    build:
      context: .
      dockerfile: assets.Dockerfile
    volumes:
      - assets:/public/
    command: /bin/true
  proxy:
    image: alpine
    volumes:
      - assets:/public/
    depends_on:
      - assets
    command: /bin/sh -c "cat /public/my_assets_file"
volumes:
  assets:
Yet running docker-compose run proxy outputs my assets content.
I don't know/understand why it works, but here is what it does:
docker volume inspect dockervolumestest_assets
[
    {
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "dockervolumestest",
            "com.docker.compose.volume": "assets"
        },
        "Mountpoint": "/home/cedric/Archives/docker/volumes/dockervolumestest_assets/_data",
        "Name": "dockervolumestest_assets",
        "Options": {},
        "Scope": "local"
    }
]
the volume section from the proxy container:
"Mounts": [
{
"Type": "volume",
"Name": "dockervolumestest_assets",
"Source": "/home/cedric/Archives/docker/volumes/dockervolumestest_assets/_data",
"Destination": "/public",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": ""
}
],
the volume part from the assets container:
"Mounts": [
{
"Type": "volume",
"Name": "dockervolumestest_assets",
"Source": "/home/cedric/Archives/docker/volumes/dockervolumestest_assets/_data",
"Destination": "/public",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": ""
}
],
Weird :D
@Electron-libre You don't need Compose v3 unless you use Swarm mode and docker stack deploy -c docker-compose.yml myproject. You can use compose v2 or v2.1 with the latest docker engine as well for a standalone node. But if you want to be ready for v3 - it's ok.
AFAIK, from some docker docs (I cannot remember the exact source) it works like this: volumes: [assets] gives you the same kind of local mount as we used before with ./my_assets_file:/public, but inside the docker library directory /var/lib/docker/volumes/assets/. With depends_on: like you do it, the data container starts first and copies its content into the top-level (host) volume, so even if your proxy image contains any data at that path it will be ignored. Another way is volume plugins; you can try local-persist.
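Here is a rough illustration of that copy behaviour with plain docker commands (reusing the my_assets image built earlier in this thread):

docker volume create assets
# first mount of the empty named volume: the image's /public content is copied into it
docker run --rm -v assets:/public my_assets /bin/true
# any later container mounting the same volume sees that content
docker run --rm -v assets:/public alpine ls /public   # -> my_assets_file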
p.s. sorry, I cannot find the doc source for this, and sorry for my "english"
I suggest reading this issue from github.com/docker/docker: #30441
Can somebody say why the volumes_from option was removed? A named volume is not a solution, it's just a pain in the ass, because without a plugin the data is not persisted on the host.
@visualfanatic don't be so lazy as to ask the same question without reading the other comments; it's all described in issue docker/docker #30441 from my previous comment. Also, as I said before: use compose version 2/2.1 with volumes_from as you did before. Compose v3 is mainly for Swarm Mode, the stack deploy feature and orchestration, where host volumes and volumes_from are "useless".
p.s. and again: Compose v2 is neither removed nor deprecated.
Hi,
I have a setup with the old swarm where I deploy a set of containers to run some functional tests. This is a legacy application and its clustering mode requires a shared folder between two of the containers. As of today I use compose version 2 and volumes_from.
However I am also experimenting with the new swarm mode to see if I can do the same. I tried this:
service1:
  ....
  volumes:
    - shared-folder:/home.local/vusers
service2:
  ....
  volumes:
    - shared-folder:/home.local/vusers
volumes:
  shared-folder:
When I deployed the stack, the service1 and service2 containers ended up on different swarm workers, and the volume was created on only one of the swarm nodes.
I was thinking that I need to use some kind of constraint to specify that service1 and service2 must be placed on the same swarm node, but it seems that this is not possible the way I want it - i.e. start service1 and then start service2 on the same swarm node where service1 was started. Any other option?
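One workaround sketch I'd try (the node label is made up, not from any docs): label one node and give both services the same placement constraint so their tasks land on that node, where the local volume lives.

docker node update --label-add shared=yes <node-id>

version: '3'
services:
  service1:
    image: myservice1              # placeholder image
    volumes:
      - shared-folder:/home.local/vusers
    deploy:
      placement:
        constraints:
          - node.labels.shared == yes
  service2:
    image: myservice2              # placeholder image
    volumes:
      - shared-folder:/home.local/vusers
    deploy:
      placement:
        constraints:
          - node.labels.shared == yes
volumes:
  shared-folder: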
@nodekra seems like lots of syntax is only being added to v3+ though? For instance :cached in volumes.
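For example, the short syntax looks like this (paths made up; as far as I know :cached only has an effect on Docker for Mac):

version: '3'
services:
  app:
    image: php-fpm:latest
    volumes:
      - ./src:/var/www/src:cached   # host bind mount with relaxed consistency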
Hi,
I have run into the same troubles with named volumes when used as a replacement for volumes-from. Currently I am using docker-compose down --volumes as a workaround.
From docker-compose down man page:
-v, --volumes Remove named volumes declared in the `volumes` section of the Compose file and anonymous volumes attached to containers.
Another (maybe better) approach could be using the long syntax in version 3.2, like nocopy: true or type: bind. I will play with it some day later.
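For instance, a rough, untested sketch of the 3.2 long syntax with nocopy (the volume name and image are placeholders):

version: '3.2'
services:
  web:
    image: nginx:alpine              # placeholder consumer image
    volumes:
      - type: volume
        source: assets
        target: /usr/share/nginx/html
        read_only: true
        volume:
          nocopy: true               # don't copy the consumer image's own content into the volume
volumes:
  assets: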
Just want to share my experience. I managed somehow to share data volumes between containers with some clues from this thread. But then I was stuck on another problem: Docker compose volumes not updating after a rebuild. Relevant discussions: here and here.
Little background:
I have containers with volumes containing static data (progressive web apps).
To create these containers/volumes an additional build step is needed (webpack).
The 'trick' to overcome the problem of volumes not being updated after a rebuild is to copy all the volumes with static content into some common proxy container (nginx). Then they can be used in other containers with the volumes-from option or also with docker-compose.
Example of building a static progressive web app with a docker multi-stage build:
Dockerfile
# https://docs.docker.com/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds
###########################
# Build stage
###########################
FROM node:6-alpine
# Install build deps
RUN apk update && \
apk upgrade && \
apk add --no-cache git
# use changes to package.json to force Docker not to use the cache
# when we change our application's nodejs dependencies:
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /opt/app && cp -a /tmp/node_modules /opt/app/
# From here we load our application's code in, therefore the previous docker
# "layer" that's been cached will be used if possible
WORKDIR /opt/app
ADD . /opt/app
RUN npm run build
###########################
# Second stage
###########################
FROM node:6-alpine
RUN mkdir -p /usr/share/nginx/html
COPY --from=0 /opt/app/dist /usr/share/nginx/html
VOLUME /usr/share/nginx/html
Now we create a container with a volume containing the static resources built in the previous step:
docker build -t myapp-volume .
docker run --name myapp_volume -P -d myapp-volume
If needed, the app can be served standalone using an nginx container:
docker create \
--publish 3003:80 \
--name myapp \
--volumes-from myapp_volume \
nginx:alpine
docker start myapp
docker-compose.yml:
# Adopt version 3 syntax:
# https://docs.docker.com/compose/compose-file/#/versioning
version: '3'

volumes:
  database_data:
    driver: local

services:
  ###########################
  # Setup the Proxy container
  ###########################
  proxy:
    image: proxy
    build:
      context: .
      dockerfile: Dockerfile.proxy
    ports:
      - 80:80
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
Dockerfile.proxy:
FROM myapp-volume
FROM anothermyapp-volume
FROM nginx:alpine
RUN mkdir -p /opt/myapp
RUN mkdir -p /opt/anothermyapp
COPY --from=0 /usr/share/nginx/html /opt/myapp
COPY --from=1 /usr/share/nginx/html /opt/anothermyapp
nginx.conf:
server {
    listen *:80 default_server;

    location /myapp {
        alias /opt/myapp;
    }

    location /anothermyapp {
        alias /opt/anothermyapp;
    }
}
It is confusing to have a newer version of a spec that is NOT designed to be used in certain situations. It would make more sense, IMO, to add this support to v3.
I'm not sure if this helps, but it could be a solution to the problem. You can specify the volume type as 'bind' and thereby mount a directory from the host machine into your docker containers.
version: '3.2'
services:
  live:
    image: myimage:0.1
    command: npm start
    volumes:
      - type: bind
        source: ./
        target: /usr/app
    depends_on:
      - projectfiles
  projectfiles:
    build: ./
    image: myimage:0.1
    command: npm install
    volumes:
      - type: bind
        source: ./
        target: /usr/app
Anyone found a solution for using docker-letsencrypt-nginx-proxy-companion with v3? This made me feel like an idiot for the past hour and I think I'll just go back to v2 now.
@bauerd I have the same problem using v3. For now it's best to use v2 if you want to solve this problem.
It's not really an exact solution for your problem, but I use https://github.com/SteveLTN/https-portal as proxy for letsencrypt certs.
Isn't it strange that this use case of deploying versioned data as a docker image with a VOLUME instruction, part of a stack managed by docker-compose or docker stack, is no longer supported? To me it seems like an obvious, basic use case for docker, dockerhub, docker-compose, and docker stack. I don't even want to run code, I just want versioned data available to my application containers, managed the same way as the rest of my stack.
I can do this with a FROM scratch image and plain docker commands, no problem. Try this:
$ mkdir data
$ touch data/example
$ cat <<EOT > Dockerfile
> FROM scratch
>
> WORKDIR /some-data
> COPY ./data/* ./
>
> VOLUME /some-data
>
> CMD ["true"]
> EOT
$ docker build -t local/some-data:0.0.1 .
...snip...
$ docker create --name some-data local/some-data:0.0.1
$ docker run --volumes-from some-data alpine:3.5 ls /some-data/example
/some-data/example
I can also configure this setup in compose format v2. But there appears to be no way to do this in compose format v3 using docker stack.
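For comparison, here is roughly what the compose v2 equivalent of the CLI example above looks like (same image and paths as above); this is the part I can't find a v3/stack counterpart for:

version: '2'
services:
  some-data:
    image: local/some-data:0.0.1
  app:
    image: alpine:3.5
    command: ls /some-data/example
    volumes_from:
      - some-data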
I've seen it said a few times that named volumes "replace" volumes_from, but I don't see any way to use named volumes to deploy data as a versioned artifact as I have done above. I want to be able to pull some-data:0.0.3 when I do docker stack deploy, and have all my other images see the new version of the data in that image.
Right now I have this static, versioned data in a git repository and am using custom processes to automate pulling and checking out a git ref to deploy. It should be docker stack deploy, just like any other part of my stack.
If it's just about not having to repeat the same volumes on two services, you could also use YAML repeated nodes. E.g.:
version: '3'
services:
  code:
    image: custom/backend:latest
    volumes: &mySharedVolumes
      - type: bind
        source: /tmp/backend_data/vpn_configs
        target: /vpn_configs
  php-fpm:
    image: php-fpm:latest
    volumes: *mySharedVolumes
@tsauerwein It's not about DRY, it's about publishing and versioning. There appears to be no way to either version or publish named volumes. As I've read that named volumes "replace" --volumes-from, I'd expect to have feature parity... right? That doesn't appear to be true, and I haven't yet found an announcement anywhere that this use case was dropped.
In light of the quietly lost functionality since v2, does docker-compose/docker-stack intentionally not support the use case of deploying versioned data from a registry as a container? Or was it an oversight that this is not supported?
I _really_ want to standardize my projects to use Docker Swarm and Stack. However, as far as I can tell, any projects with a DVC use case can't use our standard docker project structure and have to exist as outliers.
Can you help me upgrade this to compose v3?
fingerboard:
  image: xivoxc/fingerboard:${XIVOCC_TAG}.${XIVOCC_DIST}
  restart: always

nginx:
  image: xivoxc/xivoxc_nginx:${XIVOCC_TAG}.${XIVOCC_DIST}
  ports:
  links:
  extra_hosts:
  volumes_from:
  volumes:
  restart: always
Another temporary solution:
docker-compose down -v
docker-compose up --build -d
docker cp "${COMPOSE_PROJECT_NAME}_web_1:/staticfiles/" ./staticfiles/
docker cp ./staticfiles/. "${COMPOSE_PROJECT_NAME}_proxy_1:/staticfiles/"
rm -rf ./staticfiles/
Please bring back volumes_from, because docker still supports it.
It seems to me like all of these issues stem from people misunderstanding the purpose of volumes. They are meant to hold persistent data between builds, not static (versioned) data. Static data should be built into the image, not mounted in a (writeable!) volume on the container. When a swarm manager is creating and destroying containers on different nodes left and right, it depends on everything you need being baked into the (immutable) image, not dependent on an external volume.
Hi @hackel. Thank you for the explanation. My use case is actually a bit different. I need a _scratch_ area that is _shared_ between containers, no persistence required. Do you know the best way to do this with just docker, without requiring AWS/GCP persistent volumes? Thanks!
@hackel I think you're misunderstanding the use case.
Static data should be built into the image, not mounted in a (writeable!) volume on the container.
What if you want your static, versioned, _published_ (nobody said anything about writeable) data image to be independent of your app image(s)? If you have data v2.4 and you want your app v1.1 to access it, you should be able to mount in that exact data version from a live container.
With a setup that allows data and apps to be independently versioned, you should be able to upgrade the app to v1.2 and continue to mount data v2.4. Then upgrade the data to v2.5 without changing the version of the app. I shouldn't have to version my app and data in lockstep just because I'm using docker-compose v3 or Docker Swarm to deploy them. And what if I have a large stack where many containers need to share the same versioned data? I would strongly prefer not to duplicate the same data over many images for no reason other than working around docker-compose. I would also strongly prefer not to use a mechanism external to Docker such as Git to manage the deployment of the data. Docker fully supports this (--volumes-from). Why can't compose v3?
In docker-compose format v2 this is all easily declared by changing your docker-compose file (the volumes_from key).
Can one do anything like this in compose file format v3? I believe it's not possible.
@MattF-NSIDC's use case is exactly why stack services need to share data without having to build, publish and update multiple images and services even for a minor data update.
Another example is an nginx reverse proxy with a php backend service, where the proxy needs to read the backend's data in read-only mode.
I think the proxy should not be coupled to the app; coupling them makes maintenance harder for both services and breaks the single responsibility principle.
some working shit
version: '3.4'
services:
  nginx:
    image: nginx
    ports:
      - 8082:80
    volumes:
      - type: volume
        source: data2
        target: /usr/share/nginx/html
        read_only: true
        volume:
          nocopy: true
    networks:
      vtest:
        aliases:
          - ng
  d:
    command: "sh -c 'rm -rf /var/www/html2/* && cp -ra /var/www/html/* /var/www/html2/ && trap : TERM INT; (while true; do sleep 1000; done) & wait'"
    #TODO more elegant copy like copy with mv
    image: <SOME DATA IMAGE>:TAG1 # change tag and redeploy stack
    volumes:
      - data2:/var/www/html2
    networks:
      - vtest
volumes:
  data2:
    driver: local
networks:
  vtest:
    driver: overlay
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has had no recent activity during the stale period.