If I have an image that defines a volume, the container ends up with an anonymous Docker volume. This leads to issues like #4476, where I can't add volumes_from: if I've previously forgotten it.
Another use case I've encountered is when you have an image that populates a volume:
FROM base/almost_empty:1.0.0
ENV AIRFLOW_HOME /usr/local/airflow
COPY ./foo ${AIRFLOW_HOME}/stuff/foo/
VOLUME ${AIRFLOW_HOME}/stuff
In this case, when I update to a newer version of my image, I want a new volume with the latest version of foo.
At the moment, I can do docker-compose down && docker-compose up, but this recreates every single container in my docker-compose.yml.
Alternatively, I can do docker-compose rm xyz && docker-compose up. However, this assumes that I know which containers to run docker-compose rm on.
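For reference, a rough sketch of what those workarounds look like today, assuming xyz is the service whose anonymous volume should be discarded (the -s and -v flags of docker-compose rm stop the container and remove its anonymous volumes):
# heavy-handed: tears down and recreates every container in the project
docker-compose down && docker-compose up -d
# targeted: only recreates xyz, discarding its anonymous volumes
docker-compose rm -sv xyz && docker-compose up -d
Both still require knowing in advance which services are affected.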
Would you be willing to add a --force-recreate-volumes argument? This would be a huge help for these use cases.
In this case, when I update to a newer version of my image, I want a new volume with the latest version of foo.
Is there a specific reason why this needs to be a volume then? It doesn't sound like you need that data to persist.
I would like to chip in here too. I'm using GitLab to build an image that contains the application source whenever the source changes, and it then deploys that to my swarm. The data in the volume therefore doesn't need to persist; the primary reason I use a named volume is so the nginx and php-fpm services can get to it. So I primarily use a named volume for deployment reasons: no need to ssh into the host and fill a directory with source. A --force-recreate-volumes would be helpful here. Since I'm not very experienced with Docker, I'm open to alternatives.
Is there a specific reason why this needs to be a volume then? It doesn't sound like you need that data to persist.
In this specific case, a user had an elaborate setup where one container would prepopulate a volume with some generated data, so other containers could access it.
In the general case, we have a tool that runs docker-compose to automatically deploy containers across a series of machines. If we make a mistake in our dev environment and use the wrong mount, we have to manually intervene to fix the situation.
@Wilfred @basz Thanks for the feedback! Does it make sense to consolidate with the discussion in #4337 (notably from this comment onward)?
@shin- #4337 is definitely similar, but it seems to focus on local development of projects deliberately using node_modules as a mount. I'm interested in a general way of opting out of volumes being preserved, so I can run a docker-compose command against a remote machine idempotently.
I think it would definitely be possible to solve this in a way that also works for #4337, so whichever suits your workflow best :)
@shin- why did you close this issue? I cannot see how the use case @basz provided is addressed by #4337
It was closed because the issue was resolved by #5596
@shin- I saw those references to issues related to anonymous volumes. The use case of @basz, however, is not about anonymous volumes, so it is clearly not resolved by that merge request at all.
It is still possible that @basz (and I) are doing something completely wrong, but as nobody (including you) has said so, the use case seems valid in the context of this issue.
The tl;dr of that use-case/feature request after #5596 got merged would be:
Add option to force recreation of all attached volumes and not only anonymous ones
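For context, a sketch of what #5596 does provide, assuming docker-compose 1.21 or later (where the flag was introduced):
# recreates containers and gives them fresh anonymous volumes
docker-compose up -d --renew-anon-volumes
# short form
docker-compose up -d -V
Named volumes are left untouched, which is the remaining gap this request is about.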
This issue is not resolved. When you use the local storage driver to mount NFS shares, it creates a volume for it. If you change the driver_opts or other properties of the volume, docker-compose up fails because the volume already exists.
In reality, we want the volume to be recreated because the device options changed. Even though it's using the local driver, this volume contains no data, and instead is just a container for a mounted NFS directory.
Example:
version: '3.7'
services:
  test:
    image: alpine
    user: 1000:1000
    network_mode: none
    volumes:
      - test_volume:/data
volumes:
  test_volume:
    driver_opts:
      type: nfs
      o: addr=192.168.1.51,vers=3
      device: :/volume2/stuff
If I change options, or the path of the mount, I get a failure:
ERROR: Configuration for volume test_volume specifies "device" driver_opt :/volume2/Plex Media, but a volume with the same name uses a different "device" driver_opt (:/volume2/nextcloud). If you wish to use the new configuration, please remove the existing volume "test-docker_test_volume" first:
$ docker volume rm test-docker_test_volume
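A minimal sketch of the manual workaround this forces today, assuming nothing else in the project uses the volume and the generated name matches the error above:
# stop and remove the containers holding the volume
docker-compose down
# drop the stale volume definition so the new driver_opts can take effect
docker volume rm test-docker_test_volume
# recreate everything with the updated configuration
docker-compose up -d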
In these cases, I would like the ability to specify, when doing docker-compose up, that I want to recreate those volumes. Even better would be the added ability in the YML file to specify whether or not a volume is volatile. A volatile volume would be recreated any time up is performed:
volumes:
  test_volume:
    volatile: true
    driver_opts:
      type: nfs
      o: addr=192.168.1.51,vers=3
      device: :/volume2/stuff
@shin-, please reconsider!
Please consider reopening this issue. Such a "volatile" option for a volume would do the trick.
Today I ran into exactly the NFS volumes issue described here. It took us quite a while to figure out why the container wasn't writing where we thought; it turned out there was an old volume with the same name as the one we were trying to compose up. This could have been quite a lot worse than it ended up being, and it took us by surprise.
I just want to register my vote for reopening this. 👍
I ran into this issue recently, and it was not immediately obvious that my container was reusing an old definition of my volumes despite my having changed it. It would be good to be able to force-recreate volumes when composing up.
docker-compose build --no-cache && docker-compose up did the trick for me.
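For named volumes specifically, the bluntest built-in approach I'm aware of is the --volumes flag on down, which removes every named volume declared in the Compose file rather than just the changed one:
# removes containers, networks, and the project's named and anonymous volumes
docker-compose down --volumes && docker-compose up -d
That lack of granularity is why a per-volume option like the volatile: flag proposed above would still help.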