Running one-off commands with docker-compose does not delete the volumes used by the container. This is different from docker run --rm, which does remove volumes after the container is deleted.
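To make the difference concrete, here is a hedged reproduction sketch (not from the issue itself). It assumes Python 3.7+, the docker and docker-compose CLIs on PATH, and a docker-compose.yml with a db service whose image declares a VOLUME, such as the mysql:5.6 example shown later in this thread.

```python
# Hedged reproduction sketch: count how many volumes each command leaves behind.
# Assumes a docker-compose.yml with a "db" service whose image declares a VOLUME
# (e.g. mysql:5.6, as in the example later in this thread).
import subprocess

def volume_count():
    out = subprocess.run(['docker', 'volume', 'ls', '-q'],
                         capture_output=True, text=True, check=True).stdout
    return len(out.split())

before = volume_count()
subprocess.run(['docker', 'run', '--rm', 'mysql:5.6', 'true'], check=True)
print('left by docker run --rm:', volume_count() - before)  # expected: 0

before = volume_count()
subprocess.run(['docker-compose', 'run', '--rm', 'db', 'true'], check=True)
# > 0 per this issue: the image's anonymous volume is left behind
# (plus any named volumes Compose created for the project).
print('left by docker-compose run --rm:', volume_count() - before)
```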
:+1:
We also ran into this.
What's very strange is that we had a cronjob running for months without any problems, but now every docker-compose run --rm leaves behind a volume of ~200 MB.
I am not 100% sure if this is related to an upgrade from 1.2.0 to 1.5.2, but it's the only change we've made.
docker info
Containers: 14
Images: 198
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 506
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-38-generic
Operating System: Ubuntu 14.10
CPUs: 2
Total Memory: 7.798 GiB
Name: ro109
ID: 4I66:JXGX:AAGY:ABRV:X7FD:ADPZ:IPHK:42BS:EUA2:QDRD:CJI3:EX6U
WARNING: No swap limit support
docker version
Client:
Version: 1.8.3
API version: 1.20
Go version: go1.4.2
Git commit: f4bf5c7
Built: Mon Oct 12 18:01:15 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.8.3
API version: 1.20
Go version: go1.4.2
Git commit: f4bf5c7
Built: Mon Oct 12 18:01:15 UTC 2015
OS/Arch: linux/amd64
docker-compose version
docker-compose version 1.5.2, build 7240ff3
docker-py version: 1.5.0
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
CC: @Quexer69
That's strange; we've never set v=True for this rm, and I don't think it's ever defaulted to false, so I'm not sure how the volumes were being removed before.
After digging through some code, I think it's very likely that someone also ran https://github.com/chadoe/docker-cleanup-volumes in a cronjob on the server.
Is there another way to remove the volumes from a docker-compose run? Or would you recommend docker exec?
It seems a bit strange that a one-off container would use a volume and then remove it immediately. I would think it would either use a host volume or a named volume that stays around.
Is the volume just being created because the image has a volume in it?
docker exec would be one way around that. Implementing this feature would also solve it. I believe it's a small change to run_one_off_container(), by passing v=True to project.client.remove_container().
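For illustration, here is a minimal, self-contained sketch of what that change amounts to at the API level. It uses the docker-py client already mentioned in this thread (docker.APIClient in current releases; the older releases listed above expose the same calls on docker.Client), and mysql:5.6 only because that is the image used in the example further down. It is not Compose's actual run_one_off_container() code.

```python
# Minimal sketch (not Compose source): remove a one-off container together with
# its anonymous volumes by passing v=True to remove_container(), which is the
# change proposed above for run_one_off_container().
import docker

client = docker.APIClient(base_url='unix://var/run/docker.sock')

container = client.create_container(image='mysql:5.6', command='true')
client.start(container)
client.wait(container)

# v=True also deletes the anonymous volumes created from the image's VOLUME
# declarations; the default v=False leaves them dangling.
client.remove_container(container, v=True)
```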
This is fundamental Docker behaviour: if the image has volumes in it, they will be created whenever you run a container from that image.
It's common to run one-off tasks from an image that has volumes you may never use for that task.
The docker-compose run --rm behaviour should mirror the docker run --rm behaviour, which does correctly remove volumes.
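A small sketch of that behaviour, under the same docker-py assumptions as the snippet above: the anonymous volume exists as soon as the container does, even though nothing is declared for it in docker-compose.yml.

```python
# Sketch: an image-level VOLUME creates an anonymous volume as soon as the
# container exists, regardless of what docker-compose.yml declares.
import docker

client = docker.APIClient(base_url='unix://var/run/docker.sock')
container = client.create_container(image='mysql:5.6', command='true')

for mount in client.inspect_container(container)['Mounts']:
    # e.g. /var/lib/mysql -> a 64-character anonymous volume name
    print(mount['Destination'], '->', mount.get('Name'))

# Removing without v=True leaves that anonymous volume dangling.
client.remove_container(container)
```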
@dnephin Our use-case is a rather simple web-app. We usually have web (nginx), php and worker (same image as php).
We're sharing static-files (assets) between web and php, but in other scenarios we also need to share files between php and worker.
The volume is defined in docker-compose.yml. Sure, there could be optimizations around how many files are shared, but it still looks like a general problem to me.
docker exec could be a workaround, but I'd like to stay with docker-compose, and I would rather not rely on the fact that my container is named project_php_1, because it may be named project_php_2 in some cases.
I also noticed that we ran a cleanup script before; I had to reactivate/fix that, but having an option to remove volumes after docker-compose run would still be great.
I need to look into named volumes a bit more, I think; basically all the apps we run are running on a swarm, so I need to configure that properly.
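Until such an option exists, the cleanup cronjob mentioned earlier (docker-cleanup-volumes) can be approximated with a hedged docker-py sketch like the one below. The dangling filter and remove_volume() are standard Engine API calls, but note that this removes every volume not currently attached to a container, named or not, so treat it as an illustration rather than a drop-in replacement.

```python
# Sketch of a cleanup cronjob: delete volumes no longer referenced by any
# container (the ones docker-compose run --rm currently leaves behind).
# Caution: "dangling" also matches named volumes that simply aren't in use.
import docker

client = docker.APIClient(base_url='unix://var/run/docker.sock')

dangling = client.volumes(filters={'dangling': True}).get('Volumes') or []
for volume in dangling:
    print('removing dangling volume', volume['Name'])
    client.remove_volume(volume['Name'])
```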
I also have this problem. I am using Docker Compose 1.7.1
Same exact use case as @schmunk42. We use named volumes. docker-compose down does not remove the named volumes, which is preventing us from doing a complete cleanup of artifacts on build. We can use workarounds like docker volume ls | grep myassets, but that is unmaintainable in many ways.
You can work around this issue in Docker Compose 1.7 as follows:
docker-compose run xxx
docker-compose down -v
The key here is to not use the --rm flag on the run command. Because Docker Compose 1.7 removes containers started with the run command, it cleans up everything correctly, including all volumes (as long as you use -v).
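If you want to script that workaround, for example in CI, a hedged sketch could look like the following; the service name xxx is the placeholder from the comment above, and down -v runs in a finally block so cleanup happens even when the one-off command fails.

```python
# Sketch of the workaround above: run the one-off service WITHOUT --rm, then
# let `docker-compose down -v` remove the containers plus their volumes.
import subprocess

def run_one_off_and_clean(service, *args):
    try:
        subprocess.run(['docker-compose', 'run', service, *args], check=True)
    finally:
        # Tears down the whole project: containers, network, named volumes
        # declared in docker-compose.yml, and anonymous volumes (-v).
        subprocess.run(['docker-compose', 'down', '-v'], check=True)

run_one_off_and_clean('xxx')  # "xxx" is the placeholder service from the comment above
```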
@schmunk42 docker-compose down -v always removes named local volumes (1.6+). The current issue relates to volumes defined in the image of the service you are running, which are created automatically but not declared as explicit volumes in your docker-compose.yml.
For example, given the following docker-compose.yml file:
version: '2'
volumes:
mysql_run:
driver: local
services:
db:
image: mysql:5.6
volumes:
- mysql_run:/var/run/mysqld
If we use docker-compose run --rm:
$ docker volume ls
DRIVER VOLUME NAME
$ docker-compose run --rm db
Creating volume "tmp_mysql_run" with local driver
error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD
$ docker-compose down -v
Removing network tmp_default
Removing volume tmp_mysql_run
$ docker volume ls
DRIVER VOLUME NAME
local a79c78267ed6907afb3e6fc5d4877c160b3723551f499a3da15b13b685523c69
Notice that the volume tmp_mysql_run is created and destroyed correctly, but we get an orphaned volume, which is the /var/lib/mysql volume in the mysql image.
If we use docker-compose run instead:
$ docker volume ls
DRIVER VOLUME NAME
$ docker-compose run db
Creating volume "tmp_mysql_run" with local driver
error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD
$ docker-compose down -v
Removing tmp_db_run_1 ... done
Removing network tmp_default
Removing volume tmp_mysql_run
$ docker volume ls
DRIVER VOLUME NAME
Everything is cleaned up correctly...
Can this be closed now?
Is it fixed? My PR is still open.
I also don't understand this issue. Using docker-compose down is not an option if we are running a service stack in the same namespace as the one-off command we want to clean up: docker-compose down -v would bring down the service stack and delete named volumes, and I want neither to happen.
Pretty sure this issue was about run --rm deleting unnamed volumes, which indeed now seems to be fixed.
It's not fixed in 1.10 (I just tried it), and it doesn't look like it's fixed in master either (v=true is still missing): https://github.com/docker/compose/blob/master/compose/cli/main.py#L985
The issue is that --rm does not delete unnamed volumes. It should.