Compose: Compose with Swarm can't locate named volumes

Created on 18 Feb 2016 · 36 comments · Source: docker/compose

I'm having trouble running a simple compose task on Swarm:

version: "2"
services:
  elastic-01:
    image: elasticsearch:2
    environment:
      SERVICE_NAME: elastic
    ports:
      - 9200:9200
    volume_driver: flocker
    volumes:
      - 'data_01:/usr/share/elasticsearch/data'

  elastic-02:
    image: elasticsearch:2
    ports:
      - 9201:9200
    volume_driver: flocker
    volumes:
      - 'data_02:/usr/share/elasticsearch/data'
    command: elasticsearch --discovery.zen.ping.unicast.hosts=elastic-01

volumes:
  data_01:
    external:
      name: "es_data_01"

  data_02:
    external:
      name: "es_data_02禄

Running docker-compose up, I receive the following error:

eric@iMac-Eric /V/D/W/s/a/elk-swarm> docker-compose up
ERROR: Volume es_data_01 declared as external, but could not be found. Please create the volume manually using `docker volume create --name=es_data_01` and try again.
eric@iMac-Eric /V/D/W/s/a/elk-swarm> 

At the same time, the plain docker command works fine:

eric@iMac-Eric /V/D/W/s/a/elk-swarm> docker run -it --rm --volume-driver flocker -v es_data_01:/data ubuntu
root@96b0c807c46f:/# ls /data
elasticsearch  test1
root@96b0c807c46f:/# exit
eric@iMac-Eric /V/D/W/s/a/elk-swarm> 

Also, here is the output from docker volume ls:

eric@iMac-Eric /V/D/W/s/a/elk-swarm> docker volume ls
DRIVER              VOLUME NAME
local               swarm-node-05a.cybertonica.aws/3f7c3fb82a73f539f318e14b3f260b3cc32d50836b544d5db4572d202366d16c
flocker             es_data_02
flocker             es_data_01
local               swarm-node-06a.cybertonica.aws/a5a94eb763e09f6cf49a2b95dddbd7351a1e1074a690b3553311690120a4dc18
flocker             es_data_01
flocker             es_data_02
eric@iMac-Eric /V/D/W/s/a/elk-swarm> 

If I run docker-compose against any single docker node, everything works ok too.

What am I doing wrong here?

Labels: area/volumes, kind/bug, stale, swarm

All 36 comments

I think we overlooked volume_driver in the 1.6.0 release. I believe we need to remove that field from the v2 config.

The right place to specify the driver is in the volumes: top level section where you specify external and the name: https://docs.docker.com/compose/compose-file/#volume-configuration-reference

Also need to update the docs here: https://docs.docker.com/compose/compose-file/#volumes-volume-driver
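
For reference, a minimal sketch of what that could look like, assuming you let Compose create the volume itself via the top-level driver key instead of marking it external:

version: "2"
services:
  elastic-01:
    image: elasticsearch:2
    volumes:
      - data_01:/usr/share/elasticsearch/data

volumes:
  # Compose creates this volume through the flocker driver on first `up`
  data_01:
    driver: flocker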

The docs say 'external' can't be used together with 'driver'. Is that so?
What would be the correct docker-compose.yml to use externally created flocker volumes?

Actually, the first thing I tried was to let swarm/compose manage volumes for me (creating a new one if it is missing), but it looks like compose tries to create the volume on each 'up' run, so I decided to start with pre-created volumes.

I think there can be some issue with locating non-local volumes (they don't have a node prefix).

hmm, yes external would be the right way to handle that. Compose should query docker to find that driver.

Does docker volume inspect es_data_01 work? I guess it would?

If it does, please include the output of docker-compose --verbose up. It may be we have a second bug here as well.

It looks interesting:

eric@iMac-Eric /V/D/W/g/s/aws-zmq-consumer> docker volume ls
DRIVER              VOLUME NAME
local               swarm-node-05a.cybertonica.aws/3f7c3fb82a73f539f318e14b3f260b3cc32d50836b544d5db4572d202366d16c
flocker             es_data_02
flocker             es_data_01
local               swarm-node-06a.cybertonica.aws/a5a94eb763e09f6cf49a2b95dddbd7351a1e1074a690b3553311690120a4dc18
flocker             es_data_01
flocker             es_data_02
eric@iMac-Eric /V/D/W/g/s/aws-zmq-consumer> docker volume inspect es_data_01
[]
Error: No such volume: es_data_01
eric@iMac-Eric /V/D/W/g/s/aws-zmq-consumer> docker volume inspect es_data_02
[]
Error: No such volume: es_data_02
eric@iMac-Eric /V/D/W/g/s/aws-zmq-consumer> docker volume inspect swarm-node-06a.cybertonica.aws/es_data_01
[
    {
        "Name": "es_data_01",
        "Driver": "flocker",
        "Mountpoint": ""
    }
]
eric@iMac-Eric /V/D/W/g/s/aws-zmq-consumer> 

And here is the output from docker-compose --verbose up:

eric@iMac-Eric /V/D/W/s/a/elk-swarm> docker-compose --verbose up -d
compose.config.config.find: Using configuration files: ./docker-compose.yml
docker.auth.auth.load_config: Found 'auths' section
docker.auth.auth.parse_auth: Found entry (registry=u'https://362673073324.dkr.ecr.us-east-1.amazonaws.com', username=u'AWS')
compose.cli.command.get_client: docker-compose version 1.6.0, build unknown
docker-py version: 1.7.0
CPython version: 2.7.10
OpenSSL version: OpenSSL 0.9.8zg 14 July 2015
compose.cli.command.get_client: Docker base_url: https://swarm-a.cybertonica.aws:4000
compose.cli.command.get_client: Docker version: KernelVersion=4.2.0-27-generic, Os=linux, BuildTime=Wed Feb 17 22:45:35 UTC 2016, ApiVersion=1.22, Version=swarm/1.1.1, GitCommit=39ca8e9, Arch=amd64, GoVersion=go1.5.3
compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- (u'elkswarm_default')
compose.cli.verbose_proxy.proxy_callable: docker inspect_network -> {u'Containers': {},
 u'Driver': u'overlay',
 u'Engine': {u'Addr': u'swarm-node-06a.cybertonica.aws:2376',
             u'Cpus': 2,
             u'ID': u'X5ZZ:H62N:ZI5Y:AQ6A:65M7:BOX3:XCDK:NL6T:P7Y5:PWZU:RDEP:WLM4',
             u'IP': u'172.31.15.91',
             u'Labels': {u'executiondriver': u'native-0.2',
                         u'kernelversion': u'4.2.0-27-generic',
                         u'operatingsystem': u'Ubuntu 14.04.4 LTS',
                         u'storagedriver': u'aufs',
...
compose.volume.initialize: Volume data_01 declared as external. No new volume will be created.
compose.cli.verbose_proxy.proxy_callable: docker inspect_volume <- ('es_data_01')
ERROR: compose.cli.main.main: Volume es_data_01 declared as external, but could not be found. Please create the volume manually using `docker volume create --name=es_data_01` and try again.
eric@iMac-Eric /V/D/W/s/a/elk-swarm> 

I created #2960 for the first issue I noticed, so we can keep this one for the bigger issue of flocker on swarm.

That inspect is interesting. This might just be the way swarm and volume plugins interact? I know that swarm adds that node prefix to container names as well. Or maybe it's a bug in swarm? I'll see what I can find out.

cc @vieux

Just a quick note - if I switch to V1 like that:

elastic-01:
  image: elasticsearch:2
  environment:
    SERVICE_NAME: elastic
  ports:
    - 9200:9200
  volume_driver: flocker
  volumes:
    - 'es_data_01:/usr/share/elasticsearch/data'

elastic-02:
  image: elasticsearch:2
  ports:
    - 9201:9200
  volume_driver: flocker
  volumes:
    - 'es_data_02:/usr/share/elasticsearch/data'

Containers are started fine with the volumes correctly mounted.
But in that case I'm missing the nice networking features...

So I think with V1 it's using a very different API call.

Is that actually re-using the volumes, or is it just implicitly creating a new volume on each node?

It correctly attaches the existing volumes. Here is the output from docker-compose --verbose up:

eric@iMac-Eric /V/D/W/s/a/elk-swarm> docker-compose --verbose up
compose.config.config.find: Using configuration files: ./docker-compose.yml
docker.auth.auth.load_config: Found 'auths' section
docker.auth.auth.parse_auth: Found entry (registry=u'https://362673073324.dkr.ecr.us-east-1.amazonaws.com', username=u'AWS')
compose.cli.command.get_client: docker-compose version 1.6.0, build unknown
docker-py version: 1.7.0
CPython version: 2.7.10
OpenSSL version: OpenSSL 0.9.8zg 14 July 2015
compose.cli.command.get_client: Docker base_url: https://swarm-a.cybertonica.aws:4000
compose.cli.command.get_client: Docker version: KernelVersion=4.2.0-27-generic, Os=linux, BuildTime=Thu Feb 18 08:28:32 UTC 2016, ApiVersion=1.22, Version=swarm/1.1.2, GitCommit=f947993, Arch=amd64, GoVersion=go1.5.3
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=elkswarm', u'com.docker.compose.service=elastic-01', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=elkswarm', u'com.docker.compose.service=elastic-02', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=elkswarm', u'com.docker.compose.service=elastic-01', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=elkswarm', u'com.docker.compose.service=elastic-02', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('elasticsearch:2')
compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {u'Architecture': u'amd64',
 u'Author': u'',
 u'Comment': u'',
 u'Config': {u'AttachStderr': False,
             u'AttachStdin': False,
             u'AttachStdout': False,
             u'Cmd': [u'elasticsearch'],
             u'Domainname': u'',
             u'Entrypoint': [u'/docker-entrypoint.sh'],
             u'Env': [u'PATH=/usr/share/elasticsearch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=elkswarm', u'com.docker.compose.service=elastic-01', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('elasticsearch:2')
compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {u'Architecture': u'amd64',
 u'Author': u'',
 u'Comment': u'',
 u'Config': {u'AttachStderr': False,
             u'AttachStdin': False,
             u'AttachStdout': False,
             u'Cmd': [u'elasticsearch'],
             u'Domainname': u'',
             u'Entrypoint': [u'/docker-entrypoint.sh'],
             u'Env': [u'PATH=/usr/share/elasticsearch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
...
compose.service.build_container_labels: Added config hash: 450e384653b9b2126fa9249690dc563ba62c9b6e5f05e8e01c60785eebf7ed85
compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (cap_add=None, links=[], dns_search=None, pid_mode=None, log_config={'Type': u'', 'Config': {}}, cpu_quota=None, read_only=None, dns=None, volumes_from=[], port_bindings={'9200': ['9200']}, security_opt=None, extra_hosts=None, cgroup_parent=None, network_mode=None, memswap_limit=None, restart_policy=None, devices=None, privileged=False, binds=[u'es_data_01:/usr/share/elasticsearch/data:rw'], ipc_mode=None, mem_limit=None, cap_drop=None, ulimits=None)
compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [u'es_data_01:/usr/share/elasticsearch/data:rw'],
 'Links': [],
 'LogConfig': {'Config': {}, 'Type': u''},
 'NetworkMode': 'default',
 'PortBindings': {'9200/tcp': [{'HostIp': '', 'HostPort': '9200'}]},
 'VolumesFrom': []}
compose.service.create_container: Creating elkswarm_elastic-01_1
compose.cli.verbose_proxy.proxy_callable: docker create_container <- (name=u'elkswarm_elastic-01_1', image='elasticsearch:2', labels={u'com.docker.compose.service': u'elastic-01', u'com.docker.compose.project': u'elkswarm', u'com.docker.compose.config-hash': '450e384653b9b2126fa9249690dc563ba62c9b6e5f05e8e01c60785eebf7ed85', u'com.docker.compose.version': u'1.6.0', u'com.docker.compose.oneoff': u'False', u'com.docker.compose.container-number': '1'}, host_config={'NetworkMode': 'default', 'Links': [], 'PortBindings': {'9200/tcp': [{'HostPort': '9200', 'HostIp': ''}]}, 'Binds': [u'es_data_01:/usr/share/elasticsearch/data:rw'], 'LogConfig': {'Type': u'', 'Config': {}}, 'VolumesFrom': []}, environment={'SERVICE_NAME': 'elastic'}, volume_driver='flocker', volumes={u'/usr/share/elasticsearch/data': {}}, detach=True, ports=['9200'])
compose.cli.verbose_proxy.proxy_callable: docker create_container -> {u'Id': u'292cadecddbf46729f3773515d5e9d4a9f8213fe13857d232f9a0db4f4465a05'}
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'292cadecddbf46729f3773515d5e9d4a9f8213fe13857d232f9a0db4f4465a05')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'',
 u'Args': [u'elasticsearch'],
 u'Config': {u'AttachStderr': False,
             u'AttachStdin': False,
             u'AttachStdout': False,
             u'Cmd': [u'elasticsearch'],
             u'Domainname': u'',
             u'Entrypoint': [u'/docker-entrypoint.sh'],
             u'Env': [u'SERVICE_NAME=elastic',
                      u'PATH=/usr/share/elasticsearch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
...
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'292cadecddbf46729f3773515d5e9d4a9f8213fe13857d232f9a0db4f4465a05')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'',
 u'Args': [u'elasticsearch'],
 u'Config': {u'AttachStderr': False,
             u'AttachStdin': False,
             u'AttachStdout': False,
             u'Cmd': [u'elasticsearch'],
             u'Domainname': u'',
             u'Entrypoint': [u'/docker-entrypoint.sh'],
             u'Env': [u'SERVICE_NAME=elastic',
                      u'PATH=/usr/share/elasticsearch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
...
compose.cli.verbose_proxy.proxy_callable: docker attach <- (u'292cadecddbf46729f3773515d5e9d4a9f8213fe13857d232f9a0db4f4465a05', stream=True, stderr=True, stdout=True)
compose.cli.verbose_proxy.proxy_callable: docker attach -> <generator object _multiplexed_response_stream_helper at 0x110c0a820>
compose.cli.verbose_proxy.proxy_callable: docker start <- (u'292cadecddbf46729f3773515d5e9d4a9f8213fe13857d232f9a0db4f4465a05')
compose.cli.verbose_proxy.proxy_callable: docker start -> None
compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('elasticsearch:2')
compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {u'Architecture': u'amd64',
 u'Author': u'',
 u'Comment': u'',
 u'Config': {u'AttachStderr': False,
             u'AttachStdin': False,
             u'AttachStdout': False,
             u'Cmd': [u'elasticsearch'],
             u'Domainname': u'',
             u'Entrypoint': [u'/docker-entrypoint.sh'],
             u'Env': [u'PATH=/usr/share/elasticsearch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=elkswarm', u'com.docker.compose.service=elastic-02', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('elasticsearch:2')
compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {u'Architecture': u'amd64',
 u'Author': u'',
 u'Comment': u'',
 u'Config': {u'AttachStderr': False,
             u'AttachStdin': False,
             u'AttachStdout': False,
             u'Cmd': [u'elasticsearch'],
             u'Domainname': u'',
             u'Entrypoint': [u'/docker-entrypoint.sh'],
             u'Env': [u'PATH=/usr/share/elasticsearch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
...
compose.service.build_container_labels: Added config hash: decc2557281f1db11f205fbffc32cec0858c232acac827a8c651e78e8b09040c
compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (cap_add=None, links=[], dns_search=None, pid_mode=None, log_config={'Type': u'', 'Config': {}}, cpu_quota=None, read_only=None, dns=None, volumes_from=[], port_bindings={'9200': ['9201']}, security_opt=None, extra_hosts=None, cgroup_parent=None, network_mode=None, memswap_limit=None, restart_policy=None, devices=None, privileged=False, binds=[u'es_data_02:/usr/share/elasticsearch/data:rw'], ipc_mode=None, mem_limit=None, cap_drop=None, ulimits=None)
compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [u'es_data_02:/usr/share/elasticsearch/data:rw'],
 'Links': [],
 'LogConfig': {'Config': {}, 'Type': u''},
 'NetworkMode': 'default',
 'PortBindings': {'9200/tcp': [{'HostIp': '', 'HostPort': '9201'}]},
 'VolumesFrom': []}
compose.service.create_container: Creating elkswarm_elastic-02_1
compose.cli.verbose_proxy.proxy_callable: docker create_container <- (name=u'elkswarm_elastic-02_1', image='elasticsearch:2', labels={u'com.docker.compose.service': u'elastic-02', u'com.docker.compose.project': u'elkswarm', u'com.docker.compose.config-hash': 'decc2557281f1db11f205fbffc32cec0858c232acac827a8c651e78e8b09040c', u'com.docker.compose.version': u'1.6.0', u'com.docker.compose.oneoff': u'False', u'com.docker.compose.container-number': '1'}, host_config={'NetworkMode': 'default', 'Links': [], 'PortBindings': {'9200/tcp': [{'HostPort': '9201', 'HostIp': ''}]}, 'Binds': [u'es_data_02:/usr/share/elasticsearch/data:rw'], 'LogConfig': {'Type': u'', 'Config': {}}, 'VolumesFrom': []}, environment={}, volume_driver='flocker', volumes={u'/usr/share/elasticsearch/data': {}}, detach=True, ports=['9200'])
compose.cli.verbose_proxy.proxy_callable: docker create_container -> {u'Id': u'da761a93b863b44fe86562cf5a4fa179bc3bd179cb888a2a023fc73b8fd80d48'}
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'da761a93b863b44fe86562cf5a4fa179bc3bd179cb888a2a023fc73b8fd80d48')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'',
 u'Args': [u'elasticsearch'],
 u'Config': {u'AttachStderr': False,
             u'AttachStdin': False,
             u'AttachStdout': False,
             u'Cmd': [u'elasticsearch'],
             u'Domainname': u'',
             u'Entrypoint': [u'/docker-entrypoint.sh'],
             u'Env': [u'PATH=/usr/share/elasticsearch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
                      u'LANG=C.UTF-8',
...
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'da761a93b863b44fe86562cf5a4fa179bc3bd179cb888a2a023fc73b8fd80d48')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'',
 u'Args': [u'elasticsearch'],
 u'Config': {u'AttachStderr': False,
             u'AttachStdin': False,
             u'AttachStdout': False,
             u'Cmd': [u'elasticsearch'],
             u'Domainname': u'',
             u'Entrypoint': [u'/docker-entrypoint.sh'],
             u'Env': [u'PATH=/usr/share/elasticsearch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
                      u'LANG=C.UTF-8',
...
compose.cli.verbose_proxy.proxy_callable: docker attach <- (u'da761a93b863b44fe86562cf5a4fa179bc3bd179cb888a2a023fc73b8fd80d48', stream=True, stderr=True, stdout=True)
compose.cli.verbose_proxy.proxy_callable: docker attach -> <generator object _multiplexed_response_stream_helper at 0x110c0a7d0>
compose.cli.verbose_proxy.proxy_callable: docker start <- (u'da761a93b863b44fe86562cf5a4fa179bc3bd179cb888a2a023fc73b8fd80d48')
compose.cli.verbose_proxy.proxy_callable: docker start -> None
Attaching to elkswarm_elastic-01_1, elkswarm_elastic-02_1
elastic-01_1 | [2016-02-19 18:47:16,963][INFO ][node                     ] [Magik] version[2.2.0], pid[1], build[8ff36d1/2016-01-27T13:32:39Z]
elastic-01_1 | [2016-02-19 18:47:16,964][INFO ][node                     ] [Magik] initializing ...
elastic-01_1 | [2016-02-19 18:47:17,534][INFO ][plugins                  ] [Magik] modules [lang-expression, lang-groovy], plugins [], sites []
elastic-01_1 | [2016-02-19 18:47:17,560][INFO ][env                      ] [Magik] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/xvdg)]], net usable_space [9.1gb], net total_space [9.7gb], spins? [possibly], types [ext4]
elastic-01_1 | [2016-02-19 18:47:17,560][INFO ][env                      ] [Magik] heap size [1007.3mb], compressed ordinary object pointers [true]
elastic-02_1 | [2016-02-19 18:47:18,833][INFO ][node                     ] [Achelous] version[2.2.0], pid[1], build[8ff36d1/2016-01-27T13:32:39Z]
elastic-02_1 | [2016-02-19 18:47:18,834][INFO ][node                     ] [Achelous] initializing ...
elastic-02_1 | [2016-02-19 18:47:19,416][INFO ][plugins                  ] [Achelous] modules [lang-expression, lang-groovy], plugins [], sites []
elastic-02_1 | [2016-02-19 18:47:19,445][INFO ][env                      ] [Achelous] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/xvdf)]], net usable_space [9.1gb], net total_space [9.7gb], spins? [possibly], types [ext4]
elastic-02_1 | [2016-02-19 18:47:19,445][INFO ][env                      ] [Achelous] heap size [1007.3mb], compressed ordinary object pointers [true]
elastic-01_1 | [2016-02-19 18:47:19,718][INFO ][node                     ] [Magik] initialized
elastic-01_1 | [2016-02-19 18:47:19,719][INFO ][node                     ] [Magik] starting ...
elastic-01_1 | [2016-02-19 18:47:19,841][INFO ][transport                ] [Magik] publish_address {172.17.0.3:9300}, bound_addresses {[::]:9300}
elastic-01_1 | [2016-02-19 18:47:19,861][INFO ][discovery                ] [Magik] elasticsearch/DQc19fJ7Qay9KZnNKgaPbw
elastic-02_1 | [2016-02-19 18:47:21,506][INFO ][node                     ] [Achelous] initialized
elastic-02_1 | [2016-02-19 18:47:21,506][INFO ][node                     ] [Achelous] starting ...
elastic-02_1 | [2016-02-19 18:47:21,661][INFO ][transport                ] [Achelous] publish_address {172.17.0.3:9300}, bound_addresses {[::]:9300}
elastic-02_1 | [2016-02-19 18:47:21,687][INFO ][discovery                ] [Achelous] elasticsearch/dMt8CsgpR3u0eps0JDZg_A
^CGracefully stopping... (press Ctrl+C again to force)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={u'label': [u'com.docker.compose.project=elkswarm', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 2 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'da761a93b863b44fe86562cf5a4fa179bc3bd179cb888a2a023fc73b8fd80d48')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'',
 u'Args': [u'elasticsearch'],
 u'Config': {u'AttachStderr': False,
             u'AttachStdin': False,
             u'AttachStdout': False,
             u'Cmd': [u'elasticsearch'],
             u'Domainname': u'',
             u'Entrypoint': [u'/docker-entrypoint.sh'],
             u'Env': [u'PATH=/usr/share/elasticsearch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
                      u'LANG=C.UTF-8',
...
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'292cadecddbf46729f3773515d5e9d4a9f8213fe13857d232f9a0db4f4465a05')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'',
 u'Args': [u'elasticsearch'],
 u'Config': {u'AttachStderr': False,
             u'AttachStdin': False,
             u'AttachStdout': False,
             u'Cmd': [u'elasticsearch'],
             u'Domainname': u'',
             u'Entrypoint': [u'/docker-entrypoint.sh'],
             u'Env': [u'SERVICE_NAME=elastic',
                      u'PATH=/usr/share/elasticsearch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
...
Stopping elkswarm_elastic-02_1 ... 
Stopping elkswarm_elastic-01_1 ... 
compose.cli.verbose_proxy.proxy_callable: docker stop <- (u'da761a93b863b44fe86562cf5a4fa179bc3bd179cb888a2a023fc73b8fd80d48', timeout=10)
compose.cli.verbose_proxy.proxy_callable: docker stop <- (u'292cadecddbf46729f3773515d5e9d4a9f8213fe13857d232f9a0db4f4465a05', timeout=10)
Stopping elkswarm_elastic-02_1 ... done
compose.cli.verbose_proxy.proxy_callable: docker stop -> None
compose.cli.verbose_proxy.proxy_callable: docker wait <- (u'292cadecddbf46729f3773515d5e9d4a9f8213fe13857d232f9a0db4f4465a05')
compose.cli.verbose_proxy.proxy_callable: docker wait -> 143
Stopping elkswarm_elastic-01_1 ... done
eric@iMac-Eric /V/D/W/s/a/elk-swarm> 

I've published the Ansible playbooks I use to set up Swarm, so others can reproduce this.
Here they are: https://github.com/echupriyanov/ansible-swarm-aws

I've posted a possible fix in the Swarm repository: https://github.com/docker/swarm/issues/1847

With it applied, containers start as intended with the correct volume mounts.
But I'm not sure if it's the correct way to resolve this.

Verified the same issues:

root@ip-172-31-4-101:/home/ubuntu# docker -l debug volume inspect flocker-vol
DEBU[0000] Trusting certs with subjects: [01UUCP Client Root CA 010UUCP Cluster Root CA]
[]
Error: No such volume: flocker-vol

Then

root@ip-172-31-4-101:/home/ubuntu# docker -l debug volume inspect ip-172-31-3-4/flocker-vol
DEBU[0000] Trusting certs with subjects: [01UUCP Client Root CA 010UUCP Cluster Root CA]
[
    {
        "Name": "flocker-vol",
        "Driver": "flocker",
        "Mountpoint": ""
    }
]
root@ip-172-31-4-101:/home/ubuntu# docker -l debug volume inspect ip-172-31-4-101/flocker-vol
DEBU[0000] Trusting certs with subjects: [01UUCP Client Root CA 010UUCP Cluster Root CA]
[
    {
        "Name": "flocker-vol",
        "Driver": "flocker",
        "Mountpoint": "/flocker/5b8f8c75-4c34-458e-8daa-61ae88763178"
    }
]

I'm not sure this has anything to do with the flocker plugin; it seems to be about the way swarm finds volumes.

Changing this in compose.yml has no effect either.


volumes:
  test:
    external:
      name: ip-172-31-3-4/flocker-vol
root@ip-172-31-4-101:/home/ubuntu# docker-compose -f docker-compose.yml up -d
Creating ubuntu_redis1_1
ERROR: 500 Internal Server Error: create ip-172-31-3-4/flocker-vol: volume name invalid: "ip-172-31-3-4/flocker-vol" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed

Hi, this is my first ever GitHub post, so please excuse me if something is wrong.
It seems to me that I have a similar problem, not related to Swarm, but using local volumes (I'm on OSX).
It seems that external containers are not recognized at all in version 2 of docker-compose.
I have this situation:

giorgiossmbpssd:nodeDocker giorgioferraris$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                         PORTS               NAMES
229e29f8ec0b        postgres:9.4        "/docker-entrypoint.s"   About an hour ago   Exited (0) About an hour ago                       nodedocker_dbstore_1
7d2fe1cd5a39        postgres:9.4        "/docker-entrypoint.s"   6 days ago          Created                                            dbstore

So I have a data-only volume container named dbstore.
If I use V1 of the docker-compose yml, as in:

  dbpostgres:
     image: postgres:9.4 #define the image to get
     volumes_from:
       - dbstore
     ports:
       - "5432:5432"
  express-app-container:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - ./:/app
    links:
      - dbpostgres:postgresdb

things go fine.

If I use V2 of docker-compose:

version: '2'
services:
  dbpostgres:
     image: postgres:9.4 #define the image to get
     volumes_from:
       - dbstore
     ports:
       - "5432:5432"
  express-app-container:
    build: .
    image: express-app  #assign a name to the built image
    depends_on:
      - dbpostgres      #wait for dbpostgres to be started, not for ready
    ports:
      - "3000:3000"
    volumes:
      - ./:/app
    links:
      - dbpostgres:postgresdb

With docker-compose up I get:
ERROR: Service "dbpostgres" mounts volumes from "dbstore", which is not the name of a service or container.

I have version 1.6 of docker-compose; I loaded it all on my OSX using Docker Toolbox.
Is there something wrong in my docker-compose.yml that you can explain to me?

Thanks a lot

I'm not sure about volumes_from with external containers. But I would suggest not using a so-called data container to hold a volume. Instead, you can create a named local volume with the docker volume create command and use it as is, without extra containers. In that case, you can reference this volume in the volumes section of the docker-compose.yml file, like this:

volumes:
  dbstore:
    external: true
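
For example, a fuller sketch, assuming the volume was created beforehand with docker volume create --name dbstore and mounting it at the standard Postgres data directory:

version: '2'
services:
  dbpostgres:
    image: postgres:9.4
    volumes:
      # named volume created beforehand with: docker volume create --name dbstore
      - dbstore:/var/lib/postgresql/data
    ports:
      - "5432:5432"

volumes:
  dbstore:
    external: true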

Also, you don't need links in V2 config, IIRC

@giorgioongithub in V2 you must use container:dbstore https://docs.docker.com/compose/compose-file/#volumes-from

Hi,
thanks a lot, @dnephin. I tried different permutations today, but never the right one :). I had missed the container: prefix. So now, using the old-style data container stuff works with the 2.0 version as well. I'm posting this here in case someone else hits the same problem. This works:

version: '2'
services:
  dbpostgres:
     image: postgres:9.4 #define the image to get
     volumes_from:
       - container:dbstore  #this is a container already defined
     ports:
       - "5432:5432"
  express-app-container:
    build: .
    image: express-app  #assign a name to the built image
    depends_on:
      - dbpostgres      #wait for dbpostgres to be started, not for ready
    ports:
      - "3000:3000"
    volumes:
      - ./:/app
    links:
      - dbpostgres:postgresdb

Tomorrow I will try the volume way, as suggested by @echupriyanov. Thanks to you also! Just one question: why do you say I don't need links in V2? I can't find any place saying that. depends_on seems to me to just define ordering. Am I wrong?

@giorgioongithub regarding links: with a V2 config, docker-compose creates a separate docker network, either the default one or those defined in the networks config section. Doing that turns on the internal Docker DNS, so all containers in that network are addressable by their names, and there is no need for links settings.

More details can be found in the Docker networking docs, at the end of the page.
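
For example, a sketch of the earlier config without links (note that without the postgresdb alias, the app has to address the database by its service name, dbpostgres):

version: '2'
services:
  dbpostgres:
    image: postgres:9.4
    ports:
      - "5432:5432"
  express-app-container:
    build: .
    image: express-app
    depends_on:
      - dbpostgres
    ports:
      - "3000:3000"
    volumes:
      - ./:/app
    # no links: needed; the app reaches the database at hostname "dbpostgres"
    # on the network Compose creates for the project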

@echupriyanov thanks a lot!! Yes, after your input I found the place in the docs. It seems that the docker docs are a "mare magnum" (a vast sea) where one can get lost.. I looked around for some time without reading that :(

I'm easily hitting the same issue with a very simple test case (with the default plugin).

Start a swarm cluster with at least two nodes (I'm using amazonec2, docker 1.10.3, swarm 1.1.0, docker-compose 1.6.2)

docker volume create --name vault

then do docker-compose up -d with the following docker-compose.yml:

version: '2'

services:
  app:
    image: alpine:3.3
    volumes:
      - vault:/secrets
    command: ls -al /secrets

volumes:
  vault:
    external: true

This results in

ERROR: Volume vault declared as external, but could not be found. Please create the volume manually using `docker volume create --name=vault` and try again.

The volume is there though:
docker volume ls

DRIVER              VOLUME NAME
local               node01/vault
local               node02/vault

My fix for this issue with non-local volumes was merged into master: https://github.com/docker/swarm/pull/1872
I don't know what the situation is with local volumes, though.

@echupriyanov
I'm still getting the error using a swarm image built from the latest master (commit 42b1620f4d34a88f785991fcb082caff46e47fde).

It's true that we're not very explicit about links not being necessary in V2.

We figured it wasn't a big issue, since it doesn't really matter whether you continue to use them or not, but perhaps we should make it clearer that you can remove them in many cases.

@gittycat yes, as I suspected, there are issues with local named volumes as well.
My patch for the Swarm is related only to non-local volumes.

I think we need to open a separate issue about local volumes and swarm.

TBH, there are caveats in how Swarm treats volumes on docker engines.
(all examples below are run against a swarm manager)
1) docker volume ls:

DRIVER              VOLUME NAME

No Volumes yet.
2) docker volume create --name=v1

DRIVER              VOLUME NAME
local               swarm-node-06a.cybertonica.aws/v1
local               swarm-node-05a.cybertonica.aws/v1

By default, docker volume create creates a volume on _all_ engines.
3) docker volume create --name=swarm-node-06a.cybertonica.aws/v2

DRIVER              VOLUME NAME
local               swarm-node-06a.cybertonica.aws/v1
local               swarm-node-05a.cybertonica.aws/v1
local               swarm-node-06a.cybertonica.aws/v2

Here we explicitly tell Swarm where to create the volume.
Now let's try to get info about our volumes:
4) docker volume inspect v2

[
    {
        "Name": "v2",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/v2/_data"
    }
]

5) docker volume inspect v1

[]
Error: No such volume: v1

Why? The problem here is that we really have two different volumes named v1: one on node swarm-05a and one on node swarm-06a. So swarm can't determine which one we need and therefore returns nil.
But!
6) docker volume inspect swarm-node-05a.cybertonica.aws/v1

[
    {
        "Name": "v1",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/v1/_data"
    }
]

Now back to docker-compose.
In version 2, it looks like Compose tries to locate the volume by its name, and fails since there are several volumes with that name. And that's quite logical, since Swarm can't figure out which volume should be attached to your container.
A naive solution for this issue would be to add support for qualified volume names in docker-compose, but, unfortunately, they are not allowed yet:

version: '2'

services:
  app:
    image: alpine:3.3
    volumes:
      - vault:/secrets
    command: ls -al /secrets

volumes:
  vault:
    external:
      name: "swarm-node-05a.cybertonica.aws/v1禄

And running it:

eric@iMac-Eric /V/D/W/g/s/c/p/d/t2> docker-compose up 
Recreating t2_app_1
ERROR: 500 Internal Server Error: create swarm-node-05a.cybertonica.aws/v1: volume name invalid: "swarm-node-05a.cybertonica.aws/v1" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed
eric@iMac-Eric /V/D/W/g/s/c/p/d/t2> 

@echupriyanov I opened a related issue in Swarm about the utility of creating a volume per node.

@everett-toews thanks for pointing to that issue. Very interesting.
Maybe I should post my comment there?

@aanand About the "links": yes, it is a very confusing area. I would suggest that they get marked as "obsolescent" and warned against when used in docker-compose.

@echupriyanov Thanks for the explanation of why volumes aren't seen in docker-compose. It's the clearest I've seen about it. This should really be documented more clearly.

I conclude from it that docker compose doesn't fully support volumes on swarm. There is simply no way to reference a volume created on a node. But that makes me wonder why volumes are created implicitly across nodes anyway. It's just too easy to get one of the copies to fall out of sync. A docker Data Container is the recommended way of sharing data across nodes. For failover, there's always Flocker or storage in an outside SAN.

@aanand, based on the help I got, I tried to put together a short introduction on how to move from v1 to v2 for simple docker-compose.yml files, and published it on Medium at https://medium.com/@giorgioto/docker-compose-yml-from-v1-to-v2-3c0f8bb7a48e#.ifzbdl7jl, just in case it helps someone.

@echupriyanov thank you very much for the description of the problem with "local named volumes"; as mentioned by @gittycat, it is the clearest explanation of the problem.

I am having the same problem. Following the comments by @gittycat, should we just consider the use of Flocker (https://clusterhq.com/flocker/introduction/) the best current solution to the problem, particularly because it also addresses other issues such as node failure?

@pbaiz-amey another good option is EMC REX-Ray. IMO, it's easier to set up:

  • single binary
  • no need for a central service
  • simple, clear CLI

But, of course, YMMV

Hi, I did not read the whole thing here, but what I got from this discussion is that there is some bug when mounting external volumes using the docker compose Version 2 file format, while it still works with Version 1.

The workaround for me (maybe someone will find this useful) is to just use the Version 1 file format.

Before (not working), Version 2 file format:

version: '2'
services:
  registry:
    restart: always
    build: .
    volumes:
      - auth:/auth
    ports:
      - 5000:5000

volumes:
  auth:
    external: true

After (working), Version 1 file format:

registry:
  restart: always
  build: .
  volumes:
    - /auth:/auth
  ports:
    - 5000:5000

@hoto Those two lines do completely different things.

  • auth:/auth mounts a volume named auth at /auth inside the container
  • /auth:/auth mounts the directory /auth on the host at /auth inside the container

If the latter is actually what you want, there's no need to go back to V1. This should work:

version: '2'
services:
  registry:
    restart: always
    build: .
    volumes:
      - /auth:/auth
    ports:
      - 5000:5000

Here's another workaround (a sketch of step 3 follows the list):

  1. Decide which node will host the external volume and the container.
  2. Create the volume on that node (using local docker -H :2375)
  3. Add a service constraint in your compose file that locates your service on the same node.
  4. Run compose. The service will be deployed properly.
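
A sketch of step 3, reusing the vault example from above and assuming classic Swarm's scheduling-constraint syntax (node01 is a hypothetical node name):

version: '2'
services:
  app:
    image: alpine:3.3
    environment:
      # classic Swarm reads scheduling constraints from the environment;
      # this pins the service to the node that holds the volume
      - "constraint:node==node01"
    volumes:
      - vault:/secrets
    command: ls -al /secrets

volumes:
  vault:
    external: true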

Here's what I would have expected compose up to do:

  1. List the volumes on the cluster.
  2. If the referenced volume doesn't exist on any node then compose should create it on the node where it's going to deploy the service and then deploy the service.
  3. If the referenced volume exists on more than one node, then compose should fail and report an error.
  4. If the referenced volume exists on a node that doesn't satisfy the service constraints, then compose should fail and report an error.
  5. If the referenced volume already exists on a suitable node then compose should deploy the service there too.

@aanand oh, you're right... thanks, I will go back to v2 then ;)

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

This issue has been automatically closed because it had no recent activity during the stale period.
