Compose: docker-compose bundle warnings

Created on 3 Jul 2016 · 14 comments · Source: docker/compose

I'm trying out the docker-compose bundle command but am getting a bunch of warnings:

WARNING: Unsupported key 'container_name' in services.master - ignoring
WARNING: Unsupported key 'volumes_from' in services.config - ignoring
WARNING: Unsupported key 'volumes' in services.config - ignoring
WARNING: Unsupported key 'container_name' in services.config - ignoring
WARNING: Unsupported key 'links' in services.keycloak - ignoring

Now, when I deploy the generated bundle, the services/containers don't work, and I suspect the warnings are effectively errors: my docker-compose file isn't compatible with the new bundle format's requirements. I can fix the warnings on container_name, but I'm not sure how I would fix the warnings on volumes_from, volumes and links in my compose file.

How do I fix these warnings, and is there any reference documentation for the new docker-compose bundle command?

Here is my docker-compose file:

version: "2"

services:

  master:
    container_name: "citus_master"
    image: "citusdata/citus:5.1.0"
    ports: 
    - "5432:5432"
    labels: 
    - "com.citusdata.role=Master"

  worker:
    image: "citusdata/citus:5.1.0"
    ports: 
    - "5433:5432"
    labels: 
    - "com.citusdata.role=Worker"

  config:
    container_name: "citus_config"
    image: "citusdata/workerlist-gen:0.9.0"
    volumes: 
    - "/var/run/docker.sock:/tmp/docker.sock"
    volumes_from: 
    - "master"

  keycloak:
    image: "jboss/keycloak-postgres"
    links: 
    - "master:postgres"
    ports: 
    - "8080:8080"
    environment: 
    - KEYCLOAK_USER=admin 
    - KEYCLOAK_PASSWORD=password 
    - POSTGRES_DATABASE=postgres
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_PORT_5432_TCP_ADDR=postgres
    - POSTGRES_PORT_5432_TCP_PORT=5432

  pgweb:
    image: "sosedoff/pgweb"
    ports: 
    - "8081:8081"


All 14 comments

Bundles are still experimental right now. They are intended to represent 'portable' deployable services. I don't know for sure, but my guess is that those keys won't be supported, as they tend to tie containers to a specific host, a specific number of instances, or a specific topology.

'container_name' - Only really works for one instance
'volumes_from' - Not portable / topology-specific
'volumes' - Only one that makes sense to include, named volumes using some distributed storage driver (flocker etc...)
'links' - Are deprecated, docker native networking supersedes this.
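
Since links are deprecated, the `master:postgres` link from the original file can be expressed with a user-defined network and an alias instead. A minimal sketch (the `backend` network name is an illustrative choice, not from the original file):

```yaml
version: "2"

services:
  master:
    image: "citusdata/citus:5.1.0"
    networks:
      backend:
        aliases:
          - postgres   # keycloak can resolve master as "postgres"

  keycloak:
    image: "jboss/keycloak-postgres"
    networks:
      - backend

networks:
  backend:
```

With this, keycloak's POSTGRES_PORT_5432_TCP_ADDR=postgres setting keeps working, because Docker's built-in DNS resolves the alias on the shared network.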

@johnharris85 thanks!

Bummed that volumes is missing. That makes it unusable for a good third of our current apps. Would also like to see it support the new global swarm mode (think deploying something like cAdvisor or other monitoring stacks).

It seems like pretty much everything is missing right now (especially security configurations!), at least in our docker-compose.yml:

cap_add
cap_drop
container_name
cpu_shares
depends_on
ipv4_address
links
mem_limit
networks
read_only
restart
volumes

So I'm really not sure how one is supposed to use docker swarm in practice except for launching a single service by hand.
Has anyone seen a complex example working?
Or more importantly, are all the arguments to docker run being ported to docker services (and docker-compose)?

It looks like volumes are supported in docker service create (though with some new syntax, --mount).

But how does it work exactly?

  • Do I have to use some global/network filesystem, or is consistency not important?
  • Should Docker support a replicated filesystem itself, or would it just error out when I try to scale a service with volumes?
  • How does a volume move to another host if its node goes down?
  • What happens to shared volumes if one container is on one host and another is on a different host?

These questions need to be answered before we can expect docker-compose bundle to magically work with volumes.
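
For what it's worth, the closest answer at the compose level today is a named volume backed by a pluggable volume driver, so the data is reachable from whichever node the task lands on. A hedged sketch (the mount path is an assumption for illustration; "flocker" is just the driver example mentioned earlier in this thread):

```yaml
version: "2"

services:
  master:
    image: "citusdata/citus:5.1.0"
    volumes:
      - pgdata:/var/lib/postgresql/data   # assumed data path, for illustration

volumes:
  pgdata:
    driver: flocker   # any distributed storage driver; "flocker" is illustrative
```

With a host-local driver (the default), none of the four questions above are answered: the volume simply stays on one node.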

Has anyone seen a roadmap showing in which future version the bundle feature will actually become fully usable?

docker-compose 1.9.0-rc4 doesn't seem to support any of these options, so it seems impossible right now to migrate to swarm with 1.13 if you care about security or any of the other options.

FYI, I made a similar request for cap_add and tmpfs support (needed to run systemd) in: https://github.com/docker/compose/issues/4441

deploy is also missing, which is specifically for swarm-mode IIRC.

So, swarm-mode strips functionality that was present using docker-compose and, while swarm-mode supports compose files, it ignores most of what makes one use compose files in the first place. I love Docker, I really do, but swarm-mode is a complete regression.

I don't think we'll do any more work on the bundles / DAB format at this point. docker stack / v3 is the logical continuation of that effort.
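
For anyone landing here: with `docker stack deploy` and the v3 file format, the swarm-specific settings move under a `deploy` key. A minimal sketch based on the compose file from this issue:

```yaml
version: "3"

services:
  master:
    image: "citusdata/citus:5.1.0"
    ports:
      - "5432:5432"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
```

Note that container_name is still not supported in v3 for swarm, for the same "only works for one instance" reason given above.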

So, swarm-mode strips functionality that was present using docker-compose and, while swarm-mode supports compose files, it ignores most of what makes one use compose files in the first place. I love Docker, I really do, but swarm-mode is a complete regression.

A lot of the options that were removed in v3 aren't applicable to distributed environments (clusters). Other things are a work in progress (happening here: https://github.com/docker/swarmkit)

I respectfully disagree 100%. The missing piece is that things aren't fully shared across the swarm. Docker Hub credentials, cached images, etc. Further, how are things like env files and builds not applicable to a distributed environment? But, in the end, swarm-mode is dead anyway, so it really doesn't matter much, does it?

The missing piece is that things aren't fully shared across the swarm. Docker Hub credentials,

Not sure what that specific piece has to do with Compose. It's also a one-time setup thing.

cached images, etc.

You mean pulled images? Ideally I agree this would be handled. Pretty sure it's covered under the "work in progress" clause.

Further, how are things like env files

??? https://docs.docker.com/compose/compose-file/#env_file

and builds not applicable to a distributed environment?

If node A goes down and node F goes up to replace it and needs to start a task for service X, it's unlikely it'll have the files to build the underlying image. It makes sense that services would instead rely on pullable images.

But, in the end, swarm-mode is dead anyway, so it really doesn鈥檛 matter much, does it?

k

Not sure what that specific piece has to do with Compose. It's also a one-time setup thing.

You closed this as development has been moved to swarmkit, ergo I brought it up. Anyway, credentials are not necessarily a one-time thing and, even then, that has nothing to do with all swarm members not having access to the credentials. Basically, it shouldn't be required to pass --with-registry-auth; it's pointless and should be done automatically, given that any workload can run on any node. Besides, shouldn't swarm managers hold the credentials and pass them on as needed? Granted, this has more to do with the awful way in which the Docker engine manages Hub credentials, but it's still something that should be fixed.

Further, how are things like env files

Env file parsing and var substitution/interpolation are not handled when using swarm...which IS a function of compose.

If node A goes down and node F goes up to replace it and needs to start a task for service X, it's unlikely it'll have the files to build the underlying image. It makes sense that services would instead rely on pullable images.

You store the built image and make it available to all nodes. How does that not make sense? The simple fact is that, if I'm using swarm-mode, I MUST have an external registry somewhere. It adds layers of complexity and dependencies that shouldn't be needed.

k

And yes, Kubernetes won this war, unfortunately. Swarm was much simpler for smaller deployments, but it's mired in slow progress and was probably rolled out before it should have been. Progress is slow as the project has gotten larger. Feature velocity has increased, but the features seem to be missing the mark for a pretty large chunk of the user base.

Lastly, the fact that full compose functionality is not supported in swarm-mode, combined with the lack of DAB development and the fact that over a year has gone by with still no good way to deploy stacks in an automated fashion, severely limits the usefulness of this in a production environment.

I don't have the answers, and I'm not even sure anyone cares, but look at the Slack channel and GitHub and you'll see just how disappointed many users are with the current state.
