I read the overview here - https://docs.docker.com/compose/swarm/ :
If you’re using version 2, your app should work with no changes:
...
Once you’ve got [swarm] running, deploying your app to it should be as simple as:
```
$ eval "$(docker-machine env --swarm <name of swarm master machine>)"
$ docker-compose up
```
But when running `docker-compose` against a swarm-mode engine, it prints the warning below and schedules all containers on the targeted master node:

```
WARNING: The Docker Engine you're using is running in swarm mode.

Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.

To deploy your application across the swarm, use the bundle feature of the Docker experimental build.

More info:
https://docs.docker.com/compose/bundles
```
This is a request to update the docs to point folks in the right direction (which I'm guessing is the .dab experiment).
With Docker 1.12, if you're using the Docker Engine swarm mode, you need to deploy the DAB generated from the docker-compose.yml file.
However, if you're using Docker < 1.12 and you set up a swarm with the swarm image and docker-machine, the approach explained in the docs should work.
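For context, the DAB workflow referred to above looked roughly like this - a sketch, assuming Compose 1.8+ and an experimental 1.12 CLI, with `myproject` as a placeholder project name:

```shell
# Generate a distributed application bundle (myproject.dab)
# from the docker-compose.yml in the current directory.
docker-compose bundle

# Deploy the bundle to a swarm-mode engine.
# Experimental in Docker 1.12; reads myproject.dab from the current directory.
docker deploy myproject
```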
Thanks for reporting. This confusion stems from the fact that Docker Swarm and Docker Engine 1.12’s swarm mode are different products with similar names. We should update the document to make the difference between them unambiguous.
Thanks for picking up that I was using the 1.12 release, I realize I didn't mention that initially.
And wow - yes I didn't expect swarm to be both a product and an engine feature. I'll try and wrap my head around the two swarms and suggest some docs that would make that clear to me at least.
So here's what I'm seeing:
- `docker swarm` commands are also for the engine mode.
- `docker deploy` and .dab bundles are for the engine mode (naively, these look like JSON-encoded docker-compose files).
- Both versions of swarm seem to rely on an external Key-Value store for some networking and discovery, and both are variants of the Docker API, so they support plugins to different extents.
Hopefully that digging is helpful to someone else. This bug might be better titled "Docker Swarm and Docker Engine swarm mode should be clearly distinguished in documentation" - so I'll change that and be on my way.
You've got it. Just one thing: the new "swarm mode" functionality doesn't rely on an external KV store - it works out of the box with any 1.12 Engine installation.
@aanand In Docker 1.12 Swarm (not swarm mode) do I still need a KV store for overlay networks? Slightly off-topic question, but still relatively on topic.
If you're setting up a cluster with Docker Swarm (e.g. by running `docker run swarm` or `docker-machine create --swarm-master ...` and then running standard `docker run` commands against the swarm master), then yes - you need a KV store for overlay networking, regardless of the Engine version.
If you're using Docker Engine 1.12 in swarm mode (e.g. by running `docker swarm init` and then using `docker service` commands), you don't need one.
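To make the contrast concrete, here is a minimal sketch of the two setups. The machine names, the VirtualBox driver, and the Consul discovery backend are illustrative assumptions, not the only options:

```shell
# --- Docker Swarm (the standalone product, works with Engine < 1.12) ---
# Requires an external KV store (Consul here) for discovery and
# overlay networking.
docker-machine create -d virtualbox kvstore
docker-machine ssh kvstore \
  "docker run -d -p 8500:8500 progrium/consul -server -bootstrap"
docker-machine create -d virtualbox --swarm --swarm-master \
  --swarm-discovery="consul://$(docker-machine ip kvstore):8500" \
  swarm-master

# --- Docker Engine 1.12+ swarm mode (built in, no external KV store) ---
docker swarm init
docker service create --name web --replicas 3 nginx
```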
Sorry that this is confusing.
I've added a note to the doc in https://github.com/docker/compose/pull/3814
See also #3891, which was reported independently.
I must second that docker/swarm and Docker Engine swarm mode are utterly confusing. You should think about choosing a completely different name for the new swarm mode, like cluster-mode or something.
Moreover, how can I try .dab files with `docker deploy`? Creating a .dab file is fine with `docker-compose`, but the docs say I need an experimental build for `docker deploy`.
Which parts need to be experimental... docker-compose, docker client, server, manager, nodes, all of them?
@aanand If I use `docker swarm init` and `docker service` commands, what is the right way to use this mode with docker-compose?
And, by the way, I guess we should keep this issue open until everything really gets cleaned up - I just wasted tons of time here, jumping back and forth in the very confusing documentation!
@leonard-sxy AFAIK, the right way is to use the experimental bundle and deploy features: https://blog.docker.com/2016/06/docker-app-bundle/
@Vanuan Thank you, but except for pushing the image to Docker Hub, there's still no way to get around the 'images' issue.
@aanand @Vanuan I think Docker Inc. wanted to quickly come up with a simpler alternative to Kubernetes so that it could latch on to customers who were already familiar with Docker and were considering options for container orchestration. That seems to be the most important reason for releasing swarm mode as part of Docker core without thinking through all the scenarios people will come up with in production deployments.
@mursilsayed I think it's just iterative development: you introduce an experimental feature, you find more use cases, you fix bugs, and so on.
@leonard-sxy It looks like in Docker 1.13 we'll be able to do this:
`docker deploy --compose-file ./docker-compose.yml mystack`
And it will be out of experimental.
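For reference, this landed in Docker 1.13 as the `docker stack` subcommands (with `docker deploy` as a shorthand). A sketch, with `mystack` as a placeholder stack name:

```shell
docker swarm init
docker stack deploy --compose-file docker-compose.yml mystack
docker stack services mystack   # list the services in the stack
docker stack rm mystack         # tear the stack down
```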
@Vanuan Would this eliminate the necessity of uploading your image to the Hub in order for all of this to work, though? If all these features go out of experimental, this seems to be the biggest remaining issue.
@mangelov95 How do you imagine that working? I think you wouldn't want inconsistent image versions on different machines, so you have to distribute and version images somehow, right? Docker doesn't have a built-in way to distribute binaries across the swarm, so you have to push your images to either a private or a public registry. `docker-compose.yml` supports private registries, so I don't see the problem.
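As an illustration of the private-registry route: the compose file simply references the registry host in the image name. The registry hostname, team, image name, and tag below are all placeholders:

```shell
# Write a compose file whose image points at a private registry.
# "registry.example.com/myteam/myapp:1.4.2" is hypothetical.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  web:
    image: registry.example.com/myteam/myapp:1.4.2
EOF

# Nodes pull the image from that registry at deploy time, provided
# they can authenticate (e.g. after docker login registry.example.com).
grep 'image:' docker-compose.yml
```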
@Vanuan Wouldn't image tags be sufficient to specify the version of an image? The tag could be included in the compose file. Or do you mean that people can build their own images with the same metadata (name and tag) and that would break the process?
@mangelov95 I mean that there are only 2 alternatives for getting an image onto a particular host:

- use a `Dockerfile` to build the image from scratch
- download the image from elsewhere

The second option requires you to upload it first to some place. Thus you either use the Hub, a self-hosted registry, or a third-party image registry. Does that sound reasonable?
> Or you mean that people can build their own images with the same metadata (name and tag)
Name and tag don't guarantee uniqueness. Even on Docker Hub, images are continuously updated under the same tag (minor bug fixes, OS and security updates, etc.). So you can't use option 1 to distribute images across the swarm, because image builds are not reproducible: each image built is unique.
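One way to sidestep mutable tags, for what it's worth, is to pin images by content digest rather than by tag. A sketch (the image and tag are arbitrary examples; the digest printed depends on what you pulled):

```shell
# Look up the content digest of a pulled image. Pinning by digest
# guarantees every node runs byte-identical image content.
docker pull nginx:1.11
docker inspect --format '{{index .RepoDigests 0}}' nginx:1.11
# A service can then reference nginx@sha256:<digest> instead of a tag.
```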
Is the first way possible right now? I would be happy if the first way were made possible - just some alternative to having to upload images to the Hub. I wouldn't want to add my images, which contain security details, to the Hub, and the limitation of 1 private repo on the Hub (for the free version; the option to pay for another plan is not viable for me) adds to the problem. Having an option to auto-build from a Dockerfile would be great.
> Is the first way possible right now?
@mangelov95 Yes, but only without a swarm. As I said, it doesn't make sense (read: "not possible") in a swarm, since the images wouldn't be identical. You wouldn't want your deploys to fail randomly.
This is starting to get off-topic, but here's the best practice:
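A common recommendation along these lines is to run your own registry instead of the Hub. This is my illustration, not an official recipe; the port, registry name, image name, and tag are assumptions:

```shell
# Run a self-hosted registry (keeps private images off the Hub).
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Build, tag for the local registry, and push.
docker build -t localhost:5000/myapp:1.0 .
docker push localhost:5000/myapp:1.0

# Swarm nodes can now pull <registry-host>:5000/myapp:1.0
# and the compose file can reference that image name.
```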
For additional questions use docker forums.