Moby: Static/Reserved IP addresses for swarm services

Created on 30 Jun 2016  ·  83 Comments  ·  Source: moby/moby

There are some things that I want to run on docker, but are not fully engineered for dynamic infrastructure. Ceph is an example. Unfortunately, its monitor nodes require a static ip address, otherwise, it will break if restarted. See here for background: https://github.com/ceph/ceph-docker/issues/190

docker run has an --ip and --ip6 flag to set a static IP for the container. It would be nice if this is something we can do when creating a swarm service to take advantage of rolling updates and restart on failures.

For example, when we create a service, we could pass in a --static-ip and --static-ip6 option. Docker would assign a static ip for each task for the life of the service. That is, as long as the service exists, those ip addresses would be reserved and mapped to each task. If the task scales up and down, then more ip addresses are reserved or relinquished. The ip address is then passed into each task as an environment variable such as DOCKER_SWARM_TASK_IP and DOCKER_SWARM_TASK_IP6
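To make the proposal concrete, a hypothetical invocation could look like the sketch below. None of these flags exist today: `--static-ip` and the `DOCKER_SWARM_TASK_IP` variable are the suggestion from this paragraph, and the names `ceph-net` and `ceph-mon` are made up.

```shell
# Hypothetical sketch only -- the --static-ip flag and DOCKER_SWARM_TASK_IP
# variable are proposed in this issue and do NOT exist in Docker.
docker service create \
  --name ceph-mon \
  --replicas 3 \
  --network ceph-net \
  --static-ip 10.0.9.10,10.0.9.11,10.0.9.12 \
  ceph/daemon mon

# Inside each task, the reserved address would then be readable as:
#   echo "$DOCKER_SWARM_TASK_IP"
```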

area/networking  area/swarm  kind/feature

Most helpful comment

Agreed, this is crucial functionality for some services. I suspect it might be a bit complicated to implement within the new swarm model, as for every 'service' at least two IP addresses exist: one virtual LB IP for the service itself and then N additional IPs, where N = number of replicas.

I think what we really need is an option to deploy a service without replicas and the LB layer - just with a simple static IP configured (but still managed by swarm, with clustering, HA, failover, etc).

All 83 comments

Agreed, this is crucial functionality for some services. I suspect it might be a bit complicated to implement within the new swarm model, as for every 'service' at least two IP addresses exist: one virtual LB IP for the service itself and then N additional IPs, where N = number of replicas.

I think what we really need is an option to deploy a service without replicas and the LB layer - just with a simple static IP configured (but still managed by swarm, with clustering, HA, failover, etc).

This could be very useful, as I'm currently struggling with ActiveMQ Network of Brokers, and the activemq.xml configuration file requires static IP addresses of brokers in the case of _static discovery_ ...

I am facing the same issue! Is there any update on setting a static IP for a container in swarm mode?

Please don't leave +1 comments on issues, you can use the :+1: emoji in the first
comment to let people know you're interested in this, and use the subscribe button
to keep informed on updates.

Implementing this feature is non-trivial for a number of reasons;

  • There's _two_ possible feature requests here;

    • Allow a static IP for the _service_ (Virtual IP)

    • Allow a static IP for the container (task)

  • When looking at static IP-addresses for _containers_, things become complicated,
    because a service can be backed by multiple tasks (containers). Specifying a
    single IP-address for that won't work; also, what to do when scaling, or updating
    a service (in which case new tasks are created to replace the old ones)

Just +1's don't help getting this implemented; explaining your use-case, or
helping find a design to implement this, on the other hand, _would_ be useful.

Also see https://github.com/moby/moby/issues/29816, which has some information for _one_ use-case.

I removed the +1 comments, and copied the people that left a +1 below so that they're still "subscribed" to the issue;

@a-jung
@adiospeds
@darkstar42
@dzx912
@GuillaumeM69
@isanych
@jacksgt
@joseba
@martialblog
@nickweedon
@olivpass
@prapdm
@wubin1989

Essentially, I'd like to use Docker Swarm's routing mesh as a load balancer. Being able to assign a (public) IP to a Docker Swarm service (not an individual container!), one could simply add the IP to one's DNS provider (e.g. CloudFlare). For example I could then run my S3 service and web server with:

docker service create --hostname s3.docker.swarm --publish 80 --ip dead::beef minio/minio
docker service create --hostname www.docker.swarm --publish 80 --ip cafe::babe nginx

Swarm then tells the node which has the appropriate IP in its subnet to forward all requests from dead::beef to one of the s3.docker.swarm containers and cafe::babe to one of the www.docker.swarm containers (within the routing mesh).

This way, one could also run multiple services on the same port within the same Swarm cluster (assuming different IP addresses and different domains). This is currently only possible with an additional load balancer such as HAProxy (which, for example, only supports TCP).

See also: https://devops.stackexchange.com/questions/1130/assign-dns-name-to-docker-swarm-containers-with-public-ipv6

Some licensed applications require a static IP for the license. I have licensed applications that I would like to deploy to Docker Swarm that require either static IP or MAC address for licensing.

I think it would be initially acceptable to state a limitation that specifying either a static IP or MAC for a service implies that the scale must be one.

Perhaps when scaling, the new replicas will fail to start with a "static IP required" error message informing the user that they need to go back and provide static IP for a "static IP" service to start correctly?

So in docker service ls you would see 3/5 replicas started and the remaining 2 replicas would have the "static IP" error message until resolved.

This may help some people: if you combine the 'hostname' setting for a service with the 'endpoint-mode' setting to dnsrr you get a hostname that resolves to the container IP. This may make some software happy to run in the swarm.
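As a sketch of that tip (the names are made up; `--endpoint-mode dnsrr` and `--hostname` are real `docker service create` flags), the combination could look like:

```shell
# Overlay network plus a dnsrr service: the hostname resolves to the
# container IP directly instead of a virtual IP.
docker network create -d overlay my-net

docker service create \
  --name zoo1 \
  --hostname zoo1 \
  --network my-net \
  --endpoint-mode dnsrr \
  zookeeper

# Other containers attached to my-net now resolve "zoo1" to the task's
# own address, which some software tolerates better than a VIP.
```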

Any news? Can we assign a static IP to a service?

I searched around and tried different possibilities, and I was able to assign static IPs to containers. I am pretty new to Docker, so I don't know if this is the right way though.

I created a swarm. In manager, I created an attachable overlay network with a subnet.

docker network create -d overlay --subnet=10.0.9.0/22 --attachable overlay-network

My docker compose:

version: "3.2"

services:
  hadoop-master:
    image: devhadoop-master
    container_name: hadoop-master
    .
    .
    .
    networks:
      overlay-network:
        ipv4_address: 10.0.9.22

networks:
  overlay-network:
    external: true

Note the configuration under the top-level networks key where external is true. That is what made it work.
Though I created the overlay-network with the attachable flag, the service on my worker was not able to connect to it. So I ran docker run --rm --net overlay-network alpine sleep 1d to make overlay-network discoverable; then my worker service was able to connect to it.

We recently ran into a situation where the ability to reserve a VIP for a service would really help.

We have a zookeeper service named "zoo1" and other services connect to it using "zoo1:2181". We found that sometimes when we shut down and restart the "zoo1" service, the IP address resolved for the host name "zoo1" changes. The reconnection mechanism in the zookeeper client library does not redo the DNS lookup when it enters the retry loop; instead, it holds onto the previous IP address. As a result, even when we bring the "zoo1" service back online, other services are never able to re-establish the connection to zookeeper.

By the way, what exact circumstances would trigger a change to the VIP of a service?

@cpclass your example doesn't work

I don't see macvlan mentioned here anywhere, so here goes! I think this would be immensely useful for setting up the same macvlan network on all nodes in a cluster, then being able to give a service a static IP address that can then roam between machines in the cluster but keep the same IP address via macvlan (so --ip would have to be a per-network option, which is probably obvious but worth being explicit about).

(As it stands, services can't easily both talk to other services in the cluster _and_ do multicast with other machines on the host network, which is what this sort of thing would enable.)

(Edit: there's a great post about how macvlan in a cluster might work that describes exactly the thoughts I had at https://github.com/moby/moby/issues/25303#issuecomment-239949698 -- using either an IP address range or an interface glob to auto-match the appropriate parent interface for each host)

macvlan may be a better solution than host networking in this case, but it's not supported by services. Also, a macvlan network needs to be set up on each host in the cluster as it is implemented right now. Maybe some cluster-wide setup could be achieved by giving the "docker network" command an interface match condition, like an IP network address or interface name. Then a macvlan network could be set up on all matching hosts in just one command. For the "docker service" command the network specification could be handled as a constraint, so that containers are not scheduled on a host that does not have the specified network defined.

To add a usecase. We want to add some services like

  • pypi-daemon for hosting own packages,
  • smtpd for incoming email,
  • dnsd,
  • ftpd (with some plugins which talk to our internal APIs)
  • munin
  • redmine
  • ...

as a dedicated services stack within our swarm. Some of the services contain patches or even are our own implementations. Ideally they are tested by AQA and deployed and torn down automatically. Some need to talk to other stacks like the production stacks.

Some of these services need to be reachable with a fixed IP from internal and/or external networks.

another use case:

use a container as DNS server for other containers.

(this could be easier to achieve if you could use a docker container hostname as dns server entry via --dns)

Another use case:
DualStack Web services with multiple replicas in Docker Swarm. Being able to assign static v4 and v6 IPs would be great for this :)

Another use case:
I have an application (containerised) that reads from IP based sensor devices.
This application can only be configured with the IP address of the sensor devices.
For testing, I want to run simulated sensor devices (containerised) on fixed IP addresses.

Use case: Running a STUN or TURN server, or other similar type of server used to help p2p clients find each other to accomplish p2p discovery and NAT Traversal. This is required for technologies like WebRTC.

Use case: Samba AD domain controller; with a changing IP it just makes a mess of its DNS zone file and is not usable.

Another use case: setting up a mysql cluster (mgm + ndb + sql nodes), requires giving the nodes IPs in the conf files

@khba that applies to a lot of other clustered things. For example Vertica database also uses static IPs for all nodes in the cluster.
And the reason I found this issue is that I am playing with distributed filesystem called Lizardfs, which would also benefit from having static IPs for nodes holding the data chunks...

@khba I have been able to get MySQL cluster to run in Docker Swarm by using 'dnsrr' networking mode instead of VIP. Then the inbound connection is the same IP that the hostname resolves to and MySQL is happy. Still not ideal though.

@drnybble thanks for the information, yes "endpoint_mode: dnsrr" fixed the static ip problem for mysql cluster, for others having the same issue this is what helped me : https://bugs.mysql.com/bug.php?id=87043

@drnybble could you share your compose yml? I've been trying to do the same thing, but even with dnsrr instead of vip I'm still having no luck

There is nothing too special about my service definition:

  mysql-mgmt:
    image: ${MYSQL_CLUSTER_IMAGE}
    networks:
      mysql_ndb:
    hostname: mysql-mgmt
    command: ndb_mgmd --config-cache=FALSE -f /etc/mysql-cluster.cnf
    volumes:
      - mysql_mgmt:/var/lib/mysql
    deploy:
      endpoint_mode: dnsrr

I did find other problems bringing up the cluster, in particular the fact that a service's hostname may not resolve immediately even after the service replica starts up. I recommend polling until the service's hostname resolves in an entrypoint bash script before proceeding to start up the mysql processes.
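A minimal sketch of such a polling entrypoint (the `wait_for_dns` helper name and the 60-try limit are my own assumptions, not from the comment above):

```shell
#!/bin/sh
# Poll until a hostname resolves before handing off to the real process.
# Returns 0 once the name resolves, 1 after the given number of attempts.
wait_for_dns() {
  host="$1"
  tries="${2:-60}"
  i=0
  until getent hosts "$host" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      return 1
    fi
    sleep 1
  done
  return 0
}

# Usage at the top of an entrypoint, e.g.:
#   wait_for_dns mysql-mgmt && exec ndb_mgmd --config-cache=FALSE -f /etc/mysql-cluster.cnf
```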

In our use case we want to provide the unbound resolver as a Docker image in our mail distribution. As a DNS resolver it needs to be reachable by an IP address; a host name would not work. It works fine for users using docker-compose, but deploying the same configuration with docker stack would be impossible if we can't define a fixed IP.

Use case:
Run plenty of workers which connect to a financial exchange to grab fresh information several times per second. The financial exchange has a limit on requests from a single IP address.
It would be very useful to start one service (a few tasks) with the ability to send requests via various network interfaces of the host, but without breaking Swarm HA features and while using the host's network.

Is there ever a case where the VIP allocated for a service changes?

I am currently relying on them being fixed to avoid problems when Docker Swarm does not resolve a service hostname when there are no healthy replicas at the moment the hostname is resolved (which I consider a design flaw).

I am extracting the VIP from the service and passing it using extra_hosts so it always resolves.
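A sketch of that extraction (the service name `myservice` is an assumption; `.Endpoint.VirtualIPs` is the real field in `docker service inspect` output, and `Addr` comes back in CIDR form):

```shell
# Print the first VIP of a service, stripping the /prefix from the CIDR:
docker service inspect myservice \
  --format '{{ (index .Endpoint.VirtualIPs 0).Addr }}' | cut -d/ -f1

# The result can then be pinned into another service via a host entry,
# which is what extra_hosts maps to on the CLI:
#   docker service create --host myservice:10.0.1.5 ... someimage
```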

Services don't have IPs directly, they're a group of individual containers running the same image -- and those containers _definitely_ change IPs for a variety of reasons. It's not safe to rely on them being at a consistent address unfortunately. (This is causing us problems with a clustered application that will not accept hostnames.)

I'm not talking about tasks/replicas but the VIP allocated to a service (or more particularly, on a given network).

Use case:

Service A cannot access multiple workers in service B (DNS) via static IP


Service A = pihole which takes DNS1 IP as param
Service B = dnscrypt x4

service A cannot contact B using fixed INTERNAL IP because of that issue.

@EsEnZeT can you expand on what you mean by that? What app is it? Why should the IP be static instead of fixing the application to work with DNS?

@olljanat Pihole acts as a DNS server, and requiring DNS to find a DNS server ends in a Catch-22, this could be what @EsEnZeT is referring to.

@EsEnZeT @cyberjacob I'm not familiar with those components, and from a short study I was not able to figure out how they actually need to be deployed.

Can you share more info on how you plan to deploy them and how clients connect to them (a picture would be super)?

Also, from what I quickly looked at, I was not able to find answers to the critical questions which @thaJeztah asked two years ago already. We are not able to do anything about this without getting a clear understanding of these:

Implementing this feature is non-trivial for a number of reasons;

  • There's _two_ possible feature requests here;

    • Allow a static IP for the _service_ (Virtual IP)
    • Allow a static IP for the container (task)
  • When looking at static IP-addresses for _containers_, things become complicated,
    because a service can be backed by multiple tasks (containers). Specifying a
    single IP-address for that won't work; also, what to do when scaling, or updating
    a service (in which case new tasks are created to replace the old ones)

Is there ever a case where the VIP allocated for a service changes?

@drnybble AFAIK the VIP stays the same as long as the service exists. So you can safely point to that IP as long as you never remove the service or remove it from the network.

PS. I don't promise to implement this feature, but I'm seriously considering doing so if someone is able to explain a real-world production use case which cannot be easily solved in some other way.

I'm seriously considering doing so if someone is able to explain a real-world production use case which cannot be easily solved in some other way.

A significant amount of software simply requires IP addresses and will not accept hostnames. The effort required to add support for static IPs is significantly less than the effort required to re-engineer literally thousands of other projects.

A significant amount of software simply requires IP addresses and will not accept hostnames. The effort required to add support for static IPs is significantly less than the effort required to re-engineer literally thousands of other projects.

@tylermenezes that is probably true, but as I'm contributing to this project in my spare time, of which I have only a limited amount, I really want to use it in a smart way. So I'm still waiting for a real-world use case which cannot be easily solved in some other way...

@olljanat here's one: run a caching DNS server in one container and have some other service in another container use that caching DNS server, all as swarm services.

Another use case could be to bind services to internal interfaces (e.g. VPN or intranet) without exposing them to the internet on a system which has multiple interfaces.

Wondering if there's anyone who's working on this or has built a test branch to get this to work?

@mdbraber I did some investigation and ended up thinking that the best option would be to create an engine-managed IPAM plugin, as it gives us much more flexibility to support different kinds of use cases without needing to stick to Docker release cycles.

There is actually a quite good template for that work at https://github.com/ishantt/docker-ipam-plugin where I have asked the developer to add a license file so we can use that code.

Unfortunately the docker service create command currently only supports network driver options (e.g. --network name=my-network,driver-opt=field1=value1) but not ipam-driver-opt, so we will need to add that support to swarmkit first.

However, that IPAM plugin would be a separate project, so if someone is interested in joining the team to implement it, you can contact me on the Docker community Slack.

Use case: redis can only connect to IP addresses, not host names, so a static IP for the master redis node would be helpful

Recently the company I work for started a new service which needs a specific IP when running and must control who can connect by IP. With the container's IP floating, we don't know how to run it appropriately. I hope there is a way to get a static IP in swarm mode, since this issue has lasted for years.

Another use case is hosting Couchbase in a Docker swarm environment. Couchbase nodes maintain their IP configuration when configured in a cluster and fail to come up if the IP changes during a container restart. Here is a discussion around that.

Are there any updates on this topic?

@EsEnZeT this can be handled with an IPAM plugin as explained in https://github.com/moby/moby/issues/24170#issuecomment-499032098 but it looks like there isn't anyone who wants to take on the effort, as no one has contacted me about it.

So it seems it's still not possible to use static IPs for swarm services? This is a big blocker for moving clustering applications that don't use hostnames to swarm. Not being able to assign a static IP programmatically is a big hindrance.

When looking at static IP-addresses for containers, things become complicated,
because a service can be backed by multiple tasks (containers). Specifying a
single IP-address for that won't work; also, what to do when scaling, or updating
a service (in which case new tasks are created to replace the old ones)

Just +1's don't help getting this implemented; explaining your use-case, or
helping find a design to implement this, on the other hand, would be useful.

@thaJeztah could you look at "hash-based multipath routing"?
https://serverfault.com/questions/696675/multipath-routing-in-post-3-6-kernels
I think this would allow implementing this feature.

Also please look at the related:
container publishing using "ip route"
virtual IP for bridge network

@thaJeztah could you look at "hash-based multipath routing"?
https://serverfault.com/questions/696675/multipath-routing-in-post-3-6-kernels
I think this would allow implementing this feature.

Let me defer that one to @arkodg 😅, who's probably better able to tell if that's an option (also /cc @dperny)

explaining your use-case ... would be useful.

In my use-cases for static IPs, I run a home (non-corporate) swarm. I only have (intentionally) ONE container deployed for the service (it's not a scalable service), but I do have multiple swarm members, and I don't care where it runs.

Plex Server

Plex needs to be told its availability address on the LAN because the only address it can find is the container network address.

Ubiquity Video Server

unifi-video needs to be told its availability address on the LAN so it can tell the cameras (on the host network) where to reach it.

Use case
We are running a telephony network stack in which every node has to have a static IP, because a lot of other old software connected to the network is not Dockerizable and also does not support IP address changes of the telephony nodes. In our network, every service has a replica count of 1 and will never need to scale up, so there will always be a single container for each service. Yes, using docker-compose to run the nodes on each host separately is an option for us, but that would impose a lot of maintenance headaches since we are going to have 30+ hosts.

@information-security for that kind of old software it is easier to use host networking.

Then containers will use the host IP and there is no port mapping at all.
For example you can deploy nginx with the command:
docker service create --name web1 --network host nginx
and if you then connect to the Docker node's IP address with a browser you will see the web page from nginx.

Also note that if you now try to deploy a second nginx ( docker service create --name web2 --network host nginx ) or try to scale that first service on a one-node swarm ( docker service scale web1=2 ), it will fail because each one reserves port 80 on the host. You can solve that either by using --replicas-max-per-node 1 when creating the service, or by modifying the listen 80 row in /etc/nginx/conf.d/default.conf for the second service.

If you need a more complex setup then macvlan is probably the way to go https://collabnix.com/docker-17-06-swarm-mode-now-with-macvlan-support/
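For reference, the swarm macvlan pattern from that link is roughly the following (the subnet, gateway, parent interface, and names are example values):

```shell
# On EACH node: store a node-local macvlan config; --config-only creates
# no network, it only records the settings for this host.
docker network create --config-only \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  -o parent=eth0 \
  macvlan-config

# On a manager: create the swarm-scoped macvlan network from those configs.
docker network create -d macvlan --scope swarm \
  --config-from macvlan-config \
  swarm-macvlan

# Services can then attach to it like any other network:
docker service create --name web --network swarm-macvlan nginx
```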

@information-security for that kind of old software it is easier to use host networking.

Then containers will use the host IP and there is no port mapping at all.

One problem with this approach is that you cannot mix host and overlay networking in a container, e.g. to have it listen on a host IP and port, but also be able to connect to internal overlay network services.

For @mback2k

you cannot mix host and overlay networking in a container

That is possible using a script.
https://github.com/docker/compose/issues/3532

@olljanat Thanks for your suggestion. This approach is somewhat awkward when there are 10+ services running on each node and each service (in our case with a replica count of 1) has many open ports communicating with other services' ports. For security reasons, running in host network mode in our setup requires the extra work of managing firewall rules for each host, and in larger setups it can be error-prone. Besides that, why not just benefit from the isolation that the Docker overlay network provides?

Additionally, what @mback2k has mentioned is another big problem with this approach. Generally speaking, in large-scale projects things get worse when there is no simple way of assigning a static IP in swarm. What @sergey-safarov is proposing as a solution is just another workaround which requires maintaining an extra script that can get complicated in complex setups. Why not do this in a clean way by just letting swarm services with a replica count of 1 have a static IP? If this is a difficult-to-implement feature with the current code base of docker/libnetwork, I can understand that, but we cannot ignore the fact that this is a must-have feature for large-scale projects and would prevent a lot of headaches for developers.

In our company we mainly appreciate the network isolation that Docker provides, and it gets very interesting when it comes to swarm, which allows us to manage many hosts from a single manager. With all due respect, please do your best to keep Docker this way; otherwise there will be no benefit when we are obliged to implement a lot of scripting and additional (mostly manual) workarounds for every requirement we have.

@information-security I think everyone understands that a static IP feature is a must-have for Docker swarm services.
The Docker team must add the feature in a way that does not break Docker project development. Link.
That is the reason why static IPs have not been added at the present time.

I am sure the "hash-based multipath routing" approach would allow implementing static IPs for swarm services. Link.
A developer is required who is able to implement the feature and create a PR.

@sergey-safarov It's really heartwarming to hear that everyone (specifically people from the Docker team) here agrees on the importance of implementing this feature. BTW, your work on the script linking overlay and host networks is highly appreciated. I saved it because I'm sure we will need it sometime soon down the road for another scenario.

Regarding @thaJeztah's notes:

Implementing this feature is non-trivial for a number of reasons;

  • There's _two_ possible feature requests here;

    • Allow a static IP for the _service_ (Virtual IP)
    • Allow a static IP for the container (task)
  • When looking at static IP-addresses for _containers_, things become complicated,
    because a service can be backed by multiple tasks (containers). Specifying a
    single IP-address for that won't work; also, what to do when scaling, or updating
    a service (in which case new tasks are created to replace the old one)

I think we need to simplify things here. There's no need (or at least I don't see any) for the containers of a service with a scale greater than 1 to have static IPs. That's mainly because they are running the same images with the same config files inside.

So I guess the following single feature request will suffice for most scenarios (I am open to corrections/improvements):

One would assign a static IP for a service like below:

services:
  service_1:
    deploy:
      replicas: 1
      endpoint_mode: (dnsrr | vip)
    networks:
      mynet:
        ipv4_address: 172.0.0.2

or

docker service create \
  --replicas 1 \
  --endpoint-mode (dnsrr | vip) \
  --ip 172.0.0.2 \
  image:latest

Then Docker would behave as below:

  • If replicas is not set to 1, an error is shown.
  • Else, if endpoint-mode is set to vip, Docker will set a static VIP address for the lifetime of this service.
  • Else, endpoint-mode is set to dnsrr, so Docker will set a static IP for the single task of the service, and will keep that IP reserved across any task rescheduling.
  • Additionally, upon scaling a service to greater than 1, if it already has a configured static IP address, an error can be shown letting the user know about the existing static IP.
  • Finally, updating a service takes effect in two scenarios:

    1. endpoint-mode is set to vip and:

      • User wants to assign a static VIP to a service and that IP is free (unassigned): Simple!
      • User wants to assign a static VIP to a service and that IP is statically taken by another task/VIP: An error is shown.
      • User wants to assign a static VIP to a service and that IP is dynamically assigned to another task:

        1. The task holding the IP is stopped.
        2. The freed IP is assigned as the VIP of the target service.
        3. The task stopped in step 1 is restarted and this time picks a new IP from the pool.

      • User wants to assign a static VIP to a service and that IP is dynamically assigned to another VIP:

        1. The VIP of the service holding the requested IP is changed instantly and a new IP from the pool is assigned to it.
        2. The freed IP is assigned as the VIP of the target service.

      • User wants to remove the static VIP of a service: No need to do anything.

    2. endpoint-mode is set to dnsrr

      The running task needs to be stopped first; a new task then replaces it with the following criteria considered:

      • User wants to assign a static IP to a service and that IP is free (unassigned): Simple!
      • User wants to assign a static IP to a service and that IP is statically taken by another task: An error is shown.
      • User wants to assign a static IP to a service and that IP is dynamically taken by another task:

        1. The task holding the IP is stopped.
        2. The target task is stopped.
        3. A new task with the requested IP is started for the target service.
        4. The task stopped in step 1 is restarted and this time picks a new IP from the pool.

      • User wants to assign a static IP to a service and that IP is dynamically taken by another VIP:

        1. The VIP of the service holding the requested IP is changed instantly and a new IP from the pool is assigned to it.
        2. The target task is stopped.
        3. A new task with the requested IP is started for the target service.

      • User wants to remove the static IP of a service: Upon replacement with a new task, Docker must make sure that the IP address of the newly created task is not the same as the old static IP (just a precaution to prevent an additional service restart).
Unfortunately we don't have a programmer familiar with the Go language on our team, otherwise we would be more than happy to submit a PR 😔

@olljanat I would love to know your opinion as well.

@olljanat I would love to know your opinion as well.

Limiting service replicas to 1 and using endpoint mode dnsrr is most probably the easiest way to get this implemented. Unfortunately I don't expect that the Docker team would implement it. That is because this issue has been open for a long time already and they lack resources, especially when it comes to swarm features.

Anyway, while thinking about this one I noticed that on the next version of Docker it should be possible to set static IPs for containers with a very simple workaround. I created a gist of that option and you can comment on it there: https://gist.github.com/olljanat/b96ed26583c452118313fc18e4a663c1

Unfortunately we don't have a programmer familiar with the Go language on our team, otherwise we would be more than happy to submit a PR 😔

Well, I'm not a programmer at all (except as a hobby) and I had zero experience with Go before I started to contribute to the Moby/Swarmkit projects. I just did not see any other way to get those issues/missing features which are critical for us fixed, so I decided to use it as a learning experience. It is doable by anyone who has the time and interest to do it ;)

Well, I'm not a programmer at all (except as a hobby) and I had zero experience with Go before I started to contribute to the Moby/Swarmkit projects. I just did not see any other way to get those issues/missing features which are critical for us fixed, so I decided to use it as a learning experience. It is doable by anyone who has the time and interest to do it ;)

@olljanat If you say so, then I'd personally have a look at the Docker repository to see if I can implement this. Any advice on where to start would be appreciated...
Different locations of the code base I need to be aware of:

  • Docker engine's networking stack responsible for IP assignment in swarm
  • Docker CLI
  • Docker compose file parser
  • Any existing best code practices document for docker repositories that I can go through before writing code?
  • Any previously related submitted PRs that I can use as reference?

Anyway, while thinking about this one I noticed that on the next version of Docker it should be possible to set static IPs for containers with a very simple workaround. I created a gist of that option and you can comment on it there: https://gist.github.com/olljanat/b96ed26583c452118313fc18e4a663c1

I didn't see your workaround while searching the net for this issue. I think your gist is the best approach at the moment to get things done until we get this feature implemented. I appreciate your efforts; this can help us make progress in our project.

Limiting service replicas to 1 and using endpoint mode dnsrr is probably the easiest way to get this implemented. Unfortunately, I don't expect the Docker team to implement it, because this issue has been open for a long time already and they lack resources, especially when it comes to swarm features.

If everyone agrees on limiting replicas to 1, then I will start implementing this with the workflow I mentioned above. I'd like more support first, so I don't implement a feature that nobody wants.

First we need to have some place to store those static IPs, and after a bit of investigation it looks like NetworkAttachmentConfig on the swarmkit side already has an addresses field:
https://github.com/docker/swarmkit/blob/6894bdea839f0a2f0e92557c1f1ab4afd149852b/api/types.proto#L582-L599
but it is missing on the Moby side:
https://github.com/moby/moby/blob/8e610b2b55bfd1bfa9436ab110d311f5e8a74dcb/api/types/swarm/network.go#L97-L102

You can see the same if you create a service with the long form of the network definition, e.g. --network name=test,alias=web1,driver-opt=field1=value1
and then try inspect that service:

                "Networks": [
                    {
                        "Target": "sihycl685z5vtyq3zw5pkayk7",
                        "Aliases": [
                            "web1"
                        ],
                        "DriverOpts": {
                            "field1": "value1"
                        }
                    }
                ],

So I suggest you start by checking whether that value can be used, for example by adding a hardcoded line like

Addresses: []string{"10.0.0.11/24"}

to:
https://github.com/docker/swarmkit/blob/486ad69951868792367a503042ef3f55b399cc40/cmd/swarmctl/service/flagparser/network.go#L26-L30

Then you can build and run swarmkit based on the instructions there.

So I guess the following single feature request will suffice for most scenarios (I am open to corrections/improvements):

I think there may be different expectations as well. In your example, the service would get a fixed IP address, but that IP-address would not be a _routable_ address (it’s the internal IP address of the service). While that would solve some use-cases, it likely won’t solve others; thinking about users wanting to expose a service _only_ on “localhost”, so “publish” a port, but only make it accessible on 127.0.0.1 (so -p 127.0.0.1:80:80).

I think this is the most frequently mentioned use-case. Possibly this would be possible when using “host-mode” publishing (i.e., the containers backing a service are published directly), but might be more complicated for situations where the routing-mesh is used (?)

Slightly related (see https://github.com/moby/moby/issues/25257) so not sure if it should be taken into account for this feature specifically, but users wanting to publish a service on (e.g.) an internal network, which could be in situations where the host has multiple network interfaces, and some services should be accessible publicly (e.g. publish -p 443:443 to make a web service accessible through the routing mesh), and have other services (e.g. an admin panel) only accessible on a private network.

Agreed. As you have previously mentioned we are probably facing multiple different feature requests.

Considering what we have discussed so far, I list three different feature requests below:

  1. Internal static-IP assignment of a service (Like the workflow I suggested).
  2. Static-IP assignment outside swarm network (per host interfaces).
  3. Ingress network partitioning (#25257)

I think you agree with me that these feature requests are different in nature, and therefore we'd better split them into separate implementations. So, is it OK if we submit three different PRs, one for each of the above? I'd go for implementing the first one for now.

BTW, please let me know if that list needs to be expanded.

Any update on this?

any update ?

One more approach to implementing static IPs with load balancing would be to use eXpress Data Path (XDP).
This technology is used by:

  1. cilium.io - a good load-balancing-related video, see 23:00;
  2. Facebook's Katran;
  3. Cloudflare.

Trying to migrate macvlan+glusterfs containers with a static --ip to a swarm service, I found that there is no option to do it. This was frustrating. I'm bypassing this inconvenience with the following ugly approach.
Create an overlay network:

docker network create -d overlay --subnet=192.168.10.0/24 --gateway=192.168.10.1 overlay_net
docker service create --name arnold01 --hostname arnold01 --network overlay_net --replicas=1  nginx

Locate the container id/name:
docker node ps $(docker node ls -q)

Attach a static macvlan IP to the located container instance:
docker network connect macvlan_net --ip=192.168.0.151 arnold01.1.m6obpe63v2qqhnyjd44m6xnh4

I think you should be able to use macvlan with swarm services (if the network is created with --scope=swarm)

I have created and published the Overnode tool, which is basically multi-host docker compose without swarm, but it has some great features. It allows assigning static IP addresses too.

An example configuration for zookeeper / kafka can be found here: https://github.com/overnode-org/overnode/tree/master/examples/databases/kafka

I guess I don’t understand why this has to be so hard. Swarm already knows how to create a mesh network when it creates the service.

Can’t docker just auto-create a mesh network with a static /32 address that routes to the service mesh network?

If the static mesh exists, make that the public entry point for the service instead of the service mesh. It load balances but just to the service mesh.

What’s the issue with this not working? Since the static mesh is a single IP and is ARPable, all traffic to it should come in on the correct virtual interface. As long as it has a load balancer attached with the service mesh as the destination, it should work fine.

Am I missing something here?

Since I saw a comment asking about use cases:

I'm running docker swarm on a series of RPIs to virtualize services. I could use kubernetes instead, but in my limited experience, making helm charts for everything is less elegant than what I've had to do to deploy docker stacks.
I want to run pihole. I can if I confine it to one node, which is okay, I mainly just want it to work, but I'd like to give a VIP to the service so that my DHCP server can tell machines where to look for DNS.

Instead my implementation results in all the requests getting routed through docker, through the same docker IP, which makes my logs relatively worthless if I want to identify what service is doing what. Also, I need to use a reverse proxy to make the server accessible from any node, though taking that up or down didn't fix my issue.

So finally I turned to macvlan in order to avoid this issue entirely, but surprise:

        networks:
            raw:
                ipv4_address: 192.168.0.3

becomes

                "raw": {
                    "IPAMConfig": {},
                    ...
                    "Gateway": "192.168.0.1",
                    "IPAddress": "192.168.0.2",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    ...
                },

Why can't you just use the IP I told you to use? Why?

If I just limit it with --ip-range and a /32 then it says I have run out of IPs, and fails to launch the service at all.

In my personal opinion, the main focus should be per service. If you need a different IP for each container, you can probably just make multiple services for anything small-scale.

EDIT: https://github.com/moby/libnetwork/issues/2249 solved my issue with running out of IPs. Apparently if I specify the default it works, but if I leave it as the default, it doesn't. Why, docker?

Addendum: this PR is a better step: https://github.com/moby/moby/pull/41679


Hi, I think this is not so difficult, because we can attach to an attachable overlay network like below:

docker run --rm -ti --ip 10.0.5.22 --net overlay-network alpine ip -4 addr

so I think this should be easy. I spent one day making it work, and finally I did.

The change is very small: just add an Addresses field to swarm.NetworkAttachmentConfig, you can see those commits:

Here is an example docker-compose.yml:

version: '3.7'

networks:

  default:
    attachable: true
    ipam:
      driver: default
      config:
        - subnet: 10.1.5.0/24

services:
  nginx:
    image: nginx:alpine@sha256:9b22bb6d703d52b079ae4262081f3b850009e80cd2fc53cdcb8795f3a7b452ee
    ports:
      # if you use `endpoint_mode: dnsrr`, this is the only way to publish a port, and only one container can publish the port
      - { mode: host, protocol: tcp, target: 80, published: 80 }
    networks:
      default:
        ipv4_address: '10.1.5.6'
    deploy:  &deploy
      replicas: 1
      # this option should not be required, but I love it, it makes many things easier
      endpoint_mode: dnsrr
      restart_policy: { condition: on-failure, max_attempts: 3 }
      # stop-first is required; it's the default value, and changing it will prevent the task from starting successfully
      update_config: { parallelism: 0, failure_action: rollback, max_failure_ratio: 1, order: stop-first }
  nginx2:
    image: nginx:alpine@sha256:9b22bb6d703d52b079ae4262081f3b850009e80cd2fc53cdcb8795f3a7b452ee
    deploy:  &deploy
      replicas: 1
      endpoint_mode: dnsrr

You can run the following commands multiple times, and the nginx container IP will not change:

docker stack deploy -c docker-compose.yml m
docker service update --force m_nginx

I'm really surprised to see that people have suggested such exotic solutions to the problem, yet no one thought of the simplest one:

If the sole problem that prevents us from using --ip and --ip6 in a swarm is the possibility of multiple replicas, then simply require an amount of static IPs equal to the number of replicas.

Possible example:

replicas: 4

ipv4_address: { 192.168.1.1, 192.168.1.2, 192.168.1.3, 192.168.1.4 }

Scaling up/down should assign/deassign the addresses sequentially.
ip6 could follow the same logic.

@thaJeztah

This does not work, because if the client does not support dynamic IPs, then such a client will also not handle newly added IP addresses.

Example
1) a service has 100 replicas, then you scale the service down to one replica;
2) some clients are not able to dynamically track that 99 addresses must no longer be used;
3) such clients will see 99% of their requests fail.

Could you explain in more detail what you mean by a client able to support dynamic IPs?
In this example, what constitutes a client, and what does dynamic-IP support mean?

Furthermore, I believe that the whole concept of defining static IPs, is to reserve them so nothing else can use them.
At this point, the fact that 99 addresses must not be used anymore shouldn't matter, because one can scale back up to 100.
Moreover, these IPs must remain reserved regardless, unless the service configuration is changed.

Lastly, how does the same "issue" not occur, if those 99 addresses that must not be used anymore were originally handed out automatically by docker instead of being statically assigned?
If anything, not being able to reserve IPs can cause a subsequent scale-up to fail, as the IP pool may have been exhausted in the meantime.

In my use case, the "client" is VoIP software that has its own DNS cache.
If the docker swarm DNS sends a response with TTL 600, then such daemons cache this response for 600 seconds.
Then you change the service replicas from 100 to 1, and the VoIP clients get 99% failed calls.

I do not agree with your definition of a "static" IP as an "IP reserved for future use". What is important to me is handling traffic from clients that do not support any change of the server IP.

In your comment about the limited IP pool size, you answered your own question of why a more complex approach to swarm service scaling was suggested.

What you describe is not a docker issue, it's an administration issue.

If you have a DNS service and some of the addresses it resolves disappear, of course things are going to fail, unless you flush the DNS cache or wait out the TTL on your clients.
You did not answer how your clients are not going to fail if you scale from 100 to 1 with addresses handed out automatically by docker; your DNS is going to give a TTL for those too.
Furthermore, I would really like to know how you are going to configure your DNS without knowing which IP is going to be assigned (in a guaranteed fashion) to each of your services/replicas.

The reserved IP for future use statement is about the service configured with 100 replicas. If you scale down from 100 to 1, you are also allowed to scale up from 1 to 100. In this case, the IP addresses must be reserved; having designated static IPs facilitates that. If you don't plan on scaling back to 100, then you should change your service configuration and restart it.
If your clients handle important traffic and do not support any server IP change, then all the more you need static IPs; otherwise you run the risk of a change to an IP that must not be changed, which is exactly why you reserve it.

What do those more complex approaches for swarm service scaling have to do with the IP pool size and how one plans out their network? Also, I did not inquire about complex approaches regarding swarm service scaling, but complex approaches pertaining to the functionality of being able to assign static IP addresses on a swarm. Pardon me, but I think we're having some comprehension issues here.

What do you want to solve? I have shown you that the suggested approach does not guarantee a static IP for a swarm service; the service IP is still a dynamic IP from an admin-defined IP range.
Also, the approach of "let's get something working and then let us admins (customers) resolve a lot of stale DNS responses in the client software" is not good. You wrote that DNS cache flushing is now the SIP client's issue.
That is not an approach.

I spoke about the docker internal DNS server, not my DNS server. Please check for yourself how the docker internal DNS server works.

You are correct, there are comprehension issues here, so I will not spend more time on this discussion.

What I want to solve is: being able to define static IPs for the container(s) of a swarm service.
The only option that we currently have is dynamic, since we don't have an option to define static ones.
My example does not explicitly showcase a range, but a list of individual IP addresses. I also specified that addresses from the list should be assigned/deassigned sequentially; that means that replica1 should always get the first one, replica2 should always get the second, replica3 should always get the third, and so on. Where do you see the "dynamic" here?

It literally does not matter what kind of DNS we're talking about; the situation is no different. Regardless of how swarm service scaling changes or how the ability to assign static IP addresses on a swarm gets implemented, docker can do nothing to influence what is cached within your VoIP software (whether it's run externally or within your swarm).
The false DNS responses that you're talking about are due to the decision to scale down from 100 to 1, not due to the decision to utilize static IP addresses. And whether we're in a dynamic or a static configuration, when we reduce the number of replicas/servers by 99, we are going to have 99 non-existent records in our clients' cache until the TTL expires or we flush, no matter what. Unless you expect some magical solution which updates clients the exact moment a scale up/down happens; if so, that is the VoIP-software developer's responsibility and completely out of scope for this topic. It's like telling me that software which runs inside a virtual machine needs to be aware of what's going on in the hypervisor.
The docker internal DNS server should update its own records as soon as the service gets scaled up or down, so any new queries should be answered correctly.

For the last time, I will repeat: This is NOT a docker swarm issue, this is an administration issue.

You are correct, there are comprehension issues here, so I will not spend more time on this discussion.

Me neither.

@sergey-safarov What you describe is a bug in that VoIP client. It has nothing to do with the solution of defining static IPs @FrostbyteGR is proposing (good job @FrostbyteGR, I like this idea; it makes perfect sense).
Any sanely written software should expect that when it gets more than one IP from DNS for a public hostname, some of those IPs could lead to a dead end. Software that doesn't do retries in such a case is really bad and should be fixed.
Also setting TTL to 600 when dealing with containers is not very smart to say the least ;-) You want as short a TTL as possible for your use case, especially if you plan to scale your service up/down. A container could easily be rescheduled to another node in the cluster with a different IP, and you would have the same issue...
Of course, scaling from 100 to 1 in one step is an extreme case and would probably not work well with any kind of DNS-based service discovery/balancing, as you want to limit the number of retries to some sane number. It was just an example of what is technically possible, not of what is smart to do ;-)
In case you really need such dynamic service scaling, you have to look at better service discovery than just using a poor man's solution like DNS.

Let me provide another example.
Please imagine:
1) the Google search host www.google.com gets resolved by the Google DNS servers to 100 IP addresses with a TTL of 600 seconds;
2) then Google decides to scale the HTTP servers down to 50 containers;
3) the Google DNS servers now respond to DNS requests with the new IP addresses of the HTTP servers.

From your point of view that is correct behavior. But not from mine.
Because every browser in the world knew the IP addresses of the 100 HTTP containers before the scale-down, and they will all try to reach containers that are no longer available and will get connection timeout errors.

From your point of view, "the bug is located in the browser client and we need to fix all the browsers in the world".
From my point of view, if DNS informed every host in the world that these IP addresses serve HTTP traffic for 600 seconds, then all of those IPs must keep working until the DNS cache expires on the clients.

I want to ask you:
if we cannot scale down the containers that serve the Google site until the cache expires in the browser clients, why do you think a SIP client must behave differently?

In the real world, DNS servers expose the IP addresses of load balancers. To be closer to the real world, please replace the term "HTTP containers" with "load balancer containers".

Also, to be closer to real-world usage, you can scale down from 2 containers to 1 container. In this case, you will get 50% failed requests. So you simply cannot scale down any service, because this requires implementing connection-timeout handling on the client side.

Also setting TTL to 600 when dealing with containers is not very smart to say the least 

You can say this about all admins that use Kubernetes (EKS, Google and other clouds); all of these use a TTL of 600 for the internal Kubernetes DNS server.

Software that doesn't do retries in such case is really bad and should be fixed.

It looks like you don't have real service-redundancy experience, because errors like "connection timeout" may also break a service. You can try opening any web server hosted on two IPs (addresses) and then shutting down one server.

Then you can explain to customers why the website does not work.

Also setting TTL to 600 when dealing with containers is not very smart to say the least 

@johny-mnemonic please show me how you can change the default TTL value of the embedded docker swarm DNS server.
And then show how to change the TTL values only for the SIP server without touching the default TTL value of the embedded swarm DNS server.

Then I will decide whether your approach is smart or not.

Allow me to introduce you to the standard operating procedure for such things:

  1. First you remove the records
  2. Then you wait for the TTL
  3. Then you scale down / remove IPs (optionally you can point those IPs temporarily to another host)

If you leave your DNS alive and you scale down / remove IPs, everything is going to fail, no matter if it's docker or whatever.
If you can't remove/manage records that are critical to you, because they originate from docker's internal DNS, you shouldn't be using a DNS service you cannot fully control in the first place. For such things you need your own DNS solution.

To me it looks like you do not have real administration experience in general. Before moving on to concepts like service redundancy, scaling and containerization, you have to understand how all of the platform's underlying services work and then you also have to understand the services you're going to deploy yourself. If you don't know how to manage those services individually, you cannot possibly hope to manage them in a more complex environment. It doesn't matter where the DNS is; it's still a DNS and the same principles apply.

Can you explain to me why you're on a holy war about DNS and its records' TTL, when these have nothing to do with my proposal of how static IP assignment could be implemented? The way I suggested it, is purely static. It's not a range, it's a list of individual elements that can be whatever you want them to be (as long as it makes sense). The way they should be assigned should be serial. That way you always know which replica gets which address on the list. And when you downscale or upscale, you still know which is which because that's also done in a serial fashion. Downscaling doesn't take random replicas down, a downscaling of 50 should take replicas 51-100 down. Upscaling doesn't generate replicas in random order, an upscaling of 25 should create replicas 51-75 in order. Then the matching (by index) array elements would be deassigned/assigned correspondingly. Everything should be predictable.

Stop derailing this topic.

Thanks @FrostbyteGR for clarifying it in a way that is hopefully clear even to someone who would think about using an internal Docker/Kube DNS for serving public records :astonished:
They also seem never to have heard about the "Design for failure and nothing will fail" principle when defending a client that can't retry :-(

Anyway as stated previously I really like your design and would love to see it implemented in swarm.
I am just a bit afraid we are beating a dead horse here :cry:

Also, to avoid confusion and to really showcase why this is supposed to be static/predictable, I believe I should explain with more detailed examples how my suggestion is supposed to work (I also added a range option to it, plus another _temporary solution_ suggestion):

Option 1. Require an amount of static IPs equal to that of the number of requested replicas

Individual assignment (aka: put in whatever you want, as long as it's valid):

replicas: 4

ipv4_address: { 192.168.1.17, 192.168.1.42, 192.168.1.23, 192.168.1.65 }

Case 1 - Deploying the stack
Replica 1 reserves and is assigned value on array index 0: 192.168.1.17
Replica 2 reserves and is assigned value on array index 1: 192.168.1.42
Replica 3 reserves and is assigned value on array index 2: 192.168.1.23
Replica 4 reserves and is assigned value on array index 3: 192.168.1.65

Case 2 - Downscaling to 2 replicas
Replica 4 is taken down, value on array index 3 remains reserved for Replica 4.
Replica 3 is taken down, value on array index 2 remains reserved for Replica 3.
Replica 2 remains as-is (up).
Replica 1 remains as-is (up).

Case 3 - Upscaling back to 3 replicas
Replica 1 remains as-is (up).
Replica 2 remains as-is (up).
Replica 3 is brought up, gets reassigned its reserved value on array index 2: 192.168.1.23
Replica 4 remains as-is (down), value on array index 3 still remains reserved for Replica 4.

Case 4 - One or more replicas crashes
Replica 1 remains as-is (up).
Replica 2 crashes, upon recovery (assuming new container name myservice.2.6ca62afb74dc0d7af370ff49c6), gets reassigned its reserved value on array index 1: 192.168.1.42
Replica 3 remains as-is (up).
Replica 4 crashes, upon recovery (assuming new container name myservice.4.1ae8d532b78a48cab0f8a6fc2), gets reassigned its reserved value on array index 3: 192.168.1.65

Range assignment:

I ultimately decided to also include this as an option in my suggestion, because it can operate the same way.
Plus, it would save writing out hundreds of IP addresses on larger deployments.

replicas: 4

ipv4_address: { 192.168.1.21-192.168.1.24 }

The ipv4_address string gets translated into an array of 4 elements, but this time the values are in sequential order.
Resulting array from this range-input would be: { 192.168.1.21, 192.168.1.22, 192.168.1.23, 192.168.1.24 }

Case 1 - Deploying the stack
Replica 1 reserves and is assigned value on array index 0: 192.168.1.21
Replica 2 reserves and is assigned value on array index 1: 192.168.1.22
Replica 3 reserves and is assigned value on array index 2: 192.168.1.23
Replica 4 reserves and is assigned value on array index 3: 192.168.1.24

Case 2 - Downscaling to 2 replicas
Replica 4 is taken down, value on array index 3 remains reserved for Replica 4.
Replica 3 is taken down, value on array index 2 remains reserved for Replica 3.
Replica 2 remains as-is (up).
Replica 1 remains as-is (up).

Case 3 - Upscaling back to 3 replicas
Replica 1 remains as-is (up).
Replica 2 remains as-is (up).
Replica 3 is brought up, gets reassigned its reserved value on array index 2: 192.168.1.23
Replica 4 remains as-is (down), value on array index 3 still remains reserved for Replica 4.

Case 4 - One or more replicas crashes
Replica 1 remains as-is (up).
Replica 2 crashes, upon recovery (assuming new container name myservice.2.6ca62afb74dc0d7af370ff49c6), gets reassigned its reserved value on array index 1: 192.168.1.22
Replica 3 remains as-is (up).
Replica 4 crashes, upon recovery (assuming new container name myservice.4.1ae8d532b78a48cab0f8a6fc2), gets reassigned its reserved value on array index 3: 192.168.1.24

--ip6 (and its corresponding json option) should follow the same logic.

Option 2. Make --ip and --replicas arguments and corresponding json settings, mutually exclusive

This is best suited as a temporary solution to the problem, not a permanent one, until a proper solution arrives.

  • This effectively means that you can have either or.
  • You can't have those arguments/json-options together for the definition or deployment of a service/stack.
  • Defining --replicas (or its corresponding json option) should lock you out of the --ip argument (or its corresponding json option).
  • Defining --ip (or its corresponding json option) should lock you out of the --replicas argument (or its corresponding json option), and also enforces --replicas 1 (or replicas: 1 within the json).

Again, --ip6 (and its corresponding json option) should follow the same logic.
