Compose: Connect nodes to an overlay network and the host network at the same time?

Created on 31 May 2016 · 27 Comments · Source: docker/compose

Hi All,

I would like to connect a container to both the host network and a custom overlay network.

Background

We have a cluster of custom services that appears to the outside world as a single functional unit. I deploy these with docker-compose, and have successfully used an overlay network to make it easy to configure the communication between these services.

One of these services (call it api-node) also needs to register itself with a different service that is not deployed using Docker. That external service needs to communicate with the api-node, so the api-node gives the external service its IP address during registration.

Currently, all of these services are deployed within AWS, and there is only one Docker container running per AWS instance.

Problem

Since the api-node is not part of the "host" network, the only IP addresses it has are within the Docker network. It does not have access to the AWS host's IP. Thus, the external service cannot call back to the api-node, since it is not part of the Docker network.

Any ideas?

I would like to have the api-node join both the docker overlay network we created and the host network. But I don't think this is supported by docker-compose.

When I tried a configuration like this in my service:

networks:
  - kiwinet
network_mode: "host"

I got an error like this:

ERROR: 'network_mode' and 'networks' cannot be combined

When I tried a configuration like this:

...
  networks:
    - kiwinet
    - hostnetwork
...

networks:
  kiwinet:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: 11.0.0.0/16
  hostnetwork:
    driver: host

I got an error like this:

ERROR: Error response from daemon: only one instance of "host" network is allowed

When I tried a configuration like this:

networks:
  kiwinet:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: 11.0.0.0/16
  hostnetwork:
    external:
      name: host

I got this error:

ERROR: Network host declared as external, but could not be found. Please create the network manually using `docker network create host` and try again.
Labels: kind/question, stale, swarm


All 27 comments

Anyone figured this out yet?

@rmelick @aquavitae Overlay networking is meant to provide intra-cluster (east-west) communication between the services in the cluster. If a service must be accessible externally, we recommend you use port mapping. Configuring ports on a Compose service will place those containers on the overlay network (for east-west communication) and also make them accessible externally (via host port mapping).

In fact, with the upcoming Docker 1.12 swarm mode, we are making this functionality available via docker service and the routing mesh.

But for existing Compose and overlay networking, you can use the recommendation above.
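
As a minimal sketch of that recommendation (the image name and port number are assumptions; kiwinet is the overlay network from the original report): the published port makes the container reachable via the host's IP for external callers, while it stays on the overlay for east-west traffic.

version: "2"

services:
  api-node:
    image: example/api-node   # hypothetical image name
    ports:
      - "8080:8080"           # published on the host IP so external services can call back
    networks:
      - kiwinet               # overlay for east-west traffic inside the cluster

networks:
  kiwinet:
    driver: overlay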

The specific case I have is using Consul in Docker (https://hub.docker.com/_/consul/). Consul is specifically recommended in the Swarm docs (https://docs.docker.com/swarm/discovery/#using-a-distributed-key-value-store), and Consul's image documentation recommends host networking for performance reasons. The way we want to use it is to have all our apps running in an overlay network with no ports exposed, traffic routed to them via a reverse proxy (also running in a container), and Consul used for service discovery. Based on your response it sounds like this is not possible, and in order to use Consul we have to expose ports. Is this correct, or is there another way of achieving this?

Hi! I have a similar issue: I want to use docker-compose with the host driver. The reason is that I am using mvn and need to download Maven artifacts. The Docker Engine is installed in an Ubuntu 16 VM (VirtualBox), and the Ubuntu VM's network is bridged. From a Docker container running in the VM (started on the bridge network) I cannot update the artifacts or run a system update. I found that if the container uses the host driver, then I can update/install Maven artifacts.
This is the reason why I want to use the host driver with the services started by Compose.
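
For what it's worth, Compose does accept host networking on its own via network_mode (it just cannot be combined with networks:, per the error in the issue body). A minimal sketch, assuming a hypothetical builder service (the image and tag are assumptions):

version: "2"

services:
  builder:
    image: maven:3            # hypothetical image/tag
    network_mode: "host"      # container shares the host's network stack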

I have the same issue, but instead of an overlay network I'm using the default one. (I basically need one container connected to both the host machine's network and a Docker network.)

I get almost the same results as you, but in the third scenario I am getting:

Container cannot be disconnected from host network or connected to host network

It's not possible to connect a container to both the host network and any other network. Port mapping is the recommended way to get traffic into a container that is connected to any network other than host.

It seems you can do this manually using ip netns (process network namespace management): http://www.nullzero.co.uk/openswitch-docker-linux-networking-part-2-build-a-network/
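
The gist of that manual approach, as a rough sketch (the container name, interface names, and addresses are placeholders): expose the container's network namespace, create a veth pair, move one end into the container, and address both ends.

# make the container's netns visible to `ip netns`
pid=$(docker inspect --format '{{.State.Pid}}' mycontainer)
mkdir -p /var/run/netns
ln -sf /proc/${pid}/ns/net /var/run/netns/mycontainer

# veth pair: one end stays on the host, the other moves into the container
ip link add veth-host type veth peer name veth-cont
ip link set veth-cont netns mycontainer

# address both ends and bring them up
ip addr add 10.99.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec mycontainer ip addr add 10.99.0.2/24 dev veth-cont
ip netns exec mycontainer ip link set veth-cont up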

Port mapping won't work for my situation since my container is trying to egress to link-local addresses available to the host. Port mapping is for allowing ingress. If I use host networking, my container can see the link-locals. If I use a bridge, I cannot. I'm looking for a way to combine the two.

Port-mapping is not the same as host-mode.

In my use case I also need to connect a container to both the host network and an overlay network.
I run a FreeSWITCH daemon that needs to publish more than 16,000 UDP ports, and it does not allow proxying traffic (it requires the Docker option "userland-proxy": false). For two reasons:

  • to avoid publishing a huge number of ports
  • to avoid double NAT (Amazon + Docker)

I want to use the host NIC for Internet traffic and the overlay NIC for the internal application daemons.
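
For reference, "userland-proxy": false is a daemon-wide flag set in /etc/docker/daemon.json. A sketch for a systemd-based host (this overwrites the file, so merge it with any existing settings first):

# disable the userland proxy daemon-wide, then restart the engine
cat > /etc/docker/daemon.json <<'EOF'
{
  "userland-proxy": false
}
EOF
systemctl restart docker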

I wrote a script to create a link between an overlay network and the host network.

Usage: overlay2host.sh ${OVERLAYNET} ${IFNAME} ${IFIP}
#!/bin/sh -e

# Uncomment the next line for debug mode
# set -x 

OVERLAYNET=$1
IFNAME=$2
IFIP=$3

get_netns_id() {
    # Docker names an overlay network's namespace "1-<id>"; keep the first 12 chars ("1-" + 10)
    local NETNS=$1
    docker inspect --format '1-{{.Id}}' ${NETNS} 2> /dev/null | grep -o -P '^\S{12}' || /usr/bin/true
}

get_netns_mask() {
    # Return the prefix length of the network's first defined subnet
    local NETNS=$1
    docker inspect --format '{{index (split (index .IPAM.Config 0).Subnet "/") 1}}' ${NETNS} 2> /dev/null || /usr/bin/true
}

add_link() {
    local NETNS=$1
    local IFNAME=$2
    set +e
    # NB: Linux limits interface names to 15 characters, so IFNAME must be short
    ip netns exec ${NETNS} ip link add veth-${IFNAME} type veth peer name br-${IFNAME} 2> /dev/null
    if [ $? -ne 0 ]; then
        echo "error: cannot create veth pair \"br-${IFNAME}\" to \"veth-${IFNAME}\""
        rm -f /var/run/netns/${NETNS}
        exit 1
    fi
    ip netns exec ${NETNS} ip link set veth-${IFNAME} netns 1 2> /dev/null
    if [ $? -ne 0 ]; then
        echo "error: cannot move container end of veth pair to default namespace"
        ip netns exec ${NETNS} ip link delete br-${IFNAME}
        rm -f /var/run/netns/${NETNS}
        exit 1
    fi
    ip netns exec ${NETNS} ip link set br-${IFNAME} up 2> /dev/null
    if [ $? -ne 0 ]; then
        echo "error: cannot set bridge end of veth pair to UP state"
        ip netns exec ${NETNS} ip link delete br-${IFNAME}
        rm -f /var/run/netns/${NETNS}
        exit 1
    fi
    ip netns exec ${NETNS} ip link set br-${IFNAME} master br0 2> /dev/null
    if [ $? -ne 0 ]; then
        echo "error: cannot link bridge end of veth pair to namespace bridge"
        ip netns exec ${NETNS} ip link delete br-${IFNAME}
        rm -f /var/run/netns/${NETNS}
        exit 1
    fi
    set -e
}

config_veth() {
    local OVERLAYNET=$1
    local NETNS=$2
    local IFNAME=$3
    local IFIP=$4
    local NETMASK=$(get_netns_mask ${OVERLAYNET})
    if [ -z "${NETMASK}" ]; then
        echo "error: cannot get ip mask for network \"${OVERLAYNET}\""
        ip netns exec ${NETNS} ip link delete br-${IFNAME}
        rm -f /var/run/netns/${NETNS}
        exit 1
    fi
    set +e
    ip addr change ${IFIP}/${NETMASK} dev veth-${IFNAME}
    if [ $? -ne 0 ]; then
        echo "error: cannot set ip addr on interface \"veth-${IFNAME}\""
        ip netns exec ${NETNS} ip link delete br-${IFNAME}
        rm -f /var/run/netns/${NETNS}
        exit 1
    fi
    ip link set veth-${IFNAME} up
    if [ $? -ne 0 ]; then
        echo "error: cannot set UP state for interface \"veth-${IFNAME}\""
        ip netns exec ${NETNS} ip link delete br-${IFNAME}
        rm -f /var/run/netns/${NETNS}
        exit 1
    fi
    echo "Assigned IP address ${IFIP} to ${IFNAME} interface"
}

if [ $# -eq 0 ]; then
    echo 'Configures a link between an overlay network and the host network'
    echo 'Usage: overlay2host.sh ${OVERLAYNET} ${IFNAME} ${IFIP}'
    exit 0
fi

if [ -z "${OVERLAYNET}" ]; then
    echo "error: name of overlay network cannot be empty"
    exit 1
fi

if [ -z "${IFNAME}" ]; then
    echo "error: interface name cannot be empty"
    exit 1
fi

if [ -z "${IFIP}" ]; then
    echo "error: interface ip cannot be empty"
    exit 1
fi

NETNS=$(get_netns_id ${OVERLAYNET})
if [ -z "${NETNS}" ]; then
    echo "error: cannot get net namespace id for \"${OVERLAYNET}\""
    exit 1
fi

mkdir -p /var/run/netns
rm -f /var/run/netns/${NETNS}
ln -s /var/run/docker/netns/${NETNS} /var/run/netns

add_link ${NETNS} ${IFNAME}
config_veth ${OVERLAYNET} ${NETNS} ${IFNAME} ${IFIP}
rm -f /var/run/netns/${NETNS}
exit 0

@sergey-safarov Thank you for the script. Could you provide a simple example demonstrating the script's functionality?

I run a FreeSWITCH daemon in a Docker container. FreeSWITCH must be managed from another Docker container on the overlay network, so I must publish FreeSWITCH on both the overlay network (for the management daemon) and the host network (for easy RTP stream handling).
To do this I execute these commands:

docker network create --driver overlay \
                      --attachable \
                      --subnet 192.168.30.0/24 \
                      --gateway 192.168.30.1 mng
docker run -t --rm=true \
              --log-driver=none \
              --name fs1 \
              --network host \
              safarov/freeswitch:1.8.2
/opt/bin/overlay2host.sh mng fs1 192.168.30.16

The last two commands may be called from a systemd unit.
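
A rough sketch of such a unit (the unit name and ordering are assumptions; it covers just the linking step and presumes the fs1 container is started by another unit):

# write a oneshot unit that attaches the fs1 container to the mng overlay
cat > /etc/systemd/system/overlay2host-fs1.service <<'EOF'
[Unit]
Description=Attach the fs1 host-network container to the mng overlay
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/opt/bin/overlay2host.sh mng fs1 192.168.30.16
EOF
systemctl daemon-reload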

Hi @sergey-safarov,
I got the following error:
cannot create veth pair "br-app-stack_media.1.9bzrldtbntab51qr2pvxnsk29" to "veth-app-stack_media.1.9bzrldtbntab51qr2pvxnsk29"

I got it after creating a swarm with the media service on the host network.
I want to create the swarm and then also connect the media service to the overlay network.

Please try this overlay2host.sh script.

Usage example: freeswitch-docker.service

This unit copies the network IP and MAC address from another container started from safarov/fakehost:
fakehost-docker.service

Here is what I did:

  1. Created a swarm stack called 'app' (one of the services, media, uses the host network)
  2. Ran your script overlay2host.sh

What I actually want is to connect the media service (all the containers/tasks of media) to the overlay network, to get all the goodies of swarm inside the overlay services (e.g. DNS resolution, etc.) and make it feel like part of the overlay network.

error msg:
cannot create veth pair "br-app-stack_media.1.9bzrldtbntab51qr2pvxnsk29" to "veth-app-stack_media.1.9bzrldtbntab51qr2pvxnsk29"

@sergey-safarov

Can you advise, please?

Error message from this script (kazoo-configs-docker/scripts/overlay2host.sh):

error: cannot create veth pair "br-app-stack_media-network-overlay" to "veth-app-stack_media-network-overlay"

It seems it is impossible to connect a swarm host-network container to the overlay network.

Hi @nadavsky
Usage example:

wget https://raw.githubusercontent.com/sergey-safarov/kazoo-configs-docker/master/scripts/overlay2host.sh
chmod 755 overlay2host.sh
docker run -d --network ${YOUR_SWARM_NET} --name fakehost --cap-add=NET_ADMIN safarov/fakehost
docker run -d --network host --name mystaff alpine sleep 100000000000
./overlay2host.sh fakehost mystaff

To check that the host NIC is connected to the overlay network:

us-west-sw1 tmp # ifconfig veth-mystaff
veth-mystaff: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.30.8  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::42:c0ff:fea8:1e08  prefixlen 64  scopeid 0x20<link>
        ether 02:42:c0:a8:1e:08  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 796 (796.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

To shut down the mystaff container:

docker stop mystaff
docker rm mystaff
ip link delete veth-mystaff

@sergey-safarov

Thanks for your detailed response!
It works well now. I found that in swarm mode, because of the pattern of the container name (-task_), your code has an issue creating a veth name from that pattern.

I changed your script to take the veth name as an additional parameter without special characters, and it worked great.
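
Roughly, the change amounts to something like this inside the script (a sketch, not the actual patch; VETHNAME is a hypothetical extra argument): accept or derive a short, special-character-free name so the kernel's 15-character interface-name limit ("veth-" plus 10 characters) is not exceeded.

# hypothetical fourth argument; fall back to a sanitized, truncated IFNAME
VETHNAME=${4:-$(echo "${IFNAME}" | tr -cd '[:alnum:]' | cut -c1-10)}
ip netns exec ${NETNS} ip link add veth-${VETHNAME} type veth peer name br-${VETHNAME}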

Another important thing: the host containers can only be reached from swarm containers on the same node; to reach a host container from a swarm container on another node you need a proxy solution.

@sergey-safarov
In swarm mode cap-add=NET_ADMIN is not supported, so I can't run overlay2host inside the container.
How can I bypass this and run it inside the container?

overlay2host is designed to run on the host, not inside a container.

@nadavsky @sergey-safarov I'm trying to achieve the same thing; can you share your script with the veth name fix? I'm trying to run a swarm service that needs to be connected to the swarm overlay and to the host for outgoing traffic. If I understand correctly, a FreeSWITCH container acts as a switch that bridges the host network my container is in and the overlay connected to FreeSWITCH. Can I achieve this with swarm IPv6?

At present the overlay2host.sh script is located in the repo https://github.com/sergey-safarov/kazoo-configs-docker.
Please create a ticket describing your issue in that repo.

Excellent solution; it also solves #919.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

This issue has been automatically closed because it had no recent activity during the stale period.
