Moby: Unable to remove network "has active endpoints"

Created on 20 Oct 2015  ·  59 Comments  ·  Source: moby/moby

Not too sure if this belongs in this repo or libnetwork.

docker version: Docker version 1.9.0-rc1, build 9291a0e
docker info:

Containers: 0
Images: 5
Engine Version: 1.9.0-rc1
Storage Driver: devicemapper
 Pool Name: docker-253:0-390879-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 2.023 GB
 Data Space Total: 107.4 GB
 Data Space Available: 11.62 GB
 Metadata Space Used: 1.7 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.146 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-229.14.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 2
Total Memory: 1.797 GiB
Name: carbon1.rmb938.com
ID: IAQS:6E74:7NGG:5JOG:JXFM:26VD:IAQV:FZNU:E23J:QUAA:NI4O:DI3S

uname -a: Linux carbon1.rmb938.com 3.10.0-229.14.1.el7.x86_64 #1 SMP Tue Sep 15 15:05:51 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

List the steps to reproduce the issue:

  1. Create a network with a remote driver
  2. Run a container connected to the network
  3. Kill and Remove the container
  4. Remove the network
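
For reference, the steps above as a rough command sequence (the driver name here is a placeholder for whichever remote driver plugin is installed):

$ docker network create -d some-remote-driver net1
$ docker run -d --name c1 --net net1 busybox top
$ docker kill c1 && docker rm c1
$ docker network rm net1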

Describe the results you received:

If the remote network driver returns an error when processing /NetworkDriver.Leave, Docker still kills and removes the container but does not remove the endpoint. This leaves Docker's internal DB thinking the endpoint still exists even though the container has been removed.

When you try to remove the network, this error is returned:

docker network rm net1      
Error response from daemon: network net1 has active endpoints

Describe the results you expected:

Docker should not kill or remove the container if /NetworkDriver.Leave returned an error.

area/networking

Most helpful comment

@keithbentrup This is a stale endpoint case. Do you happen to have the error log from when that container was originally removed (which left the endpoint in this state)?
BTW, if the container is removed but the endpoint is still seen, one can force disconnect the endpoint using docker network disconnect -f {network} {endpoint-name}. You can get the endpoint name from the docker network inspect {network} command.

All 59 comments

This issue seems to be very intermittent and does not happen very often.

@rmb938 we had a few issues with dangling endpoints, which have been addressed via #17191. RC2 should have a fix for that (or the latest master). For RC1 testers (huge thanks), we might need an additional workaround to clean up the state before starting RC2. We will update with proper docs.

Awesome. Thanks.

@mavenugo I just repro'd this in 1.10.0:

seems that #17191 wasn't a complete fix...

Do you have a workaround? Even bouncing the docker daemon doesn't seem to resolve things.

(and let me know if I can get you more debug info; it's still repro'ing on my machine)

I also just reproduced this in 1.10.3 and landed here via Google looking for a workaround. I can't force-disconnect the active endpoints b/c none of the containers listed via docker network inspect still exist.

I eventually had to recreate my consul container and restart the docker daemon.

ping @mavenugo do you want this issue reopened, or do you prefer a new issue in case it has a different root cause?

Clarification, docker 1.10.1

Client:
 Version:      1.10.1
 API version:  1.22
 Go version:   go1.4.3
 Git commit:   9e83765
 Built:        Fri Feb 12 12:41:05 2016
 OS/Arch:      linux/arm

Server:
 Version:      1.10.1
 API version:  1.22
 Go version:   go1.4.3
 Git commit:   9e83765
 Built:        Fri Feb 12 12:41:05 2016
 OS/Arch:      linux/arm

Let me reopen this for investigation

Madhu, assigned you, but feel free to reassign, or point to the related workaround if it's there already :smile:

@keithbentrup @brendandburns thanks for raising the issue. A couple of questions:

  1. Are you using any multi-host network driver (such as the Overlay driver)? Can you please share the docker network ls output?
  2. If you don't use a multi-host driver, can you please share the /var/lib/docker/network/files/local-kv.db file (via some file-sharing website)? Which network are you trying to remove, and how was the network originally created?

FYI: for a multi-host network driver, docker maintains the endpoints for a network across the cluster in the KV store. Hence, if any host in that cluster still has an endpoint alive in that network, we will see this error, and this is an expected condition.

@thaJeztah PTAL at my comment above; based on the scenario, this need not be a bug. I'm okay to keep this issue open if that helps.

@mavenugo Yes, I'm using the overlay driver via docker-compose with a swarm host managing 2 nodes.

When I ran docker network inspect against the network on each individual node, one node had one container listed that no longer existed, and so it could not be removed by docker rm -fv using the container name or id.

@keithbentrup This is a stale endpoint case. Do you happen to have the error log from when that container was originally removed (which left the endpoint in this state)?
BTW, if the container is removed but the endpoint is still seen, one can force disconnect the endpoint using docker network disconnect -f {network} {endpoint-name}. You can get the endpoint name from the docker network inspect {network} command.
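
To spell that workaround out with placeholder names (mynet and the endpoint name below are illustrative; the real endpoint name comes from the "Containers" section of the inspect output):

$ docker network inspect mynet
$ docker network disconnect -f mynet stale-endpoint-name
$ docker network rm mynet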

@brendandburns can you please help reply to https://github.com/docker/docker/issues/17217#issuecomment-195739573 ?

@mavenugo sorry for the delay. I'm not using docker multi-host networking afaik. It's a single-node Raspberry Pi and I haven't done anything other than install docker via Hypriot.

Here's the output you requested (network is the network I can't delete)

$ docker network ls
NETWORK ID          NAME                DRIVER
d22a34456cb9        bridge              bridge              
ef922c6e861e        network             bridge              
c2859ad8bda4        none                null                
150ed62cfc44        host                host 

The kv file is attached; I had to name it .txt to get around GitHub filters, but it's the binary file.

local-kv.db.txt

I created the network via direct API calls (dockerode)

This has worked (create and delete) numerous times. I think in this instance I ran docker rm -f <container-id>, but I'm not positive; I might have power-cycled the machine...

Hope that helps.
--brendan

@mavenugo If by docker network disconnect -f {network} {endpoint-name} you mean docker network disconnect [OPTIONS] NETWORK CONTAINER per docker network disconnect --help, I tried that, but it complained (not surprisingly) with No such container.

If you meant the EndpointID instead of the container name/id, I did not try that (but will next time) because that's not what the --help suggested.

@keithbentrup I meant the -f option, which is available in v1.10.x. The force option also considers endpoint names from other nodes in the cluster. Hence, my earlier instructions will work just fine with the -f option if you are using docker v1.10.x.

@brendandburns thanks for the info; it is quite useful for narrowing down the issue. There is a stale reference to the endpoint which is causing this issue. The stale reference was most likely caused by the power-cycle while the endpoints were being cleaned up. We will get this inconsistency resolved in 1.11.

@mavenugo glad it helped. In the meantime, if I blow away that file, will things still work?

thanks
--brendan

@brendandburns yes. pls go ahead. it will just work fine for you.
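
For anyone else following along, a cautious sketch of that cleanup (the path is the one mentioned above; stopping the daemon first avoids racing against its writes, and note that this wipes all locally stored network state, so user-defined local networks will need to be recreated):

$ sudo service docker stop
$ sudo rm /var/lib/docker/network/files/local-kv.db
$ sudo service docker start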

@mavenugo I think you misunderstood me. I was using the -f option (verified in my shell history) on v1.10.x, but with the container id (not the endpoint id), b/c that's what the help suggests (the container, not the endpoint). If it's meant to work with either the container id or the endpoint id, then it's a bug, b/c it certainly does not disconnect with the container id and the -f option when the container no longer exists.

I was able to recreate a condition while trying to remove docker_gwbridge that might alleviate some of the confusion.
When I used the docker client pointing to a swarm manager, I experienced this output:

~/D/e/m/compose (develop) $ docker network inspect docker_gwbridge
[
    {
        "Name": "docker_gwbridge",
        "Id": "83dfeb756951d3d175e9058d0165b6a4997713c3e19b6a44a7210a09cd687d54",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1/16"
                }
            ]
        },
        "Containers": {
            "41ebd4fc365ae07543fd8454263d7c049d8e73036cddb22379ca1ce08a65402f": {
                "Name": "gateway_41ebd4fc365a",
                "EndpointID": "1cb2e4e3431a4c2ce1ed7c0ac9bc8dee67c06982344a75312e20e4a7d6e8972c",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.name": "docker_gwbridge"
        }
    }
]
~/D/e/m/compose (develop) $ docker network disconnect -f docker_gwbridge 41ebd4fc365ae07543fd8454263d7c049d8e73036cddb22379ca1ce08a65402f
Error response from daemon: No such container: 41ebd4fc365ae07543fd8454263d7c049d8e73036cddb22379ca1ce08a65402f
~/D/e/m/compose (develop) $ docker network disconnect -f docker_gwbridge 1cb2e4e3431a4c2ce1ed7c0ac9bc8dee67c06982344a75312e20e4a7d6e8972c
Error response from daemon: No such container: 1cb2e4e3431a4c2ce1ed7c0ac9bc8dee67c06982344a75312e20e4a7d6e8972c
~/D/e/m/compose (develop) $ docker network rm docker_gwbridge
Error response from daemon: 500 Internal Server Error: network docker_gwbridge has active endpoints

I first tried removing the container by container name (not shown), then by id, then by container endpoint id. None were successful. Then I logged onto the docker host, and used the local docker client to issue commands via the docker unix socket:

root@dv-vm2:~# docker network disconnect -f docker_gwbridge 41ebd4fc365ae07543fd8454263d7c049d8e73036cddb22379ca1ce08a65402f
Error response from daemon: endpoint 41ebd4fc365ae07543fd8454263d7c049d8e73036cddb22379ca1ce08a65402f not found
root@dv-vm2:~# docker network disconnect -f docker_gwbridge 1cb2e4e3431a4c2ce1ed7c0ac9bc8dee67c06982344a75312e20e4a7d6e8972c
Error response from daemon: endpoint 1cb2e4e3431a4c2ce1ed7c0ac9bc8dee67c06982344a75312e20e4a7d6e8972c not found
root@dv-vm2:~# docker network rm docker_gwbridge
Error response from daemon: network docker_gwbridge has active endpoints
root@dv-vm2:~# docker network disconnect -f docker_gwbridge gateway_41ebd4fc365a
root@dv-vm2:~# docker network rm docker_gwbridge
root@dv-vm2:~# docker network inspect docker_gwbridge
[]
Error: No such network: docker_gwbridge

1) Notice the output from swarm vs the direct docker client: swarm refers to containers; docker refers to endpoints. That should probably be made consistent.
2) The only successful option was providing an endpoint name (not a container name or id, or an endpoint id). The --help should clear that up, or multiple inputs should be made acceptable.
3) I did not test the endpoint name with swarm, so I don't know whether that would have worked.

@keithbentrup that's correct, as I suggested earlier: with docker network disconnect -f {network} {endpoint-name}, pls use the endpoint name. We can enhance this to support the endpoint id as well. But I wanted to confirm that by using the force option you were able to make progress.

@mavenugo but what you suggest is not what the help says. Furthermore, it lacks the consistency of most cmds, where id/name are interchangeable.

Unless others find this thread, they will run into this same issue, so before adding support for endpoint ids, fix the --help.

@keithbentrup we will fix both the --help and the functionality.

I've just reproduced this issue with docker v1.11.2 while attempting docker-compose down.
An earlier attempt to run docker-compose down did remove the app_front network.

$ docker-compose down
Removing network app_front
WARNING: Network app_front not found.
Removing network app_back
ERROR: network app_back has active endpoints
$ docker network inspect app_back                                                            
[                                                                                                    
    {                                                                                                
        "Name": "app_back",                                                                  
        "Id": "4a8d557eda7ce06d222fc0a9053069f44e75d25147300796686522a872261245",                    
        "Scope": "local",                                                                            
        "Driver": "bridge",                                                                          
        "EnableIPv6": false,                                                                         
        "IPAM": {                                                                                    
            "Driver": "default",                                                                     
            "Options": null,                                                                         
            "Config": [                                                                              
                {                                                                                    
                    "Subnet": "172.22.0.0/16",                                                       
                    "Gateway": "172.22.0.1/16"                                                       
                }                                                                                    
            ]                                                                                        
        },                                                                                           
        "Internal": false,                                                                           
        "Containers": {                                                                              
            "702e9916e86b7f77af363014134f160a8dcd189399719e062069c10f735cb927": {                    
                "Name": "app_db_1",                                                          
                "EndpointID": "1decedbca5bc704be84f19e287926361d196d20fe2a9bbf092ab15b37b856b3a",    
                "MacAddress": "02:42:ac:16:00:02",                                                   
                "IPv4Address": "172.22.0.2/16",                                                      
                "IPv6Address": ""                                                                    
            }                                                                                        
        },                                                                                           
        "Options": {},                                                                               
        "Labels": {}                                                                                 
    }                                                                                                
]                                                                                                    

docker info

Containers: 17                                                                                   
 Running: 1                                                                                      
 Paused: 0                                                                                       
 Stopped: 16                                                                                     
Images: 140                                                                                      
Server Version: 1.11.2                                                                           
Storage Driver: aufs                                                                             
 Root Dir: /mnt/sda1/var/lib/docker/aufs                                                         
 Backing Filesystem: extfs                                                                       
 Dirs: 245                                                                                       
 Dirperm1 Supported: true                                                                        
Logging Driver: json-file                                                                        
Cgroup Driver: cgroupfs                                                                          
Plugins:                                                                                         
 Volume: local                                                                                   
 Network: bridge null host                                                                       
Kernel Version: 4.4.12-boot2docker                                                               
Operating System: Boot2Docker 1.11.2 (TCL 7.1); HEAD : a6645c3 - Wed Jun  1 22:59:51 UTC 2016    
OSType: linux                                                                                    
Architecture: x86_64                                                                             
CPUs: 1                                                                                          
Total Memory: 1.955 GiB                                                                          
Name: default                                                                                    
ID: LKRP:E2TX:KNVZ:UD4M:FIGG:ZROO:CIA5:WBKH:RNUB:KXTQ:E6DC:545P                                  
Docker Root Dir: /mnt/sda1/var/lib/docker                                                        
Debug mode (client): false                                                                       
Debug mode (server): true                                                                        
 File Descriptors: 18                                                                            
 Goroutines: 38                                                                                  
 System Time: 2016-06-15T22:44:13.34779866Z                                                      
 EventsListeners: 0                                                                              
Username: tohagan                                                                                
Registry: https://index.docker.io/v1/                                                            
Labels:                                                                                          
 provider=virtualbox                                                                             

I have some issues when trying to disconnect swarm overlay endpoints:

Error response from daemon: network es-swarm-overlay has active endpoints

@rmb938 can you please say what is wrong? Maybe there is another issue covering these questions?

@mavenugo

docker network disconnect -f  [Network-Name] [Endpoint-Name] 

This worked for me.

I may have the same problem with docker 1.13.0.

Since no one in this thread has given an example of exactly what I did, I'll post it.

For completeness, this is the error that starts it. It may be because I have codekitchen/dinghy-http-proxy:2.5.0, which listens on port 80.

$ docker-compose -f deploy/docker-compose/docker-compose.yml
Creating network "dockercompose_default" with the default driver
Creating dockercompose_front-end_1
# and so on..

ERROR: for edge-router  Cannot start service edge-router: driver failed programming external connectivity on endpoint dockercompose_edge-router_1 (3ed8fb6cf4bc221dce615a9a3c5b8e4f0f8332e00e6c6d9b9f9bf0b09da57b36): Bind for 0.0.0.0:80 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.

And trying to bring it all down:

$ docker-compose -f deploy/docker-compose/docker-compose.yml down
Stopping dockercompose_front-end_1
# and so on..
ERROR: network dockercompose_default has active endpoints

And how I killed the network:

$ docker network inspect dockercompose_default
[
    {
        "Name": "dockercompose_default", # <--- Param 1
        "Id": "dd1326487a637df8a4a7a11856864a0059fca45cb63e8363bfe5196082d42d6e",
        "Created": "2017-02-08T00:22:41.341653339Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "ea7a142c113700145e894c950b18fd4dec8a53e04a45045f1fb71c47eae1a13b": {
                "Name": "dinghy_http_proxy", # <--- Param 2
                "EndpointID": "38f362af8b22e575cc987f68399a97f3ed10abf2c4cc365460dba768f2df8daa",
                "MacAddress": "02:42:ac:12:00:0d",
                "IPv4Address": "172.18.0.13/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
$ docker network disconnect -f dockercompose_default dinghy_http_proxy
$ docker network rm dockercompose_default
dockercompose_default

@nicolaiskogheim has a valid solution. However, my team has a docker-compose file with ~20 containers. So I found another solution.

I'd like to throw in that you can also restart the docker daemon (e.g. systemctl restart docker on CentOS), and then the links between the network and the containers will be gone. You can then run docker system prune -f with success.
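
That is, roughly:

$ sudo systemctl restart docker
$ docker system prune -f    # also removes stopped containers, dangling images, and unused networks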

@mdotson @nicolaiskogheim please open a new issue; although the error message is the same, the original issue discussed here was fixed. Are you only seeing this when using docker compose? In that case, it could also be an issue in the order in which docker compose performs actions?

@thaJeztah Only with docker-compose. I had it occur only once, when my Jenkins box ran out of memory and I barely managed to kill the docker containers. Perhaps there wasn't enough memory available to remove the links between the containers and the network?

Not sure, but either way, I think most people google an error message and will arrive here looking for some commands to copy and paste to fix their problem.

I had the same problem as @nicolaiskogheim and @mdotson: my influxdb container ran out of memory and became unhealthy. I couldn't stop it or remove it (I eventually managed to remove it with force mode).
After that I was trying to start docker once again with docker-compose:

# docker-compose -f /etc/docker/docker-compose.yml up -d
Creating influxdb1

ERROR: for influxdb  Cannot start service influxdb: service endpoint with name influxdb1 already exists
ERROR: Encountered errors while bringing up the project.

then tried to delete the network:

# docker network rm 834ea759c916
Error response from daemon: network docker_default has active endpoints

and then I tried @nicolaiskogheim's solution:

# docker network disconnect -f docker_default influxdb1
Client:
 Version:      1.13.1
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb  8 06:50:14 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.1
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb  8 06:50:14 2017
 OS/Arch:      linux/amd64
 Experimental: false
docker-compose version 1.11.1, build 7c5d5e4
docker-py version: 2.0.2
CPython version: 2.7.12
OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016

docker service restart fixed the problem for me.

sudo service docker restart

docker network rm <network name>

I'm seeing the same problem when attempting to remove a stack:

> sudo docker stack rm my-stack
Removing network my-stack_default
Failed to remove network g0450dknntdsfj1o055mk4efm: Error response from daemon: network my-stack_default has active endpoints
Failed to remove some resources

I had first created the stack like so:

sudo docker stack deploy -c docker-compose.yml --with-registry-auth my-stack

I am using this version:

Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:14:09 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:14:09 2017
 OS/Arch:      linux/amd64
 Experimental: false

Luckily, sudo service docker restart fixes it, but still not ideal behavior.

Encountered it in 17.07.0-ce; the disconnect approach did not work, so I restarted docker and ran rm again with success.

I've run into this with a 17.06-ce swarm cluster too; running out of options besides rebooting.

sudo service docker restart gets rid of it for me on ubuntu, allowing me to deploy & start my containers again.

It also works if one of the containers refuses to get killed (happens more often than I'd hope). Annoying, as it makes me bring all services down because of one mischievous container.

Having this problem also in 17.09.0-ce. Reopen this!

This was happening to me a lot in a low-memory environment. See if adding memory makes it any better; my processes stop normally now.

@tomholub Nope, memory is not the issue. But after restarting the docker service, I could remove the network.

Still having this issue from time to time when trying to stop and remove an actively working container. (Docker for Mac Version 17.09.0-ce-mac35 (19611) Channel: stable a98b7c1b7c)

Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:40:09 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:45:38 2017
 OS/Arch:      linux/amd64
 Experimental: false
$ uname -a
Darwin Alexei-Workstation.local 16.7.0 Darwin Kernel Version 16.7.0: Wed Oct  4 00:17:00 PDT 2017; root:xnu-3789.71.6~1/RELEASE_X86_64 x86_64

It usually goes away if I wait a random number of seconds, though. But it's still there.

BTW. For me it happened during docker-compose down --volumes --remove-orphans

Still seeing these "orphaned networks"; can you reopen, @rmb938 @thaJeztah?

Error response from daemon: network abcd_default id 3f2f1a6cb1cee2d82f2e2e59d10a099834a12b18eb7e3e48d6482d758bd93617 has active endpoints

docker version
Client:
 Version:      17.06.0-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   02c1d87
 Built:        Fri Jun 23 21:23:31 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.06.0-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   02c1d87
 Built:        Fri Jun 23 21:19:04 2017
 OS/Arch:      linux/amd64

The only way to prune them seems to be restarting the engine.

Good luck today

docker-compose down
Removing network gen365cms_default
ERROR: network gen365cms_default id b6c51b1a83ee2b938ee1c7f7148347dc9ef80a8d8ed93334873f1f84b3f27c04 has active endpoints
docker version
Client:
 Version:   17.12.0-ce-rc4
 API version:   1.35
 Go version:    go1.9.2
 Git commit:    6a2c058
 Built: Wed Dec 20 15:53:52 2017
 OS/Arch:   darwin/amd64

Server:
 Engine:
  Version:  17.12.0-ce-rc4
  API version:  1.35 (minimum version 1.12)
  Go version:   go1.9.2
  Git commit:   6a2c058
  Built:    Wed Dec 20 15:59:49 2017
  OS/Arch:  linux/amd64
  Experimental: true

This is still reproducible on Docker version 18.06.1-ce, build e68fc7a.
It seems that even when the containers of a compose file are removed, their endpoints are sometimes not removed; this can happen on power loss, so compose projects fail to either completely start or completely be removed.

When no other command works, do
sudo service docker restart
and your problem will be solved.

Or sudo reboot -f. Works 100%.

I had a similar issue today. What I did was run "docker container ls -a", and I saw that a few containers still running were using the network which I had launched via docker stack. When I manually killed those containers, I was able to delete the network.
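
One way to script that lookup (the network name is a placeholder, and this assumes the listed containers really do still exist, as they did in this case):

net=my-stack_default
for name in $(docker network inspect --format '{{range .Containers}}{{.Name}} {{end}}' "$net"); do
    docker rm -f "$name"
done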

I believe I just ran into the issue that @danwdart mentioned here. I am on Docker version 18.09.2, build 6247962. I ran docker-compose -f $PATH_TO_MY_CONFIG down, and received the following error:

ERROR: error while removing network: network michaelmoore_default id 6838b92e60a83f53c5637065e449f9124a2f297c482f1a7326cf247bfd38f70c has active endpoints

I actually let my laptop battery die last night, which I rarely do, and after restarting docker I was able to run the same compose "down" command with success.

This may be obvious to some, but it wasn't to me, just thought I'd share.

I needed to just run docker-compose rm. docker-compose down is what I normally do, and ps -a showed no containers, so this really tripped me up until I ran the rm cmd. Thought I'd share.

I ended up with the same problem; the network could not be removed no matter what I tried, nothing helped. My version is Docker version 18.09.6, build 481bc77.

To fix it, I had to restart the docker service with "sudo service docker restart". After that I was able to remove the network with "docker network rm {network}".

@danwdart Another reason for this is dangling containers. In order to remove them, use the command docker-compose down --remove-orphans; that should do the trick.

Hello from 2019, @mavenugo I would like to pass on my sincere, sincere thanks for having the solution on this one back in 2016.

This is still an issue after more than four years. Is there some simpler way of disconnecting every container from every network it is connected to than a 10+-line shell script? FWIW this seems to work:

#!/usr/bin/env bash

set -o errexit -o nounset -o pipefail

trap 'rm --recursive "$workspace"' EXIT
workspace="$(mktemp --directory)"
error_log="${workspace}/error.log"

for container_id in $(docker ps --all --quiet)
do
    readarray -t network_names < <(docker inspect "$container_id" | jq --raw-output '.[] | .NetworkSettings.Networks | if . == null then empty else keys | .[] end')
    for network_name in "${network_names[@]}"
    do
        echo "Disconnecting container ${container_id} from network ${network_name}."
        exit_code=0
        docker network disconnect "$network_name" "$container_id" 2> "$error_log" || exit_code="$?"
        if [[ "$exit_code" -ne 0 ]]
        then
            if grep --fixed-strings --quiet --regexp 'not connected to network' --regexp 'not connected to the network' "$error_log"
            then
                echo 'Ignoring "not connected" error…'
            else
                cat "$error_log" >&2
                exit "$exit_code"
            fi
        fi
    done
done

In summary:

  1. Set up a trap to remove the workspace at exit.
  2. Create the workspace.
  3. For each container:
     1. For each network the container is associated with:
        1. Try to disconnect.
        2. If the disconnect fails because the container is already not connected to the network, ignore the error (unfortunately it seems to be random whether "the" is part of that error message). Otherwise fail.



A combination of the @mavenugo solution and docker network prune after disconnecting everything from the network works for me.
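
In script form, assuming every name listed under "Containers" is a stale endpoint (the network name is a placeholder):

net=app_back
for ep in $(docker network inspect --format '{{range .Containers}}{{.Name}} {{end}}' "$net"); do
    docker network disconnect -f "$net" "$ep"
done
docker network prune -f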

Another thank you @mavenugo here from 2020

@mavenugo If by docker network disconnect -f {network} {endpoint-name} you mean docker network disconnect [OPTIONS] NETWORK CONTAINER per docker network disconnect --help, I tried that, but it complained (not surprisingly) with No such container.

If you meant the EndpointID instead of the container name/id, I did not try that (but will next time) because that's not what the --help suggested.

@keithbentrup - The {endpoint-name} in the above command is basically the container id/name from the "Containers" section of the output you get from the command below:

$ docker network inspect e60b9386b9e2 (where e60b9386b9e2 is the network id):

[
    {
        "Name": "project-name-master_default",
        "Id": "e60b9386b9e20f5222513bd6166f6d8e3224e72e906e2b07376e88ba79d87b26",
        "Created": "2020-04-02T18:48:29.2694181Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "d435c36e882ec91dff780c55c0399c52b14096baea402647eaff2f1593602df9": {
                **"Name": "project-name-master_monitoring_1"**,
                "EndpointID": "7838e98efd8be4cabccc778707efadbb6194cbd73dc907f0129ee8b9119e4349",
                "MacAddress": "02:42:ac:12:00:0e",
                "IPv4Address": "172.18.0.14/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "project-name",
            "com.docker.compose.version": "1.25.4"
        }
    }
]

Note the name highlighted in bold: "Name": "project-name-master_monitoring_1".
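
If jq is available, the same name can be extracted directly instead of reading the JSON by eye (network id from the example above):

$ docker network inspect e60b9386b9e2 | jq -r '.[0].Containers[].Name'
project-name-master_monitoring_1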

Just had it with

docker --version
Docker version 19.03.12-ce, build 48a66213fe
uname -a
Linux jotunheim 5.8.5-arch1-1 #1 SMP PREEMPT Thu, 27 Aug 2020 18:53:02 +0000 x86_64 GNU/Linux

on Arch. A service restart helped.
