Output of `docker version`:
Client:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 21:23:11 2016
OS/Arch: linux/amd64
Server:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 21:23:11 2016
OS/Arch: linux/amd64
Output of `docker info`:
Containers: 87
Running: 31
Paused: 0
Stopped: 56
Images: 55
Server Version: 1.11.2
Storage Driver: overlay
Backing Filesystem: xfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 4.5.1-1.el7.elrepo.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.797 GiB
Name: bridge.datanet.ria
ID: HKGW:2SMN:VJFA:XALB:4ETF:ZZE7:OUQJ:GVHX:SXOM:U6PY:EQLR:3P27
Docker Root Dir: /mnt/docker-data
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Additional environment details (AWS, VirtualBox, physical, etc.):
Private cloud with VMware hypervisor, running CentOS 7.
Steps to reproduce the issue:
Describe the results you received:
Jun 8 05:12:48 bridge docker: time="2016-06-08T05:12:48.799299085+02:00" level=error msg="Clean up Error! Cannot destroy container ecb293bb1fad3948d9a7366f931a001b7abcbd9c9aefdf27c530be7a4b4cc632: No such container: ecb293bb1fad3948d9a7366f931a001b7abcbd9c9aefdf27c530be7a4b4cc632"
Jun 8 05:12:48 bridge docker: time="2016-06-08T05:12:48.856161501+02:00" level=error msg="Handler for POST /v1.22/containers/create returned error: device or resource busy"
Jun 8 09:56:45 bridge docker: time="2016-06-08T09:56:45.266066521+02:00" level=error msg="Handler for POST /v1.22/containers/create returned error: Conflict. The name \"/my-redacted-data-container\" is already in use by container ecb293bb1fad3948d9a7366f931a001b7abcbd9c9aefdf27c530be7a4b4cc632. You have to remove (or rename) that container to be able to reuse that name."
Jun 8 10:35:42 bridge docker: time="2016-06-08T10:35:42.523718617+02:00" level=error msg="Handler for DELETE /v1.23/containers/ecb293bb1fad3948d9a7366f931a001b7abcbd9c9aefdf27c530be7a4b4cc632 returned error: No such container: ecb293bb1fad3948d9a7366f931a001b7abcbd9c9aefdf27c530be7a4b4cc632"
Jun 8 10:37:39 bridge docker: time="2016-06-08T10:37:39.492129195+02:00" level=error msg="Handler for DELETE /v1.23/containers/my-redacted-data-container returned error: No such container: my-redacted-data-container"
Jun 8 10:49:39 bridge docker: time="2016-06-08T10:49:39.924944312+02:00" level=error msg="Handler for DELETE /v1.23/containers/my-redacted-data-container returned error: No such container: my-redacted-data-container"
Jun 8 10:50:03 bridge docker: time="2016-06-08T10:50:03.114422404+02:00" level=error msg="Handler for DELETE /v1.23/containers/ecb293bb1fad3948d9a7366f931a001b7abcbd9c9aefdf27c530be7a4b4cc632 returned error: No such container: ecb293bb1fad3948d9a7366f931a001b7abcbd9c9aefdf27c530be7a4b4cc632"
Jun 8 11:03:29 bridge docker: time="2016-06-08T11:03:29.425100332+02:00" level=error msg="Handler for POST /v1.22/containers/create returned error: Conflict. The name \"/my-redacted-data-container\" is already in use by container ecb293bb1fad3948d9a7366f931a001b7abcbd9c9aefdf27c530be7a4b4cc632. You have to remove (or rename) that container to be able to reuse that name."
Jun 8 11:31:38 bridge docker: time="2016-06-08T11:31:38.704053754+02:00" level=error msg="Handler for POST /v1.23/containers/my-redacted-data-container/rename returned error: No such container: my-redacted-data-container"
Jun 8 11:31:49 bridge docker: time="2016-06-08T11:31:49.934637125+02:00" level=error msg="Handler for DELETE /v1.23/containers/my-redacted-data-container returned error: No such container: my-redacted-data-container"
Jun 8 11:31:51 bridge docker: time="2016-06-08T11:31:51.939043806+02:00" level=error msg="Handler for DELETE /v1.23/containers/my-redacted-data-container returned error: No such container: my-redacted-data-container"
Describe the results you expected:
Expect the cleaning process to clean everything and not receive:
ERROR: for my-redacted-data-container Conflict. The name "/my-redacted-data-container" is already in use by container ecb293bb1fad3948d9a7366f931a001b7abcbd9c9aefdf27c530be7a4b4cc632. You have to remove (or rename) that container to be able to reuse that name.
Additional information you deem important (e.g. issue happens only occasionally):
The issue happens frequently: every week, or even twice a week depending on the number of changes and integrations.
Cleaning the context again doesn't solve the problem, and neither does restarting docker. The only solution is to stop docker, remove all contents of /var/lib/docker/* (/mnt/docker-data in my case), and start docker again.
How did you clean those containers? Did any exceptions happen while you cleaned those resources (including volumes, networks, etc.)?
I have a helper function to nuke everything so that our Continuous blah, cycle can be tested, erm... continuously. Basically it boils down to the following:
To clear containers:
docker rm -f $(docker ps -a -q)
To clear images:
docker rmi -f $(docker images -a -q)
To clear volumes:
docker volume rm $(docker volume ls -q)
To clear networks:
docker network rm $(docker network ls | tail -n+2 | awk '{if($2 !~ /bridge|none|host/){ print $1 }}')
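For reference, the same cleanup as a single docker-py script (a rough sketch only, assuming the docker-py 1.x Client API and a daemon on the default local socket; the shell one-liners above remain the canonical version):

```python
# Rough docker-py (1.x Client API) equivalent of the shell one-liners above;
# the daemon is assumed to be reachable on the default local socket.
from docker import Client

cli = Client(version='auto')

# containers: docker rm -f $(docker ps -a -q)
for c in cli.containers(all=True):
    cli.remove_container(c, force=True)

# images: docker rmi -f $(docker images -a -q)
for img in cli.images(all=True):
    cli.remove_image(img['Id'], force=True)

# volumes: docker volume rm $(docker volume ls -q)
for vol in (cli.volumes().get('Volumes') or []):
    cli.remove_volume(vol['Name'])

# networks: everything except the built-in bridge/none/host networks
for net in cli.networks():
    if net['Name'] not in ('bridge', 'none', 'host'):
        cli.remove_network(net['Id'])
```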
I have a swarm cluster where containers are being brought up and down a lot for CI purposes, and I have the same problem. In my case I don't need to restart the machine though; usually killing all containers with
$ docker rm -f $(docker ps -a -q)
then restarting docker
$ sudo service docker restart
and then recreating the swarm fixes it.
Here's the log of a typical failure. I use Ansible to run docker-compose commands on one of the swarm nodes against the swarm.
TASK: [Run docker-compose up] *************************************************
failed: [XX.XX.XX.XX] => {"changed": true, "cmd": ["/usr/local/bin/docker-compose", "-f", "/containers/docker-compose/docker-compose-booking-pre-eng-811.yml", "--project-name", "booking-eng-811", "--verbose", "up", "-d"], "delta": "0:00:00.355991", "end": "2016-06-15 12:02:11.623256", "rc": 255, "start": "2016-06-15 12:02:11.267265", "warnings": []}
stderr: compose.config.config.find: Using configuration files: /containers/docker-compose/docker-compose-booking-pre-eng-811.yml
docker.auth.auth.load_config: Found 'auths' section
docker.auth.auth.parse_auth: Found entry (registry=u'my-private-registry', username=u'redacted-username')
compose.cli.command.get_client: docker-compose version 1.7.1, build 0a9ab35
docker-py version: 1.8.1
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
compose.cli.command.get_client: Docker base_url: http://127.0.0.1:4000
compose.cli.command.get_client: Docker version: KernelVersion=3.10.0-327.18.2.el7.x86_64, Os=linux, BuildTime=Fri May 27 17:25:03 UTC 2016, ApiVersion=1.22, Version=swarm/1.2.3, GitCommit=eaa53c7, Arch=amd64, GoVersion=go1.5.4
compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('back')
compose.cli.verbose_proxy.proxy_callable: docker inspect_network -> {u'Containers': {u'0f4c1b89e2ae9476a53f07552f678d2914bb391d1d80ab051f74925eb9fbf65a': {u'EndpointID': u'5f07ba0940ffcb4b0c2f0acf5424b6976b28bd8344a56b0464ab6517da884bc8',
u'IPv4Address': u'10.0.0.3/24',
u'IPv6Address': u'',
u'MacAddress': u'02:42:0a:00:00:03',
u'Name': u'registrator_registrator_1'},
u'782c1d07d51f6871400da38e8840e81e9300f54a195b9e6ff2e931b23274655a': {u'EndpointID': u'c8654b5b73eaca7f630d6e2c4c898122a3ae6a86bd0cfab68a8654414fe4821a',
u'IPv4Address': u'10.0.0.2/24',
u'IPv6Address': u'',
u'MacAddress': u'02:42:0a:00:00:02',
u'Name': u'stdb1'},
...
compose.network.ensure: Network back declared as external. No new network will be created.
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={u'label': [u'com.docker.compose.project=bookingeng811', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=bookingeng811', u'com.docker.compose.service=redis1', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=bookingeng811', u'com.docker.compose.service=web', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=bookingeng811', u'com.docker.compose.service=api_locations', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=bookingeng811', u'com.docker.compose.service=booking', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('redis:2.8.21')
compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {u'Architecture': u'amd64',
u'Author': u'',
u'Comment': u'',
u'Config': {u'AttachStderr': False,
u'AttachStdin': False,
u'AttachStdout': False,
u'Cmd': [u'redis-server'],
u'Domainname': u'',
u'Entrypoint': [u'/entrypoint.sh'],
u'Env': [u'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
...
compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('my-private-registry/web:master')
compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {u'Architecture': u'amd64',
u'Author': u"Emmet O'Grady",
u'Comment': u'',
u'Config': {u'ArgsEscaped': True,
u'AttachStderr': False,
u'AttachStdin': False,
u'AttachStdout': False,
u'Cmd': [u'/bin/sh', u'-c', u'/entrypoint.sh'],
u'Domainname': u'',
u'Entrypoint': None,
...
compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('my-private-registry/api-locations:master')
compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {u'Architecture': u'amd64',
u'Author': u"Emmet O'Grady",
u'Comment': u'',
u'Config': {u'ArgsEscaped': True,
u'AttachStderr': False,
u'AttachStdin': False,
u'AttachStdout': False,
u'Cmd': [u'/bin/sh', u'-c', u'/entrypoint.sh'],
u'Domainname': u'',
u'Entrypoint': None,
...
compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('my-private-registry/booking:eng-811')
compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {u'Architecture': u'amd64',
u'Author': u'',
u'Comment': u'',
u'Config': {u'ArgsEscaped': True,
u'AttachStderr': False,
u'AttachStdin': False,
u'AttachStdout': False,
u'Cmd': [u'/bin/sh', u'-c', u'/entrypoint.sh'],
u'Domainname': u'',
u'Entrypoint': None,
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=bookingeng811', u'com.docker.compose.service=redis1', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.project._get_convergence_plans: web has upstream changes (redis1)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=bookingeng811', u'com.docker.compose.service=web', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.project._get_convergence_plans: api_locations has upstream changes (redis1)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=bookingeng811', u'com.docker.compose.service=api_locations', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.project._get_convergence_plans: booking has upstream changes (redis1)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=bookingeng811', u'com.docker.compose.service=booking', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.parallel.feed_queue: Pending: set([<Service: web>, <Service: redis1>, <Service: api_locations>, <Service: booking>])
compose.parallel.feed_queue: Starting producer thread for <Service: redis1>
compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('redis:2.8.21')
compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {u'Architecture': u'amd64',
u'Author': u'',
u'Comment': u'',
u'Config': {u'AttachStderr': False,
u'AttachStdin': False,
u'AttachStdout': False,
u'Cmd': [u'redis-server'],
u'Domainname': u'',
u'Entrypoint': [u'/entrypoint.sh'],
u'Env': [u'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=bookingeng811', u'com.docker.compose.service=redis1', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('redis:2.8.21')
compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {u'Architecture': u'amd64',
u'Author': u'',
u'Comment': u'',
u'Config': {u'AttachStderr': False,
u'AttachStdin': False,
u'AttachStdout': False,
u'Cmd': [u'redis-server'],
u'Domainname': u'',
u'Entrypoint': [u'/entrypoint.sh'],
u'Env': [u'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
...
compose.service.build_container_labels: Added config hash: ae3be0880fdcb78073a419c6102617b730bfb42171c8204bf51e5c36eb8a85f3
compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (memswap_limit=None, links=[], devices=None, pid_mode=None, log_config={'Type': u'', 'Config': {}}, cpu_quota=None, read_only=None, dns=None, volumes_from=[], port_bindings={}, security_opt=None, extra_hosts=None, cgroup_parent=None, network_mode='back', shm_size=None, tmpfs=None, cap_add=None, restart_policy={u'MaximumRetryCount': 0, u'Name': u'always'}, dns_search=None, privileged=False, binds=[], ipc_mode=None, mem_limit='64M', cap_drop=None, ulimits=None)
compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [],
'Links': [],
'LogConfig': {'Config': {}, 'Type': u''},
'Memory': 67108864L,
'NetworkMode': 'back',
'PortBindings': {},
'RestartPolicy': {u'MaximumRetryCount': 0, u'Name': u'always'},
'VolumesFrom': []}
compose.service.create_container: Creating bookingeng811_redis1_1
compose.cli.verbose_proxy.proxy_callable: docker create_container <- (name=u'bookingeng811_redis1_1', image='redis:2.8.21', labels={u'com.docker.compose.service': u'redis1', u'com.docker.compose.project': u'bookingeng811', u'com.docker.compose.config-hash': 'ae3be0880fdcb78073a419c6102617b730bfb42171c8204bf51e5c36eb8a85f3', u'com.docker.compose.version': u'1.7.1', u'com.docker.compose.oneoff': u'False', u'com.docker.compose.container-number': '1'}, host_config={'NetworkMode': 'back', 'Links': [], 'PortBindings': {}, 'Binds': [], 'RestartPolicy': {u'MaximumRetryCount': 0, u'Name': u'always'}, 'Memory': 67108864L, 'LogConfig': {'Type': u'', 'Config': {}}, 'VolumesFrom': []}, environment=[], volumes={}, detach=True, networking_config={u'EndpointsConfig': {'back': {u'IPAMConfig': {}, u'Aliases': ['redis1']}}})
compose.parallel.parallel_execute_iter: Failed: <Service: redis1>
compose.parallel.feed_queue: Pending: set([<Service: booking>, <Service: api_locations>, <Service: web>])
compose.parallel.feed_queue: <Service: booking> has upstream errors - not processing
compose.parallel.feed_queue: <Service: api_locations> has upstream errors - not processing
compose.parallel.feed_queue: <Service: web> has upstream errors - not processing
compose.parallel.parallel_execute_iter: Failed: <Service: booking>
compose.parallel.feed_queue: Pending: set([])
compose.parallel.parallel_execute_iter: Failed: <Service: api_locations>
compose.parallel.feed_queue: Pending: set([])
compose.parallel.parallel_execute_iter: Failed: <Service: web>
compose.parallel.feed_queue: Pending: set([])
ERROR: for redis1 Error response from daemon: Conflict. The name "/bookingeng811_redis1_1" is already in use by container 5ecf77fc7bbad0548cf34c891ac4d043b2692816b63ed97744924bc1296b8e65. You have to remove (or rename) that container to be able to reuse that name.
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "compose/cli/main.py", line 63, in main
AttributeError: 'ProjectError' object has no attribute 'msg'
docker-compose returned -1
I've tried removing the container called "bookingeng811_redis1_1" manually but it doesn't exist anywhere.
Having the same problem here.
I frequently repeat the cycle:
At some point (2 - 3 days) it stops working:
docker: Error response from daemon: Conflict. The name "%name%" is already in use by container %container_id%. You have to remove (or rename) that container to be able to reuse that name..
When I try to remove the container %container_id% manually it says:
Failed to remove container (%container_id%): Error response from daemon: No such container: %container_id%
The container %container_id% is not in the docker ps -a list and not in the folder /var/lib/docker/containers.
Maybe the root of the problem is removing the container with the -f parameter, so docker doesn't clean up correctly and the docker daemon thinks that the container is still there?
Docker version output:
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 8acee1b
Built:
OS/Arch: linux/amd64
Server:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 8acee1b
Built:
OS/Arch: linux/amd64
Docker info output:
Containers: 27
Running: 13
Paused: 0
Stopped: 14
Images: 1512
Server Version: 1.10.3
Storage Driver: devicemapper
Pool Name: docker-8:9-521647-pool
Pool Blocksize: 65.54 kB
Base Device Size: 107.4 GB
Backing Filesystem: xfs
Data file: /dev/loop2
Metadata file: /dev/loop3
Data Space Used: 53.62 GB
Data Space Total: 107.4 GB
Data Space Available: 53.76 GB
Metadata Space Used: 129.9 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.018 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.93 (2015-01-30)
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: host bridge null
Kernel Version: 4.5.0-coreos-r1
Operating System: CoreOS 1010.5.0 (MoreOS)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 11.74 GiB
Name: xx-slave
ID: LVGE:QBNA:DXFP:AWR7:NAVO:LQLR:7CGF:UDOF:CTES:VZQJ:SRZJ:JLKW
Docker uses nameIndex to save references to containers. From the description, it seems that the issue is that nameIndex is out of sync with the removed containers. That is where the error is returned.
We may be able to clean up the out-of-sync nameIndex to temporarily address the issue, though docker uses several indices (e.g., linkIndex) in addition to nameIndex, so there might be several places that need cleanup. Finding where the out-of-sync happens might be a better solution in the long run.
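To make "out of sync" concrete: a name is stuck in nameIndex when creating a container with it returns a 409 Conflict even though the daemon lists no container by that name. A rough way to confirm that with docker-py (an illustrative sketch only, not an official diagnostic; NAME is a placeholder for the name in the Conflict error, and the busybox image is assumed to be present locally):

```python
# Rough diagnostic sketch (docker-py 1.x Client API): a name is "orphaned" if
# creating a container with it returns 409 Conflict even though no container
# with that name shows up in `docker ps -a`.
from docker import Client
from docker.errors import APIError

NAME = 'foobar'  # placeholder: the name reported in the Conflict error
cli = Client(version='auto')

listed_names = [n.lstrip('/')
                for c in cli.containers(all=True)
                for n in (c.get('Names') or [])]

try:
    probe = cli.create_container(name=NAME, image='busybox', command='true')
    cli.remove_container(probe, force=True)   # name was free; drop the probe
    print('name %r is free' % NAME)
except APIError as e:
    if e.response.status_code == 409 and NAME not in listed_names:
        print('409 Conflict but no container named %r is listed: '
              'the name index looks out of sync with the container list' % NAME)
    else:
        raise
```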
Is there any way to clean up out-of-sync nameIndexes?
For now the only solution I have is to reboot the node, which is not good. Restarting the docker daemon is also not good.
For me what works is to stop the docker daemon, remove everything from /var/lib/docker/*, and start docker again. It's a continuous integration server, so I can handle not having any image loaded in the docker context; that works for me, YMMV.
I am seeing the same behaviour on 1.10.3:
Containers: 105
Running: 75
Paused: 0
Stopped: 30
Images: 1434
Server Version: 1.10.3
Storage Driver: overlay
Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 4.5.0-coreos-r1
Operating System: CoreOS 1010.5.0 (MoreOS)
OSType: linux
Architecture: x86_64
We are seeing this problem every day on CoreOS and Docker 1.10.3:
# journalctl -fu docker
Aug 22 12:37:53 stateless-0.novalocal dockerd[8215]: time="2016-08-22T12:37:53.857617384+10:00" level=error msg="Handler for POST /v1.22/containers/create returned error: Conflict. The name \"/bridge-clockwork\" is already in use by container a9710d980f2935638df62e67175e28078753818a8b7e1e20bd2840d738dd58c0. You have to remove (or rename) that container to be able to reuse that name."
# docker inspect a9710d980f2935638df62e67175e28078753818a8b7e1e20bd2840d738dd58c0
Error: No such image or container: a9710d980f2935638df62e67175e28078753818a8b7e1e20bd2840d738dd58c0
# docker rm -f a9710d980f2935638df62e67175e28078753818a8b7e1e20bd2840d738dd58c0
Failed to remove container (a9710d980f2935638df62e67175e28078753818a8b7e1e20bd2840d738dd58c0): Error response from daemon: No such container: a9710d980f2935638df62e67175e28078753818a8b7e1e20bd2840d738dd58c0
In 50% of all cases, restarting the docker daemon fixes the issue. In the other cases, we have to rm -rf /var/lib/docker. Both workarounds are disruptive to the production workload.
@cdwertmann If you have to rm -rf /var/lib/docker, then that means a container exists with that name and it's getting reloaded after the daemon restarts. If you are getting the same errors when trying to remove these containers, then it'd be extremely helpful to see what's in /var/lib/docker/containers/<id>.
@cpuguy83 Here's what's inside the container directory:
# ls /var/lib/docker/containers/69d00206523a0a6a996c27d6364ec13cca7c8c1d6e615e41d9da6c675abc717a/ -lah
total 184K
drwx------. 3 root root 4.0K Aug 20 23:14 .
drwx------. 16 root root 4.0K Aug 23 14:41 ..
-rw-r-----. 1 root root 102K Aug 23 14:39 69d00206523a0a6a996c27d6364ec13cca7c8c1d6e615e41d9da6c675abc717a-json.log
-rw-r--r--. 1 root root 2.9K Aug 23 14:41 config.v2.json
-rw-r--r--. 1 root root 975 Aug 23 14:41 hostconfig.json
-rw-r--r--. 1 root root 17 Aug 20 23:14 hostname
-rw-r--r--. 1 root root 185 Aug 20 23:14 hosts
-rw-r--r--. 1 root root 45 Aug 20 23:14 resolv.conf
-rw-r--r--. 1 root root 71 Aug 20 23:14 resolv.conf.hash
drwx------. 2 root root 4.0K Aug 20 23:14 shm
In config.v2.json I can see "RemovalInProgress":true:
# cat /var/lib/docker/containers/69d00206523a0a6a996c27d6364ec13cca7c8c1d6e615e41d9da6c675abc717a/config.v2.json
{"State":{"Running":false,"Paused":false,"Restarting":false,"OOMKilled":false,"RemovalInProgress":true,"Dead":true,"Pid":0,"ExitCode":2,"Error":"","StartedAt":"2016-08-20T13:14:17.864964407Z","FinishedAt":"2016-08-23T04:41:29.775183062Z"},"ID":"69d00206523a0a6a996c27d6364ec13cca7c8c1d6e615e41d9da6c675abc717a","Created":"2016-08-20T13:13:58.579971761Z","Path":"/bin/registrator","Args":["-ip","172.16.0.102","-resync","300","consul://172.16.0.102:8500"],"Config":{"Hostname":"sphinx","Domainname":"novalocal","User":"","AttachStdin":false,"AttachStdout":true,"AttachStderr":true,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"],"Cmd":["-ip","172.16.0.102","-resync","300","consul://172.16.0.102:8500"],"Image":"registry/registrator","Volumes":null,"WorkingDir":"","Entrypoint":["/bin/registrator"],"OnBuild":null,"Labels":{},"StopSignal":"SIGTERM"},"Image":"sha256:3b59190c6c800907d7a62c245bf93888db802b00407002fff7e08fed24e5557e","NetworkSettings":{"Bridge":"","SandboxID":"7713b13649c7964520180342f99914dd4720833ed39a51793ed483c356e0bd85","HairpinMode":false,"LinkLocalIPv6Address":"","LinkLocalIPv6PrefixLen":0,"Networks":{"bridge":{"IPAMConfig":null,"Links":null,"Aliases":null,"NetworkID":"5c0baa715bb76ea2eb5a6a32deb36a8093391ba6c76e55f31768838560c10f22","EndpointID":"","Gateway":"","IPAddress":"","IPPrefixLen":0,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":""}},"Ports":null,"SandboxKey":"/var/run/docker/netns/7713b13649c7","SecondaryIPAddresses":null,"SecondaryIPv6Addresses":null,"IsAnonymousEndpoint":false},"LogPath":"/var/lib/docker/containers/69d00206523a0a6a996c27d6364ec13cca7c8c1d6e615e41d9da6c675abc717a/69d00206523a0a6a996c27d6364ec13cca7c8c1d6e615e41d9da6c675abc717a-json.log","Name":"/registrator","Driver":"overlay","MountLabel":"system_u:object_r:svirt_lxc_file_t:s0:c631,c718","ProcessLabel":"system_u:system_r:svirt_lxc_net_t:s0:c631,c718","RestartCount":0,"HasBeenStartedBefore":true,"HasBeenManuallyStopped":false,"MountPoints":{"/etc/localtime":{"Source":"/etc/localtime","Destination":"/etc/localtime","RW":false,"Name":"","Driver":"","Relabel":"ro","Propagation":"rprivate","Named":false},"/tmp/docker.sock":{"Source":"/var/run/docker.sock","Destination":"/tmp/docker.sock","RW":true,"Name":"","Driver":"","Relabel":"","Propagation":"rprivate","Named":false}},"AppArmorProfile":"","HostnamePath":"/var/lib/docker/containers/69d00206523a0a6a996c27d6364ec13cca7c8c1d6e615e41d9da6c675abc717a/hostname","HostsPath":"/var/lib/docker/containers/69d00206523a0a6a996c27d6364ec13cca7c8c1d6e615e41d9da6c675abc717a/hosts","ShmPath":"/var/lib/docker/containers/69d00206523a0a6a996c27d6364ec13cca7c8c1d6e615e41d9da6c675abc717a/shm","ResolvConfPath":"/var/lib/docker/containers/69d00206523a0a6a996c27d6364ec13cca7c8c1d6e615e41d9da6c675abc717a/resolv.conf","SeccompProfile":""}
After manually deleting /var/lib/docker/containers/69d00206523a0a6a996c27d6364ec13cca7c8c1d6e615e41d9da6c675abc717a/ and restarting the docker daemon, the conflict was resolved.
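If you want to check whether other container directories are stuck in the same state before deleting anything, something along these lines will list them (a rough sketch only; run as root, and adjust DOCKER_ROOT if the daemon uses a non-default Docker Root Dir):

```python
# Rough sketch: list container directories whose config.v2.json still carries
# "RemovalInProgress": true. Run as root; adjust DOCKER_ROOT if the daemon
# uses a different Docker Root Dir (e.g. /mnt/docker-data).
import glob
import json
import os

DOCKER_ROOT = '/var/lib/docker'

pattern = os.path.join(DOCKER_ROOT, 'containers', '*', 'config.v2.json')
for path in glob.glob(pattern):
    try:
        with open(path) as f:
            cfg = json.load(f)
    except (IOError, ValueError):
        continue                      # unreadable or truncated config; skip
    state = cfg.get('State', {})
    if state.get('RemovalInProgress'):
        print('%s  name=%s  dead=%s' % (
            os.path.dirname(path), cfg.get('Name'), state.get('Dead')))
```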
Seeing the same here:
docker -v
Docker version 1.10.3, build 3cd164c
docker-compose -v
docker-compose version 1.8.0, build f3628c7
cat /etc/os-release
NAME=CoreOS
ID=coreos
VERSION=1068.10.0
VERSION_ID=1068.10.0
BUILD_ID=2016-08-23-0220
PRETTY_NAME="CoreOS 1068.10.0 (MoreOS)"
ANSI_COLOR="1;32"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://github.com/coreos/bugs/issues"
And this is how I start/stop/restart my containers:
cat /etc/systemd/system/u\@.service
[Unit]
Description=%p-%i
# Requirements
Requires=docker.service
# Dependency ordering
After=docker.service
[Service]
Restart=always
RestartSec=10
TimeoutStartSec=60
TimeoutStopSec=15
EnvironmentFile=-/data/domains/%i/env
WorkingDirectory=/data/domains/%i/
ExecStartPre=-/opt/bin/docker-compose rm -f
ExecStart=/bin/bash -euxc "VIRTUAL_HOST=%i /opt/bin/docker-compose up"
ExecStop=/opt/bin/docker-compose stop
[Install]
WantedBy=multi-user.target
I got the same error, and then there was nothing under docker ps -a, but there was a folder under /var/lib/docker/containers with the container hash. I removed it, still no luck. I restarted the docker daemon, and it worked.
This workaround for https://github.com/docker/compose/issues/3277#issuecomment-238080180 also fixes this issue...
@marcelmfs not for me. I have to delete the entire /var/lib/docker.
Weird, for me it just worked. I'll try one more time to be sure.
@marcelmfs so you just deleted docker/network/files/local-kv.db?
Not only that; I also removed all running containers with docker rm -f $(docker ps -aq), and maybe all networks, since it also removes network/files/local-kv.db.
I have not seen this issue since upgrading to docker 1.12
Is anybody else still seeing this with 1.12.x?
I still need to upgrade to check... I'll allocate a window for upgrade tomorrow.
Our CI server is upgraded and we removed the workaround that was deleting the local-kv.db file. Next week I'll have more news on this.
Same here: had the issue in 1.11.x but not anymore since 1.12.x
Yeah, noticed no one was complaining about this in 1.12.
Wonder what we changed, I'm certain nothing directly related to naming.
tl;dr: all versions >= 1.10.0 are affected, but in >= 1.12.0 it's much less likely to happen.

I traced this issue in the code and it can definitely happen on all versions >= 1.10.0, which is where the nameIndex structure was introduced. As @yongtang mentioned, this structure becomes out of sync with the removed containers.

The error happens whenever nameIndex becomes out of sync with daemon.containers.

The problem lies in the Daemon.create() function. nameIndex is updated in line 64 by daemon.newContainer(), but daemon.containers is updated much later, in line 149, by daemon.Register().

If anything fails between these two, docker is in an inconsistent state. Before commit https://github.com/docker/docker/commit/114be249f022535f0800bd45987c4e9cd1b321a4 (landed in 1.12.0), that was all that was needed to trigger the issue. That commit changed the cleanup function from docker.ContainerRm, which never works in this case because it needs the container to be registered, to docker.cleanupContainer.

However, docker.cleanupContainer can fail before it manages to clean up. It only deletes entries from the nameIndex at line 113, but there are plenty of things that can go wrong before that.

All of the above explains the case where a simple daemon restart fixes the issue, because nameIndex is not persisted on disk. I've banged my head against the code to try and figure out how this bug could survive restarts, but I can't see how. We've definitely seen it in production though, so currently I'm waiting for it to happen again to investigate further.
I fixed the in-memory version of the issue in #27956
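For readers who don't want to dig through the Go source, here is a stripped-down model of the failure window described above. This is not Docker's actual code, just the shape of the bug: the name is reserved in one structure, the container is registered in another, and any error in between leaves the name reserved with nothing behind it.

```python
# Simplified model of the failure window. NOT Docker's actual code, only an
# illustration of how two structures that must stay in sync can diverge.
class FakeDaemon(object):
    def __init__(self):
        self.name_index = {}   # name -> container id   (cf. nameIndex)
        self.containers = {}   # container id -> object (cf. daemon.containers)

    def create(self, name, container_id, fail_in_between=False):
        if name in self.name_index:
            raise Exception('Conflict. The name "%s" is already in use by '
                            'container %s.' % (name, self.name_index[name]))
        self.name_index[name] = container_id       # step 1: reserve the name
        if fail_in_between:
            # e.g. a bad security option or a full disk; before 1.12 the
            # cleanup path could not release the name because the container
            # was never registered
            raise Exception('device or resource busy')
        self.containers[container_id] = object()   # step 2: register container

d = FakeDaemon()
for attempt in range(2):
    try:
        # only the first attempt hits the failure window; the second fails
        # with the Conflict error even though no container was ever registered
        d.create('my-redacted-data-container', 'ecb293bb1fad',
                 fail_in_between=(attempt == 0))
    except Exception as e:
        print(e)
```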
This issue just popped up for me before updating to the latest (1.12.3). I uninstalled docker and reinstalled, and am unfortunately still seeing it.
Output of `docker version`:
Client:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built: Wed Oct 26 23:26:11 2016
OS/Arch: windows/amd64
Server:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built: Wed Oct 26 23:26:11 2016
OS/Arch: linux/amd64
Output of `docker info`:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 1
Server Version: 1.12.3
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 11
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.27-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.919 GiB
Name: moby
ID: XZHZ:262M:ENKG:Z62J:U4OX:FVKN:CGZW:7OCZ:IU5R:D7OM:F3MT:K3ND
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 12
Goroutines: 22
System Time: 2016-11-09T01:01:32.4577814Z
EventsListeners: 0
Registry: https://index.docker.io/v1/
WARNING: No kernel memory limit support
Insecure Registries:
127.0.0.0/8
My workflow is a bit different than what has been mentioned in this thread, but it is similar in that I am doing lots of setup and teardown of containers in my testing suite. It might also be of interest that this is being done through requests to the Remote API.
I'm a bit at a loss as to how to proceed. If requested, I can certainly prepare a test case of my issue, but as of now it is part of a larger project at work so I'll need to cut things down.
Do y'all have any suggestions?
@davidglivar You restarted the daemon and are still seeing the error?
@cpuguy83 if by restarting the daemon, you mean stopping/starting the docker for windows app, yes. I have also reinstalled docker, as well as doing a 'factory' reset. I have not touched Hyper-V as I am not confident in its inner workings.
@davidglivar So you are seeing this?
@cpuguy83 yep! I just went through that sequence a couple of times to be sure.
@davidglivar Can you run docker ps -a and see if you see the container there?
@cpuguy83 docker ps -a yields no containers. I would say it's because of my test teardown and prep, but even when catching the error in my tests and immediately creating a child process of docker ps -a, the result is the same.
Just to follow up on the previous day's comments: I still encountered the 409 error in the context of my application; however, a test script (here) has yet to display any problem.
I created a reliable way of reproducing this. You can use the following python script to make any container name conflict:
# pip install docker-py
from docker import Client
NAME = 'foobar'
cli = Client(version='auto')
# Create an invalid security option that will cause an error in
# https://github.com/docker/docker/blob/v1.10.3/daemon/create.go#L82
host_config = cli.create_host_config(security_opt=['invalid_opt'])
# After this, NAME will always conflict until the daemon gets restarted
try:
    cli.create_container(name=NAME, host_config=host_config, image='', command='/')
except:
    pass
This problem can also be triggered in one of the following conditions, which explains some of the cases where wiping /var/lib/docker was needed:
- /var/lib/docker is out of inodes
- /var/lib/docker is out of space
- /var/lib/docker/<storage-driver> is read-only

The fix is to update to docker >= 1.12.0
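If upgrading isn't immediately possible, a quick way to check for the disk-level triggers listed above (a minimal sketch; adjust DOCKER_ROOT for a non-default Docker Root Dir, and note that the writability check only approximates the read-only condition):

```python
# Quick check of the disk-level trigger conditions for the Docker root dir.
# Adjust DOCKER_ROOT if the daemon uses a non-default Docker Root Dir
# (e.g. /mnt/docker-data earlier in this thread).
import os

DOCKER_ROOT = '/var/lib/docker'

st = os.statvfs(DOCKER_ROOT)
print('free space : %.1f GiB' % (st.f_bavail * st.f_frsize / (1024.0 ** 3)))
print('free inodes: %d' % st.f_favail)
print('writable   : %s' % os.access(DOCKER_ROOT, os.W_OK))
```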
Sorry for the late comeback on this issue.
So far, since removing the workaround, our CI server has not suffered from this problem any longer.
Client:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built:
OS/Arch: linux/amd64
Server:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built:
OS/Arch: linux/amd64
Also experiencing this with:
CentOS 7.2
Docker 1.12.1
There's no folder with the specified hash under /var/lib/docker/containers, and restarting the daemon had no effect.
@orodbhen If restarting the daemon didn't work, then there must be a container loaded with that name. Can you check docker ps -a?
@cpuguy83 No, there's no container with that name.
I actually think this may be an issue with docker-py. I wonder how many people here are using it. It appears that @petrosagg is.
It happens when calling create_container() even if the offending container name isn't used. But I have no issue with the docker shell command, using docker create or docker run.
Strange, though, because it seems to be printing the error message produced by the daemon.
@petrosagg do you have the same problem using the docker shell command instead of docker-py?
@orodbhen Are you sure your docker-py instance is talking to the same daemon as the CLI?
There's only one daemon running: both are using /var/run/docker.sock.
I've created an issue for docker-py. But I'm not convinced yet that there's not some underlying issue with docker causing the problem.
@orodbhen When you restart the daemon, can you grab the logs from the loading sequence (specifically loading containers)?
This can't be a ref-counting issue if you've restarted the daemon. The name registrar is held only in memory and is rebuilt on daemon restart.
Sorry, please disregard. It was a problem with the way I was logging errors that made it seem like the error was reoccurring.
@orodbhen I'm not using docker-py, I only used it to create a small reproducible testcase. The reason it doesn't happen with the docker CLI is because the client sanitises the input before passing it to the server, but I wanted to have direct access to the server and cause the critical section to fail.
Delete the service running in the background:
docker service rm service_name
Then check docker info; it shows Containers: 0.
removed, reposted on #3277
I was also facing the same issue, with the following errors:
x Start Mongo: FAILED
-----------------------------------STDERR-----------------------------------
Error response from daemon: Cannot update container 78dc6f6a43d0e6cfb7aa6bba2f0a377bd39620bff79ca308540a13ddd4e62886: container is marked for removal and cannot be "update"
Error response from daemon: removal of container mongodb is already in progress
docker: Error response from daemon: Conflict. The container name "/mongodb" is already in use by container "78dc6f6a43d0e6cfb7aa6bba2f0a377bd39620bff79ca308540a13ddd4e62886". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
-----------------------------------STDOUT-----------------------------------
3.4.1: Pulling from library/mongo
Digest: sha256:aff0c497cff4f116583b99b21775a8844a17bcf5c69f7f3f6028013bf0d6c00c
Status: Image is up to date for mongo:3.4.1
no such container
Running mongo:3.4.1
I just ran the command: sudo service docker restart
And everything is working fine now.
I was also facing this issue with the following errors:
docker-compose up -d --no-build api
Creating api ...
Creating api ... error
ERROR: for api Cannot create container for service api: Conflict. The name "/api" is already in use by container 2788cdc091645f0dcef417f189f9c80fddd3f6f99eaba3771d0f4a87e2295841. You have to remove (or rename) that container to be able to reuse that name.
ERROR: for api Cannot create container for service api: Conflict. The name "/api" is already in use by container 2788cdc091645f0dcef417f189f9c80fddd3f6f99eaba3771d0f4a87e2295841. You have to remove (or rename) that container to be able to reuse that name.
ERROR: Encountered errors while bringing up the project.
It turns out the directory where the compose file is located got renamed between the time the existing container was run and when I tried to rerun the container. I checked by running the following:
docker inspect api | grep -i compose
"com.docker.compose.config-hash": "c0e3e88ad502faf806288e16419dc52b113cae18abeac1769fa0e98a741de48a",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "api",
"com.docker.compose.service": "api",
"com.docker.compose.version": "1.14.0"
I noticed the project label was set to api, but the current directory where I ran this was actually api.git, so it seems it got renamed sometime between my last run and now. I simply renamed the directory back to api, brought the container up again (without removing the existing container or restarting docker), and everything is working as expected.
We have many containers running so restarting docker was not an optimal solution.
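For reference, the same label check done with docker-py instead of docker inspect piped to grep (a rough sketch, assuming the docker-py 1.x Client API; api is the container name from the output above):

```python
# Rough docker-py equivalent of `docker inspect api | grep -i compose`:
# print the com.docker.compose.* labels; the `project` label has to match the
# project name docker-compose derives from the current directory at run time.
from docker import Client

cli = Client(version='auto')
labels = cli.inspect_container('api')['Config']['Labels'] or {}
for key in sorted(labels):
    if key.startswith('com.docker.compose.'):
        print('%s=%s' % (key, labels[key]))
```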
Run docker container prune to delete stopped containers.
I had to force-remove the container: docker rm -f /<container_name>