I get this error message intermittently:
ERROR: for testdb-data UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
We run around 20 testing jobs that execute docker-compose up on around 14 Jenkins agents. There is a weak correlation between running many jobs at the same time and getting this error.
I have the output from docker-compose --verbose up but cannot pick out the relevant parts of it yet. Some potentially useful excerpts:
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=rosetta', u'com.docker.compose.service=testdb-data', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
...
compose.project._get_convergence_plans: other-container has upstream changes (testdb-data, some-container)
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=some-project', u'com.docker.compose.service=other-container', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items)
...
(more similar lines)
...
compose.parallel.feed_queue: Starting producer thread for <Service: testdb-data>
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=rosetta', u'com.docker.compose.service=testdb-data', u'com.docker.compose.oneoff=False']})
...
compose.service.create_container: Creating testdb-data
compose.cli.verbose_proxy.proxy_callable: docker create_container <- (name='testdb-data', image='docker-registry.bln.int.planetromeo.com:5000/pr-testdb:master_master', labels={u'com.docker.compose.service': u'testdb-data', u'com.docker.compose.project': u'rosetta', u'com.docker.compose.config-hash': 'c336deb9e460cd8f979029d54975bc936fee5ff573d9698c65ca479c6a7ed507', u'com.docker.compose.version': u'1.10.1', u'com.docker.compose.oneoff': u'False', u'com.docker.compose.container-number': '1'}, host_config={'NetworkMode': u'rosetta_default', 'Links': [], u'Isolation': None, 'PortBindings': {}, 'Binds': [], 'LogConfig': {'Type': u'', 'Config': {}}, 'VolumesFrom': []}, environment=[], entrypoint=['tail', '-f', '/dev/null'], volumes={u'/var/lib/mysql': {}, u'/var/www/dynamic/caches': {}, u'/var/www/pics': {}, u'/data/elastic-profilesearch': {}, u'/var/www/files/lib/_test': {}, u'/data/elastic-activitystream': {}, u'/var/www/dynamic/world': {}}, detach=True, networking_config={u'EndpointsConfig': {u'rosetta_default': {u'IPAMConfig': {}, u'Aliases': ['testdb-data']}}})
...
compose.parallel.parallel_execute_iter: Failed: <Service: testdb-data>
compose.parallel.feed_queue: Pending: set([<Service: service1>, <Service: service2>, <Service: service3>, ...)
compose.parallel.feed_queue: <Service: service1> has upstream errors - not processing
compose.parallel.feed_queue: <Service: service2> has upstream errors - not processing
compose.parallel.feed_queue: <Service: service3> has upstream errors - not processing
...
Can I help by providing more details?
Compose, like the Docker CLI, connects to the Docker daemon over a Unix socket. It's quite possible that under high load the engine takes an overly long time to respond to requests, causing this error to appear. As the error message suggests, try playing around with the COMPOSE_HTTP_TIMEOUT value and see whether stability improves as a result.
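As a minimal sketch of that suggestion, assuming the jobs are plain shell steps on the Jenkins agents (300 seconds is an arbitrary example value, not a recommendation):
# Raise the client-side HTTP timeout before any compose commands run
export COMPOSE_HTTP_TIMEOUT=300
docker-compose up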
I get a similar error when running docker-compose up:
Exception in thread Thread-5:
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 804, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.7/threading.py", line 757, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/vernica/.local/lib/python2.7/site-packages/compose/cli/log_printer.py", line 197, in watch_events
for event in event_stream:
File "/home/vernica/.local/lib/python2.7/site-packages/compose/project.py", line 356, in events
decode=True
File "/home/vernica/.local/lib/python2.7/site-packages/docker/api/client.py", line 290, in _stream_helper
for chunk in json_stream(self._stream_helper(response, False)):
File "/home/vernica/.local/lib/python2.7/site-packages/docker/utils/json_stream.py", line 66, in split_buffer
for data in stream_as_text(stream):
File "/home/vernica/.local/lib/python2.7/site-packages/docker/utils/json_stream.py", line 22, in stream_as_text
for data in stream:
File "/home/vernica/.local/lib/python2.7/site-packages/docker/api/client.py", line 296, in _stream_helper
data = reader.read(1)
File "/home/vernica/.local/lib/python2.7/site-packages/requests/packages/urllib3/response.py", line 324, in read
flush_decoder = True
File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
File "/home/vernica/.local/lib/python2.7/site-packages/requests/packages/urllib3/response.py", line 237, in _error_catcher
raise ReadTimeoutError(self._pool, None, 'Read timed out.')
ReadTimeoutError: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out.
There is very little load on the docker daemon.
Docker version 17.03.0-ce
docker-compose version 1.11.2, build dfed245
I'm seeing this as well. I've tried increasing COMPOSE_HTTP_TIMEOUT to 120 when building images and running containers. Where is that setting actually used? That is, where does it need to be specified: at build time or at runtime? What should I try next?
Is anyone looking at this? It seems numerous people are seeing it and the suggested workarounds are nominally effective (at best).
Docker version 17.03.1-ce, build c6d412e
docker-compose version 1.11.2, build dfed245
On our side, we resorted to not overloading our servers, i.e. running less stuff at the same time. For us it made sense since we would rather have e.g. a few 5-minute jobs than many more 10-minute jobs (numbers made up).
You might consider running less stuff, or getting better hardware :)
(Unless indeed this is a docker bug)
On my (and my team's) side, I think I've no more to add. Unless others would like to add input here, shall I close this issue?
@Carracer66 COMPOSE_HTTP_TIMEOUT is an environment variable, not a command-line parameter, e.g.:
COMPOSE_HTTP_TIMEOUT=120 docker-compose up
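If you'd rather not prefix every invocation, Compose should also pick this variable up from a .env file in the project directory (a sketch; the value is illustrative):
# .env file next to docker-compose.yml
COMPOSE_HTTP_TIMEOUT=120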
I have tried using the COMPOSE_HTTP_TIMEOUT as suggested by @jleeothon to no avail. Also, I don't think my issue is related to overloading the server as I receive this exception even when starting only one container.
cat /proc/cpuinfo
...
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz
stepping : 2
microcode : 0x35
cpu MHz : 2293.262
cache size : 35840 KB
...
cat /proc/meminfo
MemTotal: 8010824 kB
MemFree: 1305224 kB
MemAvailable: 5465344 kB
...
I'm not sure what else to try. Is there additional info I could provide?
+1, experiencing this issue too, even under low workloads when running multiple KVM VMs with a single Docker container inside each.
sudo service docker restart
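On systemd-based hosts the equivalent would be:
sudo systemctl restart docker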
@jinghuaz I appreciate the response, but even for a production-like environment, triggering a service restart every few minutes isn't a solution.
I somehow think it has to do with high filesystem latency or low throughput, because I experience it more often on systems with slow filesystems.
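One way to check that theory is to watch disk latency and utilization while the compose jobs run, e.g. with iostat from the sysstat package (a sketch; the 5-second interval is arbitrary):
# Extended per-device stats every 5 seconds; watch await/%util while jobs run
iostat -x 5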
This issue and error while pulling official images from dockerhub has practically made Docker for MacOS unusable. For me, even docker restart doesn't work.
We are seeing this as well in our Packer deployment scenario:
https://github.com/docker/compose/issues/1045#issuecomment-374413927
We tried upping the timeout without success; if anything, this should fail faster :/
@brainstorm We "removed" that issue by a) keeping the number of concurrent docker run/docker-compose up (start) commands low (e.g. 1 at a time), which may be hard to orchestrate, or b) making the filesystem much faster, and therefore the boot time lower.
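For option a), one sketch of serializing invocations on a shared host is to wrap them in flock (the lock file path is arbitrary):
# Only one docker-compose up runs at a time on this host; other calls wait for the lock
flock /tmp/compose.lock docker-compose up -d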
+1, I am also facing this issue, and neither the timeout nor the docker-restart solution helped.
Is a final fix available for this?
I too have this issue.
Any solution to this issue yet?
Currently I am using the docker-compose command with large timeouts (10 minutes) set, and it is working for me. Below is the command I am using:
DOCKER_CLIENT_TIMEOUT=600 COMPOSE_HTTP_TIMEOUT=600 docker-compose up
I'm getting:
ERROR: for X UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
a lot when running multiple single-run services (around 20) while docker-compose is creating/recreating them.
I don't believe the hardware is the issue, with 18 cores and 128 GB of RAM, and it's connecting to localhost. Maybe a slow HDD?
Specs:
Operating System | Ubuntu 18.04.1 LTS 4.15.0-34-generic x86_64
Model | Punch technology 646-716-44
Processor | Intel Core i9-7980XE @ 4.40 GHz, 1 processor, 18 cores, 36 threads
Processor ID | GenuineIntel Family 6 Model 85 Stepping 4
L1 Instruction Cache | 32 KB x 18
L1 Data Cache | 32 KB x 18
L2 Cache | 1024 KB x 18
L3 Cache | 25344 KB
Motherboard | ASUSTeK COMPUTER INC. ROG RAMPAGE VI EXTREME
BIOS | Apple Inc. 251.0.0.0.0 | American Megatrends Inc. 1401
Memory | 16384 MB 2133 MHz LPDDR3 | 128606 MB
I've increased the timeouts with DOCKER_CLIENT_TIMEOUT=600 COMPOSE_HTTP_TIMEOUT=600 as suggested, and it seems to work.
Docker has its own limits. If you go into Preferences > Advanced you can raise them: I had to turn Memory up from 2 GB to 6 GB, and this solved my issue running 1 hub and 6 nodes. Some of us are getting timeouts because the hub or a node is dying, and in that case increasing the timeouts doesn't help. Others are probably hitting the same error in situations where it can be fixed with an environment variable.
We are seeing this error as well, but we also get it along with this warning: WARNING: Connection pool is full, discarding connection: localhost.
In our case, we're trying to use the --scale option. What I believe is the problem is that the Python client library has a default connection pool size of 10, which matches what we see when we try to launch more than 10 containers: only the first 10 actually get work done. See https://stackoverflow.com/a/55253192
I also found a workaround here, but it's less than ideal, and it seems to me that one ought to be able to set the connection pool size from docker-compose somehow.
UPDATE:
We were able to get more than 10 containers going, but had to set both the COMPOSE_TIMEOUT and DOCKER_COMPOSE_TIMEOUT variables to 360s. We'd still see warnings, but no longer had containers die.
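If the pool size really is fixed at 10 and can't be changed from docker-compose itself, another workaround we could imagine is scaling up in batches of at most 10, so each invocation creates no more than 10 new containers (a sketch; "worker" is a placeholder service name and 30 an arbitrary target):
# Scale up in increments of 10; service name and final count are placeholders
for n in 10 20 30; do
  docker-compose up -d --scale worker="$n"
done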
Have you looked at your network settings? Maybe you have too many rules in iptables and they are blocking your requests?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity during the stale period.
This error message will also be returned when com.docker.osxfs is leaking memory, which happens when you symlink a mounted volume drive to a folder that does not exist.
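A quick way to check for that situation is to look for symlinks under the bind-mounted path whose targets no longer exist (a sketch; the path is a placeholder):
# Print symlinks whose targets do not exist; works with both GNU and BSD find
find /path/to/mounted/volume -type l ! -exec test -e {} \; -print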
A YAML error can also cause this:
logging:
  options:
    max-size: "10m"
    max-file: 5
you will get:
ERROR: for xxx UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
To fix it, quote the value:
logging:
  options:
    max-size: "10m"
    max-file: "5"
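Running docker-compose config validates and prints the resolved file, which may surface this kind of mistake before up is even attempted (whether this particular integer-vs-string case is flagged depends on the Compose version):
# Validate the compose file and print the resolved configuration
docker-compose config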
Docker has its own limits. If you go into Preferences > Advanced you can raise them: I had to turn Memory up from 2 GB to 6 GB, and this solved my issue running 1 hub and 6 nodes. Some of us are getting timeouts because the hub or a node is dying, and in that case increasing the timeouts doesn't help. Others are probably hitting the same error in situations where it can be fixed with an environment variable.
This solved my issue.