I just implemented a watchdog utility for our aiohttp servers that detects when the main thread hangs for more than 5 seconds, and it caught the call stack below. The response in question returned a large JSON object with compression enabled. Based on the call stack, I think compression should be done in a background thread; otherwise the web server can be blocked for several seconds and fail its health checks (note that in AWS the default health-check timeout is 5 s, so if any operation takes longer than that your server may get recycled).
Aug 18 00:08:25.467 WeatherAPIInternalService:
  File "/usr/local/fbn/fbn.com/api/weather/services/weatherapi/service.py", line 1430 in <module>
  File "/usr/local/fbn/fbn.com/api/weather/services/weatherapi/service.py", line 1426 in start_app
  File "/usr/local/lib/python3.6/site-packages/fbn_async_server/entrypoint.py", line 237 in run_server
  File "/usr/local/lib/python3.6/asyncio/base_events.py", line 422 in run_forever
  File "/usr/local/lib/python3.6/asyncio/base_events.py", line 1434 in _run_once
  File "/usr/local/lib/python3.6/asyncio/events.py", line 145 in _run
  File "/usr/local/lib/python3.6/site-packages/fbn_async_server/entrypoint.py", line 44 in start
  File "/usr/local/lib/python3.6/site-packages/aiohttp/web_protocol.py", line 398 in start
  File "/usr/local/lib/python3.6/site-packages/aiohttp/web_response.py", line 300 in prepare
  File "/usr/local/lib/python3.6/site-packages/aiohttp/web_response.py", line 605 in _start
  File "/usr/local/lib/python3.6/site-packages/aiohttp/web_response.py", line 329 in _start
  File "/usr/local/lib/python3.6/site-packages/aiohttp/web_response.py", line 290 in _start_compression
  File "/usr/local/lib/python3.6/site-packages/aiohttp/web_response.py", line 616 in _do_start_compression
Probably use `run_in_executor`?
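A minimal sketch of what I have in mind (this is not aiohttp's internal API; the helper names and the dedicated executor are my own assumptions): do the actual zlib work in a thread pool and only `await` the result on the event loop.

```python
import asyncio
import zlib
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: offload gzip compression of a large response body
# to a thread pool so the event loop is not blocked for seconds.
_executor = ThreadPoolExecutor(max_workers=2)

def _gzip_compress(body: bytes, level: int = 6) -> bytes:
    # wbits = 16 + MAX_WBITS makes zlib emit a gzip-framed stream
    compressor = zlib.compressobj(level, zlib.DEFLATED, 16 + zlib.MAX_WBITS)
    return compressor.compress(body) + compressor.flush()

async def compress_in_executor(body: bytes) -> bytes:
    loop = asyncio.get_event_loop()
    # CPU-bound work runs in the thread pool; the coroutine just awaits it
    return await loop.run_in_executor(_executor, _gzip_compress, body)
```

The event loop only pays the cost of scheduling the job and collecting the result, so other requests (including health checks) keep being served while the compression runs.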
Here's another problem:
Aug 18 00:08:13.717 WeatherAPIInternalService:
  File "/usr/local/fbn/fbn.com/api/weather/services/weatherapi/service.py", line 734 in get_weather_for_locations
  File "/usr/local/fbn/fbn.com/api/weather/python/fbn/asyncio/aiohttp_utils.py", line 25 in json_response
  File "/usr/local/lib/python3.6/site-packages/aiohttp/web_response.py", line 632 in json_response
  File "/usr/local/lib/python3.6/site-packages/simplejson/__init__.py", line 399 in dumps
  File "/usr/local/lib/python3.6/site-packages/simplejson/encoder.py", line 296 in encode
  File "/usr/local/lib/python3.6/site-packages/simplejson/encoder.py", line 378 in iterencode
This is another place where the `dumps` call could be run in an executor; alternatively, an async variant could be exposed so the caller can decide to run it in an executor themselves.
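As a rough illustration of the second option (the helper name is hypothetical, not an aiohttp API): serialize off the event loop, then build the response from the resulting text.

```python
import asyncio
import json
from functools import partial

# Hypothetical sketch: run json.dumps in the default thread-pool executor
# so serializing a multi-megabyte payload does not stall the event loop.
async def dumps_in_executor(data) -> str:
    loop = asyncio.get_event_loop()
    # None selects the loop's default ThreadPoolExecutor
    return await loop.run_in_executor(None, partial(json.dumps, data))
```

A handler would then `text = await dumps_in_executor(payload)` and pass `text` to the response constructor instead of calling a synchronous `json_response` helper.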
Thanks for the report.
I expected something like this from the very beginning.
Switching all compression/encoding to a thread pool can slow down simple cases.
As an option, we can use an executor for large data and fall back to a direct call for small payloads, say a 1 KB buffer.
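The size-based heuristic above could look something like this sketch (the threshold constant and function names are assumptions for illustration, not aiohttp's implementation):

```python
import asyncio
import zlib

# Assumed cutoff from the suggestion above: bodies at or below ~1 KB are
# compressed inline; larger ones are offloaded to the executor.
LARGE_BODY_THRESHOLD = 1024  # bytes

def _gzip(body: bytes) -> bytes:
    c = zlib.compressobj(6, zlib.DEFLATED, 16 + zlib.MAX_WBITS)
    return c.compress(body) + c.flush()

async def maybe_compress(body: bytes) -> bytes:
    if len(body) <= LARGE_BODY_THRESHOLD:
        # Small payload: compressing inline is cheaper than a thread hop
        return _gzip(body)
    loop = asyncio.get_event_loop()
    # Large payload: do the CPU-bound work off the event loop
    return await loop.run_in_executor(None, _gzip, body)
```

This keeps the fast path for small responses (no executor round-trip overhead) while bounding how long the loop can stall on large ones.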
I'm going to code something up and will submit a PR, since this is high priority for us.
OK, I have a first stab at this; let me know what you think.