I am facing an issue with some of the requests: Sanic returns an error when we get quite a few simultaneous requests. The error output is below:
web_1 | 2017-10-03 09:24:49 - (network)[INFO][172.17.0.1:55372]: GET http://localhost:8000/api/order_items/123456 200 38
web_1 | 2017-10-03 09:24:50 - (network)[INFO][172.17.0.1:55382]: GET http://localhost:8000/api/order_items/123456 200 38
web_1 | 2017-10-03 09:24:55 - (network)[INFO][172.17.0.1:55392]: GET http://localhost:8000/api/order_items/123456 200 38
web_1 | 2017-10-03 09:24:56 - (sanic)[ERROR]: Connection lost before response 2343 written @ ('172.17.0.1', 55402)
web_1 | 2017-10-03 09:24:56 - (network)[INFO][172.17.0.1:55412]: GET http://localhost:8000/api/order_items/123456 200 38
web_1 | 2017-10-03 09:24:57 - (sanic)[ERROR]: Connection lost before response 2343 written @ ('172.17.0.1', 55424)
web_1 | 2017-10-03 09:24:57 - (network)[INFO][172.17.0.1:55430]: GET http://localhost:8000/api/order_items/123456 200 38
As per my understanding, the HTTP connection is closing before Sanic can write to it, which is fine, but I should be able to override the behaviour and hide the error if I wish to. That is what I need help with.
Are you using sanic@master, or the 0.6.0 release?
I ask because there were recently some changes to the TCP backend server connection handling logic, and this may be a regression.
Otherwise, if you are on 0.6.0, perhaps you could try sanic@master to see if the behavior is better there.
I have the same issue on the sanic 0.6.0 PyPI version, and on the sanic@master git version too.
2017-10-06 07:31:07 - (sanic)[ERROR]: Connection lost before response written @ ('172.21.0.1', 51732)
2017-10-06 07:31:07 - (sanic)[ERROR]: Connection lost before response written @ ('172.21.0.1', 51730)
2017-10-06 07:31:07 - (sanic)[ERROR]: Connection lost before response written @ ('172.21.0.1', 51728)
2017-10-06 07:31:07 - (sanic)[ERROR]: Connection lost before response written @ ('172.21.0.1', 51726)
2017-10-06 07:31:07 - (sanic)[ERROR]: Connection lost before response written @ ('172.21.0.1', 51724)
Can you post some snippets of your code, or steps so we can repro?
Simplest example to reproduce this issue:
# app.py
import asyncio

from sanic import Sanic
from sanic.response import text

app = Sanic()
app.config.KEEP_ALIVE = False

@app.route("/")
async def test(request):
    await asyncio.sleep(1)  # imitate slow response
    return text('OK')

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
Run the app and refresh localhost:8000/ two times.
The error occurs when the second request is sent before the first page view has received its response.
$ python3.6 app.py
[2017-10-06 09:02:02 +0000] [7] [INFO] Goin' Fast @ http://0.0.0.0:8000
[2017-10-06 09:02:02 +0000] [7] [INFO] Starting worker [7]
[2017-10-06 09:02:05 +0000] [7] [ERROR] Connection lost before response written @ ('172.21.0.1', 47000)
[2017-10-06 09:02:05 +0000] - (sanic.access)[INFO][172.21.0.1:47004]: GET http://localhost:8000/ 200 2
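For a scriptable repro without a browser, a client that drops its connection mid-request also triggers the error. This is just a minimal sketch, assuming the app.py above is running on localhost:8000 (open_and_drop is an illustrative helper name, not part of anyone's posted code):

# repro_client.py
import socket
import time

def open_and_drop(host="127.0.0.1", port=8000):
    # send a complete GET request to the slow handler...
    s = socket.create_connection((host, port))
    s.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    time.sleep(0.2)  # well under the handler's 1s sleep
    s.close()  # ...then hang up before the response is written, like a browser refresh

if __name__ == "__main__":
    open_and_drop()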
My situation is very similar to @samael500's. I have tried different approaches to pin down exactly when the issue happens. I am using aiomysql, and the issue happens for requests that retrieve data from the database via aiomysql. I tried making requests sleep using time.sleep(n), but that did not cause the issue.
If you need to see my implementation as well, I can provide a working example to reproduce it.
With a blocking time.sleep call, both requests complete fine (the issue does not reproduce). But with a non-blocking await asyncio.sleep call, the error occurs.
In my case I use non-blocking calls to aioredis and asyncpg.
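To make the contrast concrete, here is a sketch with both handler variants side by side (the route names are just illustrative). The blocking sleep never yields to the event loop, so the server does not process the client's disconnect until after the response has been written; the awaited sleep yields, so connection_lost fires mid-handler and the later write fails.

import asyncio
import time

from sanic import Sanic
from sanic.response import text

app = Sanic()

@app.route("/blocking")
async def blocking(request):
    time.sleep(1)  # blocks the loop: the disconnect is not seen in time
    return text('OK')  # so no error is reported

@app.route("/nonblocking")
async def nonblocking(request):
    await asyncio.sleep(1)  # yields: connection_lost fires during the sleep
    return text('OK')  # so the write fails with "Connection lost ..."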
So can we expect a solution for this? We are getting this pretty frequently, the errors are reported in our logging system (Sentry), and I can't find a solution at the moment.
@aqeelahmad https://github.com/channelcat/sanic/pull/965
@aqeelahmad my temporary dirty-hack solution for keeping this error out of Sentry: just add a log filter.
import logging

class NoConnectionLostFilter(logging.Filter):
    """ while issue not resolved https://github.com/channelcat/sanic/issues/959 """
    def filter(self, record):
        # drop only the noisy disconnect error, let everything else through
        return not record.getMessage().startswith('Connection lost before response written')

logging.getLogger('root').addFilter(NoConnectionLostFilter())
until #965 is merged
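One caveat: in Python's logging module, logger-level filters are only consulted for records logged directly on that logger, not for records propagated up from child loggers. So depending on your setup, the filter may need to sit on the logger that actually emits the error, or on the handler that ships records to Sentry. A variant, assuming the error is emitted on a logger named 'sanic' as the "(sanic)[ERROR]" log format above suggests:

import logging

# attach the filter to the logger that actually emits the error
# ('sanic' in the tracebacks above); putting the same filter on a
# handler would also catch records propagated from child loggers
logging.getLogger('sanic').addFilter(NoConnectionLostFilter())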
@samael500 thanks for the hack. It will do for now. However, I have subscribed to your pull request and will watch for it to be merged.
Should sanic also cancel the handler in such cases?
If the handler makes many expensive external requests, it wastes resources for the handler to keep firing the remaining external requests after the connection is closed.
This exception is thrown only when we try to write a response. So even if the client disconnects before the handler has processed all of its subrequests, the handler does not find out about the disconnect until it tries to write the response. So the handler is "cancelled" only after it has finished all its work, and sometimes that is a needless waste of resources.
On the other hand, this is also good, because it provides some fault tolerance in request processing: even if the client disconnects, the handler still makes all of its internal subrequests and keeps the state consistent.
In my case, on each request the handler visits a few external API services, reads some data from the database, and finally writes an extended access log with this data before writing the response. It is important to log this data even if the client has disconnected.
If you are concerned about consistency, I think it's better to use asyncio.shield to protect those critical operations. ~~The handler can be canceled by a connection timeout.~~
UPDATE: I checked the code in master and the handler will not be canceled by a timeout once it has been called. That looks much better!
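For reference, a small sketch of the asyncio.shield pattern suggested above (write_access_log is a hypothetical critical operation, not part of anyone's posted code):

import asyncio

from sanic import Sanic
from sanic.response import text

app = Sanic()

async def write_access_log(data):
    # hypothetical critical operation that must finish even if the
    # client has already disconnected
    await asyncio.sleep(0.1)

@app.route("/")
async def handler(request):
    data = {"path": request.path}
    # if the surrounding task is ever cancelled, the shielded coroutine
    # still runs to completion; only the outer await is interrupted
    await asyncio.shield(write_access_log(data))
    return text('OK')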