Describe the bug
The documentation says that error handlers can be asynchronous.
However, asynchronous error handlers do not seem to be supported everywhere.
Code snippet
from sanic import Sanic, response
from sanic.request import Request

app = Sanic(__name__)

@app.exception(Exception)
async def handle_base_exception(_request: Request, exception: Exception):
    return response.json("too bad")
Expected behavior
Then, for example, when I exceed the REQUEST_MAX_SIZE limit, I would expect to receive "too bad" as the response.
Unfortunately, sanic contains code like https://github.com/huge-success/sanic/blob/91f6abaa81248189fbcbdf685e8bdcbb7846609f/sanic/server.py#L577-L579
which expects the error handler to be synchronous.
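To illustrate the mechanism (a standalone sketch, not sanic's actual code): calling a coroutine function from a synchronous call site only produces a coroutine object, which is exactly what triggers the RuntimeWarning in the logs below.

async def handle_base_exception(exception: Exception) -> str:
    return "too bad"

def write_error(exception: Exception) -> None:
    # A synchronous call site cannot await, so it receives a coroutine
    # object instead of the handler's return value.
    result = handle_base_exception(exception)
    print(type(result))  # <class 'coroutine'>
    # When `result` is garbage-collected without ever being awaited,
    # Python emits: RuntimeWarning: coroutine '...' was never awaited

write_error(ValueError("Payload Too Large"))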
Actual behavior
Here are the (verbose) logs of the actual results:
2020-02-17 19:03:15,514:INFO:
[2020-02-17 19:03:24 +0100] [37783] [ERROR] Transport closed @ ('127.0.0.1', 58698) and exception experienced during error handling
[2020-02-17 19:03:24 +0100] - (sanic.access)[INFO][UNKNOWN]: nil 0 -1
2020-02-17 19:03:24,145:ERROR:Transport closed @ ('127.0.0.1', 58698) and exception experienced during error handling
2020-02-17 19:03:24,145:INFO:
.../venv/lib/python3.7/site-packages/sanic/server.py:297: RuntimeWarning: coroutine 'handle_base_exception' was never awaited
self.write_error(PayloadTooLarge("Payload Too Large"))
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
This one seems to be hard to fix because of how asyncio protocols work. The server.write_error function is not a coroutine and thus cannot await the response, and it is also called from synchronous protocol callbacks that cannot be made async because of how asyncio is designed.
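To make that constraint concrete, here is a minimal sketch (illustrative names, not sanic's code) of why the call site cannot simply be made async: asyncio invokes protocol callbacks as plain synchronous methods and never awaits them.

import asyncio

class HttpLikeProtocol(asyncio.Protocol):
    def data_received(self, data: bytes) -> None:
        # The event loop calls this as a plain method; `await` is a
        # syntax error in a sync method, so anything invoked from here,
        # including write_error, must also be synchronous.
        self.write_error(ValueError("Payload Too Large"))

    def write_error(self, exception: Exception) -> None:
        # Synchronous by necessity: it runs inside a sync callback.
        pass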
I suppose this could be worked around by using asyncio.create_task(...), but doing so might introduce some unexpected effects, and since the protocol code is in any case quite convoluted, I am not confident enough to just go changing it. A sketch of the idea follows.
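Roughly like this (a runnable sketch reusing the illustrative names above, not a proposed patch): the synchronous call site schedules the coroutine on the loop and defers writing the response to a done-callback.

import asyncio

async def handle_base_exception(exception: Exception) -> str:
    return "too bad"

def write_error(exception: Exception) -> None:
    # Schedule the async handler instead of expecting a return value.
    task = asyncio.create_task(handle_base_exception(exception))
    # The response can only be written once the task finishes, i.e.
    # from a done-callback; that reordering is where the unexpected
    # effects could come from.
    task.add_done_callback(lambda t: print("response:", t.result()))

async def main() -> None:
    write_error(ValueError("Payload Too Large"))
    await asyncio.sleep(0.1)  # let the scheduled task complete

asyncio.run(main())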
Anyone willing to weigh in, or should I just modify the code and see if tests break?
I've confirmed that this is now fixed in the streaming branch.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If this is incorrect, please respond with an update. Thank you for your contributions.
This should probably not be closed until the fix is merged
+1
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If this is incorrect, please respond with an update. Thank you for your contributions.
No, this issue is not stale, it still hasn't been fixed