Sanic: Error recurs - ERROR: Connection lost before response written @ ('127.0.0.1', 64484)

Created on 16 Feb 2017 · 12 comments · Source: sanic-org/sanic

I remember this type of error being fixed in an earlier version.

Now, in the latest dev version, the error is occurring again.

All 12 comments

@tonyliuatmobius can you please link to the older issue or provide a minimal example to reproduce this?

I think the issue that started this error was #307. It solved the stack overflow, but this error persisted.

It depends on what the user is trying to do, I'll leave this open for a couple more days but if we can't reproduce this somehow I will be closing this.

I ran this example:
https://github.com/channelcat/sanic/blob/master/examples/sanic_asyncpg_example.py

Then I used wrk for load testing:
wrk -c 100 --latency http://127.0.0.1:8000/

When wrk finishes executing, Sanic displays error messages like these:
ERROR: Connection lost before response written @ ('127.0.0.1', 41750)
......
ERROR: Connection lost before response written @ ('127.0.0.1', 41770)

@wuxqing this example is actually very poor, and that's my fault; I'll update it. First of all, it should be using a connection instead of a pool. This is my updated version:

from asyncpg import connect

from sanic import Sanic
from sanic.response import json

DB_CONFIG = {
    'host': 'some-postgres',
    'user': 'postgres',
    'password': 'mysecretpassword',
    'port': '5432',
    'database': 'test'
}

def jsonify(records):
    """
    Parse asyncpg record response into JSON format
    """
    return [dict(r.items()) for r in records]

app = Sanic(__name__)

@app.listener('before_server_start')
async def create_db(app, loop):
    """
    Create some table and add some data
    """
    conn = await connect(**DB_CONFIG)
    await conn.execute('DROP TABLE IF EXISTS sanic_post')
    await conn.execute("""CREATE TABLE sanic_post (
        id serial primary key,
        content varchar(50),
        post_date timestamp);"""
                       )
    for i in range(100):
        await conn.execute(f"""INSERT INTO sanic_post
                           (id, content, post_date) VALUES ({i}, '{i}', now())""")


@app.route("/")
async def handler(request):
    conn = await connect(**DB_CONFIG)
    results = await conn.fetch('SELECT * FROM sanic_post')
    return json({'posts': jsonify(results)})


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)

However, even running that, I do still get this error. I think it's asyncpg, though, as this is the full error I'm getting:

2017-03-08 03:27:13,530: ERROR: Traceback (most recent call last):
  File "/usr/src/app/sanic/app.py", line 440, in handle_request
    response = await response
  File "sanic_asyncpg_example.py", line 48, in handler
    conn = await connect(**DB_CONFIG)
  File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 593, in connect
    await connected
asyncpg.exceptions.TooManyConnectionsError: sorry, too many clients already
2017-03-08 03:27:13,531: ERROR: Connection lost before response written @ ('127.0.0.1', 45538)

@r0fls You should close the connection. asyncpg.exceptions.TooManyConnectionsError means PostgreSQL already has too many clients connected.
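A dependency-free sketch of the fix suggested above (FakeConnection is a stand-in for an asyncpg connection, since the real one needs a running PostgreSQL server). The point is the try/finally, which releases the server-side connection even if the query raises:

```python
import asyncio

class FakeConnection:
    """Stand-in for an asyncpg connection, for illustration only."""
    def __init__(self):
        self.closed = False

    async def fetch(self, query):
        return [{'id': 1, 'content': 'hello'}]

    async def close(self):
        self.closed = True

async def handler(conn):
    try:
        results = await conn.fetch('SELECT * FROM sanic_post')
    finally:
        await conn.close()  # always give the connection back
    return results

conn = FakeConnection()
rows = asyncio.run(handler(conn))
```

With the real asyncpg connection the shape is the same: `await conn.close()` in a `finally` block (or use a pool, as discussed below in the thread).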

Ok I can see that now, but wrk doesn't show any 500 responses:

 ./wrk -c 100 --latency http://127.0.0.1:8000/
Running 10s test @ http://127.0.0.1:8000/
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   112.66ms   28.18ms 277.75ms   76.09%
    Req/Sec   443.68     70.66   565.00     77.50%
  Latency Distribution
     50%  107.79ms
     75%  124.67ms
     90%  148.76ms
     99%  207.66ms
  8864 requests in 10.05s, 41.46MB read
Requests/sec:    882.20
Transfer/sec:      4.13MB

Is it just closing the connection too early?

I found the problem with wrk: the --threads parameter must match --connections, for example:
wrk -c100 -t100 --latency http://127.0.0.1:8000/

@r0fls here is my updated version:

@app.listener('before_server_start')
async def create_pool(app, loop):
    # Create a database connection pool
    app.config['pool'] = await asyncpg.create_pool(**DB_CONFIG)

@app.route("/")
async def handler(request):
    pool = request.app.config['pool']
    async with pool.acquire() as conn:
        results = await conn.fetch('SELECT * FROM sanic_post')
    return json({'posts': jsonify(results)})

Sanic is running with 4 workers.

wrk -c 100 -t100 -d 10s --latency http://127.0.0.1:8000/
Running 10s test @ http://127.0.0.1:8000/
  100 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.72ms    6.10ms  78.46ms   84.96%
    Req/Sec   123.60     60.03   610.00     66.14%
  Latency Distribution
     50%    6.77ms
     75%   12.13ms
     90%   15.62ms
     99%   31.82ms
  123565 requests in 10.10s, 634.10MB read
Requests/sec:  12232.92
Transfer/sec:     62.78MB
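The reason the pool above avoids TooManyConnectionsError can be shown with a dependency-free toy: a bounded pool caps how many database connections are open at once, no matter how many requests arrive concurrently. `asyncpg.create_pool` enforces this internally; this sketch uses an `asyncio.Semaphore` just to make the cap visible:

```python
import asyncio

class ToyPool:
    """Toy bounded pool, for illustration only (not asyncpg's API)."""
    def __init__(self, max_size):
        self._sem = asyncio.Semaphore(max_size)
        self.in_use = 0
        self.peak = 0  # highest number of simultaneous "connections"

    async def __aenter__(self):
        await self._sem.acquire()  # block if the pool is exhausted
        self.in_use += 1
        self.peak = max(self.peak, self.in_use)
        return self

    async def __aexit__(self, *exc):
        self.in_use -= 1
        self._sem.release()

async def handler(pool):
    async with pool:                # acquire a "connection"
        await asyncio.sleep(0.001)  # pretend to run a query

async def main():
    pool = ToyPool(max_size=10)
    # 100 concurrent requests, like wrk -c 100, but at most 10
    # "connections" are ever open at the same time
    await asyncio.gather(*(handler(pool) for _ in range(100)))
    return pool.peak

peak = asyncio.run(main())
```

Excess requests simply wait for a connection to be released instead of opening a new one, which is why the load test completes without PostgreSQL refusing clients.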

Seems like this is inactive. Will re-open if necessary.

I am facing this issue too: when multiple requests arrive in a short span, my server starts throwing this error. I can provide debugging information, but please reopen this thread so we can resume the discussion.

In my case, a request that doesn't involve a database connection causes no issue. As soon as my server is hit with a request that uses a database connection, I start getting (sanic)[ERROR]: Connection lost before response written @ ('172.17.0.1', 50752). I am using aiomysql to connect to the database.

I receive this error too when using an aioodbc connection pool.

