I've been trying out Sanic on my local machine (OS X) and on a server (Ubuntu), but I haven't been able to replicate the benchmarks shared in the docs. Here are my findings. Any idea what I can do to improve Sanic's performance?
Benchmarking was done with Apache Benchmark: 25 concurrent requests, 5000 requests total.

```
ab -p json.txt -T application/json -c 25 -n 5000 http://127.0.0.1:6687/test_api
```
Benchmarks (local machine, OS X)
Flask (single thread)
Time per request: 75.083 [ms] (mean)
Time per request: 3.003 [ms] (mean, across all concurrent requests)
Sanic (single thread)
Time per request: 76.736 [ms] (mean)
Time per request: 3.069 [ms] (mean, across all concurrent requests)
Flask + uWSGI (4 workers, 2 threads each)
Time per request: 69.077 [ms] (mean)
Time per request: 2.763 [ms] (mean, across all concurrent requests)
Sanic (4 workers; not sure what's going on here)
Time per request: 291.198 [ms] (mean)
Time per request: 11.648 [ms] (mean, across all concurrent requests)
Benchmarks (api server, ubuntu)
Flask (single thread)
Time per request: 105.088 [ms] (mean)
Time per request: 4.204 [ms] (mean, across all concurrent requests)
Sanic (single worker)
Time per request: 117.831 [ms] (mean)
Time per request: 4.713 [ms] (mean, across all concurrent requests)
Flask + uWSGI (4 workers, 2 threads each)
Time per request: 30.998 [ms] (mean)
Time per request: 1.240 [ms] (mean, across all concurrent requests)
Sanic (4 workers)
Unable to start sanic with 4 workers on ubuntu due to issue reported here: https://github.com/channelcat/sanic/issues/67
Can you share the files you're using for testing? Something like a repository with directories for each implementation would be ideal.
I'm afraid I can't (for legal reasons).
To share some detail: we use Flask/Sanic as a framework for machine learning APIs. The API takes in JSON, feeds it into a machine learning model, gets the results, and returns JSON.
It all depends on what you're doing within your routes. Sanic should be faster in most cases, but implementation details (like blocking calls within your routes) can have a big impact on performance.
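For example, a synchronous model call can be moved off the event loop with an executor so it doesn't stall other requests. A minimal sketch, not Sanic-specific, where `blocking_model_call` is a hypothetical stand-in for a synchronous inference function:

```python
import asyncio
import time

def blocking_model_call(title):
    # hypothetical stand-in for a synchronous, CPU/IO-heavy model call
    time.sleep(0.01)
    return {'title': title, 'category': 'demo'}

async def categorize_async(title):
    loop = asyncio.get_running_loop()
    # run the blocking call in the default thread pool so the event loop
    # stays free to serve other requests concurrently
    return await loop.run_in_executor(None, blocking_model_call, title)

result = asyncio.run(categorize_async('some title'))
print(result)
```

Inside a Sanic route the same `await loop.run_in_executor(...)` pattern applies; the key point is that `time.sleep`-style blocking work never runs directly in an `async def` handler.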
Here's the route I used. It's pretty basic and similar to the benchmark approach.

```python
from sanic.response import json as json_response  # Sanic has no jsonify; use sanic.response.json

@app.route('/categorize', methods=['POST'])
async def categorize(request):
    logger.info('Json received: {}'.format(request.json))
    _title = request.json['title'].encode('utf-8')  # encode to utf-8
    logger.debug('_title from json: {}; type({})'.format(_title, type(_title)))
    result = categorize_title(_title)
    return json_response(result)
```
@eugeneyan regarding the multiple-workers test, are you using the current branch? There was an implementation issue with multiple workers that was fixed recently and probably hasn't been pushed to PyPI yet.
```python
result = categorize_title(_title)
```

You may want to benchmark this part on its own. I have a feeling you're running some blocking code, and that's the actual bottleneck here.
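One quick way to do that is to time the call in isolation, outside the web framework entirely. A rough sketch, where `categorize_title` is a placeholder for the real function being measured:

```python
import time

def categorize_title(title):
    # placeholder for the real model call being measured
    return {'category': 'example'}

n = 1000
start = time.perf_counter()
for _ in range(n):
    categorize_title(b'some title')
elapsed_ms = (time.perf_counter() - start) * 1000 / n
print('{:.3f} ms per call'.format(elapsed_ms))
```

If this number alone is close to the per-request latencies above, the bottleneck is the model call, not the framework.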
I'd agree: replicating the benchmarks (which are supposed to represent ideal conditions for all of the frameworks tested) is difficult and really depends on your implementation. I'm going to close this for now.