Aiohttp: Performance changes between 1.2.0, 1.3.1 and 2.0a

Created on 9 Feb 2017 · 34 Comments · Source: aio-libs/aiohttp

I've just updated FrameworkBenchmarks to 1.3.1 to stop the CancelledError issue (https://github.com/TechEmpower/FrameworkBenchmarks/pull/2561).

As part of that, I ran the benchmarks (locally with vagrant, so not the most rigorous testing setup) to see if there were performance changes. Here's the result:

      test  DB engine  Step    1.2.0    1.3.1   Improvement
      json          -     1     5989     5794           -3%
      json          -     2     6242     5703           -9%
      json          -     3     6325     5776           -9%
      json          -     4     5952     5554           -7%
      json          -     5     6192     5483          -11%
      json          -     6     6123     5331          -13%

        db      aiopg     1     2128     1993           -6%
        db      aiopg     2     2165     2058           -5%
        db      aiopg     3     2175     2030           -7%
        db      aiopg     4     2238     2003          -11%
        db      aiopg     5     2063     1974           -4%
        db      aiopg     6     2071     1911           -8%

        db    asyncpg     1     2782     2552           -8%
        db    asyncpg     2     2741     2737           -0%
        db    asyncpg     3     2735     2677           -2%
        db    asyncpg     4     2762     2630           -5%
        db    asyncpg     5     2821     2607           -8%
        db    asyncpg     6     2772     2527           -9%

     query      aiopg     1     1978     1840           -7%
     query      aiopg     2      707      636          -10%
     query      aiopg     3      389      352           -9%
     query      aiopg     4      264      243           -8%
     query      aiopg     5      199      189           -5%

     query    asyncpg     1     2645     2603           -2%
     query    asyncpg     2     1743     1629           -7%
     query    asyncpg     3      825     1113           35%
     query    asyncpg     4      645      831           29%
     query    asyncpg     5      534      684           28%

   fortune      aiopg     1     1814     1779           -2%
   fortune      aiopg     2     1865     1789           -4%
   fortune      aiopg     3     1882     1797           -4%
   fortune      aiopg     4     1870     1721           -8%
   fortune      aiopg     5     1807     1683           -7%
   fortune      aiopg     6     1767     1514          -14%

   fortune    asyncpg     1     1203     2089           74%
   fortune    asyncpg     2     1189     2074           74%
   fortune    asyncpg     3     1280     2087           63%
   fortune    asyncpg     4     1215     2095           72%
   fortune    asyncpg     5     1230     2112           72%
   fortune    asyncpg     6     1217     2052           69%

    update      aiopg     1     1375     1291           -6%
    update      aiopg     2      381      362           -5%
    update      aiopg     3      196      185           -5%
    update      aiopg     4      130      129           -1%
    update      aiopg     5       97       95           -3%

    update    asyncpg     1     1429     2124           49%
    update    asyncpg     2      780      992           27%
    update    asyncpg     3      501      622           24%
    update    asyncpg     4      378      456           21%
    update    asyncpg     5      298      352           18%

 plaintext          -     1     7159     6592           -8%
 plaintext          -     2     6941     6408           -8%
 plaintext          -     3     6732     6080          -10%
 plaintext          -     4     4651     4655            0%

(All numbers are requests per second as given by their results.json output. The different steps refer to number of queries executed for the DB tests and concurrency for the other tests. Apart from aiohttp, no other packages have changed.)
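For reference, the Improvement column can be reproduced directly from the two requests-per-second figures. A minimal sketch (the function name is mine, not part of the benchmark tooling):

```python
def improvement(old_rps: float, new_rps: float) -> int:
    """Percentage change from old to new requests/sec,
    rounded to the nearest whole percent."""
    return round((new_rps - old_rps) / old_rps * 100)

# e.g. the json test at step 1: improvement(5989, 5794) -> -3
```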

It seems there is a general trend of ~10% performance regression; however, the code using raw asyncpg queries is consistently much faster.

I know these tests are far from perfect but my questions are:

  1. Would it be possible for aiohttp to have some internal performance benchmarks that are run regularly, so it's easy to see how performance changes between versions? That would make it easier to maintain and improve performance as new features, checks and error handling are added.
  2. What has caused the regression in performance with aiopg and with simple requests? Could it be reversed?

All 34 comments

that's cool. let me check

By the way, I don't mean to sound negative. Aiohttp's performance is pretty good and from what I can tell with asyncpg it's outperforming all other python frameworks in the test!

It's totally fine. I've actually been thinking about some consistent performance benchmarks, and this test suite will help me significantly, especially now that I'm working on internal refactoring and HTTP pipelining support.

@samuelcolvin I did some optimization work on the pipelining branch. It's about 10% faster than 1.2 on simple requests.

all changes are in master now

Thanks, can you link to the commit where this was done?

As per this discussion, do you think pipelining will have a noticeable effect on real-world applications? That discussion suggests it's unlikely to help in practice.

Pipelining in a Python application is a benchmark-only tool :)
If you need pipelining to satisfy business requirements, then Python is probably the wrong tool.
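For context, HTTP/1.1 pipelining just means writing several requests on one connection before reading any of the responses. A minimal illustration of what a pipelined payload looks like (the helper name is mine; this is not aiohttp's implementation):

```python
def pipelined_payload(host: str, paths: list) -> bytes:
    """Concatenate several HTTP/1.1 GET requests into one buffer, to be
    written back-to-back on a single connection without waiting for
    responses in between."""
    return b"".join(
        "GET {} HTTP/1.1\r\nHost: {}\r\n\r\n".format(path, host).encode("ascii")
        for path in paths
    )

# A client would write this whole buffer at once, then read the
# responses in the same order the requests were sent.
payload = pipelined_payload("localhost", ["/json", "/plaintext"])
```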

@samuelcolvin could you run the benchmark again with aiohttp from the parser branch?

Will do.


parser branch is merged to master

I've split the code into a separate repo as running the full tests and displaying the results was a real pain: https://github.com/samuelcolvin/aiohttp-benchmarks

Wow! That's great!

@asvetlov @1st1 interesting that Python 3.6 is consistently slower than 3.5. I think it should be faster, at least because of the new Future implementation.

Yes, I saw that too and was surprised, I thought 3.6 had performance improvements for asyncio.

There were improvements for asyncio and dicts... Can't see at first sight why it should be slower in 3.6 😓

I've added the results pivoted to compare python version. Change is fairly consistent.

Any ideas about the surprisingly bad results with 3.6?


On my Mac I get ~5-7% better performance under Python 3.6, but my test is very simple.

Perhaps someone else could run my benchmarks and confirm I'm not going mad or have some obscure problem with my Python installation?

It takes about 20 minutes to run, but it's very easy to set up and can then be left to chomp through all the cases.


I will do

FYI, the JSON and plaintext tests (the ones that I've tried out of curiosity) behave at least as well on Python 3.6 as on 3.5. Python 3.6 could have better performance, but it's nothing really noticeable at first sight, something around 5%, and a serious benchmark would need to be run to be sure.

In any case, the important thing here is that I cannot reproduce the large decrease between 3.5 and 3.6.

Note: I ran the JSON and plaintext tests at least 5 times per version and picked the best time. Otherwise, hand-made tests sharing the CPU with other user processes might bias the results.
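Taking the best of several runs is a standard way to reduce noise from other processes contending for the CPU; the stdlib's `timeit.repeat` is built on the same principle. A quick sketch of the idea (the helper name is mine):

```python
import time

def best_of(runs: int, fn) -> float:
    """Time fn() `runs` times and return the fastest wall-clock duration.
    The minimum is the measurement least distorted by other processes
    competing for the CPU."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return min(timings)
```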

I will run the benchmarks on separate AWS c3 instances next week.

There will need to be significant changes to my code to allow running on separate machines. I'll see if I can make the changes tomorrow.

I've modified the benchmark code to run the server remotely and rerun the test: https://github.com/samuelcolvin/aiohttp-benchmarks

Python 3.5 vs. 3.6 is much closer, but the trend is still that 3.6 is slower by single-digit percentage points.

Obviously running the test uses up CPU credits pretty quickly but I was careful to make sure the tests finished before the server ran out of credits.

We are observing a memory leak and 100% CPU usage with 1.3.3. It happens on Linux; on OS X I don't see such problems. Version 1.2.0 works fine on both Linux and OS X.

I'm investigating this issue; if anyone else has the same problem, please show your setup. I'm trying to write the minimal code needed to reproduce this issue.

Is it on the server or the client? Do you see the CPU usage immediately or after some time?

Btw, please create a new ticket.

@samuelcolvin could you create a new PR for FrameworkBenchmarks with aiohttp 2.0?

btw maybe you want to merge aiohttp-benchmarks into aiohttp? or move it to aio-libs?

I was just thinking about this and was waiting for the 2.0 release. Will do.

I think this can be closed too; any further discussion should happen on a new issue.

btw maybe you want to merge aiohttp-benchmarks into aiohttp? or move it to aio-libs?

For me it's not part of the framework, so it should be a separate repo. I'll transfer it. I think we should also delete (or move) the current benchmarks directory.

Agreed on the benchmarks directory.

I'll wait 24 hours in case the release causes immediate problems which need fixing with patch releases.

Congratulations on 2.0.0 :tada:

Benchmarks moved into this org: https://github.com/aio-libs/aiohttp-benchmarks.

FrameworkBenchmarks updated (PR pending): https://github.com/TechEmpower/FrameworkBenchmarks/pull/2609

Awesome! Thanks!

This thread has been automatically locked since there has not been
any recent activity after it was closed. Please open a [new issue] for
related bugs.

If you feel like there are important points made in this discussion,
please include those excerpts in that [new issue].
