FastAPI: application performance with response_model

Created on 10 May 2019 · 7 comments · Source: tiangolo/fastapi

Issue

I found that when I use response_model, application performance drops dramatically.

Without response_model, the endpoint is about 4 times as fast as with response_model.

How should I solve this problem?

Is my way of using it incorrect?

Relevant information

Python version: 3.7

FastAPI version: latest

Screenshots:

Benchmark without response_model (screenshot not included).

Benchmark with response_model (screenshot not included).

Model code

from typing import List

from pydantic import BaseModel

class ResponseSchema(BaseModel):
    code: int
    message: str

# config skip_defaults: True
# NovelSchema field count about 10
# ChapterSchema field count about 7
# BaseSchema and SchemaValue are project-specific helpers (not shown)
class BookCatalogData(BaseSchema):
    novel: NovelSchema = SchemaValue(None)
    chapters: List[ChapterSchema] = SchemaValue(None)

class BookCatalogRsm(ResponseSchema):
    data: BookCatalogData
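
For reference, here is how BookCatalogData might look in plain Pydantic, assuming SchemaValue(None) simply marks an optional field defaulting to None (an assumption; the stand-in schemas below are illustrative, not the reporter's actual ones):

from typing import List, Optional

from pydantic import BaseModel

class NovelSchema(BaseModel):
    # stand-in; the real schema has about 10 fields
    id: int
    name: str

class ChapterSchema(BaseModel):
    # stand-in; the real schema has about 7 fields
    id: int
    title: str

class BookCatalogData(BaseModel):
    novel: Optional[NovelSchema] = None
    chapters: Optional[List[ChapterSchema]] = None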

Controller code

The novel field contains 6 fields.

The chapters field contains about 700 items.

@app.get(
    '/v1/api/book/catalog',
    response_model=BookCatalogRsm,
    tags=[TAG]
)
async def book_catalog(novelId: int = -1):
    # services and return_ok are project-specific helpers
    # (a sketch of return_ok follows this block)
    chapters = await services.chapter.get_catalog(novelId)
    novel = await services.novel.get_by_id(
        novelId,
        columns=[services.novel.table.c.id, services.novel.table.c.name]
    )
    return return_ok({
        'novel': novel,
        'chapters': chapters
    })
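
return_ok is not shown in the issue; a plausible sketch (an assumption, not the reporter's actual helper) would wrap the payload in the ResponseSchema envelope defined above:

def return_ok(data: dict) -> dict:
    # hypothetical helper: wraps a payload in the code/message envelope
    return {'code': 0, 'message': 'ok', 'data': data}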

Help

Thank you for your help.

Label: question


All 7 comments

I found the reason. It is not FastAPI's problem! I'm so sorry!

Out of curiosity, what was the reason?
For timing purposes you can use timing-asgi like this:

import logging

from timing_asgi import TimingMiddleware, TimingClient
from timing_asgi.integrations import StarletteScopeToName

logger = logging.getLogger(__name__)

class PrintTimings(TimingClient):
    def timing(self, metric_name, timing, tags):
        logger.debug(f"{metric_name}, {timing}, {tags}")

app.add_middleware(
    TimingMiddleware,
    client=PrintTimings(),
    metric_namer=StarletteScopeToName(prefix="app", starlette_app=app)
)

@euri10
It was caused by the print method. I used it to print the result, which is why the application performance dropped dramatically.

However, when I increase the number of entries, adding response_model still hurts efficiency: it takes roughly twice as long as without response_model!

So I guess I should not use response_model, but then FastAPI can't produce a response-model document for ReDoc or Swagger.

Thanks for reporting back and closing the issue.


The response_model will filter the contents of your JSON to make sure you only return the appropriate data (for example, removing a hashedpassword property if you had one). It also converts the data to the correct types (as declared in the Pydantic model).

If you remove the response_model, then you have to do all those validations and all of the serialization by hand in your own code, moving the overhead from FastAPI to your code.
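
For example (a minimal sketch with hypothetical names), the extra key below never reaches the client because UserOut does not declare it:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class UserOut(BaseModel):
    username: str

@app.get('/user', response_model=UserOut)
async def get_user():
    # hashedpassword is filtered out by the response_model
    return {'username': 'alice', 'hashedpassword': 's3cr3t'}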

If you need to improve performance, you can also use UJSONResponse: https://fastapi.tiangolo.com/tutorial/custom-response/#use-ujsonresponse
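
A minimal sketch of that suggestion; depending on your FastAPI version, UJSONResponse is importable from fastapi.responses or starlette.responses, and it requires the ujson package:

from fastapi import FastAPI
from fastapi.responses import UJSONResponse  # older versions: starlette.responses

app = FastAPI()

@app.get('/v1/api/book/catalog', response_class=UJSONResponse)
async def book_catalog():
    # the response body is serialized with ujson instead of the default json
    return {'code': 0, 'message': 'ok'}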

I've found another case where performance drops 6-7 times!
Prerequisites:

  1. The response must be a long list of models (several hundred at least).
  2. Nested models are probably required.
  3. Build the response as follows (a minimal runnable sketch follows this list):

response = [
  MyModel1(
    nested_model=MyModel2(**some_dict), **some_other_dict
  ) for _ in range(500)
]

  4. Declare response_model=List[MyModel1].
  5. Run a load test.
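
A minimal runnable reproduction of the setup above (MyModel1, MyModel2, and their fields are illustrative stand-ins for the commenter's models):

from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class MyModel2(BaseModel):
    value: int  # hypothetical field

class MyModel1(BaseModel):
    name: str  # hypothetical field
    nested_model: MyModel2

some_dict = {'value': 1}
some_other_dict = {'name': 'example'}

@app.get('/load-test', response_model=List[MyModel1])
async def load_test():
    # case 3 below: already-validated model instances get re-parsed
    return [
        MyModel1(nested_model=MyModel2(**some_dict), **some_other_dict)
        for _ in range(500)
    ]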

Using load tests I've measured performance in three cases (best to worst):

  1. No response_model; the mean is around 70 ms:

response = [
  MyModel1(
    nested_model=MyModel2(**some_dict), **some_other_dict
  ) for _ in range(500)
]

  2. response_model=List[MyModel1] with plain dicts (the type of nested_model doesn't affect performance much); the mean is around 120 ms:

response = [
  dict(
    nested_model=some_dict, **some_other_dict
  ) for _ in range(500)
]

  3. response_model=List[MyModel1] with model instances; the mean is around 780 ms!

response = [
  MyModel1(
    nested_model=MyModel2(**some_dict), **some_other_dict
  ) for _ in range(500)
]

I expected case 3 to perform the same as case 2. This difference tells me that there is no check whether a field's value already matches the expected type: the models get re-parsed regardless.

And yes, I'm using UJSONResponse from the very beginning.

Yeah, using a response model currently adds a lot of overhead in order to make sure extra fields don't get serialized. The fix will be to walk the structure more carefully so that fields don't have to be re-parsed.

Fixing this is my top FastAPI priority; I just haven't been able to find time to focus on it, but I'm hoping to soon.
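
The re-parsing cost is easy to see outside FastAPI. A rough sketch (pydantic v1 API; the models are illustrative) that dumps already-valid models back to primitives and validates them again, approximating what serializing through a response_model does:

import time

from pydantic import BaseModel

class Inner(BaseModel):
    a: int
    b: str

class Outer(BaseModel):
    name: str
    nested: Inner

items = [Outer(name=f'n{i}', nested=Inner(a=i, b='x')) for i in range(500)]

start = time.perf_counter()
# dump each already-valid model to a dict and validate it again
reparsed = [Outer(**item.dict()) for item in items]
print(f'reparse of 500 models took {time.perf_counter() - start:.4f}s')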
