FastAPI: Use FastAPI with fastai

Created on 21 Oct 2020  ·  7 comments  ·  Source: tiangolo/fastapi

First check

I want to use FastAPI as the API for my model's predictions. My model uses fast.ai, but I am having trouble combining the two. I will describe the code below.

import asyncio
from typing import Optional

from PIL import Image
from fastai.learner import load_learner
from fastapi import FastAPI

from face_analyst import faceAnalysis

app = FastAPI()
siamese = load_learner('models/ir152_balance-1.pkl', cpu=True)


@app.get("/")
def read_root():
    faceAnalysis.init_app(siamese)

    s_img = Image.open('mint.jpg').convert('RGB')
    sfaces, sboxes, _ = faceAnalysis.evlo_face_detector(s_img)

    return {"Hello": "World"}

Description

When I run the server with uvicorn, there is an error from `load_learner` related to the main process:

File "./fast_api.py", line 7, in <module>
    siamese = load_learner('models/ir152_balance-1.pkl', cpu=True)
  File "/home/eway2020/.pyenv/versions/3.8.0/envs/face_analyst/lib/python3.8/site-packages/fastai/learner.py", line 549, in load_learner
    res = torch.load(fname, map_location='cpu' if cpu else None)
  File "/home/eway2020/.pyenv/versions/3.8.0/envs/face_analyst/lib/python3.8/site-packages/torch/serialization.py", line 584, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/home/eway2020/.pyenv/versions/3.8.0/envs/face_analyst/lib/python3.8/site-packages/torch/serialization.py", line 842, in _load
    result = unpickler.load()
AttributeError: Can't get attribute 'siamese_splitter' on <module '__mp_main__' from '/home/eway2020/.pyenv/versions/face_analyst/bin/uvicorn'>

Environment

  • OS: Linux
  • FastAPI version: 0.61.1
  • Python version: 3.6.8

Label: question

Most helpful comment

@includeamin does the Python version make any difference in his situation?

Not exactly.
I was just trying to correct the wrong information in the issue. :)

All 7 comments

I think the Python version you are actually using is 3.8, not 3.6.8.

This is an issue with how you are trying to load your learner: you don't have a `siamese_splitter` function defined for it to load. See https://docs.fast.ai/learner#load_learner
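The reason this matters is that pickle stores plain functions by reference (module plus qualified name), not by value: `load_learner` can only succeed if a function named `siamese_splitter` is importable in the process doing the loading. A stdlib-only sketch of the failure and the fix (the `train_utils` module and the splitter body are stand-ins, not the fastai API):

```python
import pickle
import sys
import types

# Simulate the training environment: a module that defines the custom splitter.
train_utils = types.ModuleType("train_utils")
exec("def siamese_splitter(m):\n    return m", train_utils.__dict__)
sys.modules["train_utils"] = train_utils

# Pickle stores only a reference: "train_utils.siamese_splitter".
payload = pickle.dumps(train_utils.siamese_splitter)

# Simulate a serving process where the function is not importable:
del sys.modules["train_utils"]
try:
    pickle.loads(payload)
    failed = False
except (AttributeError, ModuleNotFoundError):
    failed = True  # same failure mode as the AttributeError in the traceback

# The fix: make the function importable again *before* unpickling.
sys.modules["train_utils"] = train_utils
restored = pickle.loads(payload)
print(failed, restored is train_utils.siamese_splitter)  # prints: True True
```

Applied to the issue: defining (or importing) `siamese_splitter` at the top of `fast_api.py`, before the `load_learner` call, should make the name resolvable in every process that imports the module.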

@includeamin does the Python version make any difference in his situation?

Not exactly.
I was just trying to correct the wrong information in the issue. :)

Yes! I was wrong, but all the versions are the same. Back to the problem: do you have any ideas to fix this error?

Maybe I can help you once I have the source code: the `face_analyst` module behind `from face_analyst import faceAnalysis`, and `models/ir152_balance-1.pkl`.
Here is my email: [email protected]. You can email them to me if you want.

@ycd I've worked all day on this and I've narrowed down the issue:

So here's what we have going on: we're trying to import our app while a multiprocessing pool is active. As a result, not every pool process winds up having the function we want, leading to our errors. What I can't (for the life of me) figure out is how to have our learn object be created and returned while in the middle of those processes. This should be impossible, I _think_.

Let me know if that helps a little
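One pattern that sidesteps the import-time load is lazy, per-process initialization: don't load the model when the module is imported, load it the first time a worker needs it (FastAPI's startup event can serve the same purpose). A stdlib-only sketch of the idea, with `expensive_load` standing in for the real `load_learner` call:

```python
_model = None

def expensive_load():
    # Stand-in for load_learner('models/ir152_balance-1.pkl', cpu=True).
    return {"name": "siamese"}

def get_model():
    # Lazy, per-process initialization: the model is loaded only after the
    # worker process has fully imported this module, so every custom
    # function (e.g. siamese_splitter) is already resolvable by pickle.
    global _model
    if _model is None:
        _model = expensive_load()
    return _model

def predict(x):
    # Each request reuses the cached model for this process.
    return (get_model()["name"], x)
```

With uvicorn workers, each process would call `get_model()` on its first request; the module-level `load_learner` that crashes under `__mp_main__` never runs at import time.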

We can get past this without using FastAPI, with the following snippet (i.e., native Starlette):

import asyncio

import torch

# download_file, export_file_url, path, and export_file_name are defined elsewhere in the app

async def setup_learner():
    await download_file(export_file_url, path / export_file_name)
    try:
        learn = torch.load(path/export_file_name, map_location=torch.device('cpu'))
        learn.dls.device = 'cpu'
        return learn
    except RuntimeError as e:
        if len(e.args) > 0 and 'CPU-only machine' in e.args[0]:
            print(e)
            message = "\n\nThis model was trained with an old version of fastai and will not work in a CPU environment.\n\nPlease update the fastai library in your training environment and export your model again.\n\nSee instructions for 'Returning to work' at https://course.fast.ai."
            raise RuntimeError(message)
        else:
            raise


loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(setup_learner())]
learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
loop.close()
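On Python 3.7+, the `get_event_loop()`/`run_until_complete()`/`close()` dance above collapses to `asyncio.run`, which creates and closes the loop for you. A minimal sketch with a stand-in coroutine (the real `setup_learner` needs the model file and torch):

```python
import asyncio

async def setup_learner():
    # Stand-in for the real download + torch.load setup above.
    await asyncio.sleep(0)
    return "learner"

# asyncio.run creates a fresh event loop, runs the coroutine, and closes the loop.
learn = asyncio.run(setup_learner())
print(learn)  # prints: learner
```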