mmdetection: what should I do if I want to run inference on the CPU?

Created on 16 Nov 2019 · 11 comments · Source: open-mmlab/mmdetection

I have trained a model on a GPU. What should I do if I want to run inference on the CPU? Thanks a lot.

All 11 comments

I actually created a separate project for that. Currently it covers RetinaNet and Mask R-CNN (working on more), but it may be helpful: https://github.com/akarazniewicz/smd

I have trained a model on a GPU. What should I do if I want to run inference on the CPU? Thanks a lot.

You can convert your model and weights to .onnx, then convert the .onnx file to OpenVINO .xml and .bin files. Finally you can run inference on the CPU.

I have created a pull request (https://github.com/open-mmlab/mmdetection/pull/2199) which enables running inference in CPU mode; I hope it will get into the master branch soon.

@zhaojw219
Is there a detailed tutorial for your approach?

Thanks to @yossibiton's contribution, CPU inference is supported.
You may run demo/image_demo.py with the --device cpu argument.

@xvjiarui Sorry, maybe I'm doing something wrong, but it didn't work for me.

I trained a model (faster_rcnn_r50_fpn.py) on a machine with a GPU. Then I switched to a CPU-only machine, followed the instructions in install.md, and typed the following commands:

1. conda create -n open-mmlab python=3.7 -y
2. conda activate open-mmlab
3. conda install pytorch torchvision cpuonly -c pytorch
4. git clone https://github.com/open-mmlab/mmdetection.git
5. cd mmdetection
6. pip install -r requirements/build.txt
7. pip install "git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI"
8. pip install -v -e .

After that I ran python demo/image_demo.py ../output/example.png ../configs/my_config.py ../output/epoch_250.pth --device cpu and got an error:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

Did I do something incorrectly?

@PihtaHorse Have you dug into the code?
It seems like
please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
is the solution.
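A minimal, self-contained illustration of that fix (the checkpoint here is a toy dict, not a real mmdetection checkpoint):

```python
import torch

# Save a toy checkpoint the way training scripts usually do.
state = {"state_dict": {"weight": torch.ones(2, 2)}}
torch.save(state, "epoch_250_toy.pth")

# On a CPU-only machine, torch.load fails on CUDA-saved tensors unless
# every storage is remapped to the CPU with map_location.
checkpoint = torch.load("epoch_250_toy.pth", map_location=torch.device("cpu"))
print(checkpoint["state_dict"]["weight"].device)  # cpu
```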

@korabelnikov Yes, it works if you replace one line here:

checkpoint = _load_checkpoint(filename, map_location)
by
checkpoint = _load_checkpoint(filename, map_location=torch.device('cpu'))

But it fails with the same error as above if I use it this way:
model = init_detector(config_file, checkpoint_file, device=torch.device('cpu'))

And the first solution is, to put it mildly, not elegant. Is there some way to directly indicate that I want to run the model on the CPU?
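What the poster is asking for amounts to threading the device argument through to the checkpoint loader. A minimal sketch in plain PyTorch (the helper name load_model_on is made up for illustration; it is not mmdetection API):

```python
import torch
import torch.nn as nn

def load_model_on(model: nn.Module, checkpoint_path: str,
                  device: torch.device) -> nn.Module:
    """Load weights onto `device`, remapping CUDA storages if needed."""
    # map_location makes this work even if the checkpoint was saved on a GPU.
    checkpoint = torch.load(checkpoint_path, map_location=device)
    model.load_state_dict(checkpoint["state_dict"])
    return model.to(device).eval()

# Toy round trip on the CPU.
net = nn.Linear(4, 2)
torch.save({"state_dict": net.state_dict()}, "toy.pth")
net = load_model_on(nn.Linear(4, 2), "toy.pth", torch.device("cpu"))
print(next(net.parameters()).device)  # cpu
```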

@PihtaHorse You should rebuild mmdet. Run rm -r build and then python setup.py develop. It works for me.

@xvjiarui I added the device with this line:
checkpoint = load_checkpoint(model, checkpoint, map_location=device)
It works for CPU-only inference.
