MMDetection: ONNX Conversion for Faster-RCNN

Created on 7 Apr 2020 · 2 comments · Source: open-mmlab/mmdetection

Has pytorch2onnx.py ever been used to convert a standard Faster-RCNN model? While I understand it's still an experimental feature, I thought I could use pytorch2onnx to convert my Faster-RCNN model but it did not work.

ONNX

All 2 comments

To provide more context, these are the things I've tried:

I tried running pytorch2onnx.py, but it creates an invalid graph that fails the ONNX check (onnx.checker.check_model(onnx_model)) because of two Constant nodes containing the device name ('cpu') — the same error as in #2299. As I noted in #2299, I was able to create a valid graph by removing these two nodes.

However, I was not able to load this ONNX model with either onnxruntime or the Caffe2 ONNX backend:

  1. Loading with onnxruntime fails with Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: ATen is not a registered function/op.
  2. Loading with the Caffe2 ONNX backend (opset 9) fails with Don't know how to translate op roi_align, which makes sense since RoiAlign support was only added from opset 10 onwards. However, I expected it to fall back to the torchvision ops to enable conversion.

I also tried #1386, which did allow me to create an ONNX model and run inference. However, the model produced the same outputs for different inputs. Has the team considered integrating that pull request, or is pytorch2onnx the intended path?

Please try with the latest mmcv and mmdetection. Reopen this issue if the problem persists.
