Has pytorch2onnx.py ever been used to convert a standard Faster-RCNN model? While I understand it's still an experimental feature, I thought I could use pytorch2onnx to convert my Faster-RCNN model but it did not work.
To provide more context, these are the things I've tried:
I tried running pytorch2onnx.py; however, this creates an invalid graph that fails the onnx check (onnx.checker.check_model(onnx_model)) due to two Constant nodes containing the device name ('cpu'). This is the same error as in #2299. As I stated there, I was able to create a valid graph by removing these two nodes.
However, I was not able to load this onnx model with either onnxruntime or the Caffe2 ONNX backend.

onnxruntime fails with: Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: ATen is not a registered function/op. The Caffe2 backend fails with: Don't know how to translate op roi_align. I guess this makes sense, since RoiAlign support was only added from opset 10 onwards; however, I thought the export would fall back to the torchvision ops to enable conversion.

I also tried #1386, which did let me create an onnx model and run inference. However, the model gave the same outputs for different inputs. Has the team thought of integrating that pull request, or is pytorch2onnx enough?
Please try with the latest mmcv and mmdetection. Reopen the issue if you still have any problems.