Hello,
Apparently, modifying a scriptable architecture (with modifications that work perfectly for both inference and training), in this case maskrcnn_resnet50_fpn, makes it unscriptable.
Initially, I tried this because, in script mode, GeneralizedRCNN returns both the loss_dict and the detections, which is what I was after. But in the process, I ran into the torch.jit.script error below.
I understand this might not be the core purpose of torch.jit.script, but perhaps a clearer error message would help? (Or, ideally, scripting could somehow be allowed for modified architectures.)
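For reference, here is a minimal sketch of what I was hoping to do, assuming the unmodified model scripts as it does in my setup and that the scripted forward returns a (loss_dict, detections) tuple as described above (the image size and target fields are dummy illustrative values):

import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn()  # a freshly built model is in train mode
model_script = torch.jit.script(model)  # the unmodified architecture scripts fine

# Dummy input and target, for illustration only
images = [torch.rand(3, 300, 400)]
targets = [{
    "boxes": torch.tensor([[10.0, 20.0, 150.0, 200.0]]),
    "labels": torch.tensor([1]),
    "masks": torch.zeros(1, 300, 400, dtype=torch.uint8),
}]

# In script mode, forward returns both objects as a tuple
loss_dict, detections = model_script(images, targets)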
Steps to reproduce the behavior:
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
model = maskrcnn_resnet50_fpn()
# The following modifications cause the last command to fail
model.roi_heads.mask_roi_pool = None
model.roi_heads.mask_head = None
model.roi_heads.mask_predictor = None
model_script = torch.jit.script(model)
which throws the following traceback:
~/miniconda3/lib/python3.7/site-packages/torch/jit/__init__.py in script(obj, optimize, _frames_up, _rcb)
1253
1254 if isinstance(obj, torch.nn.Module):
-> 1255 return torch.jit._recursive.recursive_script(obj)
1256
1257 qualified_name = _qualified_name(obj)
~/miniconda3/lib/python3.7/site-packages/torch/jit/_recursive.py in recursive_script(nn_module)
532 check_module_initialized(nn_module)
533
--> 534 return create_script_module(nn_module, infer_methods_to_compile(nn_module))
535
536 def try_compile_fn(fn, loc):
~/miniconda3/lib/python3.7/site-packages/torch/jit/_recursive.py in create_script_module(nn_module, stubs)
291 """
292 check_module_initialized(nn_module)
--> 293 concrete_type = concrete_type_store.get_or_create_concrete_type(nn_module)
294 cpp_module = torch._C._create_module_with_type(concrete_type.jit_type)
295
~/miniconda3/lib/python3.7/site-packages/torch/jit/_recursive.py in get_or_create_concrete_type(self, nn_module)
234 return nn_module._concrete_type
235
--> 236 concrete_type_builder = infer_concrete_type_builder(nn_module)
237
238 nn_module_type = type(nn_module)
~/miniconda3/lib/python3.7/site-packages/torch/jit/_recursive.py in infer_concrete_type_builder(nn_module)
119 else:
120 # otherwise we get the concrete module type for item and add it to concrete_type
--> 121 sub_concrete_type = concrete_type_store.get_or_create_concrete_type(item)
122 concrete_type_builder.add_module(name, sub_concrete_type)
123
~/miniconda3/lib/python3.7/site-packages/torch/jit/_recursive.py in get_or_create_concrete_type(self, nn_module)
234 return nn_module._concrete_type
235
--> 236 concrete_type_builder = infer_concrete_type_builder(nn_module)
237
238 nn_module_type = type(nn_module)
~/miniconda3/lib/python3.7/site-packages/torch/jit/_recursive.py in infer_concrete_type_builder(nn_module)
116 if attr_type is not None:
117 # if the type can be inferred, it should be a module interface type
--> 118 sub_concrete_type = torch._C.ConcreteModuleType.from_jit_type(attr_type)
119 else:
120 # otherwise we get the concrete module type for item and add it to concrete_type
RuntimeError: type->cast<ClassType>() INTERNAL ASSERT FAILED at /pytorch/torch/csrc/jit/script/concrete_module_type.cpp:44, please report a bug to PyTorch. (fromJitType at /pytorch/torch/csrc/jit/script/concrete_module_type.cpp:44)
Expected torch.jit.script to work on modified architectures which have working torch.nn.Module methods.
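For what it's worth, here is a rough eager-mode check of that claim (assumption: with all three mask attributes set to None, RoIHeads skips the mask branch, so the model effectively behaves like Faster R-CNN):

import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn()
model.roi_heads.mask_roi_pool = None
model.roi_heads.mask_head = None
model.roi_heads.mask_predictor = None

# Eager-mode inference still runs without the mask branch
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 300, 400)])

# Only torch.jit.script(model) fails, with the internal assert above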
- PyTorch Version: 1.4.0
- Torchvision Version: 0.5.0
- OS: Ubuntu 18.04.3 LTS
- How you installed PyTorch: `pip`
- Python version: 3.7
- CUDA/cuDNN version: CUDA 10.1.168 (cuDNN 7.6.2)
- GPU models and configuration: GeForce GTX 1050 (driver: 430.64)
Thanks for the bug report!
cc @eellison this error seems unexpected, as the model should be equivalent to faster_rcnn, which works in torchscript.
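In the meantime, a possible workaround (just a sketch based on that equivalence, not a fix for the scripting bug itself) is to script fasterrcnn_resnet50_fpn directly instead of setting the mask submodules to None:

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Same architecture without the mask branch, so no None submodules are involved
model = fasterrcnn_resnet50_fpn()
model_script = torch.jit.script(model)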
Yea, looks like a bug. I'll take a look
Follow https://github.com/pytorch/pytorch/issues/32469 for progress
@frgfm this has been fixed on master
Thanks a lot @eellison!