YOLOv5: model.torchscript.pt on mobile PyTorch outputs only training tensors, not inference tensors.

Created on 16 Nov 2020 · 4 comments · Source: ultralytics/yolov5

question

Most helpful comment

I guess the reason is here:
https://github.com/ultralytics/yolov5/blob/master/models/yolo.py#L46

Yes, for a temporary fix we just replaced that line with self.training = False during export and it seems to work fine.

All 4 comments

Hi,
I guess the reason is here:
https://github.com/ultralytics/yolov5/blob/master/models/yolo.py#L46

because here https://github.com/ultralytics/yolov5/blob/master/models/export.py#L50 export is set to True

The reason is probably that jit.trace converts tensors to constants, so the TorchScript model can correctly process only square inputs. The sample input used for tracing is, by default, a blank 640x640 image.

If you feed a non-square image to the loaded TorchScript model, it dies with:

RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/models/yolo.py", line 45, in forward
    _35 = (_4).forward(_34, )
    _36 = (_2).forward((_3).forward(_35, ), _29, )
    _37 = (_0).forward(_33, _35, (_1).forward(_36, ), )
           ~~~~~~~~~~~ <--- HERE
    _38, _39, _40, _41, = _37
    return (_41, [_38, _39, _40])
  File "code/__torch__/models/yolo.py", line 75, in forward
    _52 = torch.sub(_51, CONSTANTS.c3, alpha=1)
    _53 = torch.to(CONSTANTS.c4, dtype=6, layout=0, device=torch.device("cpu"), pin_memory=None, non_blocking=False, copy=False, memory_format=None)
    _54 = torch.mul(torch.add(_52, _53, alpha=1), torch.select(CONSTANTS.c5, 0, 0))
                    ~~~~~~~~~ <--- HERE
    _55 = torch.slice(y, 4, 0, 2, 1)
    _56 = torch.expand(torch.view(_54, [3, 80, 80, 2]), [1, 3, 80, 80, 2], implicit=True)

Traceback of TorchScript, original code (most recent call last):
./models/yolo.py(53): forward
/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
./models/yolo.py(137): forward_once
./models/yolo.py(117): forward
/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
/.venv/lib/python3.8/site-packages/torch/jit/_trace.py(934): trace_module
/.venv/lib/python3.8/site-packages/torch/jit/_trace.py(733): trace
models/export.py(57): <module>
RuntimeError: The size of tensor a (56) must match the size of tensor b (80) at non-singleton dimension 2

If you feed a square input image (e.g. 800x800), the prediction succeeds.
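The shape-baking behaviour described above can be reproduced with a small traced module. This is a hedged sketch, not YOLOv5 code: `GridAdd` is a hypothetical stand-in for the Detect head, which similarly builds a grid sized to the current feature map.

```python
import torch
import torch.nn as nn

class GridAdd(nn.Module):
    # Toy stand-in for the Detect head: builds a grid sized to the
    # current feature map and adds it to the input.
    def forward(self, x):
        # int() forces the traced sizes to become Python constants,
        # so jit.trace bakes an 80x80 grid into the graph.
        ny, nx = int(x.shape[2]), int(x.shape[3])
        grid = torch.arange(ny * nx, dtype=x.dtype).view(1, 1, ny, nx)
        return x + grid

model = GridAdd().eval()
# Trace with a square sample, as export.py does by default:
traced = torch.jit.trace(model, torch.zeros(1, 3, 80, 80))

traced(torch.zeros(1, 3, 80, 80))      # square input of the traced size: fine
try:
    traced(torch.zeros(1, 3, 56, 80))  # non-square input
except RuntimeError as e:
    # Mirrors the error above: "The size of tensor a (56) must match
    # the size of tensor b (80) at non-singleton dimension 2"
    print(type(e).__name__)
```

The traced graph keeps the 80x80 grid as a constant (compare CONSTANTS.c3–c5 in the serialized traceback above), so any spatial size other than the traced one breaks the broadcast.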

see #1217

I guess the reason is here:
https://github.com/ultralytics/yolov5/blob/master/models/yolo.py#L46

Yes, for a temporary fix we just replaced that line with self.training = False during export and it seems to work fine.
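Why forcing self.training = False before tracing fixes the output can be sketched with a toy module. This is a hedged illustration, assuming the head branches on self.training the way models/yolo.py does; `ToyDetect` is hypothetical, not the real Detect layer.

```python
import torch
import torch.nn as nn

class ToyDetect(nn.Module):
    # Toy stand-in for Detect: the training branch returns the raw map,
    # the inference branch returns decoded (here: sigmoid-activated) output.
    def forward(self, x):
        if self.training:
            return x
        return x.sigmoid()

m = ToyDetect()
m.training = False  # mirrors the manual edit (m.eval() has the same effect)
traced = torch.jit.trace(m, torch.zeros(1, 4))

out = traced(torch.zeros(1, 4))
print(out)  # sigmoid(0) = 0.5 everywhere: the inference branch was traced
```

Because jit.trace records only the branch that actually executed, whichever value self.training holds at trace time is frozen into the exported model; if it is True, the mobile model permanently returns the training tensors reported in the title.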

When exporting the TorchScript .pt I also got this error. Setting self.training = False worked for me too.

