Why are we setting `model.model[-1].export = True` in export.py?

```python
model.model[-1].export = True  # set Detect() layer export=True
```
I am asking because the prediction from the model differs with and without setting the export flag. With the flag set:

```python
model = torch.load(opt.weights, map_location=torch.device('cpu'))['model'].float()
model.eval()
model.model[-1].export = True  # set Detect() layer export=True
img = torch.zeros((1, 3, 224, 224))
pred = model(img)
```

Here, `pred[0].shape = torch.Size([1, 3, 7, 7, 85])`.
Now, without the flag:

```python
model = torch.load(opt.weights, map_location=torch.device('cpu'))['model'].float()
model.eval()
img = torch.zeros((1, 3, 224, 224))
pred = model(img)
```

Here, `pred[0].shape = torch.Size([1, 3087, 85])`.
Can anyone please explain this? I guess it is to generalize for any input shape.
Also, if setting `export=True` is the correct way to export, how do I get the output of shape `[1, 3087, 85]`? That shape is what gets fed to non-max suppression, and a tensor of shape `[1, 3, 7, 7, 85]` cannot be used for that directly.
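For what it's worth, the two shapes are consistent with each other: with a 224×224 input and the standard YOLOv5 strides of 8/16/32, the three detection grids are 28×28, 14×14, and 7×7, and with 3 anchors per grid cell the flattened inference output has 3·(28² + 14² + 7²) = 3087 rows. The `[1, 3, 7, 7, 85]` tensor you see with `export=True` is just the raw output of the stride-32 layer before flattening. A quick sketch of the arithmetic (assuming the default 3 detection layers and 3 anchors per cell):

```python
# Count the predictions YOLOv5 makes for a 224x224 input.
# Assumes the default 3 detection layers (strides 8, 16, 32)
# and 3 anchors per grid cell.
img_size = 224
strides = [8, 16, 32]
anchors_per_cell = 3

# Grid cells per layer: 28*28 + 14*14 + 7*7 = 784 + 196 + 49 = 1029
grid_cells = sum((img_size // s) ** 2 for s in strides)
total_preds = anchors_per_cell * grid_cells
print(total_preds)  # 3087
```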
I think with the export flag it returns the training-style output, while without the flag it returns the inference output. So you shouldn't set this flag for inference. Please see line 34 at https://github.com/ultralytics/yolov5/blob/master/models/yolo.py
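The branching that comment describes can be sketched roughly like this. This is a simplified illustration of the logic in `Detect.forward` in `models/yolo.py`, not the actual implementation; the real code also applies sigmoids and decodes boxes against the anchor grids before concatenating, which this sketch omits:

```python
import torch

def detect_forward(raw_outputs, training=False, export=False):
    """Simplified sketch of the YOLOv5 Detect head's output branching.

    raw_outputs: one tensor per detection layer, each shaped
    (bs, na, ny, nx, 85), where na is the anchors per grid cell.
    """
    if training or export:
        # Training/export mode: return the raw per-grid tensors,
        # e.g. (1, 3, 7, 7, 85) for the stride-32 layer.
        return raw_outputs
    # Inference mode: flatten each grid and concatenate across layers.
    # (The real Detect also sigmoids and decodes boxes first.)
    flat = [o.view(o.shape[0], -1, o.shape[-1]) for o in raw_outputs]
    return torch.cat(flat, 1), raw_outputs

# Dummy layer outputs for a 224x224 input: grids 28x28, 14x14, 7x7
outs = [torch.zeros(1, 3, s, s, 85) for s in (28, 14, 7)]
print(detect_forward(outs)[0].shape)               # torch.Size([1, 3087, 85])
print(detect_forward(outs, export=True)[0].shape)  # torch.Size([1, 3, 28, 28, 85])
```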