Hello, I fine-tuned 'cascade_rcnn_x101_64x4d_fpn_1x' on my own dataset to detect 4 classes (img_scale: (1280x1024)). After 13 training epochs, the generated checkpoint is about 1 GB, which is much larger than the pretrained model (rcnn_x101_64x4d_fpn_1x: ~500 MB). I then changed img_scale from (1280x1024) to (320x256), but the saved weights are still nearly 900 MB. Is there any way to shrink the saved weights and decrease the inference time during prediction?
Thank you!
It is unrelated to img_scale. mmdetection saves the optimizer's state by default, which is almost the same size as the model's parameters. If you don't need the optimizer state, just set checkpoint_config = dict(interval=1, save_optimizer=False). That should roughly halve the saved .pth size.
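For reference, a minimal sketch of both options. The config line is the one mentioned above; the second snippet is an assumption-based way to slim down checkpoints you have already saved, assuming the checkpoint dict stores the optimizer state under an 'optimizer' key (the filename epoch_13.pth is just a placeholder):

```python
# In your config file: still save a checkpoint every epoch,
# but drop the optimizer state from the .pth file.
checkpoint_config = dict(interval=1, save_optimizer=False)
```

```python
import torch

# Sketch for checkpoints that were already saved with the optimizer state.
# mmdetection checkpoints are plain dicts; the exact key names ('meta',
# 'state_dict', 'optimizer') are an assumption here.
ckpt = torch.load('epoch_13.pth', map_location='cpu')  # placeholder path
ckpt.pop('optimizer', None)   # remove the optimizer state if present
torch.save(ckpt, 'epoch_13_slim.pth')
```

Note that dropping the optimizer state only reduces file size; it does not change the network, so inference time stays the same.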
It does work! Thank you very much!
If the problem has been solved, please close the issue.
Feel free to reopen.