In your training config you use norm_eval=True to freeze BN. Why not set normalize=dict(type='BN', frozen=True) instead? Is there any difference?
frozen: stops gradient updates to the norm layers' affine parameters
norm_eval: stops updating the moving-average statistics in the norm layers
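A minimal PyTorch sketch of the two behaviors (an illustration, not mmdetection's actual implementation): freezing only the affine parameters does not stop the running statistics from updating in train mode, while putting the layer in eval mode does.

```python
import torch
import torch.nn as nn

def freeze_bn_params(bn: nn.BatchNorm2d) -> None:
    # "frozen": stop gradient updates to the affine weight/bias
    for p in bn.parameters():
        p.requires_grad = False

def set_bn_eval(bn: nn.BatchNorm2d) -> None:
    # "norm_eval": eval mode stops the running mean/var update
    bn.eval()

bn = nn.BatchNorm2d(3)
x = torch.randn(4, 3, 8, 8)

# With only frozen params, running stats still change in train mode.
freeze_bn_params(bn)
bn.train()
before = bn.running_mean.clone()
bn(x)
stats_changed = not torch.allclose(before, bn.running_mean)

# With norm_eval (eval mode), running stats stay fixed.
set_bn_eval(bn)
before = bn.running_mean.clone()
bn(x)
stats_fixed = torch.allclose(before, bn.running_mean)
```

So the two flags are independent: one controls gradients, the other controls the batch-statistics bookkeeping.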
Why do we need to stop updating the moving-average statistics in norm layers during training? As far as I know, BN layers record moving-average statistics during training so they can be used at test time.