Does PyTorch 1.0 support Synchronized BatchNorm, and does the FrozenBatchNorm in this code serve the same function as Synchronized BN?
Also, why use FrozenBatchNorm at all? The ResNet backbones in other third-party Faster R-CNN implementations don't have this feature.
Hi,
PyTorch 1.0 currently doesn't support Synchronized Batch Norm, but there are ongoing discussions about how to support it; see for example https://github.com/pytorch/pytorch/issues/2584 and https://github.com/pytorch/pytorch/issues/12198
Because that discussion was still ongoing, we decided to follow the Detectron implementation and freeze the batch norm statistics during training, which avoids the poor statistics you get when training with small per-GPU batch sizes.
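To make the frozen behavior concrete, here is a minimal pure-Python sketch of the per-channel affine transform that a Detectron-style FrozenBatchNorm applies. It is an illustration of the math only (one value per channel, no tensors); the function name and argument layout are my own, not the repo's API:

```python
import math

def frozen_batch_norm(x, running_mean, running_var, gamma, beta, eps=1e-5):
    """Apply batch norm with *fixed* statistics (illustrative sketch).

    x: one value per channel (spatial dims omitted for clarity).
    running_mean / running_var: pretrained buffers, never updated from
    the current batch -- this is the whole point of FrozenBatchNorm:
    the layer becomes a constant per-channel affine transform.
    """
    return [
        gamma[c] * (x[c] - running_mean[c]) / math.sqrt(running_var[c] + eps)
        + beta[c]
        for c in range(len(x))
    ]

# With mean=2, var=4, gamma=3, beta=1 and eps=0, an input of 4 maps to
# 3 * (4 - 2) / 2 + 1 = 4.0, regardless of what else is in the batch.
out = frozen_batch_norm([4.0], [2.0], [4.0], [3.0], [1.0], eps=0.0)
```

Because the statistics are constants, the output for a given input never depends on the batch, so small batches cause no trouble.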
A possible alternative for now would be to train using GroupNorm, which makes training with small batches possible because its statistics are computed per sample rather than per batch.
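For intuition on why GroupNorm is batch-size independent, here is a simplified pure-Python sketch of its computation (one value per channel, spatial dimensions folded away; the function is my own illustration, not PyTorch's `nn.GroupNorm` implementation):

```python
import math

def group_norm(x, num_groups, gamma, beta, eps=1e-5):
    """Normalize channels within groups of a *single* sample (sketch).

    Mean and variance are computed over each group of channels inside
    one sample, so the result never depends on the batch size -- this
    is why GroupNorm trains well even with 1-2 images per GPU.
    """
    n_channels = len(x)
    group_size = n_channels // num_groups
    out = [0.0] * n_channels
    for g in range(num_groups):
        chans = range(g * group_size, (g + 1) * group_size)
        mean = sum(x[c] for c in chans) / group_size
        var = sum((x[c] - mean) ** 2 for c in chans) / group_size
        for c in chans:
            out[c] = gamma[c] * (x[c] - mean) / math.sqrt(var + eps) + beta[c]
    return out

# Two groups of two channels: each group is normalized independently.
out = group_norm([1.0, 3.0, 5.0, 7.0], 2, [1.0] * 4, [0.0] * 4, eps=0.0)
```

In PyTorch itself this corresponds to dropping in `torch.nn.GroupNorm(num_groups, num_channels)` where a `BatchNorm2d` layer would otherwise be.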
Added now in PyTorch master via https://github.com/pytorch/pytorch/pull/14267
For documentation, see: https://pytorch.org/docs/master/nn.html#torch.nn.SyncBatchNorm