Hi,
Were the pretrained models reported by torchvision trained with the same hyper-parameters as https://github.com/pytorch/examples/blob/master/imagenet/main.py? I used the default hyper-parameters to train mobilenet_v2, but my results were much worse than reported.
Thanks
You can find the training scripts for all torchvision models here. For classification, this is the script you want:
https://github.com/pytorch/vision/blob/master/references/classification/train.py
As @pmeier mentioned, we also provide the default hyperparameters for the pre-trained models in torchvision under the references folder.
For mobilenet_v2 we used the recipe from https://github.com/pytorch/vision/tree/master/references/classification#mobilenetv2:
python -m torch.distributed.launch --nproc_per_node=8 --use_env train.py \
    --model mobilenet_v2 --epochs 300 --lr 0.045 --wd 0.00004 \
    --lr-step-size 1 --lr-gamma 0.98
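If it helps to sanity-check what that schedule works out to, here is a rough sketch, assuming the flags map onto torch.optim.lr_scheduler.StepLR the way train.py wires them up, and that momentum stays at the script's default of 0.9 (that value is not spelled out in the command above):

import torch

# Sketch of the schedule implied by --lr 0.045 --lr-step-size 1 --lr-gamma 0.98:
# the learning rate starts at 0.045 and is multiplied by 0.98 after every epoch.
params = [torch.zeros(1, requires_grad=True)]  # stand-in for model.parameters()
optimizer = torch.optim.SGD(params, lr=0.045, momentum=0.9, weight_decay=4e-5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.98)

for epoch in range(300):
    # ... train one epoch ...
    scheduler.step()

print(optimizer.param_groups[0]["lr"])  # roughly 1.0e-4 after 300 epochs

So the schedule is a slow exponential decay over 300 epochs, which is quite different from the default step schedule in examples/imagenet/main.py; that is the main reason results diverge if you train mobilenet_v2 with the defaults from that script.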
Let us know if you have further questions.
Hello,
I have a similar issue. I am using pretrained AlexNet and VGG models from torchvision for a scientific paper and, in order to interpret my results, I would like to know how the models were trained. I have checked here as suggested, but I am unable to find any reference to VGG19, its shallower variants, and AlexNet. Are they published anywhere else?
Thank you
I expect these models were trained with the default parameters given in train.py, but I can't be sure. @fmassa?
AlexNet and VGG were trained a long time ago by @colesbury. I think they might follow the same procedure as ResNet (and thus the default parameters), but I'm not 100% sure. The original PR adding them is https://github.com/pytorch/vision/pull/23
Models with batch normalization were trained with the default parameters. Models without batch normalization were trained with an initial learning rate of 0.01 (i.e. 1/10th the default learning rate).
See https://github.com/pytorch/examples/tree/master/imagenet#training
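In other words (a minimal sketch, assuming the step schedule documented for examples/imagenet, where the learning rate is divided by 10 every 30 epochs):

# Sketch only: per the examples/imagenet README, the lr decays by 10x every 30 epochs.
def lr_at_epoch(initial_lr, epoch, step=30, gamma=0.1):
    return initial_lr * gamma ** (epoch // step)

# Models with batch normalization start from the default lr of 0.1;
# models without it (AlexNet, plain VGG) start from 0.01.
for epoch in (0, 30, 60, 89):
    print(epoch, lr_at_epoch(0.01, epoch))  # 0.01, 0.001, 1e-4, 1e-4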
Should we add this to the classification references README? If yes, I could send a PR tomorrow.
@pmeier yes please, if you could send a PR improving the README that would be great.