Vision: ImageNet pre-trained model code and hyper-parameters

Created on 14 Jul 2020 · 1 comment · Source: pytorch/vision

Hi,

Is the code used to train the torchvision models (especially ResNet) on ImageNet available? What hyper-parameters were used? Did you use any specific techniques (dropout, weight decay, particular augmentations such as Cutout, etc.) during training?

Thank you very much

Labels: reference scripts, question

Most helpful comment

Hi @Jobanan, you can find the training scripts for all pre-trained models here:

https://github.com/pytorch/vision/tree/master/references

The hyper-parameters are listed in the README, but for everything else you will have to dig into the actual training script. For example, we used the following image transformation during training:

https://github.com/pytorch/vision/blob/0344603ece20ba5e4086aa8c49b2cce0829aefe6/references/classification/train.py#L96-L103
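
For orientation, here is a minimal sketch of what that linked transform pipeline typically looks like. The crop size and normalization constants below are the common ImageNet defaults and are assumptions on my part, so check the linked lines in train.py for the exact values:

```python
import torchvision.transforms as T

# Sketch of the training-time preset assumed to match the linked code:
# random resized crop, horizontal flip, tensor conversion, normalization.
train_transform = T.Compose([
    T.RandomResizedCrop(224),         # random scale/aspect-ratio crop to 224x224
    T.RandomHorizontalFlip(),         # 50% chance of a horizontal flip
    T.ToTensor(),                     # PIL image -> float tensor in [0, 1]
    T.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet channel statistics
                std=[0.229, 0.224, 0.225]),
])
```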

Let me know if you have further questions.
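
To make the hyper-parameter side of the question concrete, here is a rough sketch of the recipe those README values translate to. The specific numbers below (SGD with lr 0.1, momentum 0.9, weight decay 1e-4, 90 epochs, learning rate divided by 10 every 30 epochs) are my reading of the classification reference defaults, so treat them as assumptions and verify against the README and train.py:

```python
import torch
import torchvision

# Train a ResNet-50 from scratch on ImageNet (sketch only; the real script
# also handles distributed training, checkpointing, and evaluation).
model = torchvision.models.resnet50()
criterion = torch.nn.CrossEntropyLoss()

# Assumed defaults from the classification reference README.
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4
)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(90):
    # ... one pass over the ImageNet training set using the transform above ...
    lr_scheduler.step()
```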


