Replace `*_percent_check` with `limit_*_batches` and redesign `overfit_pct` Trainer arguments
Over the past few days I have been struggling to use these parameters in my tasks. For example, I want to run an overfitting test on a model for an image classification problem: take a few (say, 2) batches from my train dataset, train several epochs on them, and then assert 100% accuracy on those train batches.
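Roughly, the test I have in mind looks like this (a sketch; `LitClassifier` is a placeholder for any image-classification `LightningModule`):

```python
import pytorch_lightning as pl

model = LitClassifier()  # placeholder: any image-classification LightningModule

# Train for many epochs on a tiny fraction of the training data.
trainer = pl.Trainer(overfit_pct=0.01, max_epochs=100)
trainer.fit(model)

# Then evaluate on those same train batches and assert ~100% accuracy.
# This is the step that currently requires manually swapping dataloaders.
trainer.test(model)
```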
The problems I stumbled on are the following:
1. `overfit_pct` documentation is misleading. Recently a clarification was made that it sets the `*_percent_check` parameters to a given value, but it still doesn't actually help to overfit a model, since you can't simply run `trainer.test()` or `trainer.run_evaluation()` after `trainer.fit(model)` without manipulating the model's dataloaders.
2. If `val_percent_check` is too small (which can easily happen if you use `overfit_pct` with a small training dataset in mind), the validation loop is silently skipped and you run into an exception in `model.validation_epoch_end` when it tries to accumulate losses over batches. Handling the latter is reasonably on me, since I override this method, but it would be much nicer if such an unexpected loop skip were caught by PyTorch Lightning. You guys are great and I want to love your project even more!
3. `train_percent_check` doesn't guarantee training on the same small part of the training dataset every epoch, because it is best practice to shuffle your training data each epoch. As a result, new batches are formed every epoch and thus no overfitting :( (a workaround sketch follows below).

My suggestion is that `overfit_pct` is either removed or actually redesigned to help test overfitting, i.e. replacing the validation or test loader with the train loader, and ensuring that the training dataset isn't shuffled so the same batches are trained on every epoch. I realize that you cannot prohibit shuffling when simply using the `*_percent_check` parameters: there are experiments where you would like to see how your model performs when training on only a portion of the data. Therefore such a prohibition is valid only for overfit mode.
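In the meantime, the workaround I use is to pin the subset down myself (a sketch, assuming a map-style `train_dataset`; `batch_size` is just an example value):

```python
from torch.utils.data import DataLoader, Subset

batch_size = 32  # example value; train_dataset is assumed to exist

# Take exactly two batches' worth of samples, always the same ones.
n_overfit = 2 * batch_size
fixed_subset = Subset(train_dataset, list(range(n_overfit)))

# shuffle=False so every epoch trains on identical batches; validate
# on the same subset to check that the model can memorize it.
train_loader = DataLoader(fixed_subset, batch_size=batch_size, shuffle=False)
val_loader = DataLoader(fixed_subset, batch_size=batch_size, shuffle=False)
```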
Couldn't agree more with the shuffling part: `overfit_pct` doesn't control the `shuffle` flag of the train_dataloader. I think that flag should really be set by Lightning, not by the user, to make `overfit_pct` the do-it-all switch it claims to be for overfitting (one possible approach is sketched below).
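One way Lightning could do this internally (just a sketch, not the actual implementation) is to rebuild the user's loader with a `SequentialSampler`:

```python
from torch.utils.data import DataLoader, SequentialSampler

def force_no_shuffle(loader: DataLoader) -> DataLoader:
    # Rebuild the loader with a deterministic sampler so overfit mode
    # sees the same batches every epoch, regardless of the shuffle
    # setting the user passed to their DataLoader.
    return DataLoader(
        loader.dataset,
        batch_size=loader.batch_size,
        sampler=SequentialSampler(loader.dataset),
        num_workers=loader.num_workers,
        collate_fn=loader.collate_fn,
        drop_last=loader.drop_last,
    )
```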
Fixed!
Will be available in 0.8.0
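For reference, the redesigned flags look roughly like this (a sketch based on the 0.8.0 changes; check the docs for exact semantics):

```python
from pytorch_lightning import Trainer

# limit_*_batches replace the old *_percent_check arguments; they take
# a float fraction of batches or an int count of batches.
trainer = Trainer(limit_train_batches=0.25, limit_val_batches=5)

# overfit_batches replaces overfit_pct: train on a fixed set of batches
# and reuse them for validation, with shuffling turned off.
trainer = Trainer(overfit_batches=2, max_epochs=100)
```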
All of these were fixed in #2213 and #2220.
Thanks for the suggestions! Keep them coming.