Since the trainer flag reload_dataloaders_every_epoch reloads only the training dataloader, as opposed to the validation and test dataloaders (as implemented here), wouldn't it be better to change the name to reload_dataloader_every_epoch?
Hi! Thanks for your contribution, great first issue!
Hi, it also reloads the other dataloaders. See a few lines down from your link:
https://github.com/PyTorchLightning/pytorch-lightning/blob/f63fec9323c319720d78b452e9fe84b97ce7644e/pytorch_lightning/trainer/training_loop.py#L264
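For context, the flag is just passed to the Trainer; a minimal sketch, where model stands in for any LightningModule:

import pytorch_lightning as pl

# with this flag, train_dataloader() and val_dataloader() are
# called again at the start of every epoch
trainer = pl.Trainer(reload_dataloaders_every_epoch=True)
trainer.fit(model)  # model: placeholder LightningModule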
Hi @awaelchli,
Thanks a lot for the clarification. Is there a way I can force it to reload only the training set?
The only way I currently see is this:
# 1. Set reload_dataloaders_every_epoch=True
# 2. Store a reference to your val_dataloader in the LightningModule and return it

def train_dataloader(self):
    # init the train dataloader as usual; it is recreated every epoch
    ...
    return train_dataloader

def val_dataloader(self):
    # return the cached reference instead of recreating the dataset
    if getattr(self, "val_dataloader_ref", None) is not None:
        return self.val_dataloader_ref
    # init the dataloader as usual on the first call
    ...
    self.val_dataloader_ref = val_dataloader
    return val_dataloader
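For completeness, here is a self-contained sketch of that caching pattern; the module name, layer sizes, and random-tensor datasets are made up purely for illustration:

import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class CachedValModel(pl.LightningModule):  # hypothetical example module
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)
        self.val_dataloader_ref = None  # cache so the val set is built only once

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)

    def train_dataloader(self):
        # rebuilt every epoch because reload_dataloaders_every_epoch=True
        dataset = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
        return DataLoader(dataset, batch_size=16)

    def val_dataloader(self):
        # return the cached loader instead of recreating the dataset
        if self.val_dataloader_ref is not None:
            return self.val_dataloader_ref
        dataset = TensorDataset(torch.randn(32, 8), torch.randn(32, 1))
        self.val_dataloader_ref = DataLoader(dataset, batch_size=16)
        return self.val_dataloader_ref

trainer = pl.Trainer(max_epochs=3, reload_dataloaders_every_epoch=True)
trainer.fit(CachedValModel())

Initializing self.val_dataloader_ref in __init__ avoids the AttributeError that a bare if self.val_dataloader_ref: check would raise on the first call.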
Great! Thanks a lot for the idea!
As far as this issue is concerned, I would suggest closing it.
You're welcome, and let me know if you run into further problems/questions.
Thanks for your consideration @awaelchli!