pytorch-lightning: Nomenclature: reload dataloaders every epoch

Created on 7 Nov 2020 · 7 comments · Source: PyTorchLightning/pytorch-lightning

Simple nomenclature fix:

Since the trainer flag reload_dataloaders_every_epoch reloads only the training dataloader, rather than both the training and validation dataloaders (as implemented here), wouldn't it be better to change the name to reload_dataloader_every_epoch?
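
For reference, the flag in question is passed to the Trainer. A minimal sketch, assuming the Lightning API current at the time of this issue (~1.0, before the flag was replaced by reload_dataloaders_every_n_epochs):

from pytorch_lightning import Trainer

# reload_dataloaders_every_epoch re-calls the train_dataloader() hook at the
# start of each epoch; the validation dataloader is not reloaded this way
trainer = Trainer(max_epochs=10, reload_dataloaders_every_epoch=True)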

Labels: working as intended, question

All 7 comments

Hi! Thanks for your contribution, great first issue!

Hi @awaelchli,

Thanks a lot for the clarification. Is there a way I can force it to reload only the training set?

The only way I currently see is this:

# 1. Set reload_dataloaders_every_epoch=True on the Trainer
# 2. Cache a reference to your val_dataloader in the LightningModule and return it

def train_dataloader(self):
    # init the train dataloader as usual; this hook is re-run every epoch
    ...
    return train_dataloader

def val_dataloader(self):
    # return the cached reference instead of recreating the dataset;
    # getattr avoids an AttributeError on the first call, before the cache exists
    if getattr(self, "val_dataloader_ref", None) is not None:
        return self.val_dataloader_ref

    # first call: init the dataloader as usual, then cache it
    ...
    self.val_dataloader_ref = val_dataloader
    return val_dataloader
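
Putting the pattern together, here is a self-contained sketch; the model and the random tensor datasets are hypothetical stand-ins, and it assumes the late-2020 Lightning API where reload_dataloaders_every_epoch still existed:

import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)
        self.val_dataloader_ref = None  # cache for the workaround above

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", torch.nn.functional.mse_loss(self.layer(x), y))

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)

    def train_dataloader(self):
        # rebuilt at the start of every epoch because of the Trainer flag
        ds = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
        return DataLoader(ds, batch_size=16)

    def val_dataloader(self):
        # built once on the first call, then served from the cached reference
        if self.val_dataloader_ref is None:
            ds = TensorDataset(torch.randn(16, 8), torch.randn(16, 1))
            self.val_dataloader_ref = DataLoader(ds, batch_size=16)
        return self.val_dataloader_ref


trainer = pl.Trainer(max_epochs=3, reload_dataloaders_every_epoch=True)
trainer.fit(LitModel())

With this setup only train_dataloader() is re-executed each epoch, while the cached self.val_dataloader_ref keeps the validation set fixed.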

Great! Thanks a lot for the idea!
As far as this issue is concerned, I would suggest closing it.

You're welcome, and let me know if you run into further problems/questions.

Thanks for your consideration @awaelchli!
