Pytorch-lightning: How to properly fix random seed with pytorch lightning?

Created on 22 Apr 2020 · 5 Comments · Source: PyTorchLightning/pytorch-lightning

What is your question?

Hello guys,
I wonder how to fix the random seed so that my experiments are reproducible.

Right now I'm calling this function before training starts:

import os
import random

import numpy as np
import torch

def seed_everything(seed=42):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)       # seed all GPUs
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

But it doesn't work.
I run training in DDP mode, in case that matters.

Thanks in advance!

What's your environment?

  • OS: Ubuntu 18.04
  • Packaging: pip
  • Version: 0.7.1


All 5 comments

I also have the same problem without DDP mode.

What's your environment?

  • OS: Ubuntu 18.04
  • Packaging: pip
  • Version: 0.7.3

Could you set num_workers to 0 to see if it is related to the data loading? I had this problem before with plain PyTorch, and I think I solved it by also setting the seed in the data loading code, because each worker subprocess gets its own seed.
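
A minimal sketch of what seeding the DataLoader workers could look like (the dataset, batch size, and worker count here are just placeholders for illustration):

import random
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

def seed_worker(worker_id):
    # derive a per-worker seed from the main-process seed set by torch.manual_seed
    worker_seed = torch.initial_seed() % 2**32
    random.seed(worker_seed)
    np.random.seed(worker_seed)

dataset = TensorDataset(torch.arange(100).float())  # placeholder dataset
loader = DataLoader(dataset, batch_size=8, num_workers=4, worker_init_fn=seed_worker)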

@awaelchli tried and failed

Is there a chance you could share a Colab with a minimal example? If not, I will try to reproduce it with the pl_examples this weekend when I get to it.

In my case, it was caused by dropout.
Seeding everything again in the spawned process before training basically fixed the problem.
You can do this in the on_train_start hook, as shown below.
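
A minimal sketch of that idea, assuming the seed_everything function from the question is available in scope (MyModel is just a placeholder name):

import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    def on_train_start(self):
        # re-seed inside the process spawned by DDP so dropout etc. stay reproducible
        seed_everything(42)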

