Pytorch-lightning: How to return a final val loss in trainer?

Created on 25 May 2020 · 4 comments · Source: PyTorchLightning/pytorch-lightning

What is your question?

Most optimisation packages, e.g. Ray Tune / Hyperopt, expect the training loop to return a final accuracy or loss so the optimiser can decide what to try next.

How do I do this with the Trainer in PyTorch Lightning?
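
For context, this is roughly what such a search loop expects from the training code. A minimal sketch with Hyperopt (`train_and_validate` is just a placeholder for whatever runs the Trainer):

```python
# Sketch of a Hyperopt search: each trial must hand back a single scalar.
# `train_and_validate` is a hypothetical function standing in for the
# Lightning training run whose final val loss I need to get hold of.
from hyperopt import fmin, tpe, hp, STATUS_OK

def objective(params):
    val_loss = train_and_validate(params)   # <- the value I need from the Trainer
    return {'loss': val_loss, 'status': STATUS_OK}

best = fmin(fn=objective,
            space={'lr': hp.loguniform('lr', -10, -2)},
            algo=tpe.suggest,
            max_evals=20)
```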

What's your environment?

  • OS: Linux
  • Packaging: pip
  • Version: 0.7.6

Labels: Priority: P0, enhancement, question

Most helpful comment

let's formally support this somehow so it doesn't have to be hacked around :)

All 4 comments

I have the same issue too: I want to return "val_loss" from validation_step and "avg_val_loss" from validation_epoch_end.

https://github.com/PyTorchLightning/pytorch-lightning/issues/321

```python
import torch

def validation_end(self, outputs):
    # aggregate the per-batch metrics returned by validation_step
    avg_loss = torch.stack([x['batch_val_loss'] for x in outputs]).mean()
    avg_acc = torch.stack([x['batch_val_acc'] for x in outputs]).mean()

    return {
        'val_loss': avg_loss,
        'val_acc': avg_acc,
        'progress_bar': {'val_loss': avg_loss, 'val_acc': avg_acc},
    }
```
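
If that works as intended, the aggregated value should also be retrievable from the trainer after training finishes. A rough sketch (the exact attribute may differ between Lightning versions, and `MyLightningModule` is a placeholder, so treat this as an assumption rather than documented behaviour):

```python
# Sketch only: read the last logged 'val_loss' back off the trainer after
# fit() and hand it to the search library.
import pytorch_lightning as pl

model = MyLightningModule(hparams)      # placeholder LightningModule
trainer = pl.Trainer(max_epochs=10)
trainer.fit(model)

final_val_loss = float(trainer.callback_metrics['val_loss'])
```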

I managed to achieve this using callbacks and loggers, but it doesn't work with the ddp backend when doing distributed training. I think I need to include a manual gather step in there for that.
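
For reference, the callback approach looks roughly like this. It's a sketch under the assumption that `on_validation_end` fires after the validation metrics have been stored on the trainer (hook names and metric storage have shifted between versions), and `MetricsRecorder` is just a name I made up:

```python
from pytorch_lightning.callbacks import Callback

class MetricsRecorder(Callback):
    """Hypothetical helper (not a built-in): record val_loss after each validation run."""

    def __init__(self):
        self.val_losses = []

    def on_validation_end(self, trainer, pl_module):
        metrics = trainer.callback_metrics
        if 'val_loss' in metrics:
            self.val_losses.append(float(metrics['val_loss']))

# usage sketch:
# recorder = MetricsRecorder()
# trainer = pl.Trainer(callbacks=[recorder])
# trainer.fit(model)
# final_val_loss = recorder.val_losses[-1]
```

Under ddp each process only sees its own shard of the validation set, so the manual gather step would presumably be something like a torch.distributed.all_reduce of the summed loss (and sample count) inside validation_epoch_end before returning the average.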

let's formally support this somehow so it doesn't have to be hacked around :)

