Most optimisation packages (e.g. Ray Tune, Hyperopt) require the train loop to return a final accuracy for the optimiser to decide what to try next.
How do I do this with the Trainer module in PyTorch Lightning?
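The pattern the question describes can be sketched in plain Python, with a hypothetical `train_fn` standing in for a full Lightning training run (all names here are illustrative, not a real Ray Tune or Hyperopt API):

```python
import random

def train_fn(lr):
    """Hypothetical stand-in for a full training run: returns a final
    validation metric for the given hyperparameter."""
    # Pretend lower learning rates score better, plus a little noise.
    return 1.0 - lr + random.uniform(-0.01, 0.01)

# Toy random-search optimiser: each trial calls the train loop and
# uses the returned metric to decide which configuration was best.
random.seed(0)
trials = [{'lr': random.choice([0.1, 0.01, 0.001])} for _ in range(5)]
results = [(cfg, train_fn(**cfg)) for cfg in trials]
best_cfg, best_score = max(results, key=lambda pair: pair[1])
```

The question is exactly about that return value: the Lightning `Trainer` drives the loop itself, so the metric has to be surfaced some other way.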
I have the same issue: I want to return "val_loss" from validation_step and "avg_val_loss" from validation_epoch_end.
https://github.com/PyTorchLightning/pytorch-lightning/issues/321
```python
def validation_end(self, outputs):
    avg_loss = torch.stack([x['batch_val_loss'] for x in outputs]).mean()
    avg_acc = torch.stack([x['batch_val_acc'] for x in outputs]).mean()
    return {
        'val_loss': avg_loss,
        'val_acc': avg_acc,
        'progress_bar': {'val_loss': avg_loss, 'val_acc': avg_acc},
    }
```
I managed to achieve this using callbacks and loggers, but it doesn't work with the ddp backend when doing distributed training. I think I need to include a manual gather step in there for that.
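A minimal sketch of that callback idea, in plain Python rather than the actual Lightning Callback API (the class and method names here are illustrative stand-ins): the training loop invokes the callback at the end of each validation epoch, and the tuner reads the recorded metrics back afterwards.

```python
class MetricsCallback:
    """Records logged metrics at each validation end so they can be
    read back after training (stand-in for a Lightning Callback)."""
    def __init__(self):
        self.metrics = []

    def on_validation_end(self, logged_metrics):
        self.metrics.append(dict(logged_metrics))

# Toy training loop that drives the callback once per epoch.
cb = MetricsCallback()
for epoch, loss in enumerate([0.9, 0.5, 0.3]):
    cb.on_validation_end({'epoch': epoch, 'val_loss': loss})

final_val_loss = cb.metrics[-1]['val_loss']  # value a tuner would consume
```

Under ddp each process would record only its own shard's metrics, which is why a manual gather (or all-reduce) step is needed before handing the value to the tuner.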
let's formally support this somehow so it doesn't have to be hacked around :)