PyTorch Lightning: Early stopping + checkpoint key

Created on 26 Apr 2020 · 3 comments · Source: PyTorchLightning/pytorch-lightning

Consider updating how we condition early stopping and checkpointing:

return {'early_stop_on': mse_loss, 'checkpoint_on': other_metric}

Instead of:

# early stopping and checkpointing only trigger when 'val_loss' is present
return {'val_loss': val_loss}
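
For context, a minimal sketch of the LightningModule hook these dictionaries would be returned from, assuming the 2020-era validation_step API; mse_loss and other_metric are placeholders for whatever the user computes:

import torch.nn.functional as F
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def validation_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        mse_loss = F.mse_loss(y_hat, y)
        other_metric = y_hat.abs().mean()  # stand-in for any second metric
        # Proposed: independent keys decide early stopping vs. checkpointing
        return {'early_stop_on': mse_loss, 'checkpoint_on': other_metric}
        # Current: both behaviors key off the single 'val_loss' entry
        # return {'val_loss': mse_loss}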
Labels: enhancement, help wanted, won't fix

All 3 comments

In my opinion, all of the configuration for early stopping or model checkpointing should occur in the initialization of the callback object. If we condition on a special key in the step return, users have to change their model to modify the behavior of a callback.
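
For illustration, a sketch of that callback-side configuration. The monitor argument is real on both EarlyStopping and ModelCheckpoint; the other arguments shown (patience, mode, save_top_k) are just common choices, and the callbacks-list wiring reflects the later Trainer API rather than the 2020-era early_stop_callback/checkpoint_callback arguments:

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint

# All monitoring configuration lives on the callback objects,
# so the model's step returns never need to change:
early_stop = EarlyStopping(monitor='mse_loss', patience=3, mode='min')
checkpoint = ModelCheckpoint(monitor='other_metric', mode='max', save_top_k=1)

trainer = Trainer(callbacks=[early_stop, checkpoint])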

As a shortcut, could the Trainer flags accept the key directly? For example:

early_stop_callback='val_loss'
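
Presumably the string would just be sugar for constructing the callback with that monitor key. A sketch of the intended equivalence, using the 2020-era early_stop_callback Trainer argument (the string form itself is the hypothetical part):

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

# Hypothetical shorthand from the comment above (not an actual Trainer API):
# trainer = Trainer(early_stop_callback='val_loss')

# ...which would expand to the explicit callback form:
trainer = Trainer(early_stop_callback=EarlyStopping(monitor='val_loss'))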

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
