Following the introduction of the new callback system, here are a few follow-up tasks proposed by people who participated in #889 and #896.
[ ] Consolidate the model hooks and the callbacks system so the calls happen at the same moment: https://github.com/PyTorchLightning/pytorch-lightning/pull/889#issuecomment-590634896
[ ] Fix on_train_begin() being called in the early stopping callbacks: https://github.com/PyTorchLightning/pytorch-lightning/pull/889#discussion_r383052267
[ ] Assert that pl_module and trainer are the expected object instances in the callback system training tests: https://github.com/PyTorchLightning/pytorch-lightning/pull/889#discussion_r383051932
[ ] Incorporate existing callbacks into the new callback system (a sketch follows this list).
[ ] Edit the changelog
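For the "incorporate existing callbacks" item, here is a minimal sketch of what porting an existing callback (a toy early-stopping check) onto the new `Callback` base class might look like. The `(trainer, pl_module)` hook signature follows #889, but the import path, `trainer.callback_metrics`, and `trainer.should_stop` are assumptions about the Trainer surface and may differ from the actual implementation.

```python
from pytorch_lightning.callbacks import Callback


class ToyEarlyStopping(Callback):
    """Illustrative port of early-stopping logic onto the new callback hooks."""

    def __init__(self, monitor="val_loss", patience=3):
        self.monitor = monitor
        self.patience = patience
        self.wait = 0
        self.best = float("inf")

    def on_validation_end(self, trainer, pl_module):
        # Read the monitored metric from the trainer; the attribute name is an assumption.
        current = trainer.callback_metrics.get(self.monitor)
        if current is None:
            return
        if current < self.best:
            self.best = current
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                # Signal the trainer to stop; the real stopping mechanism may differ.
                trainer.should_stop = True
```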
ping @Borda @williamFalcon @jeremyjordan @awaelchli
just waiting for #889 to be merged and I'll pick up the work here
merged!
@hadim do you have a direct use case for on_fit_start versus on_train_start? We actually don't have access to the model at on_fit_start, and I'm wondering if we can simplify the callbacks by removing the on_fit_start and on_fit_end events. I don't think anything important happens between on_fit_start and on_train_start that would require both.
Also, I appreciate the desire to keep things symmetric, but I'd like to remove the pl_module arg from on_init_start and on_init_end, since we'd be asking the user to include an arg they wouldn't be able to access (we pass None as the value internally).
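To make the proposal concrete, here is a hedged sketch of what the `Callback` base class could look like with on_fit_start/on_fit_end dropped and pl_module removed from the init hooks. This is an illustration of the suggestion above, not the merged API.

```python
class Callback:
    """Proposal sketch for the callback hook surface."""

    def on_init_start(self, trainer):
        """Called at the start of Trainer.__init__; no model is available yet."""

    def on_init_end(self, trainer):
        """Called at the end of Trainer.__init__; still no model is available."""

    def on_train_start(self, trainer, pl_module):
        """Called when training begins; the LightningModule is attached by now."""

    def on_train_end(self, trainer, pl_module):
        """Called when training ends."""

    def on_validation_start(self, trainer, pl_module):
        """Called when a validation loop begins (it runs inside the training loop)."""

    def on_validation_end(self, trainer, pl_module):
        """Called when a validation loop ends."""
```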
I agree.
I thought on_train_start/on_train_end were at the same level as on_validation_start/on_validation_end, but they actually encompass the validation hooks, which makes on_fit_start and on_fit_end kind of useless in that case.
As for pl_module, removing it makes sense if you are 100% sure that the model will never be available at that moment, even in future PL releases or in a more complex PL module. I would rather keep an unused argument that avoids breaking backward compatibility later than the opposite, but I am not against removing it either.
I am fairly confident given that we're talking about the init method of the Trainer and not of the actual PL module. I will go ahead and make the change, thanks.
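For reference, a small usage sketch of why the init hooks can only receive the trainer: the callback is handed to Trainer.__init__ before any LightningModule exists, and the model only appears once .fit(model) is called. `MyLightningModule` is a placeholder defined elsewhere, and the `callbacks` argument on the Trainer follows #889.

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import Callback


class HookOrderCallback(Callback):
    def on_init_start(self, trainer):
        print("Trainer is initialising; no LightningModule exists yet")

    def on_train_start(self, trainer, pl_module):
        print(f"training starts with {type(pl_module).__name__}")


trainer = Trainer(callbacks=[HookOrderCallback()])  # init hooks fire here
model = MyLightningModule()  # placeholder LightningModule, defined elsewhere
trainer.fit(model)           # train hooks fire here, with the model attached
```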