Pytorch-lightning: Improve callback system

Created on 26 Feb 2020 · 6 comments · Source: PyTorchLightning/pytorch-lightning

Following the new callback system, here are a few things we might want to do, as proposed by the people who participated in #889 and #896.

ping @Borda @williamFalcon @jeremyjordan @awaelchli

Labels: enhancement, help wanted


All 6 comments

Just waiting for #889 to be merged, and then I'll pick up the work here.

merged!

@hadim do you have a direct use case for on_fit_start versus on_train_start? We don't actually have access to the model at on_fit_start, and I'm wondering if we can simplify the callbacks by removing the on_fit_start and on_fit_end events. I don't think anything important happens between on_fit_start and on_train_start that would require both...

Also, I appreciate the desire to keep things symmetric, but I'd like to remove the pl_module arg from on_init_start and on_init_end, since we'd be asking the user to include an arg they wouldn't be able to access (we pass None as the value internally).
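For concreteness, here is a minimal sketch of a custom callback under the hook names discussed in this thread (from #889). Whether on_init_* receives pl_module is exactly what is being debated, so the signatures below are assumptions, not the settled API:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import Callback


class LoggingCallback(Callback):
    """Illustrative callback; hook names follow the #889 callback system."""

    def on_init_start(self, trainer):
        # Fires inside Trainer.__init__: no LightningModule is attached yet,
        # which is the argument for dropping pl_module from this hook.
        print("Trainer is initializing")

    def on_init_end(self, trainer):
        print("Trainer initialized")

    def on_train_start(self, trainer, pl_module):
        # By this point the model has been handed to fit(),
        # so pl_module is available.
        print(f"Starting training of {type(pl_module).__name__}")

    def on_train_end(self, trainer, pl_module):
        print("Training finished")


# Callbacks are passed to the Trainer at construction time.
trainer = Trainer(callbacks=[LoggingCallback()])
```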

I agree.

I thought on_train_start/on_train_end were the equivalent of on_validation_start/on_validation_end, but training actually encompasses validation (sketched below), which makes on_fit_start and on_fit_end kind of useless in that case.

As for pl_module: sure, removing it makes sense if you are 100% sure that the model will never be available at that moment, even in future PL releases or in a complex PL module. I would rather keep an unused argument and preserve backward compatibility in the future than the opposite. I am not against removing it, but I am fine keeping it too.
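One way to check that nesting empirically is to record the order in which hooks fire during a fit; a sketch under the same hypothetical signature assumptions as the example above:

```python
from pytorch_lightning.callbacks import Callback


class HookOrderCallback(Callback):
    """Appends each hook name as it fires, so that after trainer.fit(model)
    the nesting described above can be inspected: on_fit_start wraps
    on_train_start, validation runs inside the train loop, and on_fit_end
    closes the sequence."""

    def __init__(self):
        self.calls = []

    def on_fit_start(self, trainer):
        self.calls.append("on_fit_start")

    def on_train_start(self, trainer, pl_module):
        self.calls.append("on_train_start")

    def on_validation_start(self, trainer, pl_module):
        self.calls.append("on_validation_start")

    def on_validation_end(self, trainer, pl_module):
        self.calls.append("on_validation_end")

    def on_train_end(self, trainer, pl_module):
        self.calls.append("on_train_end")

    def on_fit_end(self, trainer):
        self.calls.append("on_fit_end")
```

If on_fit_start and on_train_start always fire back to back with nothing in between, the on_fit_* pair adds little over on_train_*, which is the simplification being proposed.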

I am fairly confident given that we're talking about the init method of the Trainer and not of the actual PL module. I will go ahead and make the change, thanks.
