"Saving latest checkpoint..." warning appears regardless of whether a ModelCheckpoint exists or save_last is set to True
This might confuse a user into thinking the last checkpoint was saved when it was not.
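For reference, a minimal sketch that should reproduce the misleading log, assuming the 1.0-era API (`checkpoint_callback=False` disables the default `ModelCheckpoint`; `TinyModel` is just a stand-in):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

train_data = DataLoader(TensorDataset(torch.randn(8, 4), torch.randn(8, 1)), batch_size=4)

# no ModelCheckpoint is configured at all, yet "Saving latest checkpoint.."
# is still logged when fit() finishes
trainer = pl.Trainer(checkpoint_callback=False, logger=False, max_epochs=1)
trainer.fit(TinyModel(), train_data)
```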
The proposed change to `check_checkpoint_callback` in `training_loop.py`:

```python
def check_checkpoint_callback(self, should_check_val, force_save=False):
    model = self.trainer.get_model()

    # when no val loop is present or fast-dev-run still need to call checkpoints
    # TODO bake this logic into the checkpoint callback
    should_activate = not is_overridden('validation_step', model) and not should_check_val
    if should_activate or force_save:
        checkpoint_callbacks = [c for c in self.trainer.callbacks if isinstance(c, ModelCheckpoint)]
        # only announce the save when a callback will actually write a last checkpoint
        if any(c.save_last for c in checkpoint_callbacks):
            rank_zero_warn('Saving latest checkpoint..')
        [c.on_validation_end(self.trainer, model) for c in checkpoint_callbacks]
```

With this change, the message is only logged when at least one `ModelCheckpoint` with `save_last=True` is registered.
Why not just remove the log line from `training_loop` and defer logging about saving the latest checkpoint to the checkpoint callback? That seems simpler to me.
Because the logic to save the last checkpoint is inside `on_validation_end`, so the message would appear after the first validation run.
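To illustrate, a simplified, hypothetical sketch of that placement: `on_validation_end` fires after every validation loop, so a log sitting next to the save-last logic would print from the first validation epoch onward, not only for the final checkpoint (`_save_last_checkpoint` here is a stand-in for the callback's internal save step):

```python
from pytorch_lightning.callbacks import Callback
from pytorch_lightning.utilities import rank_zero_warn

class ModelCheckpoint(Callback):
    def on_validation_end(self, trainer, pl_module):
        # this hook runs after *every* validation loop, so the message
        # would be logged during training, not just at train end
        if self.save_last:
            rank_zero_warn('Saving latest checkpoint..')
            self._save_last_checkpoint(trainer, pl_module)  # hypothetical helper
```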
> save_last is set to True

It was meant to save the checkpoint if someone interrupts the training.
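As a usage sketch (the model and dataloaders are placeholders, and `resume_from_checkpoint` was the Trainer argument at the time):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# save_last=True keeps an always-current `last.ckpt` alongside the
# best checkpoints, so an interrupted run is not lost
ckpt_cb = ModelCheckpoint(monitor='val_loss', save_last=True)
trainer = Trainer(callbacks=[ckpt_cb], max_epochs=10)
trainer.fit(model, train_loader, val_loader)  # model/loaders are placeholders

# after an interruption (e.g. Ctrl+C), resume from the last checkpoint
trainer = Trainer(resume_from_checkpoint=ckpt_cb.last_model_path)
trainer.fit(model, train_loader, val_loader)
```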
> regardless of whether a ModelCheckpoint exists

Yeah, it should not log if no `ModelCheckpoint` is used.
Thanks for the issue @carmocca. Mind sending a PR?
Done!