I ran the first sample code from README.md and changed this
def training_step(self, batch, batch_idx):
    # REQUIRED
    x, y = batch
    y_hat = self.forward(x)
    loss = F.cross_entropy(y_hat, y)
    tensorboard_logs = {'train_loss': loss}
    return {'loss': loss, 'log': tensorboard_logs}

def validation_step(self, batch, batch_idx):
    # OPTIONAL
    x, y = batch
    y_hat = self.forward(x)
    return {'val_loss': F.cross_entropy(y_hat, y)}

def validation_end(self, outputs):
    # OPTIONAL
    avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
    tensorboard_logs = {'val_loss': avg_loss}
    return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
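(For context: `validation_end` receives a list of the dicts returned by `validation_step`, one per validation batch, and the `torch.stack([...]).mean()` above averages the per-batch losses. The aggregation can be sketched in plain Python; the loss values here are hypothetical stand-ins:)

```python
# Plain-Python stand-in for what validation_end does with the
# validation_step outputs; the val_loss values are made up.
outputs = [
    {'val_loss': 0.9},
    {'val_loss': 0.7},
    {'val_loss': 0.5},
]

# Equivalent of torch.stack([x['val_loss'] for x in outputs]).mean()
# when each val_loss is a scalar.
avg_val_loss = sum(o['val_loss'] for o in outputs) / len(outputs)
assert abs(avg_val_loss - 0.7) < 1e-9
```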
to this
def training_step(self, batch, batch_idx):
    # REQUIRED
    x, y = batch
    y_hat = self.forward(x)
    loss = F.cross_entropy(y_hat, y)
    tensorboard_logs = {'train_loss': loss}
    return {'loss': loss, 'log': tensorboard_logs}

def validation_end(self, outputs):
    tensorboard_logs = {'val_loss': 5}
    return {'log': tensorboard_logs}
I noticed that there is no 'val_loss' logged to TensorBoard (I expected it to be a constant 5).
Then I changed the code to this
def training_step(self, batch, batch_idx):
    # REQUIRED
    x, y = batch
    y_hat = self.forward(x)
    loss = F.cross_entropy(y_hat, y)
    tensorboard_logs = {'train_loss': loss}
    return {'loss': loss, 'log': tensorboard_logs}

def validation_step(self, batch, batch_idx):
    return {'val_loss': 1}

def validation_end(self, outputs):
    tensorboard_logs = {'val_loss': 5}
    return {'log': tensorboard_logs}
Now val_loss shows up on TensorBoard as a constant 5.
My question: does validation_end only work if I also use validation_step?
Yes, validation_end only runs if validation_step is defined. If you're just looking to run some code at the end of each epoch, use the on_epoch_end hook.
Thanks. I am looking to log something to TensorBoard at the end of each epoch without using validation_step. Is there a method other than validation_end and training_step that can log to TensorBoard?
def on_epoch_end(self):
    tensorboard_logs = {some dict}
    return {'log': tensorboard_logs}
This doesn't seem to work.
There's a built-in Tensorboard logger in master if that fits your needs.
Ah, I misunderstood. It doesn't make sense to return validation logs if you're not actually doing validation. If you're trying to log training statistics, returning from training_step is the correct thing to do.
Otherwise, you might need to manually call the logger in on_epoch_end.
self.logger.log_metrics(tensorboard_logs)
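Putting that together, a minimal sketch of the pattern (with a stand-in logger object, since the real `self.logger` is attached by the Trainer at runtime; the metric value is hypothetical):

```python
class FakeLogger:
    """Stand-in for the logger the Trainer normally attaches to the module."""
    def __init__(self):
        self.logged = []

    def log_metrics(self, metrics):
        # Record each metrics dict, mimicking a write to TensorBoard.
        self.logged.append(metrics)


class MyModule:
    def __init__(self):
        self.logger = FakeLogger()

    def on_epoch_end(self):
        # Push metrics to the logger directly; a dict returned from
        # on_epoch_end is not picked up the way training_step returns are.
        tensorboard_logs = {'val_loss': 5}
        self.logger.log_metrics(tensorboard_logs)


m = MyModule()
m.on_epoch_end()
assert m.logger.logged == [{'val_loss': 5}]
```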
This works, thanks!