Pytorch-lightning: Does validation_end only work if I use validation_step?

Created on 6 Jan 2020 · 5 comments · Source: PyTorchLightning/pytorch-lightning

I ran the first sample code from README.md and changed this:

def training_step(self, batch, batch_idx):
    # REQUIRED
    x, y = batch
    y_hat = self.forward(x)
    loss = F.cross_entropy(y_hat, y)
    tensorboard_logs = {'train_loss': loss}
    return {'loss': loss, 'log': tensorboard_logs}

def validation_step(self, batch, batch_idx):
    # OPTIONAL
    x, y = batch
    y_hat = self.forward(x)
    return {'val_loss': F.cross_entropy(y_hat, y)}

def validation_end(self, outputs):
    # OPTIONAL
    avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
    tensorboard_logs = {'val_loss': avg_loss}
    return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}

to this:


def training_step(self, batch, batch_idx):
    # REQUIRED
    x, y = batch
    y_hat = self.forward(x)
    loss = F.cross_entropy(y_hat, y)
    tensorboard_logs = {'train_loss': loss}
    return {'loss': loss, 'log': tensorboard_logs}

def validation_end(self, outputs):
    tensorboard_logs = {'val_loss': 5}
    return {'log': tensorboard_logs}

I noticed that there is no 'val_loss' logged to TensorBoard (I expected it to be a constant 5).
Then I changed the code to this:

def training_step(self, batch, batch_idx):
    # REQUIRED
    x, y = batch
    y_hat = self.forward(x)
    loss = F.cross_entropy(y_hat, y)
    tensorboard_logs = {'train_loss': loss}
    return {'loss': loss, 'log': tensorboard_logs}

def validation_step(self, batch, batch_idx):
    return {'val_loss': 1}

def validation_end(self, outputs):
    tensorboard_logs = {'val_loss': 5}
    return {'log': tensorboard_logs}

This time val_loss shows up on TensorBoard as a constant 5.

My question
Does validation_end only work if I also use validation_step?

question

All 5 comments

Yes, validation_end only runs if validation_step is defined. If you're just looking to run some code at the end of each epoch, use the on_epoch_end hook.
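For reference, a minimal sketch of that hook (the body here is hypothetical; note that on_epoch_end's return value is not picked up as a log dict):

def on_epoch_end(self):
    # hypothetical body: runs once at the end of every training epoch
    print(f'finished epoch {self.current_epoch}')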

Thanks! I am looking to log something to TensorBoard at the end of each epoch without using validation_step. Is there any method other than validation_end and training_step that can log to TensorBoard?

def on_epoch_end(self):
    tensorboard_logs = {'some_metric': 5}  # some dict of metrics
    return {'log': tensorboard_logs}

This doesn't seem to work.

There's a built-in Tensorboard logger in master if that fits your needs.
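For illustration, a minimal sketch of attaching that logger (the import path has moved between releases, living in pytorch_lightning.loggers in recent versions, and the save_dir/name values below are made up):

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

logger = TensorBoardLogger(save_dir='logs/', name='my_model')  # hypothetical paths
trainer = Trainer(logger=logger)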

Ah, I misunderstood. It doesn't make sense to return validation logs if you're not actually doing validation. If you're trying to log training statistics, returning from training_step is the correct thing to do.

Otherwise, you might need to manually call the logger in on_epoch_end.

self.logger.log_metrics(tensorboard_logs)
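Put together, a minimal sketch of that suggestion (the metric name and value are made up):

def on_epoch_end(self):
    # returning a 'log' dict from this hook is ignored, so call the logger directly
    tensorboard_logs = {'my_metric': 5}  # hypothetical metrics
    self.logger.log_metrics(tensorboard_logs)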

self.logger.log_metrics(tensorboard_logs)
This works, thanks!
