Pytorch-lightning: Trainer.test() does not allow to save logs

Created on 17 May 2020  ·  5 Comments  ·  Source: PyTorchLightning/pytorch-lightning

🐛 Bug

When we use trainer.test() on a fresh trainer, the call to self.log_metrics(log_metrics, {}, step) fails to produce logs.

To Reproduce

Steps to reproduce the behavior:

Code sample

model = ...
test_dl = ...
logger = ...
trainer = pl.Trainer(
    logger=logger
)
trainer.test(model, test_dataloaders=test_dl)

Expected behavior

Metrics are logged.

bug / fix help wanted

All 5 comments

Because of the way aggregate_metrics is implemented, logs are only produced when we leave the current step. Since testing has only one step, the metrics are stored in self._metrics_to_agg = [metrics] but never flushed to the logger.
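A minimal sketch of the failure mode described above (this is a hypothetical simplification, not the actual Lightning source; the class and attribute names beyond _metrics_to_agg are invented for illustration). The buffer is only flushed when the step number changes, so a single-step test run leaves its metrics stranded:

```python
# Hypothetical simplification of a step-change-triggered aggregation buffer.
class StepAggregatingLogger:
    def __init__(self):
        self._metrics_to_agg = []   # buffer for the current step's metrics
        self._current_step = None
        self.logged = []            # what actually reached the logger backend

    def agg_and_log_metrics(self, metrics, step):
        if self._current_step is not None and step != self._current_step:
            # Flush happens only when we *leave* a step.
            self.logged.append((self._current_step, list(self._metrics_to_agg)))
            self._metrics_to_agg = []
        self._current_step = step
        self._metrics_to_agg.append(metrics)

# Training: multiple steps, so earlier steps get flushed.
train_logger = StepAggregatingLogger()
train_logger.agg_and_log_metrics({"loss": 0.5}, step=0)
train_logger.agg_and_log_metrics({"loss": 0.4}, step=1)  # flushes step 0

# Testing: a single step, so the buffer is never flushed.
test_logger = StepAggregatingLogger()
test_logger.agg_and_log_metrics({"test_acc": 0.9}, step=0)
# test_logger.logged is still empty; the metrics sit in _metrics_to_agg
```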

@justusschock any ideas?

Btw, I was able to get my metrics logged by adding self.logger.log_metrics(logs) at the end of test_epoch_end, but I believe this should be the default behavior.
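The workaround in the comment above can be sketched as follows. The SimpleLogger stub and MyTestModule class are hypothetical stand-ins (so the sketch runs without Lightning installed); in a real LightningModule the same self.logger.log_metrics(logs) line goes at the end of test_epoch_end:

```python
# Stand-in for a Lightning logger backend (hypothetical stub).
class SimpleLogger:
    def __init__(self):
        self.history = []

    def log_metrics(self, metrics, step=None):
        self.history.append(metrics)

# Stand-in for a LightningModule with the workaround applied.
class MyTestModule:
    def __init__(self):
        self.logger = SimpleLogger()

    def test_epoch_end(self, outputs):
        # Aggregate per-batch outputs into epoch-level metrics.
        logs = {"test_acc": sum(o["acc"] for o in outputs) / len(outputs)}
        # Workaround: push logs to the logger explicitly, since the
        # trainer's aggregation never flushes the single test step.
        self.logger.log_metrics(logs)
        return {"log": logs}

model = MyTestModule()
model.test_epoch_end([{"acc": 0.5}, {"acc": 1.0}])
# model.logger.history → [{"test_acc": 0.75}]
```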

@Varal7 would you be able to send a PR with the fix?

results = trainer.test()

results is a dict containing the test results.
