This is my code:
result = pl.TrainResult(minimize=loss)
result.log('train_loss', loss, prog_bar=True)
The TensorBoard logger doesn't show train_loss.
EvalResult works as expected; it logs to TensorBoard by default.
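For context, a minimal sketch of where such a snippet presumably lives (PTL 0.9.x Result API), with a parallel validation_step using EvalResult for comparison; `_compute_loss` and the rest of the module are placeholders, not part of the original report:

```python
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    # ... __init__, forward, configure_optimizers omitted ...

    def training_step(self, batch, batch_idx):
        loss = self._compute_loss(batch)            # placeholder loss computation
        result = pl.TrainResult(minimize=loss)      # loss the Trainer will minimize
        result.log('train_loss', loss, prog_bar=True)
        return result

    def validation_step(self, batch, batch_idx):
        loss = self._compute_loss(batch)
        result = pl.EvalResult(checkpoint_on=loss)  # this metric does appear in TensorBoard
        result.log('val_loss', loss)
        return result
```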
cc @williamFalcon
Hey @xiadingZ, which version of tensorboard are you using? I ran into some issues when I wasn't using the PyPI version of tensorboard==2.2.0.
Also, which version of PTL are you using?
2.2.0. But I tried 2.3.0 and it has the same issue: nothing from TrainResult gets logged.
Can you post a Colab that replicates this?
I suspect you might be doing something unusual in your config, since we test that things are actually logged when expected...
I have manually verified this works, see this colab: https://colab.research.google.com/drive/1aD1sEYNBLHISvnsUhRzyyOLZtMkk-LWs?usp=sharing.
Please send a colab or code so we can see what the issue is. Closing for now.
I have the same issue: https://colab.research.google.com/drive/1s0kUXwXzO9t9Z1MXETohG7pu4dFJ6nej?usp=sharing
I have the same issue with tensorboard==2.2.0, pytorch-lightning==0.9.0.
Edit: OK, found the reason. Trainer's row_log_interval argument is set to 50 by default. If the number of batches is less than row_log_interval, train metrics are never logged.
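Based on that finding, a minimal sketch of the workaround on pytorch-lightning 0.9.x (`row_log_interval` is the 0.9.x argument name; `model` and the other Trainer settings are placeholders):

```python
import pytorch_lightning as pl

# If an epoch yields fewer than 50 batches, lower row_log_interval so that
# step-level train metrics are actually written to the logger.
trainer = pl.Trainer(
    max_epochs=5,
    row_log_interval=1,  # log every training step instead of every 50th
)
trainer.fit(model)       # `model` is your LightningModule (assumed defined elsewhere)
```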