If I have a trained model and I want to test it using Trainer.test(), how do I get the actual predictions of the model on the test set?
I tried logging the predictions and writing a Callback to get the logs at test end, but it seems like I can only log scalar Tensors in the dictionary returned by my model's test_end().
Hi! Thanks for your contribution, great first issue!
Do you also need to measure model performance? If not, you can just get the predictions; see
https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#predicting
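For reference, a minimal sketch of that approach, assuming a trained LightningModule subclass (here called `LitModel`, an illustrative name), an existing `test_dataloader`, and a placeholder checkpoint path:

```python
import torch

# Load the trained model from a checkpoint (path is a placeholder)
model = LitModel.load_from_checkpoint("path/to/checkpoint.ckpt")
model.eval()

# Run the model directly over the test dataloader, outside of Trainer
predictions = []
with torch.no_grad():
    for x, y in test_dataloader:
        predictions.append(model(x))
predictions = torch.cat(predictions)
```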
It would be nice not to have to call the model separately, since then I would have to write my own testing loop over the same test set, which somewhat defeats the purpose of PyTorch Lightning.
@KeAWang
If I understand correctly, you want to log ALL predictions on your test set. This should be possible:
Implement test_dataloader, test_step, and test_end. In test_step you can't return a log dict, that's normal. But you can call self.logger.experiment.add_image or whatever method you want to use to log your predictions (it depends on the logger class). Is this what you need?
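For illustration, a rough sketch of what that could look like, assuming a TensorBoard logger (so `self.logger.experiment` is a `SummaryWriter`); `LitModel`, the loss, and the histogram tag are made up for the example:

```python
import pytorch_lightning as pl
import torch
import torch.nn.functional as F

class LitModel(pl.LightningModule):
    # training_step, configure_optimizers, etc. omitted for brevity

    def test_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        # Log through the underlying experiment object; with a TensorBoard
        # logger this is a SummaryWriter, so add_histogram/add_image are
        # available. Other logger classes expose different methods.
        self.logger.experiment.add_histogram("test/predictions", y_hat, batch_idx)
        return {"test_loss": F.mse_loss(y_hat, y)}

    def test_end(self, outputs):
        avg_loss = torch.stack([o["test_loss"] for o in outputs]).mean()
        # Only scalar tensors can go in the returned log dict
        return {"test_loss": avg_loss, "log": {"test_loss": avg_loss}}
```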
Right, but I want to be able to get the predictions as a local variable that I can continue to process in my Jupyter notebook. I'm not sure how to extract them from a TensorBoard logger. Would I have to write my own logger?
That's not the same as logging; that was not clear in your original question.
You will have to collect your predictions in test_step in a variable such as self.predictions (a list, for example). Then, after you call trainer.test(), you can access model.predictions in your notebook.
What do you think?
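Something like this minimal sketch, where `LitModel` and the loss function are illustrative and `self.predictions` is just an ordinary Python attribute, not a Lightning feature:

```python
import pytorch_lightning as pl
import torch
import torch.nn.functional as F

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)   # illustrative architecture
        self.predictions = []                 # plain Python list, filled during test

    def forward(self, x):
        return self.layer(x)

    def test_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        # Keep a detached CPU copy so the tensors stay usable after testing
        self.predictions.append(y_hat.detach().cpu())
        return {"test_loss": F.mse_loss(y_hat, y)}

# In the notebook:
# trainer = pl.Trainer()
# trainer.test(model)
# all_preds = torch.cat(model.predictions)
```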
I was wondering whether there was something that used the framework directly, but that would work. Thank you!