Correct the existing uses of EvalModelTemplate to use the new API, unless a test deliberately exercises other behaviour or the deprecated API. This gives better testing of the actual API.
We need to deprecate EvalModelTemplate.
Hi @williamFalcon, I would like to start contributing to pytorch-lightning. Is this still a good first issue?
Update:
This line is causing three tests in tests/backend/test_ddp.py to fail:
`from tests.base import EvalModelTemplate` in tests/backends/model_dpd.py
The error is `ModuleNotFoundError: No module named 'tests.base'`. I think it could be related to this issue.
I fixed the issue from my previous comment. I'll proceed with a PR and start a new branch for implementing an EvalModel.
welcome to PL! excited to help you contribute.
Please do the deprecation only a few files at a time and make sure to ping @SeanNaren and @tchaton for guidance and advice!
Hi @SeanNaren and @tchaton
I'm working on deprecating EvalModelTemplate and adopting BoringModel throughout the test suite, and I have a couple of questions. I wrote a dispatcher function in develop_pipelines to call the right run_prediction when the model is of class BoringModel. As it stands, BoringModel is just a perceptron with one hidden layer that builds a random dataset. Do we need a more "advanced" test model?
At the moment I've refactored test_gpu and I'm working on test_cpu. I opened a draft pull request to share my progress and my solution at #4623
Let me know if it sounds good to you and if you have any suggestions!
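For illustration, the dispatcher idea described above could look roughly like this. This is a plain-Python sketch: the class and function names mirror the real test helpers (`BoringModel`, `EvalModelTemplate`, `run_prediction`), but the bodies are placeholders, not actual pytorch-lightning code.

```python
# Hedged sketch of a class-based dispatcher for run_prediction.
# BoringModel / EvalModelTemplate here are local stand-ins, not the
# real Lightning test models.

class EvalModelTemplate:
    """Stand-in for the legacy test model."""

class BoringModel:
    """Stand-in for the minimal baseline test model."""

def _run_prediction_boring(model):
    # BoringModel has no classification head, so the real check would
    # only assert that the output/loss is finite (placeholder here).
    return "boring-check"

def _run_prediction_template(model):
    # Legacy path: EvalModelTemplate supports accuracy-style checks.
    return "template-check"

def run_prediction(model):
    """Dispatch to the matching prediction check based on model class."""
    if isinstance(model, BoringModel):
        return _run_prediction_boring(model)
    return _run_prediction_template(model)
```

The point of dispatching on the class is that the old and new test models expose different outputs, so each needs its own prediction check while call sites stay unchanged.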
Hi @SeanNaren @tchaton @williamFalcon , I have an additional question: in test_cpu.py, are test_tbtt_cpu_model_result and test_tbptt_cpu_model_result_auto_reduce deprecated? They both use TrainResult, which should be deprecated by version 1.0
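For context on why those tests look dated: before 1.0, `training_step` returned a `TrainResult` object, while the newer style logs via `self.log` and returns the loss (plus the hiddens, in the TBPTT case) directly. The class below is a local stand-in written from memory to show the shape of the new-style return, not pytorch-lightning code.

```python
# Sketch of the post-1.0 training_step return style (stand-in class,
# not a real LightningModule).

class FakeModule:
    def __init__(self):
        self.logged = {}

    def log(self, name, value):
        # Mimics LightningModule.log by recording the metric.
        self.logged[name] = value

    def training_step(self, batch, batch_idx, hiddens=None):
        loss = sum(batch) / len(batch)  # placeholder loss
        self.log("train_loss", loss)
        # New style: return a dict; for TBPTT it also carries hiddens
        # forward to the next truncated segment.
        return {"loss": loss, "hiddens": hiddens}

m = FakeModule()
out = m.training_step([1.0, 3.0], 0, hiddens="h0")
print(out["loss"], m.logged["train_loss"])  # prints: 2.0 2.0
```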
Hey @gianscarpe apologies for the radio silence, got lost in my notifications. Had a read over your draft PR, awesome stuff :)
Regarding the dispatcher function I think it's fine. The BoringModel is the baseline, all other models should override within the tests :)
Take it one PR at a time (maybe one file/feature at a time?) and it will be a smoother experience! It may take time but will get done eventually.
Regarding the tbptt tests I'm not sure as this is a specific functionality required for TBPTT (passing previous hiddens), @tchaton recently looked over the API so may be able to comment!
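The "BoringModel is the baseline, tests override it" pattern above can be sketched like this; `BoringModel` is again a plain-Python stand-in, and the subclass/hook names are illustrative only.

```python
# Sketch of the override pattern: each test subclasses the baseline and
# replaces only the hook it needs. Stand-in classes, not Lightning code.

class BoringModel:
    """Baseline test model (stand-in)."""
    def training_step(self, batch, batch_idx):
        return {"loss": 0.0}

class TrackingModel(BoringModel):
    """Test-specific subclass that counts how often the hook ran."""
    def __init__(self):
        self.calls = 0

    def training_step(self, batch, batch_idx):
        self.calls += 1
        return super().training_step(batch, batch_idx)

model = TrackingModel()
model.training_step(None, 0)
```

Keeping the baseline minimal and overriding per test keeps each test's intent visible in the subclass, instead of hiding it in a big shared template like EvalModelTemplate.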
Thanks for the reply! I followed your advice and opened a dedicated PR for test_gpu and test_cpu only: #4820