Second follow-up of #2710.
As we discussed there, we use the unittest framework and not pytest-style asserts, though the tests are run via pytest.
There are still a few leftover places in test_onnx.py and test_ops.py (some of them my own tests, my bad).
Most were cleaned up previously in #1483.
I guess I can clean them up; should I make a single PR, or separate PRs one file at a time?
Also do let me know if there are additional places.
We should probably also document these unittest framework tests as part of the CONTRIBUTING.md file (#2651, #2663). That would help avoid this pytest vs. unittest confusion.
Hi @oke-aditya
I think we don't have many more asserts left in the testing code, so a single PR should be fine (as long as there are not hundreds of occurrences; if that's the case, it's better to split it into multiple PRs).
We currently rely on both pytest and unittest, and that has traditionally been the case (except maybe for the latest tests from @pmeier in https://github.com/pytorch/vision/blob/master/test/test_datasets_download.py ), and I would propose we keep it this way for now.
Good idea about the CONTRIBUTING.md mentioning the testing decision. Once we finish this release (which is coming very soon) we will get back to the CONTRIBUTING.md file.
The only thing self.assertEqual() and all the other methods do is improve the error message of a plain assert when run from unittest. Since we are not doing that and instead run the tests with pytest, they might even be detrimental: if pytest detects an AssertionError without a custom message, it provides detailed introspection into why the assert failed. Since the unittest methods use a custom message, this introspection is deactivated.
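To make the difference concrete, here is a minimal sketch (the test names and values are made up) of the two styles as pytest would collect them:

```python
import unittest


# Plain assert: on failure, pytest rewrites the statement and reports the
# values of both operands in the failure message.
def test_plain_assert():
    result = [1, 2, 3]
    assert result == [1, 2, 4]


# unittest style: self.assertEqual() raises an AssertionError that already
# carries a unittest-built message, so pytest skips its own introspection.
class TestUnittestStyle(unittest.TestCase):
    def test_assert_equal(self):
        result = [1, 2, 3]
        self.assertEqual(result, [1, 2, 4])
```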
IMO, if we "force" a specific style, it should be pytest over unittest, since the former seems to be becoming the de facto standard testing framework.
@pmeier in torchvision we will follow whatever PyTorch uses, and PyTorch currently uses self.assertEqual and such.
But your arguments are valid, and would be worth discussing more broadly within the other domain libraries such as torchtext / torchaudio.
cc @cpuhrsch @mthrok @vincentqb @zhangguanheng66 for thoughts
For torchtext, we use self.assertEqual, which is from torch.testing._internal.common_utils as pointed out above.
@cpuhrsch once told me that PyTorch's assertEqual does more than give a better error message: it also checks other things (contiguity?), and there was some release-related issue on PyTorch core that PyTorch's assertEqual could have caught. I forget the details, but that convinced me to use PyTorch's assertEqual.
Since this discussion "escalated" beyond torchvision, some clarifications: within our test suite we are not using torch.testing._internal.common_utils.TestCase but rather plain unittest.TestCase. While self.assertEqual() of the former indeed has additional functionality for torch.Tensors, the latter only provides better error messages.
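For illustration, a rough sketch of that distinction (tensor values are made up; torch.testing._internal.common_utils is a private module, so this reflects its behavior at the time of writing):

```python
import unittest

import torch
from torch.testing._internal.common_utils import TestCase as TorchTestCase


class PlainUnittestCase(unittest.TestCase):
    def test_tensors(self):
        a = torch.tensor([1.0, 2.0])
        b = torch.tensor([1.0, 2.0])
        # Plain unittest falls back to `a == b`, which yields a multi-element
        # bool tensor; coercing it to bool raises "Boolean value of Tensor
        # with more than one element is ambiguous".
        self.assertEqual(a, b)


class TorchTestCaseExample(TorchTestCase):
    def test_tensors(self):
        a = torch.tensor([1.0, 2.0])
        b = torch.tensor([1.0, 2.0 + 1e-7])
        # PyTorch's assertEqual is tensor-aware: it checks shape and dtype and
        # compares values up to a dtype-dependent tolerance.
        self.assertEqual(a, b)
```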
> For torchtext, we use `self.assertEqual`, which is from `torch.testing._internal.common_utils` as pointed out above.

(same for torchaudio)
I think we would like to use torch.testing._internal.common_utils.TestCase in torchvision. This was not exposed in the PyTorch lib in the past, and thus we had to copy our own variants into common_utils.py, which has a few copy-pastes from PyTorch's common_utils.py.
We have been manually implementing tensor comparison across our transforms via (tensor1 - tensor2).abs().max() < TOL; it would have been much better to just use PyTorch's implementation of self.assertEqual.
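As a hypothetical sketch of what that would look like in a transforms test (TOL and the tensors are placeholders, not taken from the actual test suite):

```python
import torch
from torch.testing._internal.common_utils import TestCase


TOL = 1e-5


class TransformsTester(TestCase):
    def test_transform_output(self):
        tensor1 = torch.rand(3, 8, 8)
        tensor2 = tensor1.clone()

        # The hand-rolled comparison: on failure it only tells us that the
        # condition was False, not which values diverged.
        self.assertTrue((tensor1 - tensor2).abs().max() < TOL)

        # With PyTorch's TestCase the same check is a one-liner with built-in
        # tolerances and a descriptive failure message.
        self.assertEqual(tensor1, tensor2)
```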