While this is a nice feature, it shouldn't block 3.0. I'm entirely removing the milestone because this sounds like a nice-to-have to me.
Good feature)) But how would you count asserts? By parsing test files? In that case you can't be sure that all the asserts were actually called.
Probably the only way would be parsing, but TBH I don't see why showing the number of asserts would be useful...
@nicoddemus
btw, we could create a pytest plugin like "pytest asserts" with a pytest.assert() function. That way we could count asserts and add some features like "clever" assertions.
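A minimal sketch of the idea (purely hypothetical: since assert is a reserved keyword in Python, a pytest.assert() function could not literally exist, so the wrapper is named assert_ here):

```python
# Hypothetical counting wrapper -- not an actual pytest API.
_assert_count = 0

def assert_(condition, message=None):
    """A plain-assert replacement that also counts invocations."""
    global _assert_count
    _assert_count += 1
    assert condition, message or "assertion failed"

def assert_count():
    """Number of assert_() calls made so far, e.g. for a summary."""
    return _assert_count
```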
What would be the advantage over pytest's assertion rewriting?
You can create functions for assertions with different types or behaviours, like assert objects A() == B()
And users can write custom matchers easily
And you also get the number of asserts at the end of the tests =)
> You can create functions for assertions with different types or behaviours, like assert objects A() == B()
> And users can write custom matchers easily
You can already do that by implementing the pytest_assertrepr_compare hook. 😉
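For example, a minimal conftest.py using that hook could look like this (the Money class is made up for illustration):

```python
# conftest.py -- custom failure representation for a made-up Money type.
class Money:
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

    def __eq__(self, other):
        return (self.amount, self.currency) == (other.amount, other.currency)

def pytest_assertrepr_compare(config, op, left, right):
    # Only triggered for failing comparisons; return a list of lines
    # to replace pytest's default explanation.
    if isinstance(left, Money) and isinstance(right, Money) and op == "==":
        return [
            "Comparing Money instances:",
            f"   {left.amount} {left.currency} != {right.amount} {right.currency}",
        ]
```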
> And you also get the number of asserts at the end of the tests =)
Which TBH doesn't seem so useful, especially if you have to give up using plain asserts, which is one of pytest's killer features. 😉
@myoung8 Can you maybe elaborate why this'd be useful for you?
number of assertions is a quality indicator very comparable to lines of code - it says next to nothing about the quality of a test
@RonnyPfannschmidt I think the point is probably that the number of assertions is more indicative of test quality than the number of tests, meaning 50 tests with a single assertion each is probably a lower quality test suite than 25 tests with 4-5 assertions each.
Separating out assertions into more tests is probably a better strategy most of the time anyway, but sometimes the expense of creating test fixtures makes tests with many assertions much more performant (and easier to maintain).
> @RonnyPfannschmidt I think the point is probably that the number of assertions is more indicative of test quality than the number of tests, meaning 50 tests with a single assertion each is probably a lower quality test suite than 25 tests with 4-5 assertions each.
I'm with @RonnyPfannschmidt on this one; I don't think this is a good indicator of test suite quality, if it indicates anything at all.
Out of curiosity, how is it a worse indicator than just the raw number of tests? The latter doesn't really indicate anything about test quality either.
I don't think anybody is saying that one is better than the other, just that they are both useless as indicators go, that's all. 😉
@nicoddemus
> > You can create functions for assertions with different types or behaviours, like assert objects A() == B()
> > And users can write custom matchers easily
>
> You can already do that by implementing the pytest_assertrepr_compare hook.
Updated link: https://docs.pytest.org/en/latest/reference.html#_pytest.hookspec.pytest_assertrepr_compare
At least according to the docs it only gets called for failing assertions though?!
I think counting assertions might work via the assertion rewriting itself, i.e. count the number of invocations from there, but I agree that it is not that useful (and comes with costs).
I suggest closing this as won't fix.
I think this feature is quite useful but not for test quality. It's an additional check for me that my test did what I think it did. While I always make sure a test is a good failing test when I first write it, it's not always practical to go back and make sure every assertion is being run.
It's possible to write a buggy test that actually skips your assertions and an assertions ran count would be a useful cross-check.
FWIW we now have the pytest_assertion_pass hook, so it should be possible to count both passing and failing assertions in a plugin; this count might be used to print a summary at the end of the test session, for example.
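A rough conftest.py sketch along those lines (note that pytest only calls pytest_assertion_pass when enable_assertion_pass_hook = true is set in the ini file, and that pytest_assertrepr_compare fires for failing comparisons only, so other failure kinds would be undercounted):

```python
# conftest.py -- sketch of a plugin that counts assertions.
# Requires `enable_assertion_pass_hook = true` in the ini file.

passed_assertions = 0
failed_comparisons = 0

def pytest_assertion_pass(item, lineno, orig, expl):
    # Called once for every assert statement that passes.
    global passed_assertions
    passed_assertions += 1

def pytest_assertrepr_compare(config, op, left, right):
    # Called for failing comparisons only; used here just to count.
    # Returning None keeps pytest's default failure output.
    global failed_comparisons
    failed_comparisons += 1
    return None

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    # Print the counts at the end of the test session.
    terminalreporter.write_line(
        f"assertions passed: {passed_assertions}, "
        f"failing comparisons seen: {failed_comparisons}"
    )
```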