Pytest: Extend pytest to differentiate between assertion violations and other kinds of failures

Created on 15 May 2016 · 15 comments · Source: pytest-dev/pytest

In pytest, I want to report all uncaught AssertionError exceptions as _Failure_ and all other uncaught exceptions as _Error_ (instead of the default behavior of reporting all uncaught exceptions raised in test cases and the UUT as _Failure_ and all uncaught exceptions raised elsewhere as _Error_). I thought this could be done with pytest hooks. However, _passed_, _skipped_, and _failed_ seem to be the only valid outcome values in the TestReport object. So,

  1. Is it possible to add "error" as a valid outcome and let the rest of pytest handle appropriate reporting, i.e., display E/ERROR instead of F/FAILURE on console output?
  2. If so, what would be the ideal part of the source to do this?
  3. If we cannot add "error" as a valid outcome, then what would be the best way to add this behavior to pytest?

All 15 comments

I could achieve the desired reporting behavior with two small modifications to the default pytest_runtest_makereport hook implementation (Gist).

However, this solution depends on modules within the internal _pytest package, and I am not sure such a dependency on internal modules is a good idea. So, I am wondering if there is a cleaner way to achieve the same effect by transforming the report object in a hook.

Hi @rvprasad, sorry for the late response.

However, this solution depends on modules within the internal _pytest package, and I am not sure such a dependency on internal modules is a good idea.

Technically no, but I'm not sure how or when this would become part of the public API, so I would use that and watch out for new pytest releases in case anything breaks.

So, I am wondering if there is a cleaner way to achieve the same effect by transforming the report object in a hook.

I don't know, perhaps only by extending it further with other hooks, but I'm not sure if it's simple to come up with a good API.

@rvprasad please show your concrete use case; this issue is relatively abstract and thus not really actionable.

Consider the following test case.

def test_xyz():
    o = Engine()
    o.start()
    while o.getRotations() < 50:
        pass
    assert o.getTemperature() <= 75

This test case checks that the Engine's temperature does not exceed 75F after 50 rotations. Now, there can be two cases.

  1. If the engine rotates 50 times and its temperature is greater than 75, the assertion will fail and pytest will correctly flag the implementation as failing the test case.
  2. If the engine blows up on the 25th rotation by raising an EngineBlewUpError exception, then the assertion does not fail (as it is never executed), but pytest will incorrectly flag the implementation as failing the test case.

In the second case, since the implementation never reached a state where the desired property (assertion) could be checked, we cannot determine whether the implementation does indeed violate the desired property. This outcome (E) is distinct from the outcome (F) in which the implementation does violate the property. However, pytest reports both E and F as failures of the same kind. In comparison, the nose test runner distinguishes between the E and F outcomes.

Ideally, it would be great if we could configure pytest to flag test cases as failing only if the assertions associated with the test cases (and not the UUT) are violated. To do this, we'd need to clearly identify the assertions associated with a test case, and this can get tricky. An easier (yet imperfect) alternative would be to flag all assertion violations as _test failures (F)_ and all other exceptions as _test anomalies (E)_.

So am I understanding correctly that you want to differentiate between assertion errors and custom/other exceptions, in order to determine whether the code broke outright or the results merely deviate from the expectations?

I think such a distinction can be very helpful.

I think this could first be tried as a plugin that rewrites the report outcomes of failures to failure.assertion and failure.exception, respectively, to see its impact and utility.

I suspect such an experiment will lead to some new insights and good ideas on how to pull this into core later on.

Would you like to run that experiment, or would you prefer that the pytest core team takes a look when we get to it (which takes longer)?

Yes, you are right about the intended distinction.

As mentioned in an earlier comment, I was able to achieve this effect by modifying the default implementation of the pytest_runtest_makereport hook function in a custom _conftest.py_. Here's the gist; look for the statements marked with an # added comment.

However, since the default implementation of pytest_runtest_makereport depends on internal modules (e.g., _pytest) and names (e.g., _code), the custom _conftest.py_ also ends up depending on internal modules; clearly an undesirable situation. Hence, I was wondering if there is a cleaner way to achieve the same effect by transforming the TestReport object in a hook.

So, do you think test report rewriting via a plugin would be the best way to achieve this effect? If so, I can first try to create a plugin based on the gist. If it works, then fine. If not, the pytest core team can look into it.

That said, can someone from the pytest team take a quick look at the gist and confirm that it cannot be trivially integrated into pytest or cast as a builtin plugin by the core team?

You can probably write a hookwrapper, so that you don't overwrite the original hook, and then change the outcome as desired, similar to what I do with pytest-vw here. (Hah, never thought I could use that as an example!)
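
For reference, a minimal sketch of that hookwrapper approach in a _conftest.py_ might look like the code below. This is not the actual pytest-vw or pytest-finer-verdicts code; in particular, the non_assertion_failure attribute is an invented marker name, and the sketch simply re-labels non-assertion failures in the call phase as errors via pytest_report_teststatus.

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Let the default implementation build the report, then inspect it.
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and call.excinfo is not None:
        # Tag call-phase failures that are *not* assertion violations.
        if not call.excinfo.errisinstance(AssertionError):
            report.non_assertion_failure = True  # invented attribute name

def pytest_report_teststatus(report):
    # Report the tagged failures as errors (E / ERROR) instead of failures (F / FAILED).
    if report.when == "call" and report.failed and getattr(report, "non_assertion_failure", False):
        return "error", "E", "ERROR"
    # Returning None leaves every other report to the default handling.

With something like this in place, a test that raises, say, EngineBlewUpError should show up as E/ERROR in the terminal summary, while a plain assertion failure is still reported as F/FAILED.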

I have created and released a plugin on PyPI -- https://pypi.python.org/pypi/pytest-finer-verdicts/1.0 -- and the source repo is available on GitHub -- https://github.com/rvprasad/pytest-finer-verdicts.

BTW, I stumbled on this difference between nose and pytest while I was using pytest and Hypothesis to introduce property-based testing in an upper-level undergraduate software testing course.

@rvprasad that's very nice! :+1: Please let us know how that works out for you.

Just FYI, did you know about cookiecutter-pytest-plugin? It takes care of a lot of the boilerplate when creating a new plugin. :grin:

Well, the plugin works for my purpose :) So, I plan to use it the next time I teach the course unless there is something new and shiny on the horizon ;)

Great! :grin:

Would you like to close this issue then?

This issue is addressed by the pytest-finer-verdicts plugin. So, if users want to differentiate between test failures (assertion violations) and test errors (non-assertion exceptions), they should install the pytest-finer-verdicts plugin.

Thanks! :+1:

You might want to announce your plugin on [email protected] and [email protected]. :grin:

@rvprasad would you like some help with regard to automating testing and releasing?

Thanks for offering to help. Sure. Also, let me know if I have missed any steps.
