pytest.xfail and pytest.skip, when used in a fixture, result in xfail and skip respectively, but pytest.fail results in an error rather than a failure.
There may be reasons why it should result in an error rather than a failure, and there may be alternatives, but pytest.fail producing an error looks like a bug.
When a user calls pytest.fail explicitly in a fixture, they know what they are doing and expect the test to fail, rather than pytest automatically turning it into an error.
py37 installed: atomicwrites==1.3.0,attrs==19.1.0,colorama==0.4.1,more-itertools==7.0.0,pluggy==0.9.0,py==1.8.0,pytest==4.4.0,six==1.12.0
Can you elaborate on the use case in which you know exactly what is wrong?
pytest in general considers any failure in the setup (as opposed to the execution) of a test an error instead of a failure (it is considered a harsher issue when the creation of the test conditions fails than when the test itself fails).
Also please note that if you know exactly why it breaks and it is within a certain expectation, a pytest.xfail may communicate exactly that in all cases.
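For illustration only, a minimal sketch of that suggestion (the platform condition is a made-up example, not taken from this issue): a fixture can call pytest.xfail imperatively, and the dependent test is then reported as xfailed rather than errored.

import sys

import pytest


@pytest.fixture
def posix_only_resource():
    if sys.platform == "win32":
        # Imperative xfail: the dependent test is reported as xfailed,
        # not as an error or a plain failure.
        pytest.xfail("resource is known not to work on Windows")
    return "resource"


def test_uses_resource(posix_only_resource):
    assert posix_only_resource == "resource"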
I totally agree with you here; as I already mentioned, there could be alternatives to solve this issue.
pytest in general considers any failure in the setup (as opposed to the execution) of a test an error instead of a failure (it is considered a harsher issue when the creation of the test conditions fails than when the test itself fails).
pytest generally considering any failure in the setup (rather than the execution) of a test an error looks like good design to me as well, but not when the user explicitly calls pytest.fail in the fixture.
Shouldn't we give the user the freedom to make it a failure rather than treating it as an error by default? pytest.fail is the only way we can give the user that freedom.
As I said, instead of pytest defining that all fixture failures must be considered errors, why don't we simply give the user a chance to decide through pytest.fail?
I cannot come up with a valid use case right now, since they can be solved in alternative ways such as xfail, but I believe that should not suppress this bug.
I agree with @cvasanth2707 here - using pytest.fail is pretty explicit.
My use case is checking prerequisites in the fixture (e.g. that a required service is running), and I would rather see the test failing than have it considered an error, since an error usually indicates something unexpected/abnormal/unhandled.
So in this case xfail does not really make sense, since I should rather start/check the service.
Minimal example:
import pytest


@pytest.fixture
def fix():
    pytest.fail("should cause FAILURE, not ERROR")


def test_fail_in_fixture(fix):
    pass
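For a slightly fuller picture of that prerequisite-check use case, here is a sketch; the host/port and the helper are assumptions for illustration only, and with current pytest the pytest.fail() call below is still reported as an ERROR because it happens during setup.

import socket

import pytest


def _service_running(host="localhost", port=8080, timeout=0.5):
    """Best-effort check that the required service accepts TCP connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


@pytest.fixture
def required_service():
    if not _service_running():
        # Explicit, deliberate failure of the prerequisite check; the wish in
        # this issue is for this to be reported as FAILURE rather than ERROR.
        pytest.fail("required service is not running on localhost:8080")
    return ("localhost", 8080)


def test_talks_to_service(required_service):
    host, port = required_service
    assert port == 8080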
I think I remember some conversations about this. If I understand right, this proposes that when pytest.fail.Exception is raised in either the setup or the teardown, it is considered a test failure and not an error. The problem with that is that the pytest.fail() call then inhibits other teardown behaviour, so you need to be really deliberate about how you invoke pytest.fail() from your teardown (see the sketch below).
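A small sketch of that pitfall (the names and the log-checking logic are illustrative only): once pytest.fail() raises inside the fixture's teardown, any code after it in the same fixture never runs.

import pytest


@pytest.fixture
def console_log(tmp_path):
    log_file = tmp_path / "console.log"
    log_file.write_text("ERROR: something went wrong\n")
    yield log_file
    # Teardown: pytest.fail() raises immediately ...
    if "ERROR" in log_file.read_text():
        pytest.fail("unwanted errors were logged")
    # ... so this line, and any later teardown code in this fixture,
    # is silently skipped.
    log_file.unlink()


def test_uses_console_log(console_log):
    assert console_log.exists()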
The solution to that is to introduce another explicit step in the runtest protocol. But this runs into backwards-compatibility problems, because you can bet some plugins or test suites will break. So you could make it a sub-step of the call or teardown steps in the runtest protocol, I guess.
Finally, if it's a new step, there's also the question of adding a request.addverifier() (or a better name), which would probably make sense.
At this point I think past discussions of this got stuck in decision paralysis...
Maybe it helps if I give an actual use case example that drove me and my colleagues to use pytest.fail from fixtures.
A specific use case that I'm not sure how to fix (yet ;)) is to verify that, when using Selenium, the system under test doesn't issue errors or other unwanted content to the browser's console log.
One option is to assert web_driver.bad_console_log_entries == EMPTY_BROWSER_LOG everywhere and have a tryfirst pytest_assertrepr_compare hook for the EMPTY_BROWSER_LOG sentinel. Still not ideal, though it gives us the ability to "checkpoint" the console log if we want to. The other option is to pytest.fail the teardown.
Judging a book by its cover, request.addverifier sounds exactly like what we're looking for. It's not that the tests broke the fixture (which would be an error indeed), but that the fixture shows traces of improper use.
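A rough sketch of that sentinel-plus-hook workaround, in a conftest.py; the sentinel class is my assumption, and web_driver.bad_console_log_entries comes from the comment above rather than from any real Selenium API.

# conftest.py (sketch)
import pytest


class _EmptyBrowserLog:
    """Sentinel that compares equal only to an empty list of log entries."""

    def __eq__(self, other):
        return isinstance(other, list) and not other


EMPTY_BROWSER_LOG = _EmptyBrowserLog()


@pytest.hookimpl(tryfirst=True)
def pytest_assertrepr_compare(op, left, right):
    # Friendlier report when `assert <entries> == EMPTY_BROWSER_LOG` fails.
    if op == "==" and right is EMPTY_BROWSER_LOG:
        lines = ["browser console log is not clean:"]
        if isinstance(left, list):
            lines += ["  %s" % entry for entry in left]
        return lines

Tests (or fixtures) could then assert web_driver.bad_console_log_entries == EMPTY_BROWSER_LOG wherever a checkpoint is wanted.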
Hi @stiiin,
For your case, perhaps writing a hookwrapper around pytest_runtest_call might fit the bill for you?
Untested:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call():
    if check_logs_for_problems():
        pytest.fail("log problems found")
    yield
    if check_logs_for_problems():
        pytest.fail("log problems found")