Is there any way to make pytest fail if any tests are skipped? (We use test skipping mainly for dependencies that are not installed.) I couldn't find anything when googling.
There is nothing built-in; you could replace import_or_skip with something of your own.
Could you explain what you mean by replacing import_or_skip?
I cannot find that anywhere in pytest.
I tried something like this in my conftest:
import _pytest.doctest

def pytest_runtest_call(item):
    if not isinstance(item, _pytest.doctest.DoctestTextfile):
        # _evalskip is a private pytest attribute (the evaluated skip marker)
        evalskip = getattr(item, '_evalskip', None)
        if evalskip is not None and evalskip.istrue():
            item.addFailure(None, "Test was skipped")
I was expecting this to check whether the test was skipped and add a failure if it really was, but it didn't really work (no idea why).
My understanding of the pytest framework is limited; any help would be appreciated.
pytest.importorskip('module') will skip the test if the module could not be imported... that's what @RonnyPfannschmidt meant.
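For reference, a minimal sketch of how pytest.importorskip is typically used (numpy here is just an illustrative dependency):

import pytest

# Skips every test in this module (instead of erroring) if numpy
# is not installed; returns the imported module on success.
numpy = pytest.importorskip('numpy')

def test_mean():
    assert numpy.mean([1, 2, 3]) == 2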
If you want the test to fail when a dependency is not installed then you shouldn't be using skip, IMHO. Skip is meant to be used when it is OK for a test to be skipped due to expected constraints (such as a Windows test running on Linux). If you want the test to fail, simply use pytest.fail instead of pytest.skip.
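For instance, a test can fail loudly instead of skipping (a sketch; PIL stands in for whatever dependency you check):

import pytest

def test_image_blur():
    try:
        import PIL  # the dependency this test needs
    except ImportError:
        # fail instead of pytest.skip('PIL not installed')
        pytest.fail('PIL is not installed')
    # ... actual test body ...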
FWIW you could probably change the test outcome after the fact, similar to what I do in pytest-vw. Not that I'd recommend it (see what @nicoddemus said), but I'm guilty of writing that plugin anyway :laughing:
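In that spirit, an untested sketch of flipping skips into failures from a conftest.py via a pytest_runtest_makereport hookwrapper (the replacement message wording is made up):

# conftest.py
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # Rewrite any genuine skip into a failure; leave xfails alone.
    if report.skipped and not hasattr(report, 'wasxfail'):
        report.outcome = 'failed'
        report.longrepr = 'skip treated as failure: %r' % (report.longrepr,)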
@nicoddemus I seem to need this again, so I'm restarting this discussion.
The reason I want to fail tests I marked as skipped is CI.
In my CI, I want to ensure that none of my tests are skipped, because a skip means I didn't set something up correctly. A while back, Travis had a bug (https://github.com/travis-ci/travis-ci/issues/5405) and because of it none of the apt packages were installed in some of our jobs :/
pytest was quietly skipping the affected tests and we didn't even know, because we were happy with the green builds.
I understand that this can be avoided by checking all the dependencies before running pytest, but I think doing it in pytest would be much easier.
So, ideally, I would like to specify which tests (by name, or maybe how many tests) I wouldn't mind being skipped in my Travis config - would that be possible with a conftest?
would that be possible with a conftest?
Probably yes. How are you marking tests that should be skipped locally if a dependency is missing, but should fail on CI in the same situation?
For illustration, I would adopt an explicit mark for that purpose and handle that in a conftest.py file:
# test file
import pytest

@pytest.mark.check_dep('pillow')
def test_image_blur():
    ...

# conftest.py
import os

import pytest

@pytest.fixture(autouse=True)
def handle_check_dep_markers(request):
    # request.node is the test item this fixture is running for
    m = request.node.get_closest_marker('check_dep')
    if m:
        module_name = m.args[0]
        try:
            __import__(module_name)
            available = True
        except ImportError:
            available = False
        running_on_ci = 'JENKINS_URL' in os.environ
        if not available:
            message = 'Missing required module: %s' % module_name
            if running_on_ci:
                pytest.fail(message)
            else:
                pytest.skip(message)
(Note: untested, just giving the general idea)
Such a mark would skip the test locally, but fail it when running in CI.
@nicoddemus Thanks for that! Sadly, it may not suit my needs.
We normally use @unittest.skipIf and @unittest.skipUnless so that the tests are also compatible with unittest and nose (some devs like nose, others prefer unittest ...).
But other than that, I can tweak your example code to also read a list of tests that are allowed to be skipped (from an env variable or a file) and fail as appropriate - see the sketch below. So, I'm impressed by the flexibility that would provide :+1:
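For instance, the CI branch of the example above could call a helper like this (the ALLOWED_SKIPS variable and the helper name are made up, untested):

import os

import pytest

def fail_unless_allowed(request, message):
    # Hypothetical ALLOWED_SKIPS: comma-separated names of tests that
    # may still skip on CI, e.g. ALLOWED_SKIPS='test_image_blur,test_foo'
    allowed = set(filter(None, os.environ.get('ALLOWED_SKIPS', '').split(',')))
    if request.node.name in allowed:
        pytest.skip(message)
    else:
        pytest.fail(message)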
I think I should be able to override the unittest decorators to become a pytest fixture when the tests are being run by pytest (I'm guessing there would be some way to detect this in the code), but it's a little hacky.
Would it be possible to do a similar thing with the unittest.skipIf stuff? Is there a hook for that which I could possibly use/create?
Oh OK, that sheds more light on the subject, thanks!
I'm pretty sure unittest.skipIf decorates the function with some attribute that you can inspect from the fixture declared in conftest.py by looking at request.node.obj (which will be a method, if I'm correct).
Other than that, pytest doesn't really know about the unittest.skipIf decorator.
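Concretely, unittest's skip decorators set __unittest_skip__ and __unittest_skip_why__ on the decorated test when the condition holds, so an autouse fixture could inspect those. An untested sketch (whether the fixture runs before pytest honours the stdlib skip is an assumption here, as is relying on the CI environment variable):

# conftest.py
import os

import pytest

@pytest.fixture(autouse=True)
def fail_stdlib_skips_on_ci(request):
    func = getattr(request.node, 'obj', None)
    if func is not None and getattr(func, '__unittest_skip__', False):
        reason = getattr(func, '__unittest_skip_why__', '')
        if 'CI' in os.environ:  # Travis sets CI=true
            pytest.fail('test would be skipped on CI: %s' % reason)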
coala (the org which needed this) has found a hackish way to achieve this: converting skips to errors with https://pypi.python.org/pypi/pytest-error-for-skips, and also reaching 100% coverage and enforcing it with pytest-cov, which is another way to indirectly catch skips, since they usually result in code not being reached.
@jayvdb thanks for sharing that! :+1: