The test suite should run without hypothesis being installed, i.e. tests that require it should be skipped.
Currently it fails like this:
```
platform linux -- Python 3.8.0a2+, pytest-4.3.2.dev103+g15d60886.d20190322, py-1.8.0, pluggy-0.9.0
rootdir: …/Vcs/pytest, inifile: tox.ini
collected 2278 items / 1 errors / 2277 selected
===== ERRORS =====
_____ ERROR collecting testing/python/metafunc.py _____
ImportError while importing test module '…/Vcs/pytest/testing/python/metafunc.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
testing/python/metafunc.py:7: in <module>
import hypothesis
E ModuleNotFoundError: No module named 'hypothesis'
===== short test summary info =====
FAILED testing/python/metafunc.py
!!!!! Interrupted: 1 errors during collection !!!!!
===== 1 error in 4.31 seconds =====
```
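For reference, conditionally skipping would be straightforward with pytest's own `importorskip` helper - a minimal sketch (the module-level placement here is illustrative, not the current code):

```python
# Minimal sketch: skip this module's hypothesis-based tests at collection
# time when hypothesis is not installed, instead of erroring out.
import pytest

hypothesis = pytest.importorskip("hypothesis")
strategies = pytest.importorskip("hypothesis.strategies")
```

With that in place, collection succeeds and the affected tests are reported as skipped when hypothesis is missing.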
This is an old issue, since https://github.com/pytest-dev/pytest/pull/1470 I guess.
/cc @ceridwen
I think it does not make sense to have hypothesis just for this test, and given that it has not found anything new since 2016 (although its state is not saved on CI - should it be, if we keep it?), I'd rather remove it than conditionally skip this test.
/cc @The-Compiler via https://github.com/pytest-dev/pytest/pull/1470#issuecomment-203240279
> I guess it would be good to use hypothesis in more places - what's the problem with requiring hypothesis for pytest's testsuite?
What is the benefit?
Wouldn't it only become useful when used over a longer period? (i.e. it does not make sense on CI without a cache, for example)
> I guess it would be good to use hypothesis in more places
Yes - I think the current single use does not really warrant it; but I don't have much experience with it.
FWIW I've found many issues in qutebrowser on the CI without caching the database.
I agree with @The-Compiler that we might not be using Hypothesis to its full potential, while I also see @blueyed's point that we have used it only for a single test since 2016.
How about we compromise: let's make a group effort to use Hypothesis more. After some time (say, right after pytest 5.0 gets released) we reevaluate our usage: if we are still in the same situation, we can discuss dropping it again. How does that sound?
@nicoddemus
Sounds good, except that I would drop it already for now - the patch for this is simple, and can easily be reverted then.
> Sounds good, except that I would drop it already for now - the patch for this is simple, and can easily be reverted then.
If we drop it now, we will create a small friction when we need to use it again. I would rather keep it, and follow with the plan of reviewing this after 5.0.
For my part, I don't see much of a problem with just `pip install hypothesis` in a venv if I want to run pytest manually, or use tox.
What is the friction?
Reverting a commit?
Isn't that very similar to `pip install hypothesis`? :)
(I am usually using `pip install -e '.[testing]'`, but ran into missing hypothesis often enough to report it)
Well, we can argue it either way, to be honest.
I think we should treat this as a "wake-up call" for us to use Hypothesis more. Starting by removing the dependency doesn't seem the right way to approach this.
Ok, closing this issue then - we should open a new one about using Hypothesis more.
FWIW: it is causing the py38-dev build to fail: https://travis-ci.org/pytest-dev/pytest/jobs/532103482#L288
> FWIW: it is causing the py38-dev build to fail: https://travis-ci.org/pytest-dev/pytest/jobs/532103482#L288
Hmm, according to https://github.com/HypothesisWorks/hypothesis/issues/1959 this should be fixed with a more recent Python 3.8 alpha - maybe Travis CI is still shipping an older one?
Great, thanks for the info!
Causes py38-dev to fail again: https://travis-ci.org/pytest-dev/pytest/jobs/538556026
Looks similar to the previous issue?!
> follow with the plan of reviewing this after 5.0
Re-opening then for now.
For reference: current usage appears to be only in https://github.com/pytest-dev/pytest/blob/d39f95609b28d63f3f2eed98d9accb074616ed08/testing/python/metafunc.py#L194-L203
Causing flaky build failures recently:
```
=================================== FAILURES ===================================
______________________ TestMetafunc.test_idval_hypothesis ______________________
[gw0] darwin -- Python 3.7.0 /Users/travis/build/pytest-dev/pytest/.tox/py37-xdist/bin/python
self = <metafunc.TestMetafunc object at 0x1066d0400>
@hypothesis.given(strategies.text() | strategies.binary())
> @hypothesis.settings(
deadline=400.0
) # very close to std deadline and CI boxes are not reliable in CPU power
def test_idval_hypothesis(self, value):
E hypothesis.errors.FailedHealthCheck: Data generation is extremely slow: Only produced 7 valid examples in 1.12 seconds (0 invalid ones and 2 exceeded maximum size). Try decreasing size of the data you're generating (with e.g.max_size or max_leaves parameters).
E See https://hypothesis.readthedocs.io/en/latest/healthchecks.html for more information about this. If you want to disable just this health check, add HealthCheck.too_slow to the suppress_health_check settings for this test.
testing/python/metafunc.py:195: FailedHealthCheck
---------------------------------- Hypothesis ----------------------------------
You can add @seed(35364130883208350154606057602597663902) to this test or run pytest with --hypothesis-seed=35364130883208350154606057602597663902 to reproduce this failure.
=============================== warnings summary ===============================
```
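To reproduce this locally, one could pin the seed from the log - a rough sketch (standalone function instead of the actual method, body elided):

```python
# Sketch: pin the seed Hypothesis printed above so the failure
# reproduces deterministically.
import hypothesis
from hypothesis import strategies


@hypothesis.seed(35364130883208350154606057602597663902)
@hypothesis.given(strategies.text() | strategies.binary())
def test_idval_hypothesis(value):
    ...  # body as in testing/python/metafunc.py
```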
JFI: flaky failures due to hypothesis are confusing new contributors: https://github.com/pytest-dev/pytest/pull/6107#issuecomment-548760800
@Zac-HD any hints on avoiding the flakiness issue with Hypothesis that @blueyed is mentioning above?
We might want to revisit this. While I like Hypothesis and use it in many other projects, it might not bring much to the table in pytest's case - or at least, contributors don't seem to use it, and core contributors (myself included, of course) fail to suggest valid cases where Hypothesis could be used in new tests.
I can't see what was flaky, so it's hard to give specific advice! The `derandomize` setting, or `suppress_health_check=[HealthCheck.too_slow]`, might be useful though?
Otherwise, if the project isn't finding Hypothesis tests useful and nobody wants to write or maintain them, it might just not be a good fit at the moment. Nothing wrong with that (though we'd love to hear how we could help). If you do want to write more property-based tests, what's stopping you?
@Zac-HD
The failure is https://github.com/pytest-dev/pytest/issues/4977#issuecomment-547411334
Yeah, for timing flakiness I just use the `suppress_health_check=[HealthCheck.too_slow]` setting. Not super elegant, but it has the substantial advantage of working. https://github.com/HypothesisWorks/hypothesis/issues/2108 might help too.
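Applied to the failing test above, that would look something like this (just a sketch with the body elided; `derandomize=True` would be the alternative knob):

```python
# Sketch: suppress the too_slow health check for this one test, so slow
# CI boxes don't fail data generation.
import hypothesis
from hypothesis import HealthCheck, strategies


@hypothesis.given(strategies.text() | strategies.binary())
@hypothesis.settings(
    deadline=400.0,  # very close to the std deadline; CI boxes vary in CPU power
    suppress_health_check=[HealthCheck.too_slow],
)
def test_idval_hypothesis(value):
    ...  # body unchanged from testing/python/metafunc.py
```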
If it would resolve your problems with Hypothesis I'd be happy to write a PR sorting out the configuration?
> If it would resolve your problems with Hypothesis I'd be happy to write a PR sorting out the configuration?
Yeah, please go ahead.
FWIW I'm still in the camp of not having it for just one test.
But we should at least make it not cause confusion with new contributors / flaky CI failures.