SQLAlchemy CI is now broken due to the unconditional inclusion of pytest-warnings in py.test 3.1.0. I need at least to know how to disable this plugin entirely.
In our test suite we make use of the warnings filter internally in order to turn our own warnings into errors. It seems like warnings.filterwarnings() is now non-functional when this plugin is installed.
Demo test case:
import warnings

class MyWarning(Warning):
    pass

warnings.filterwarnings("error", category=MyWarning)

class TestWarnings(object):
    def test_my_warning(self):
        try:
            warnings.warn(MyWarning("warn!"))
            assert False
        except MyWarning:
            assert True
with py.test 3.0.7:
$ py.test test_warning.py
======================================================== test session starts ========================================================
platform linux2 -- Python 2.7.13, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
rootdir: /home/classic/Desktop/tmp, inifile:
plugins: xdist-1.15.0, cov-2.4.0
collected 1 items
test_warning.py .
===================================================== 1 passed in 0.01 seconds ======================================================
with py.test 3.1.0:
$ py.test test_warning.py
======================================================== test session starts ========================================================
platform linux2 -- Python 2.7.13, pytest-3.1.0, py-1.4.33, pluggy-0.4.0
rootdir: /home/classic/Desktop/tmp, inifile:
plugins: xdist-1.15.0, cov-2.4.0
collected 1 items
test_warning.py F
============================================================= FAILURES ==============================================================
___________________________________________________ TestWarnings.test_my_warning ____________________________________________________
self = <test_warning.TestWarnings object at 0x7fecb08a5290>
def test_my_warning(self):
try:
warnings.warn(MyWarning("warn!"))
> assert False
E assert False
test_warning.py:12: AssertionError
========================================================= warnings summary ==========================================================
test_warning.py::TestWarnings::()::test_my_warning
/home/classic/Desktop/tmp/test_warning.py:11: MyWarning: warn!
warnings.warn(MyWarning("warn!"))
-- Docs: http://doc.pytest.org/en/latest/warnings.html
=============================================== 1 failed, 1 warnings in 0.03 seconds ================================================
I have tried everything I can think of with the new -W flag and --disable-warnings, with no luck.
Please advise on the correct options to allow the Python stdlib warnings.filterwarnings() to work again, thanks!
How do I get a helper to triage my bugs and change nasty headlines into nice ones? :(
SQLAlchemy CI is now broken due to the unconditional inclusion of pytest-warnings in py.test 3.1.0. I need at least to know how to disable this plugin entirely.
@zzzeek you can disable the warnings plugin by passing -p no:warnings; this will disable the plugin completely and you should get green builds again. You can choose to add this to your pytest.ini file:
[pytest]
addopts = -p no:warnings
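The same option also works for a one-off run on the command line:

$ py.test -p no:warnings test_warning.py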
As for the problem itself, I will take a more careful look later.
great, thanks!
I believe https://github.com/pytest-dev/pytest/blob/master/_pytest/warnings.py#L54 is gutting all prior setups.
We should just call resetwarnings() instead when we do something like that, and give people an opt-out so that in-code filter specifications are respected.
@zzzeek thanks for posting the example, it helped understand the issue.
Can I ask why you add that warnings filter? If the objective is to turn certain warnings into errors, you can now configure your pytest.ini to the same effect. Raising more awareness about warnings was the motivation for this new feature.
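For example, something along these lines in pytest.ini, using the -W option added in 3.1 (mypkg.MyWarning here is a placeholder for an importable warning class from your own library):

[pytest]
addopts = -W error::mypkg.MyWarning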
Also, do you think we should add a blurb to the warnings docs making it easy for users to find how to disable it in case they run into trouble?
@RonnyPfannschmidt
I believe https://github.com/pytest-dev/pytest/blob/master/_pytest/warnings.py#L54 is gutting all prior setups.
Actually the implementation uses catch_warnings, which saves the previous filters and restores them after the call (see https://github.com/python/cpython/blob/master/Lib/warnings.py#L454). Unfortunately this has the effect of silencing errors configured manually by other warnings.simplefilter calls, which is the heart of this issue. Also, by default Python will install a number of default filters silencing DeprecationWarnings, ImportWarnings and others, which are precisely the kind of warnings we believe should be displayed in test suites.
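A minimal sketch of the shadowing effect (this illustrates the mechanism; it is not the plugin's actual code):

import warnings

class MyWarning(Warning):
    pass

warnings.filterwarnings("error", category=MyWarning)

# roughly what happens around each test: catch_warnings saves the filter
# list, and simplefilter("always") inserts a new filter at the front
with warnings.catch_warnings():
    warnings.simplefilter("always")
    warnings.warn(MyWarning("warn!"))  # matched by "always" first, so no error is raised

# here the saved filters are restored, so the "error" filter applies again:
try:
    warnings.warn(MyWarning("warn!"))
except MyWarning:
    print("raises again outside the block")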
@RonnyPfannschmidt
We should just call resetwarnings() instead when we do something like that
resetwarnings() will actually throw out all filters:
(https://docs.python.org/3/library/warnings.html#warnings.resetwarnings)
This discards the effect of all previous calls to filterwarnings(), including that of the -W command line options and calls to simplefilter().
Which is not what we want, I think.
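A quick demonstration:

import warnings

warnings.filterwarnings("error", category=DeprecationWarning)
print(len(warnings.filters))  # our filter plus the interpreter's default filters
warnings.resetwarnings()
print(warnings.filters)       # [] -- everything is gone, defaults included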
I'm not sure how to proceed here, so I would like to hear some opinions.
@nicoddemus pushing a new "once" simplefilter invalidates all other filters, because it matches first.
So it's literally pushing a new filter in front of everything.
I believe that pushing an unrestricted "once" filter on top of all warning setups literally breaks all expectations;
people should opt in to breaking filter changes, not have to opt out.
In retrospect, the warnings addition is implemented in a way that warrants a major release, and the way we should fix it warrants a major release as well, because fixing this issue requires a breaking change - and we never adopted a process for publishing experiments that we may change our mind on.
pushing a new "once" simplefilter invalidates all other filters, because it matches first.
So it's literally pushing a new filter in front of everything.
True... it does have an append option though, so perhaps we should change it to append=True to add it to the end of the list. The drawback is that the default warnings filters that hide DeprecationWarning, ImportWarning, etc. would remain active and hide those warnings by default.
Perhaps we should add two filters: a "once" filter at the end of the list, and a filter showing selected warnings (DeprecationWarning, ImportWarning) at the beginning of the list? After writing this I just realized, though: someone might have configured a filter for DeprecationWarning (for example) to error out, in which case we might again break some test suite which relies on that.
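A sketch of the append idea:

import warnings

# a filter installed by the user's test suite:
warnings.filterwarnings("error", category=UserWarning)

# append=True places the new filter at the *end* of warnings.filters,
# so the pre-existing "error" filter still matches first:
warnings.simplefilter("always", append=True)

try:
    warnings.warn("boom", UserWarning)
except UserWarning:
    print("the user's error filter still wins")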
We simply should not do anything by default.
Any choice we make breaks some people's expectations; we should just let people decide.
I don't think hiding warnings by default is a good behavior for a test runner - don't all other test runners show them by default too? How do they solve that problem?
@The-Compiler this is not about showing vs. hiding, but about not altering the warnings subsystem unless requested.
Python warnings are inherently stateful, and people register different actions for different warnings.
Any test runner that changes anything by default is broken.
Note that warnings are not hidden by default, but library authors may choose to hide or error on certain warnings, or decide to show some warnings only once.
Changing those behaviours without warrant breaks code.
Python does hide DeprecationWarnings (and some others?) by default, because it doesn't want to confront users with those when they're simply using a library.
Test runners should turn them on (like unittest.py does too), so they aren't simply ignored.
So the first reason we add the warnings filter is that any SAWarnings, i.e. those generated by SQLAlchemy, should never occur without us adding directives stating that we expect them - so these all go to "error" by default; they need to raise.
The next reason we use "error" is that a lot of the older tests ensure that particular code emits warnings by doing an assertRaises() style of check, taking advantage of the fact that SAWarnings all raise.
In order to accommodate code that is expected to emit SAWarning where we don't care, there are function decorators like @testing.emits_warning("foo") which alter the warnings filter per test to allow certain warnings to not throw an error; the filter is then restored after the test is complete.
Building on that approach, the new style of writing tests expected to emit warnings is a context manager, "with expect_warnings(strings)", which manipulates the warnings filter within the block to allow warnings matching our strings through, to assert that they were actually seen (else fail the test), and to have all other SAWarnings continue to raise exceptions.
These last use cases require a highly specific set of filter operations that could not be sufficiently addressed by flags in "pytest.ini / setup.cfg".
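For illustration, here is a stripped-down sketch of the expect_warnings idea (this is not SQLAlchemy's actual implementation; SAWarning stands in for the library's own warning class):

import contextlib
import re
import warnings

class SAWarning(Warning):  # placeholder for the library's own warning class
    pass

@contextlib.contextmanager
def expect_warnings(*patterns):
    # record everything emitted inside the block, then fail on any
    # unexpected warning and on any expected pattern that never appeared
    with warnings.catch_warnings(record=True) as log:
        warnings.simplefilter("always")
        yield
    seen = set()
    for w in log:
        message = str(w.message)
        for pattern in patterns:
            if re.search(pattern, message):
                seen.add(pattern)
                break
        else:
            raise AssertionError("unexpected warning: %s" % message)
    missing = set(patterns).difference(seen)
    assert not missing, "warnings not emitted: %s" % ", ".join(sorted(missing))

# usage: passes only if a matching warning was actually emitted
with expect_warnings("deprecated"):
    warnings.warn(SAWarning("this mode is deprecated"))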
I also agree with the argument that py.test should try not to make changes that interfere with existing approaches. While we could omit our global warnings filter under a py.test run and instead rely upon py.test-specific settings, this makes our test suite more complicated, as the suite is constructed to have a layer of indirection between itself and whichever testing framework is in use.
While SQLAlchemy makes extensive use of py.test features, the dependency on these features is contained entirely within a plugin-oriented module that is invoked only when the py.test command is used. You can still run our tests with nose. The rationale is not that anyone still uses nose, but precisely that nobody uses nose anymore - a reminder that after I and other developers spent many months converting our test suite to nose, nose went unmaintained within a couple of years and I had to pick up and start all over again with py.test. Therefore, while I love py.test, I keep its feature set at arm's length and nothing within the tests themselves refers to py.test. Our test suite is always ready to accommodate other test runners should that need ever arise.
To be fair, while older SQLAlchemy versions were using ad-hoc filterwarnings() calls in order to get the per-test warnings-filter behavior, currently we're just mock.patching warnings.warn() to keep it simple.
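Roughly along these lines (a sketch; emit_warning() is a stand-in for library code under test):

from unittest import mock  # on Python 2, the external mock package
import warnings

def emit_warning():
    # stand-in for library code under test
    warnings.warn("something is deprecated", DeprecationWarning)

# intercept warnings.warn directly instead of juggling the filter list:
with mock.patch.object(warnings, "warn") as mocked:
    emit_warning()

assert "something is deprecated" in str(mocked.call_args[0][0])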
@The-Compiler unittest defaults to DON'T DO ANYTHING - see https://github.com/python/cpython/blob/v3.6.1/Lib/unittest/runner.py#L159
@RonnyPfannschmidt Wrong:
import unittest
import warnings


def foo():
    print("foo called")
    warnings.warn("This is deprecated", DeprecationWarning)


class TestFoo(unittest.TestCase):
    def testFoo(self):
        foo()


if __name__ == '__main__':
    foo()
$ python foo.py
foo called
$ python -m unittest foo.py
foo called
/home/florian/tmp/foo.py:7: DeprecationWarning: This is deprecated
warnings.warn("This is deprecated", DeprecationWarning)
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
See this code, which sets up the warnings filters unless -W is given to Python.
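Condensed, that code amounts to something like this (a sketch of unittest's behavior, not a verbatim copy):

import sys
import warnings

# roughly what unittest's runner does around a test run: unless the user
# passed -W to Python, switch warnings to the "default" action, which
# un-hides DeprecationWarning and friends for the duration of the run
with warnings.catch_warnings():
    if not sys.warnoptions:
        warnings.simplefilter("default")
    # ... run the tests here ...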
I really wouldn't mind if you'd actually check your stuff before shouting at people...
@zzzeek thanks for the comprehensive and detailed response! I really appreciate the time you took to write it; it is invaluable to us to get the whole picture. :+1:
We (the core team) feel that most warnings go unnoticed, especially deprecation warnings, where action is usually taken only when things actually break, long after the initial deprecation warnings started to be emitted. We made good strides by always showing in the summary that some warnings were emitted ("X pytest-warnings"), and later on by finally deciding to always show warnings regardless of whether the user opted in to see them (although it was easy to opt out). Given this experience, our general conclusion is that pytest should display warnings more prominently, because otherwise very few users will bother to opt in to see them.
So we decided to bring the pytest-warnings plugin into the core: it improves communication about deprecated pytest features and gives users ample time to deal with them at their own pace, but it is also important for warnings from libraries in general.
Now, your test suite has an advanced treatment of warnings, and that unfortunately conflicts with pytest's own warning treatment. That's unfortunate but also understandable, and I can see how the same problem could happen with other internal plugins: the terminal plugin and pytest-xdist come to mind, and I can also imagine a test suite which does its own stdin/stdout capture conflicting with the standard pytest capture, for example.
Given that pytest's warning handling is a problem only in a few advanced and rare cases, having it on by default, and in consequence benefiting the larger user base, is the best long-term decision, especially if it is easily opted out of.
As @The-Compiler mentioned, this is also apparently the approach taken by the standard library:
# content of pytest_warnings.py
import unittest
import warnings

class MyWarning(Warning):
    pass

warnings.filterwarnings("error", category=MyWarning)

class TestWarnings(unittest.TestCase):
    def test_my_warning(self):
        try:
            warnings.warn(MyWarning("warn!"))
            assert False
        except MyWarning:
            assert True
$ python -m unittest test_warnings
test_warnings:13: MyWarning: warn!
warnings.warn(MyWarning("warn!"))
F
======================================================================
FAIL: test_my_warning (foo.TestWarnings)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_warnings.py", line 14, in test_my_warning
assert False
AssertionError
----------------------------------------------------------------------
Ran 1 test in 0.002s
FAILED (failures=1)
(This is the original example posted, only adapted to standard unittest)
This further makes me believe that we are following the correct course of action: the most value out of the box for most users, with an easy way to opt out when it is problematic. Of course we should improve the documentation on how to opt out, if this is the course we ultimately go with.
Well, that's the full picture as I see it. I would love to hear others' opinions on this.
You might notice that Requests just had the same problem and also turned off the filter; here's the thread where they had to spend a few hours figuring it out: https://github.com/kennethreitz/requests/pull/4052
I agree that displaying deprecation warnings is a great idea, but an approach that doesn't break code that wants to use the warnings filter would be best. In my own projects, there are very often API changes that would vastly improve everything, but they would break the world for a lot of people, so unfortunately you can't just turn them on - you provide the options, document them, blog about how great they are, and allow people to opt in.
This is a serious SNAFU! I'm seeing the same problem with the openpyxl CI on Python 2.7. If I run the tests for the relevant module, the warning is captured as expected; but if I run tests for the whole library, it fails. Setting the filter as per the docs does not help.
I'm confused by this, but I think I have the workaround in place: if you have tests for warnings, then you need to disable pytest-warnings with addopts = -p no:warnings.
If so, what should tests for warnings look like without pytest-warnings disabled? If this is not possible, then 3.1 broke a pretty fundamental test feature.
@zzzeek
I agree that displaying deprecation warnings is a great idea, but an approach that doesn't break code that wants to use the warnings filter would be best
Definitely agree. But looking at how warnings is implemented we are still trying to figure out if that's even possible, unfortunately.
In my own projects, there are very often API changes that would vastly improve everything, but they would break the world for a lot of people, so unfortunately you can't just turn them on - you provide the options, document them, blog about how great they are, and allow people to opt in.
We didn't expect the introduction to break existing suites, and for that we apologize. We try our best to ensure we don't break existing suites in minor releases, but this went out in 3.1, and I'm not sure if rolling the change back (or defaulting it to opt-in) in a new 3.2 release would help, because by this point I suspect most people who had trouble with warnings have already disabled the internal plugin.
@Themanwithoutaplan
If so, what should tests for warnings look like without pytest-warnings disabled? If this is not possible, then 3.1 broke a pretty fundamental test feature.
Does your test suite deal with the warnings module directly, or is it that one specific test which uses pytest.warns() or one of the other warnings-related facilities is not working as expected? If the former, it might be that your customization conflicts with the internal warnings plugin, in which case disabling it might be the best solution. If the latter, it sounds like a bug and we should have it fixed (preferably in a separate issue).
@zzzeek just curious: in your opinion, if you were to embrace the new pytest warnings feature in 3.1 by throwing away your customization, would that improve the test suite? Would that be relatively simple to do? Just as a thought exercise, disregarding your point about having a layer over the test runner.
I'm under the impression that the new plugin will be a problem in cases where the test suite implements or customizes its own handling of warnings and that creates clashes. In those cases it might be better to just disable the plugin to keep things working, but users might also consider throwing away their own customizations and using the built-in features.
The test that randomly fails is at https://bitbucket.org/openpyxl/openpyxl/src/44212297981845d4cfad77e879b812ebd2d7171e/openpyxl/reader/tests/test_worksheet.py?at=2.4&fileviewer=file-view-default#test_worksheet.py-470
Note I had to play around with the configuration a few years ago for similar reasons.
I don't have many tests that check for warnings, so it's not too much work to change them, but this change did trip me up. The new context manager looks nicer than recwarn, but I couldn't get it to work for much the same reason.
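For reference, the context manager in question is pytest.warns(); a minimal use looks like this:

import warnings

import pytest

def test_emits_user_warning():
    # the test passes only if a UserWarning is emitted inside the block
    with pytest.warns(UserWarning):
        warnings.warn("this is deprecated", UserWarning)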
@nicoddemus if I weren't using py.test, and I had a simple test suite where we just want our own warnings to raise an error and all the Python deprecation stuff to be emitted, then setting this up at the configuration level would be more palatable. Although even then, a filter that wants to ensure a certain class of warnings is errored is still IMO a little better expressed in code rather than config, since it refers to a class that's part of the library being tested.
Next, if my test suite needs to do per-test alterations to the warnings filter so that we can assert or ignore various warnings, that has to be done in code with decorators and context managers. So if the feature here only allows config-level flags, that's not sufficient in any case.
@Themanwithoutaplan
I don't have many tests that check for warnings, so it's not too much work to change them, but this change did trip me up.
Indeed, we should make that more obvious in the docs.
@zzzeek
a filter that wants to ensure a certain class of warnings is errored is still IMO a little better expressed in code rather than config, since it refers to a class that's part of the library being tested.
This is a very valid point, thanks. This seems like a feature request, though; would you like to create a new issue for it so we can track it?
Having a warning system is hugely useful.
Defaulting to all warnings on is not.
Even if everyone decides to leave this feature in, it's broken.
_The only reasonable course of action is to back it out ASAP, until it's more thoroughly tested._
1. This code shows a warning that should not be on by default:
test_something.py
class TestHelper():
    def __init__(self):
        pass

    def someNonTestStuff(self):
        pass
(venv) $ pytest test_something.py
========================== test session starts ==========================
platform darwin -- Python 3.6.1, pytest-3.1.0, py-1.4.33, pluggy-0.4.0
rootdir: /Users/okken/foo, inifile:
collected 0 items
=========================== warnings summary ============================
test_something.py::TestHelper
cannot collect test class 'TestHelper' because it has a __init__ constructor
-- Docs: http://doc.pytest.org/en/latest/warnings.html
====================== 1 warnings in 0.00 seconds =======================
2. --strict doesn't work
(venv) $ pytest --help
...
--strict run pytest in strict mode, warnings become errors
...
(venv) $ pytest --strict test_something.py
========================== test session starts ==========================
platform darwin -- Python 3.6.1, pytest-3.1.0, py-1.4.33, pluggy-0.4.0
rootdir: /Users/okken/projects/book/bopytest/Book/code/foo, inifile:
collected 0 items
=========================== warnings summary ============================
test_something.py::TestHelper
cannot collect test class 'TestHelper' because it has a __init__ constructor
-- Docs: http://doc.pytest.org/en/latest/warnings.html
====================== 1 warnings in 0.01 seconds =======================
Hi everyone,
After a discussion on the mailing list the team decided to release 3.2.0 with the new plugin being opt-in instead of being active by default (#2443).
Again we apologize for the problems many of you had; it was not our intention in the least. We just did not foresee that the new plugin could break some test suites, otherwise we would have never released it active by default.
If all goes well, we should see a new release later today or tomorrow.
Thanks, everyone, for your feedback and understanding! :+1:
@okken
This code shows a warning that should not be on by default:
Hi Brian, just now saw this, sorry. Actually, that warning should appear by default, because it helps users realize why that test class is not being collected; in fact, if I'm not mistaken, that warning already appears in pytest 3.0.
--strict doesn't work
Definitely, opened a new issue for it: #2444.