Pytest: Support for Python 3.4 unittest subtests

Created on 9 Feb 2016  ·  42 Comments  ·  Source: pytest-dev/pytest

Unittest in Python 3.4 added support for subtests, a lightweight mechanism for recording parameterised test results.

At the moment, pytest does not support this functionality: when a test that uses subTest() is run with pytest, it simply stops at the first failure and reports only that one.

Pytest already has its own, pytest-specific parametrization mechanism, but it would be convenient if the pytest runner also supported the standard library's subTest() functionality.
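
For reference, here is a minimal example of the standard library feature in question (plain unittest, runnable with python -m unittest); unittest's own runner records each failing iteration as a separate sub-result instead of aborting the test:

import unittest

class TestEven(unittest.TestCase):
    def test_even(self):
        for i in range(5):
            # Each failing iteration is reported individually by unittest,
            # annotated with the parameters passed to subTest().
            with self.subTest(i=i):
                self.assertEqual(i % 2, 0)

if __name__ == "__main__":
    unittest.main()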

help wanted, reporting, backward compatibility, enhancement, feature-branch, refactoring

Most helpful comment

For those watching this issue: #4920 and https://github.com/pytest-dev/pytest-subtests will add both subTest() support and a new subtests fixture. 🎉

An external plugin is a better approach here IMO because it allows us to iterate more frequently on the details and lets the feature mature outside of pytest's release cycle. After the plugin matures, we can consider introducing it into the core.

It will be ready for consumption after the 4.4 release. 👍

All 42 comments

There's some relevant but old discussion on the testing-in-python mailing list: "unittest subtests" (January 2013)

It's important to note that pytest's parametrize and unittest's subtests are fundamentally different.

While this is a nice feature, it shouldn't block 3.0. I also think this would be some more serious work due to pytest's separation of collection/running, so I'm removing the milestone here.

Any updates? Are there any plans to add support for this feature in the coming months?

@aksenov007 As far as I know, nobody is working on it currently - though a PR for it would certainly be appreciated! :wink:

Any idea if the subtest feature is supported in pytest? It is an important feature to have.

@shreyashah pytest supports parametrization, which in general is a good replacement for a test using subTest, but it has a few differences in implementation and philosophy.

I've been thinking these days that we could use a mechanism similar to what pytest-rerunfailures does, which would probably be implemented as a fixture:

def test(subtests):
    for i in range(10):
        with subtests.test():
            assert i % 2 == 0

The fixture would trap assertion errors and trigger an extra pytest_runtest_logreport call for each error. Of course the devil is in the details, but it is an idea.

@nicoddemus: Thank you for the reply. I understand pytest's parametrization feature; it is indeed great, but it still doesn't address the lack of subtests. Like you mentioned, I want to trap the error until all the subtests finish and then, at the end of the test case, report the errors or raise the first encountered exception. This helps when a test wants to check multiple aspects of a feature and all those aspects have to live inside one test case in the form of subtests or steps.

I will look into the implementation of pytest-rerunfailures to see if I can reuse some logic to implement this. Thanks again.

For collecting errors and only showing them at the end you should look at pytest-assume. But you'll still get one test failure, without per-check reporting.
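
A minimal sketch of what that looks like, assuming the pytest-assume plugin is installed (failed checks are collected and the test is reported as a single failure at the end):

import pytest

def test_multiple_checks():
    for i in range(5):
        # A failed assumption is recorded but does not stop the test;
        # all failed checks show up together in the single test failure.
        pytest.assume(i % 2 == 0)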


What I meant by collecting the errors and showing them at the end is with respect to each test case. For example: if a test case has 4 subtests or 4 steps in it, then with the current behavior, if subtest/step 2 fails, steps 3 and 4 will never be executed, since the test case aborted on step 2's failure/exception.
My goal is to be able to execute all the steps/subtests even if there is an intermediate step failure. We can store the intermediate exceptions/failures and report them at the end once all subtests/steps have executed, marking the test case as failed because there was a step failure.

This matters when going outside the scope of pure unit testing, i.e. integration tests or acceptance tests.

It matters in cases where the setup is really time consuming (testing on real Android/iOS devices or other embedded devices), as opposed to when you can scale your tests by just throwing CPUs from AWS at them.

This kind of thing is highly needed for accurate reporting of where and what failed.

We have lots of tests of that type, and I'd love to see a py.test-blessed way to address them.

I can help with trying any suggestions, and also with reviewing from a user's POV.

FWIW there's also pytest-assume.

edit: Whoops, sorry, @ktosiek already mentioned that.

pytest-assume looks nice, but what about exceptions?
We have lots of helper code that uses asserts and other code that can raise them, for example on connection timeouts and such.

I think this alone isn't enough...
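
To illustrate the concern with a sketch (assuming pytest-assume is installed; connect() is a hypothetical helper): only checks routed through pytest.assume keep the test going, while an exception raised by ordinary helper code still aborts the test at that point:

import pytest

def connect(host):
    # Hypothetical helper that may raise, e.g. on a connection timeout.
    raise TimeoutError("could not reach " + host)

def test_checks_and_helpers():
    pytest.assume(1 + 1 == 3)       # recorded as a failed check, test continues
    connect("device-under-test")    # raises -> the rest of the test body never runs
    pytest.assume(2 + 2 == 4)       # never reached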

I need support for subtests or something like them because of a distributed test environment. I have a controller that sets up an entire collection of VMs, networking, and potentially even physical hardware. Then the controller runs a series of tests on some nodes in that hardware and would like to collect results from these interior tests.
One approach I could use is to write a custom collector, and during the setup phase do the environment setup.
I don't really like this approach for a couple of reasons:

  • The controller needs to be able to enumerate all the tests. In some cases that requires keeping the software under test and the controller more in sync than I'd like. Alternatively, I'd need to guarantee that the environment on the controller machine is sufficient to at least collect all the potential tests; that's also not something I prefer. I could set up the environment in the collect phase, but that's very expensive in some cases and something I'd like to avoid. So I think I'd rather be able to report results that are not enumerated during the collection phase.

  • I definitely do not want to run the tests in the runtest hook of the Items. I want to ask the node under test to run a bunch of tests and come back with results; I don't want to hand it tests one at a time. I don't know if a Collector can have a nontrivial runtest hook; if so, that would be a workaround for this.

  • Even if I were able to run the tests in the runtest hook of the controller, I think that would make things a bit complicated. It would be fine if things really were as simple as: set up the environment, run test series alpha on node 1 and series beta on node 2. But it might be more like: set up the environment, run alpha on node 1, adjust the environment, make some assertions that can only be made from outside of the systems under test, run beta on node 2. So I actually think I want the user to be able to write supertests that collect a bunch of subtest results from the distributed systems and report those results.

I've considered pytest-xdist and it seems like the wrong direction in a number of ways. It seems targeted at a different problem than the one I'm solving, and while a big enough hammer can pound square nails into round holes, that doesn't make it a good thing to do.

So, I really do think I want something that looks a lot like subtests.
But it doesn't seem very hard to get what I want. As a simple test I did:

import pytest
import _pytest.runner as runner

def test_foo(request):
    # Create an ad-hoc Item under the running test and log a passing "call" report for it.
    n = pytest.Item("bar", parent=request.node)
    n.runtest = lambda *x: True
    res = runner.call_and_report(n, when="call", log=True)

That seems to log an interior successful result inside test_foo.
I'm violating a couple invariants:

  • I've created a non-terminal Item

  • I've created nodes outside of the collection process.

And yet, it seems like writing subtest_pass and subtest_fail functions that take a request, a name, and an optional exception and do something like the above would give me over 90% of what I need.
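
A hypothetical sketch of those helpers, staying close to the experiment above; it leans on private _pytest.runner internals and on constructing Item nodes outside of collection, so it is version-sensitive and not a supported API:

import _pytest.runner as runner
import pytest


def _log_subtest(request, name, exc=None):
    # Create an ad-hoc Item under the currently running test; note that newer
    # pytest versions require Item.from_parent(...) instead of direct construction.
    item = pytest.Item(name, parent=request.node)

    def _runtest():
        if exc is not None:
            raise exc

    item.runtest = _runtest
    # Reuse pytest's own machinery to build and log a report for the "call" phase.
    runner.call_and_report(item, when="call", log=True)


def subtest_pass(request, name):
    _log_subtest(request, name)


def subtest_fail(request, name, exc=None):
    _log_subtest(request, name, exc or AssertionError(name))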

I don't get to select subtests by keyword or exclude them.
I think this might fail horribly with xdist.
But I think something relatively simple like the above would meet my needs and those of several of the other commenters.
I don't need help moving forward; I'm commenting here to seek input on pitfalls I may have missed and on how many people's use cases this covers.

That instead should have gone to a new issue or the mailing list.

I've decided to give my idea a spin; it was surprisingly easy (and made possible only by the recent refactorings of CallInfo by @RonnyPfannschmidt):

# conftest.py
from contextlib import contextmanager
from time import time

import attr

import pytest
from _pytest._code import ExceptionInfo
from _pytest.runner import CallInfo


@pytest.fixture
def subtests(request):
    yield SubTests(request.node.ihook, request.node)


@attr.s
class SubTests:

    ihook = attr.ib()
    item = attr.ib()

    @contextmanager
    def test(self):
        start = time()
        exc_info = None
        try:
            yield
        except Exception:
            # Trap the failure instead of letting it abort the enclosing test.
            exc_info = ExceptionInfo.from_current()
        stop = time()
        # Synthesize a "call" report for this block and log it through the normal hooks.
        call_info = CallInfo(None, exc_info, start, stop, when='call')
        report = self.ihook.pytest_runtest_makereport(item=self.item, call=call_info)
        self.ihook.pytest_runtest_logreport(report=report)

Using this test case:

# test.py
def test(subtests):
    for i in range(5):
        with subtests.test():
            assert i % 2 == 0

Produces:

======================== test session starts ========================
...
collected 1 item

.tmp\subtests\test.py .F.F..                                   [100%]

============================= FAILURES ==============================
_______________________________ test ________________________________

subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x00000181619A6E10>, item=<Function test>)

    def test(subtests):
        for i in range(5):
            with subtests.test():
>               assert i % 2 == 0
E               assert (1 % 2) == 0

.tmp\subtests\test.py:7: AssertionError
_______________________________ test ________________________________

subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x00000181619A6E10>, item=<Function test>)

    def test(subtests):
        for i in range(5):
            with subtests.test():
>               assert i % 2 == 0
E               assert (3 % 2) == 0

.tmp\subtests\test.py:7: AssertionError
================ 2 failed, 4 passed in 0.06 seconds =================

While it was simple, I see two problems:

  • I can't seem to customize the node id displayed, so it is not easy to distinguish each subtest invocation.

  • We have 6 reports: 5 for the subtests and one for the test itself.

Also there's the question of how this will play with other plugins (xdist works fine).

@RonnyPfannschmidt do you see other problems with the implementation? What's your opinion?

We should introduce a new report type; subtests in their first iteration are fundamentally different from tests.

I might be wrong about this, because we're just considering switching over to pytest from unittest, so I don't yet understand everything about pytest...

But it looks like the recent comments have been more about implementing subtest-like features in pytest and not about the original point of this issue, which is supporting unittest.TestCase tests that use subtests. (Though I can envision a system where pytest has to implement its own subtest functionality first. Like I said, I don't understand the details of pytest yet.)

Anyway, I just wanted to comment in case people were getting off track from the original purpose of this issue.

Just to highlight a reason for supporting unittest.TestCase subtests:

We have a couple thousand tests implementing unittest.TestCase and almost all of them make use of subtests. The best and most feasible way for us to migrate over to pytest is by taking advantage of its ability to run unittest.TestCase tests, and if that won't work with subtests we just can't use it at all.

I agree with @dmwyatt: having pytest support unittest subtests is the goal here. Luckily for us, all our tests started as py.test tests, and I'm really excited that this is gaining traction again.

As for the report also being emitted for the main test, I think that's how unittest subtests behave as well (or at least it sounds reasonable to me). As for ids/names, we could title them the same way we do parametrized tests, i.e. test_01[subtest title].

Well, the thing is: to support subtests sanely, pytest needs at least a basic mechanism for reporting them as well.

Otherwise it would just be pretend support.

Thinking about this a bit further, I believe we should use a separate hook, say pytest_runtest_logreport_subtest, besides using a separate report type. This will ensure we don't break other plugins. Then it is a matter of implementing the hook in the terminal plugin and making sure xdist transfers the call from worker to master.
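
A rough sketch of how a plugin could declare such an experimental hook (the hook name comes from the proposal above; the module and import path are hypothetical, and this hook was never added to pytest in this exact form):

# newhooks.py -- hookspec module shipped by the (hypothetical) plugin
def pytest_runtest_logreport_subtest(report):
    """(Proposed, experimental) called once per sub-test report, separately from
    pytest_runtest_logreport, so that existing plugins expecting a single report
    per test phase are not affected."""


# plugin.py / conftest.py
def pytest_addhooks(pluginmanager):
    # Register the hookspec so the new hook becomes callable via item.ihook
    # and implementable by other plugins.
    from mysubtestplugin import newhooks  # hypothetical import path
    pluginmanager.add_hookspecs(newhooks)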

What do you think @RonnyPfannschmidt?

Let's mark the hook experimental and see where it goes. I'm not going to dig deeper into it anytime soon as I'm currently not active.

For those watching this issue: #4920 and https://github.com/pytest-dev/pytest-subtests will add both subTest() support and a new subtests fixture. 🎉

An external plugin is a better approach here IMO because it allows us to iterate more frequently on the details and lets the feature mature outside of pytest's release cycle. After the plugin matures, we can consider introducing it into the core.

It will be ready for consumption after the 4.4 release. 👍
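
For reference, the two usage styles the plugin supports look roughly like this, echoing the examples earlier in this thread (sketch; exact signatures may have evolved since):

import unittest

# Existing unittest-style tests keep working, now with per-subtest reporting:
class T(unittest.TestCase):
    def test_even(self):
        for i in range(5):
            with self.subTest(i=i):
                self.assertEqual(i % 2, 0)


# New pytest-native style via the "subtests" fixture provided by the plugin:
def test_even_fixture(subtests):
    for i in range(5):
        with subtests.test(msg="check", i=i):
            assert i % 2 == 0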

@nicoddemus: Awesome, thank you for letting us know! :coffee:

@nicoddemus this is nice. Does it play nicely with the junit-xml output? I.e. do those subtests look like regular tests in the reports?

Yes, given this file:

from unittest import TestCase, main

class T(TestCase):

    def test_foo(self):
        for i in range(5):
            with self.subTest(msg="custom", i=i):
                self.assertEqual(i % 2, 0)

if __name__ == '__main__':
    main()

Here's the report:

<?xml version="1.0" encoding="utf-8"?><testsuite errors="0" failures="2" name="pytest" skipped="0" tests="3" time="0.087"><testcase classname="test_foo.T" file="test_foo.py" line="4" name="test_foo" time="0.024"><failure message="AssertionError: 1 != 0">self = &lt;test_foo.T testMethod=test_foo&gt;

    def test_foo(self):
        for i in range(5):
            with self.subTest(msg=&quot;custom&quot;, i=i):
&gt;               self.assertEqual(i % 2, 0)
E               AssertionError: 1 != 0

test_foo.py:8: AssertionError</failure><failure message="AssertionError: 1 != 0">self = &lt;test_foo.T testMethod=test_foo&gt;

    def test_foo(self):
        for i in range(5):
            with self.subTest(msg=&quot;custom&quot;, i=i):
&gt;               self.assertEqual(i % 2, 0)
E               AssertionError: 1 != 0

test_foo.py:8: AssertionError</failure></testcase></testsuite>

pytest 4.4 is out. I will be working on releasing pytest-subtests in the next few days. 👍

pytest-subtests 0.1.0 is out. Would appreciate if people could give it a try and let me know if they encounter any issues in https://github.com/pytest-dev/pytest-subtests/issues. 👍

I'm closing this now then; if we decide to merge this into the core in the future we can create a new issue.

Thanks everyone for participating!

Yaaay! Thank you. :smiley:

For those watching this issue: #4920 and https://github.com/pytest-dev/pytest-subtests will add both subTest() support and a new subtests fixture. 🎉
An external plugin is a better approach here IMO because it allows us to iterate more frequently on the details and lets the feature mature outside of pytest's release cycle. After the plugin matures, we can consider introducing it into the core.

@nicoddemus Thanks for writing a great solution to this issue and making it easily available! Is it mature enough now to be rolled into pytest, at least the subTest part? The current single failure behavior when running unittest's subTest is rather unfortunate, and I think a fix in the core is in order...?

Hi @GiliR4t1qbit,

I think the functionality per se is certainly desired; about the plugin being mature I'm not so sure, as there are still a number of issues that need to be addressed.

Perhaps you could open a new issue/proposal so this can be discussed there further?

Hi @nicoddemus, you are right; let's limit this to just the functionality in the title of this issue: support for unittest subtests. In particular, it should always run all subtests (not just up to the first failure) and print appropriate messages along the way. I'm not sure it's appropriate to open a new issue, since this issue covers this topic exactly...? How about just re-opening this issue? (Maybe only admins can do that; I don't have the option, I think.)

Thanks @GiliR4t1qbit, reopening.

Let's see what other maintainers think: @RonnyPfannschmidt @bluetech @asottile @The-Compiler

I haven't really looked at the plugin in detail, but IMHO anything that improves unittest.py compatibility should be in the core: we claim pytest supports running Python unittest-based tests out of the box, so if we don't support certain unittest.py features, that could be seen as a "bug" (albeit a documented one in this case).

Another point to consider: I suspect people who will run into this are mostly people who just discovered pytest and want to run their existing testsuite before e.g. considering a migration (or otherwise "buying into" pytest). They might not be aware that plugins exist, much less that a specific plugin for this exists. Thus, this might drive them away ("pytest can't run our project, so we can't even try it out, no way we'll migrate").

Another point to consider: I suspect people who will run into this are mostly people who just discovered pytest and want to run their existing testsuite before e.g. considering a migration (or otherwise "buying into" pytest). They might not be aware that plugins exist, much less that a specific plugin for this exists. Thus, this might drive them away ("pytest can't run our project, so we can't even try it out, no way we'll migrate").

Yes! But it's even worse than that, since this is a silent failure... It would be reasonable for a user to expect that pytest would run all the subtests, like unittest does, but in fact it only runs up to the first failure. You might or might not notice that. I didn't for quite a while!

Would it be sufficient to have the feedback from that first failure notify the user? If all the subtests pass, then they have all run as expected and there is no problem. Or a note could just be output with the report when pytest detects that subtests are being run, whether or not they all pass, much like other warnings.

Would it be sufficient to have the feedback from that first failure notify the user? If all the subtests pass, then they have all run as expected and there is no problem. Or a note could just be output with the report when pytest detects that subtests are being run, whether or not they all pass, much like other warnings.

I don't think it would be sufficient. If I have 10 subtests and subtests 3 and 8 are failing, and I'm working on fixing subtest 8, I cannot check whether my code actually fixed the test, because that subtest is not even run. That's pretty annoying.

Currently pytest-subtests provides two distinct functionalities: support for unittest.TestCase.subTest and a new subtests fixture.

We might consider adding support just for the former, as the latter is a bit controversial (as shown by the numerous issues regarding how the report of failing subtests should look).

Would it be sufficient to have the feedback from that first failure notify the user? If all the subtests pass, then they have all run as expected and there is no problem. Or a note could just be output with the report when pytest detects that subtests are being run, whether or not they all pass, much like other warnings.

I don't think it would be sufficient. If I have 10 subtests and subtests 3 and 8 are failing, and I'm working on fixing subtest 8, I cannot check whether my code actually fixed the test, because that subtest is not even run. That's pretty annoying.

Indeed, but you'd get the suggestion to install the plugin to obtain that info; that's all I was saying, presuming support wasn't going to be provided by pytest itself.
