Mocha: failures in before() and beforeEach() report only a single test failure, regardless of how many tests are sidelined by that failure

Created on 31 Jul 2020 · 3 comments · Source: mochajs/mocha

Description

Every other testing framework I've come across in my career (including Jest, since it's a peer-ish library to Mocha) consistently reports results for all tests on each run. More specifically, it will show a test failure (or error, which is distinct from a "failure" in some frameworks) for each test in the suite(s) you're trying to execute, regardless of whether that failure occurred in a setup phase or in the test itself. Mocha, however, appears to roll setup failures that occur in before/beforeEach hooks into a single "failure", effectively swallowing the counts of all the tests that were sidelined by the setup failure. This makes it frustrating to compare outputs between runs and to understand at a glance how many tests were impacted by a setup failure, and it can make metrics reported from test runs somewhat meaningless due to the variance in the number of tests actually being reported on.

Additionally, it seems that a failure in beforeEach will circuit-break the execution of that setup function entirely, instead of retrying it for subsequent tests, which contravenes the name and intent of the function under common testing conventions.

Steps to Reproduce

Expected behavior:

  • a suite containing N tests (regardless of how they're nested in subsuites) should report on the outcome of all N tests, even if there is a failure in a before or beforeEach hook.
  • a minimal example illustrating the expected (vs. actual) reporting is sketched after this list.
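A minimal sketch of the kind of suite involved, assuming Mocha's default BDD interface and Node's built-in `assert` (the file and test names here are arbitrary):

```js
// test/repro.spec.js: three tests behind a setup hook that throws.
const assert = require('assert');

describe('parent suite', function () {
  before(function () {
    // Simulates a broken setup phase (e.g. a fixture that fails to load).
    throw new Error('setup failed');
  });

  it('test A', function () {
    assert.strictEqual(1 + 1, 2);
  });

  it('test B', function () {
    assert.ok(true);
  });

  describe('nested suite', function () {
    it('test C', function () {
      assert.ok(true);
    });
  });
});
```

The expectation above is that running this would report on all three tests; what Mocha actually reports is described in the next section.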

Actual behavior:

  • a suite with a failure in a before or beforeEach hook will report only a single failure.

Reproduces how often: 100%. It seems to be by design, so perhaps this should be a feature request instead?

Versions

  • mocha --version: 8.0.1
  • node node_modules/.bin/mocha --version: 8.0.1
  • node --version: v12.18.1
  • Operating system

    • name and version: macOS Catalina 10.15.5

    • architecture (32 or 64-bit): x64

  • Shell (e.g., bash, zsh, PowerShell, cmd): zsh 5.7.1 (x86_64-apple-darwin19.0)
  • browser and version: N/A
  • Any third-party Mocha-related modules (and their versions): N/A
  • Any code transpiler (e.g., TypeScript, CoffeeScript, Babel) being used (and its version): none
Label: usability


All 3 comments

If before or beforeEach fails, would you expect the tests to be _run_ or would you just expect them to be _reported_?

Some notes:

  • If you have a failing hook at or near the root of your test suites, the reporter output will repeat the same error, potentially _many, many_ times; we should consider showing a unique error _once_ in the reporter's epilogue.
  • This would likely be a high-impact breaking change for tools built upon Mocha
  • This may require changes to built-in reporters AND third-party reporters
  • There is arguably a conceptual difference between a hook failure and a test failure (seems like a philosophical opinion, to me). Understanding whether a test is failing because its assertions are failing, or if it's failing because an associated "hook" is failing may become murky. Mocha is very explicit about this, as you've discovered. :smile:
  • Knowing how many tests are _supposed_ to be run vs. how many were _actually_ run is something Mocha does not do, but probably should. This isn't the same as enumerating all tests up-front, because we don't necessarily have that information (parallel mode will lazily load test files; the best we can do is consider each file individually). The root suite for any given test file will "know" how many suites and tests it contains before running them, however, and this could be aggregated; a sketch of what that aggregation could look like follows this list.
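A minimal sketch of that aggregation, assuming Mocha's public `Runner.constants` events and `Suite#total()`; the reporter class and file name are made up for illustration:

```js
// count-reporter.js: sketch comparing tests known up-front vs. tests that ran.
'use strict';
const Mocha = require('mocha');
const { EVENT_RUN_BEGIN, EVENT_TEST_END, EVENT_RUN_END } =
  Mocha.Runner.constants;

class CountReporter extends Mocha.reporters.Spec {
  constructor(runner, options) {
    super(runner, options);
    let expected = 0;
    let ran = 0;

    // Suite#total() recursively counts the tests the root suite knows about
    // before anything runs (per loaded file; parallel mode loads files lazily).
    runner.on(EVENT_RUN_BEGIN, () => {
      expected = runner.suite.total();
    });
    runner.on(EVENT_TEST_END, () => {
      ran += 1;
    });
    runner.on(EVENT_RUN_END, () => {
      console.log(`tests run: ${ran}/${expected}`);
    });
  }
}

module.exports = CountReporter;
```

This could be loaded with `mocha --reporter ./count-reporter.js`; the gap between `expected` and `ran` is exactly the number of tests sidelined by hook failures (or an early abort).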

Assuming you just want tests _reported_ if their hooks fail, this could be implemented in a non-breaking, opt-in manner: Mocha's runner could emit new events. Third-party reporters could then output this additional information.
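On the reporter side, something along these lines is arguably already possible with today's events; the following is an unofficial sketch, not Mocha's implementation. It relies on the fact that a failing hook is emitted through the same fail event with the hook as the runnable, and on `Suite#eachTest()` to enumerate the tests under the failed hook's suite:

```js
// blocked-reporter.js: sketch reporting tests sidelined by a failing hook.
'use strict';
const Mocha = require('mocha');
const { EVENT_TEST_FAIL } = Mocha.Runner.constants;

class BlockedReporter extends Mocha.reporters.Spec {
  constructor(runner, options) {
    super(runner, options);

    // Hook failures come through the same 'fail' event as test failures;
    // the runnable's type distinguishes the two cases.
    runner.on(EVENT_TEST_FAIL, (runnable, err) => {
      if (runnable.type !== 'hook') return;
      const blocked = [];
      // Every test under the failed hook's parent suite never gets to run.
      // (Simplification: for a failed beforeEach this also counts tests that
      // already ran; a real implementation would have to track those.)
      runnable.parent.eachTest((test) => blocked.push(test.fullTitle()));
      console.log(`${blocked.length} test(s) blocked by "${runnable.title}":`);
      blocked.forEach((title) => console.log(`  - ${title}`));
    });
  }
}

module.exports = BlockedReporter;
```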

  • I agree; repeating the same error ad nauseam isn't terribly helpful.
  • Definitely agree, at least if it is defined as the new default and isn't switchable.
  • I would expect as much, since the contract between them would potentially be changing.
  • I also agree that clearly and explicitly differentiating between hook and test failures is a good idea. Again, the use case I'm trying to express here isn't necessarily that these should be called "failed tests", but simply that each affected test (i.e., one that was expected to run but couldn't) is reported on.
  • Segueing from the above: your last point is basically what I was trying to express here: the consistent reporting of tests expected to run vs. actually run. Showing the before/beforeEach stack trace/error once is totally fine and reasonable, but reporting the actual tests downstream of that error as "failed" (or "blocked", or whatever other term indicates they couldn't even start execution, though it should be distinct from "skipped", IMO) would be a pleasant improvement in specificity.

Also, I definitely understand that changing this sort of functionality (and specifically, making it the new default) could break a lot of people's projects and CI/CD pipelines, but as you mention above, making the feature an opt-in switch on the runner would be a nice and relatively non-disruptive way to do this.

Thanks for your detailed response!

