Mocha: Continue large test suite where I left off

Created on 7 May 2015 · 9 comments · Source: mochajs/mocha

When running through a large (and slow) test suite, fixing errors one by one, it becomes tedious to run through all the passing tests each time to find the next error.

It would be great if mocha was able to keep a log of tests which had passed, so that it can quickly skip all these when running again after fixing an error.

Does anyone have any thoughts about how this could be implemented?

Thanks.

Labels: feature, help wanted, semver-minor

All 9 comments

This isn't a bad idea, but it's not entirely trivial.

$ mocha --rerun-failing

The first time you execute this, we could dump a .mocha-failing file into /tmp or somewhere similar. The next time you run it, Mocha would read .mocha-failing and skip the tests that previously passed. Similar behavior could perhaps be achieved via localStorage in the browser.

If it's to use the skip functionality in Mocha, then this file should be a list of regular expressions matching test names:

^should do the damn thing$
^should turn on the funk motor$
^should get up offa that thing$

But if two tests have the same name, you're looking at the potential for false positives.
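As a rough sketch of the mechanics using today's programmatic API (the log path, its format, and the title escaping are assumptions for illustration, not an agreed design):

const Mocha = require('mocha');
const fs = require('fs');

const LOG = '/tmp/.mocha-failing'; // assumed location for the skip list

const mocha = new Mocha();
mocha.addFile('test/integration.spec.js');

// If a previous run left a log, only re-run the tests named in it.
if (fs.existsSync(LOG)) {
  const patterns = fs.readFileSync(LOG, 'utf8').split('\n').filter(Boolean);
  mocha.grep(new RegExp(patterns.join('|')));
}

// Collect the full titles of failing tests as anchored, escaped patterns.
const failing = [];
const runner = mocha.run((failures) => {
  fs.writeFileSync(LOG, failing.join('\n')); // rewrite the log for next time
  process.exitCode = failures ? 1 : 0;
});

runner.on('fail', (test) => {
  const escaped = test.fullTitle().replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  failing.push('^' + escaped + '$');
});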

I'd like some opinions from @mochajs/mocha. This is going to cater to users with slow integration tests, and I feel like Mocha could be stronger with features geared towards non-unit-test use cases.

I really like this idea. :) But rather than --rerun-failing, how about --failures-first? That way you don't skip tests that passed on the initial run but have started failing since, regressions which would otherwise go unnoticed. We could just prioritize the specs that failed, which I think offers the same benefits.
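One way to approximate that today, at file granularity, via the programmatic API (the failed-files log here is an assumption, not an existing Mocha feature):

const Mocha = require('mocha');
const fs = require('fs');

const mocha = new Mocha();
const files = ['test/a.spec.js', 'test/b.spec.js', 'test/c.spec.js'];

// Assumed: a log of spec files that contained failures on the last run.
const failedFiles = fs.existsSync('/tmp/.mocha-failed-files')
  ? new Set(fs.readFileSync('/tmp/.mocha-failed-files', 'utf8').split('\n'))
  : new Set();

// Queue previously failing files first, everything else after.
files
  .sort((a, b) => Number(failedFiles.has(b)) - Number(failedFiles.has(a)))
  .forEach((f) => mocha.addFile(f));

mocha.run();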

I am actually doing this already by parsing the Mocha JSON report.

Sounds interesting :)

Our company uses mocha for REST API testing, and the test load and execution time will only grow. A "rerun failed" option is already a high priority for us, even with only 75 tests at hand.
In API testing this feature matters in a particular way: after a test fails in CI, the first thing you want to verify is that it wasn't a one-off problem due to network latency, bad backend state, or the like. And you do that by rerunning failures automatically, at least once :)
The Robot Framework, which we use for browser testing, has had this feature for two years.

A dup of #1773?

@chenchaoyi No, absolutely not.

I don't want mocha to retry tests while it's running them. I want it to fail, then I fix the failing test and run again, skipping all the tests that already passed.

Then this could be done by parsing the JSON report and rerunning with the -g option, provided each test has a unique identifier.
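A sketch of that workflow, assuming the JSON reporter's failures array with its fullTitle fields, after running mocha --reporter json > report.json:

const { spawn } = require('child_process');

// report.json is the saved output of the JSON reporter.
const report = require('./report.json');

// Each failure entry carries a fullTitle we can feed back through -g,
// escaped so test names are matched literally.
const pattern = report.failures
  .map((t) => t.fullTitle.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'))
  .join('|');

if (pattern) {
  spawn('mocha', ['-g', pattern], { stdio: 'inherit' });
}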

What @noorus mentioned might be #1773 then.

Kind of, yes, but, as is unfortunately the case, our test suites tend to be somewhat stateful, much like browser tests. For example, the start of a suite creates a temporary database in the backend from a fixture, and the end of the suite tears it down.
If I had to choose, I would value this feature over #1773, although #1773 is also useful in its own right.
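For context, the kind of suite-level state in question looks roughly like this (the db helper is hypothetical):

// `db` stands in for whatever backend client the suite uses (hypothetical).
const db = require('./helpers/db');

describe('orders API', function () {
  before(function () {
    // Provision a throwaway database from a fixture before any test runs.
    return db.createFromFixture('orders');
  });

  after(function () {
    // Tear it down once the whole suite is finished.
    return db.drop('orders');
  });

  it('lists orders', function () { /* ... */ });
  it('creates an order', function () { /* ... */ });
});

Any skip or rerun mechanism would have to leave such hooks intact for whichever tests it does run.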

As for the parsing of a report - since mocha has numerous reporters with wildly varying output, and multi-reporting isn't even in the core yet (?), I'd much prefer an internal standard way of rerunning tests.

The unique identification of tests, which would be a prerequisite of this feature, is also an issue which I believe should be resolved in the core, rather than each user rolling their own solution by way of test naming schemes. Somewhat related: #1445

