Other testing frameworks allow tests to asynchronously decide whether they should skip themselves.
For example, in Mocha:
it('tests a remote service', async function() {
  if (!(await remoteService.isActive())) {
    return this.skip()
  }

  … test the remote service …
})
Currently, however, it is impossible to asynchronously decide whether a test should be skipped in Jest.
See also the discussion in https://github.com/facebook/jest/issues/7245
Some tests either depend on - or are explicitly testing - remote services which may or may not be available.
Without a way to programmatically and asynchronously decide whether tests should be skipped, there are only three options for writing these sorts of tests:
1. Decide that they will either always pass or always fail when the service is unavailable. In either case the result can be misleading (i.e., in many cases "failure" indicates "the service is wrong", not merely "the service is unavailable", while "passing" suggests that everything is okay, which is also not necessarily true).
2. Keep them in a separate suite, one per remote service, which can be run with (for example) npm run test:service-a.
3. Use a regular expression (or similar) to include / exclude these tests from a test run (a minimal sketch follows this list).
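As a rough illustration of option 3 (not from the original issue; the __remote__/ directory name is an assumption), the standard testPathIgnorePatterns option can exclude these tests from everyday runs:

// jest.config.js: a minimal sketch that keeps remote-service tests under a
// hypothetical __remote__/ directory and excludes them by default.
// An explicit run can still target them, e.g. `jest --testPathPattern __remote__`.
module.exports = {
  testPathIgnorePatterns: ['/node_modules/', '/__remote__/'],
};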
A complete, real-world (but anonymized) example from a Mocha-based test suite:
describe('example.com', () => {
  beforeAll(async function() {
    try {
      await fetch('https://example.com/api/whoami')
    } catch (e) {
      return this.skip(`Skipping tests for example.com: ${e}`)
    }
  })

  it('returns the current user', async () => { … })
  it('does the other thing', async () => { … })
})
This belongs in Jest core because it cannot reasonably be implemented outside of core: the test runner needs to be told that a test was skipped, which requires an API along the lines of Mocha's this.skip().
A common suggestion in #7245 is to use an if-statement (or similar) to skip tests:
describe('example.com', () => {
  const isActive = remoteService.isActive()
  if (isActive) it('returns the current user', async () => { … })
})
However, this will not work when the check is asynchronous: tests must be declared synchronously, but the "is a remote service active?" check is necessarily asynchronous.
A common objection is that tests simply should not depend on remote services at all. There are situations when this is true, but (as evidenced by discussion on #7245) there are also situations where "skip tests when a remote service is not available" is a reasonable business decision (ex: https://github.com/facebook/jest/issues/7245#issuecomment-474587219).
I struggle to see the value of this feature; you essentially want to have support for a sort of pseudo false-positive state. "When service A is unavailable, skip test B" - what do you want to do with that information?
By skipping in a CI environment you will receive no notification of this state without explicitly going in and checking, so what is the difference between that and just leaving the test as a pass when the service is unavailable?
My 2 cents on this matter (quite an interesting thing to think about, btw):
@mattphillips I guess there might be no practical difference but the semantic difference is huge. This matters if you want to use the information produced by Jest to determine what happened during the build.
A practical example: consider a test which depends on an external service. I push new code which would make that test fail and introduce a regression. When CI runs the checks on the code I've pushed, the service happens to be unavailable, but the test is marked as passed even though it didn't run. My code then fails in production and I have no way of determining whether it was because the test was flaky or because it didn't run.
The false-positive state, as you have described it, would be a test which did not actually run but was marked as passed.
Now, I can understand the appeal of this feature as it has a real-world correlation, but one might argue that introducing it would encourage — or at least acknowledge that it is common to write — tests that aren't deterministic (i.e. tests which rely on something you do not have control over). On the other hand, there doesn't seem to be any reasonable way to implement this outside of core, and it might be a decision from Jest that it doesn't want to be prescriptive about what its users should do. In the case described by @wolever, for example, it seems reasonable to have tests be asynchronously skipped.
To sum it up: I'd say this is a valid feature, even though it's only needed in a minority of cases. Since in those cases there doesn't seem to be a workaround which is "good enough" (or precise enough), I'd say it would be OK to have it in.
What about some cascading?
If a precondition test fails, only one error would be reported, along with a count of skipped tests.
If I wrote a test for a 3rd-party service's availability, only that one would fail, and I could see the real reason why the other tests did not run.
That feels much better than seeing hundreds of failing tests.
Thanks @lucasfcosta - that's a great summary of the problem.
I would also add that the "conditional skipping" can be environmentally aware. For example, in my environment, there are certain tests which are skipped only if they are running in a development environment (ex, tests which require a certain amount of local setup) but are required in a CI environment:
describe('some complicated service', () => {
  beforeAll(async function() {
    const shouldSkip = (
      process.env.NODE_ENV == 'development' &&
      !(await checkServiceAvailability())
    )
    if (shouldSkip)
      return this.skip(`Skipping tests because ComplicatedService is not running (hint: run it with ./complicated-service)`)
  })

  … rest of the tests …
})
@mattphillips What would your solution be for platform-specific tests?
It would be wildly inappropriate to falsely mark tests as passed or failed when it doesn't even make sense to run the test.
The questions we want our test frameworks to answer are: Can I run this test? Did the test pass? A boolean pass/fail cannot answer both of those questions.
Jest can mark tests as skipped. I'm using this hack, but it only works synchronously:
const haveDb = !!process.env.DB_CONNECTION_STRING;
const testDb = haveDb ? test : test.skip;
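A minimal usage sketch of that alias (the test body here is hypothetical, not from the original comment):

// testDb behaves like test when DB_CONNECTION_STRING is set, and like
// test.skip otherwise, so the test is reported as skipped rather than passed.
testDb('reads a row from the database', async () => {
  // ... connect using process.env.DB_CONNECTION_STRING and assert on a row ...
});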
Any update on this?
To add to @langpavel's idea, you could have an asynchronous check in a custom testEnvironment class.
const { connect } = require("amqplib");
const { EventEmitter } = require("events");
const NodeEnvironment = require("jest-environment-node");
const { logger } = require("../src/logging");

class TestEnvironment extends NodeEnvironment {
  constructor(config) {
    super(config);
  }

  async setup() {
    await super.setup();

    // rabbit
    try {
      this.global.amqpConnection = await connect({
        username: "admin",
        password: "admin",
      });
      this.global.channel = await this.global.amqpConnection.createChannel();
      this.global.hasAmqp = true;
    } catch (err) {
      logger.warn("AMQP is not available. Skipping relevant tests.");
      this.global.hasAmqp = false;
    }
  }

  async teardown() {
    if (this.global.amqpConnection) {
      await this.global.amqpConnection.close();
    }
    await super.teardown();
  }
}

module.exports = TestEnvironment;
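For completeness, the custom environment still has to be wired into Jest. A minimal sketch, assuming the class above is saved as TestEnvironment.js in the project root (the filename is an assumption):

// jest.config.js: testEnvironment is a standard Jest option and accepts a
// path to a module. A per-file /** @jest-environment */ docblock is the
// other standard way to select an environment.
module.exports = {
  testEnvironment: "<rootDir>/TestEnvironment.js",
};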
describe("example", () => {
const hasAmqp: boolean = (global as any).hasAmqp;
const channel: Channel = (global as any).channel;
it = hasAmqp ? test : test.skip;
it("should send/receive", async () => {
// ...
});
});
I want to add a use case for programmatically skipping tests:
Our product runs on multiple platforms and uses the same Jest tests for each platform. Sometimes a platform implementation is not yet complete; the product reports this dynamically via an API (but only after setup in beforeAll()).
Our tests also use snapshot matching. What happens is that if a platform doesn't support a feature and the test bails out, Jest will complain about the unused snapshot, resulting in a test failure.
Our current workaround is to use inline snapshots, but this is exceedingly cumbersome.
A programmatic skip() API would allow our tests to skip when a feature is not available on a certain platform, and also avoid having the Jest engine complain about unused snapshots.
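A minimal sketch of the snapshot problem described above (platformApi.supportsFeature and renderWidget are hypothetical names, not from the original comment):

it('renders the widget on platforms that support it', async () => {
  // If the platform doesn't support the feature and we bail out here, this
  // test "passes", but the stored snapshot below is never consumed, so Jest
  // reports it as obsolete at the end of the run.
  if (!(await platformApi.supportsFeature('widget'))) {
    return;
  }
  expect(await renderWidget()).toMatchSnapshot();
});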
I created a proof of concept PR (#9944) that demonstrates how easy this is to implement in jest-circus. I think it is well worth adding this functionality to jest-circus as it allows to implement custom bailing behaviors (#6527) or stepped tests (#8387) in user land easily.
I found this issue because I'm writing a series of integration tests; I'd like to write my tests so that they can run against any deployment of the API, but only run certain tests if that deployment has a feature flag enabled, e.g.:
Works:

describe('feature available in some deployments but not others', () => {
  const enabled = getFeatureFlag();
  (enabled ? it : it.skip)('runs a test that is only sometimes applicable', () => { /* ... */ });
});

Does not work:

describe('feature available in some deployments but not others', async () => {
  const enabled = await getFeatureFlag();
  (enabled ? it : it.skip)('runs a test that is only sometimes applicable', () => { /* ... */ });
});
An async describe, an async setupFilesAfterEnv, or a jest.skip() method I could call in the context of a test or a before method would all work. In the meantime, this sort of workaround feels doable but ugly, because it leaves the developer wondering whether the test was actually run or not:
describe('feature available in some deployments but not others', () => {
  it('runs a test that is only sometimes applicable', async () => {
    const enabled = await getFeatureFlag();
    if (!enabled) { return; }
  });
});
I'd like to add another concrete use-case for the asynchronous, skip-from-within-a-test feature:
I've created a package for use with Jest and Mocha that lets you create a sequence of tests which are dependent on each other. For example:
let { auth, createPost, createComment, getComments, logOut } = require("../examples/TestApp");
let { chainedTest } = require("@art-suite/chained-test");

// NOTE: This file INTENTIONALLY fails in Mocha to demonstrate how failures are handled.

const aliceEmail = "[email protected]";
const postBody = "The quick brown fox jumped over the lazy dog.";
const commentBody = "Brilliant!";

// The return-result of this first test will be passed as the second argument
// to all subsequent tests in the chain.
chainedTest("Alice's user story", () => auth(aliceEmail))

  // In "then" tests, the test's return value is passed to the next test.
  // skipped: if neither this nor any dependent tests are selected by test framework
  .thenIt("lets Alice create a post", () =>
    createPost(postBody)
  )

  .softTapIt("softTapIt failures don't skip following tests", () => {
    throw new Error("fake-failure in softTapIt");
  })

  // "tap" tests: ignores the test's return value. Instead it passes lastTestValue through.
  // skipped: if neither this nor any dependent tests are selected by test framework
  .tapIt("lets Alice create a comment", (post, alice) =>
    createComment(post.id, commentBody)
  )

  .tapIt("tapIt or thenIt failures WILL skip remaining tests", () => {
    throw new Error("fake-failure in tapIt");
  })

  .thenIt("can get the created comment from the post", (post, alice) =>
    getComments(post.id)
  )

  // In "softTap" tests, the test's return value is ignored.
  // Instead it passes lastTestValue through to the next test.
  // skipped: if not selected by test framework
  .softTapIt("should have only one comment by Alice", (comments, alice) => {
    expect(comments.length).toEqual(1);
    expect(comments[0].userId).toEqual(alice.id);
  })

  .tapIt("should be able to logOut", logOut)
If one of these tests fails, the rest cannot succeed. Logically, they need to be skipped. However, we don't want to report them as "passed", since they were never run. Likewise, we don't want to report them as "failed", since they didn't actually fail.
What we need is a third option: report the test as "skipped" (or "pending", as Mocha does). Note that the first failure will still be reported as a failure, so the suite will properly fail and CI/CD will fail. However, without "skip" there is no correct way to report what happened in Jest, i.e. there is no correct way to report that the rest of the tests were not run.
In Mocha this works beautifully:
  ✓ Alice's user story
  ✓ needs to create a post
  1) softTapIt failures don't skip following tests
  ✓ lets Alice create a comment
  2) tapIt or thenIt failures WILL skip remaining tests
  - can get the created comment from the post
  - should have only one comment by Alice
  - should be able to logOut

  3 passing (11ms)
  3 pending
  2 failing

  1) softTapIt failures don't skip following tests:
     Error: fake-failure in softTapIt
      at ChainedTest.test (test/api.js:23:11)

  2) tapIt or thenIt failures WILL skip remaining tests:
     Error: fake-failure in tapIt
      at ChainedTest.test (test/api.js:33:11)
Thank you @wolever for championing this feature-request. It's important, and I hope it makes it into Jest soon. Cheers!
I think the chances of this ever being added to jest are extremely low. I've had a PR out for months now and we haven't even gotten any feedback on this from jest maintainers. If you need this I'd suggest you look for jest alternatives.