Mocha supports skipping tests programmatically (in both before and it) as:
describe("Some test" () => {
it("Should skip", function () {
if (true) {
this.skip(); // test marked as skipped, no further part run
}
notInvoked();
});
}):
It's very useful for cases where, during test setup, we find out whether the test can be pursued or not, e.g. we need some external data but due to some unavailability we can't get it, so we decide to skip the tests.
Is this somewhere on a roadmap?
This seems like a bad idea to me. Currently you can do it.skip() to explicitly skip a particular test and it's not even executed.
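For reference, the explicit form looks like this (it.skip is part of Jest's documented API; the test name is just an example):

it.skip("talks to the external service", () => {
  // never executed; reported as skipped in the summary
});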
Skipping programmatically, and only running a portion of your test suite as a result, doesn't seem like it's serving the purpose of tests. A test failure at that point would be beneficial so the problem(s) could be fixed. And if they can't be fixed then marking a test as skipped explicitly like I've shown above is an appropriate reaction.
> Skipping programmatically, and only running a portion of your test suite as a result, doesn't seem like it's serving the purpose of tests. A test failure at that point would be beneficial so the problem(s) could be fixed
It serves integration tests, where tests depend on some external factors; unavailability of some external resource shouldn't indicate a problem with the project (reported with a failure) but rather the fact that the test cannot be pursued (hence skipped).
I'll leave this here for others to discuss. But personally I don't think this is a great idea.
Your hypothetical example would not give confidence that any given change in a code base caused problems, thus why I said a test failure is beneficial.
> Your hypothetical example would not give confidence that any given change in a code base caused problems, thus why I said a test failure is beneficial.
Yes, it's not for the case where we want to confirm our project is free of bugs on its own (that should be solved with unit tests or mocked integration tests).
It's about the case where we test integration with an external resource (such tests might be run on a different schedule). Having failures both for resource unavailability and for errors in handling it makes the tests unreliable, as it produces false positives and as a result increases the risk of ignoring the latter type of issue.
For that case you could consider using jest.retryTimes
> For that case you could consider using jest.retryTimes
I don't want to retry; I want to abort without failing (and also signal that the test was not really run)
@medikoo I agree with @palmerj3
I think being able to dynamically disable tests kind of misses the point of testing.
Instead of disabling because of unavailability of some resource I would argue that you probably want to be alerted to this with the failure and then resolve the real problem of the resource not being available.
jest.retryTimes should help with this if it's just a case of the resource being flaky, but if it is completely down then you have a bigger problem IMO and want to know about it :smile:
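For illustration, a minimal sketch of that suggestion (jest.retryTimes requires the jest-circus runner; the URL is a placeholder and a fetch implementation is assumed to be available):

jest.retryTimes(3); // re-run failing tests in this file up to 3 times

test('talks to a flaky external service', async () => {
  const res = await fetch('https://external-service'); // placeholder endpoint
  expect(res.ok).toBe(true);
});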
> then resolve the real problem of the resource not being available.
When speaking of an external resource, I mean something I do not own or control, so it's not an issue I can resolve.
And uptime monitoring of external resources, which alarms/informs me whether a given service is accessible or not, is a different thing which I don't see as part of integration tests.
This is part of Jasmine (the pending global function), but I think it was an explicit choice not to port it to Circus.
@aaronabramov might have thoughts on that?
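For context, a sketch of the Jasmine API being referred to (pending is Jasmine's global; how Jest reports it depends on the legacy jasmine2 runner, and SERVICE_URL is a hypothetical precondition):

it('talks to the external service', () => {
  if (!process.env.SERVICE_URL) {
    pending('SERVICE_URL not set, skipping'); // marks the spec as pending instead of failing
  }
  // ... assertions against the service ...
});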
what if the external source starts failing all the time? then you'll just have a skipped test that will never run.
i think for very specific scenarios you can just use:
test('stuff', () => {
if (serviceUnavailable) {
logSomething();
return;
}
// else test something
});
but i agree with @palmerj3 that having this in the core doesn't look like a great idea
@aaronabramov it's what we do now (return and log); still, as we have a large number of tests, those logs usually go unnoticed.
If the tests were skipped instead, then any skip that happened would be noticed in the final summary.
This is a pretty common use case. Sometimes writing tests is hard and takes a long time to do it correctly. Sometimes writing a test to work only in certain circumstances is achievable in far less time and better than writing no tests at all or permanently skipping tests.
So for all the devs with deadlines, here is a hacky workaround:
let someTestName = 'some test';
let someTestCB = () => {
  it("Should skip", function () {
    notInvoked();
  });
};

if (process.env['RUN_ALL_TESTS'] == 'yes') describe(someTestName, someTestCB);
else describe.skip(someTestName, someTestCB);
Another possibility
if (thing) test.only('skipping all other things', () => {
console.warn('skipping tests');
});
// ...all other things
Jest itself uses this to skip certain tests on Windows
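For illustration, a sketch of that idea applied to an OS check (not Jest's exact internal helper; the test name is made up):

const onWindows = process.platform === 'win32';
const testUnlessWindows = onWindows ? test.skip : test;

testUnlessWindows('uses unix domain sockets', () => {
  // ...
});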
To weigh in on this: this is already a feature supported by Mocha which is especially useful in beforeAll blocks when a precondition is being checked. For example, there are a number of tests in my suite which should only run if an external service is available, and with Mocha this is trivial:
describe('External service tests', () => {
  before(async function() {
    try {
      await fetch('https://external-service')
    } catch (e) {
      console.log('external service not running; skipping tests:', e)
      this.skip()
    }
  })

  … the rest of the suite …
})
Based on the responses here (and elsewhere on Google), the only options for this kind of test are:
And to address a couple of the common issues that have been raised with this kind of testing:
But what if the tests never run because the service is always down?
This is a business decision: I've decided that the cost of "test suite fails every time $service goes down" is higher than the cost of "certain portions of the test suite are not exercised until someone responds to the pagerduty and fixes the broken service".
Why not retry until the service comes back?
Jest does not allow you to skip a test once it's begun, but you can always extract that logic and conditionally use it or it.skip to define your problematic tests. This is arguably a better practice than inline skipping since this results in consistent enabling/disabling of related tests. (I suppose it's clunky if you only have a single test though.)
For example:
const {USERNAME, PASSWORD} = process.env
const withAuth = USERNAME && PASSWORD ? it : it.skip

if (withAuth === it.skip) {
  console.warn('USERNAME or PASSWORD is undefined, skipping authed tests')
}

withAuth('do stuff with USERNAME and PASSWORD', () => {
  // ...
})
What if you want to skip certain tests on certain OSes? That seems like a pretty valid reason for programmatically skipping tests.
@kaiyoma see above
> Another possibility
> if (thing) test.only('skipping all other things', () => { console.warn('skipping tests'); }); // ...all other things
> Jest itself uses this to skip certain tests on Windows
describe('something wonderful I imagine', () => {
  it('can set up properly', () => {
    setUp()
  })

  it('can do something after setup', () => {
    skipIf('can set up properly').failed() // hypothetical API: skip this test if the named test failed
    setUp()
    doSomethingThatDependsOnSetUpHavingWorked()
  })
})
Idea being that I want _one_ test to fail telling me exactly "hey dingus, you broke this part", not one test and all the others that depend on whatever it's testing going well.
Basically I want test dependencies in JS.
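A rough userland approximation under the same assumptions (setUp and doSomethingThatDependsOnSetUpHavingWorked are the hypothetical helpers from the snippet above; Jest runs tests in a file in declaration order):

let setUpWorked = false;

it('can set up properly', () => {
  setUp();
  setUpWorked = true; // only reached if setUp() didn't throw
});

it('can do something after setup', () => {
  if (!setUpWorked) {
    console.warn('setup failed; skipping dependent assertions');
    return; // reported as passing, not skipped (the limitation discussed in this thread)
  }
  doSomethingThatDependsOnSetUpHavingWorked();
});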
I agree with @wolever 100%.
@palmerj3 and @aaronabramov: Your reasoning for not providing this feature is predicated on false assumptions about the business needs of our test application. Your assumptions are understandable in the context of application self-testing, but for external resource tests the model breaks down fast.
My use case for conditionally skipping tests is when the resource is only available during certain times of the day/week. For example, testing the API consistency of a live stock market data service doesn't make sense on weekends, so those tests should be skipped.
Yes, I assume the risk that the API response format changed over the weekend, but that's a business decision, as others have mentioned.
@okorz001's withAuth workaround is nifty, but breaks IDEs. VS Code, WebStorm etc. won't recognize withAuth as a test, and won't enable individual test running and status.
I'm assuming you have a very good reason to not mock the API calls in tests, so I won't ask.
Can't you just perform your check within the tests?
const hasAuth = USER && PASSWORD

describe('something wonderful', () => {
  it('does something with auth', () => {
    if (!hasAuth) { it.skip() }
    // ...
  })
})
describe('auth tests', () => {
  if (!(USER && PASSWORD)) {
    it.only('', () => {
      console.warn('Missing username and password, skipping auth tests');
    });
  }

  // actual auth tests
});

// all non-auth tests
You could have a helper as well, sort of like invariant.
import {skipTestOnCondition} from './my-test-helpers'

describe('auth tests', () => {
  skipTestOnCondition(!(USER && PASSWORD));
  // actual auth tests
});

// all non-auth tests
If you don't like describe blocks, just split the tests out into multiple files and have the check at the top level.
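For example, a sketch of such a top-of-file check in a dedicated test file (reusing the USER/PASSWORD idea from above):

// auth.test.js
const {USER, PASSWORD} = process.env;
const maybeDescribe = USER && PASSWORD ? describe : describe.skip;

maybeDescribe('auth tests', () => {
  // actual auth tests
});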
Again, Jest does something very similar: https://github.com/facebook/jest/blob/3f5a0e8cdef4983813029e511d691f8d8d7b15e2/packages/test-utils/src/ConditionalTest.ts
I don't think we need to increase the API surface of Jest for this, when it's trivial to implement in user land
> I don't think we need to increase the API surface of Jest for this, when it's trivial to implement in user land
I do still want this, which isn't trivial in userland:
https://github.com/facebook/jest/issues/7245#issuecomment-491931060
Seems also like something you can use --bail to achieve: it won't execute tests after a failing one (in a single file). That's also an entirely different feature request than what I interpret this issue to be about; you want access to another test's state from within a test.
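For reference, a minimal sketch of the bail option in config form (it also exists as a CLI flag; note it stops the run per failed test suite rather than per individual test):

// jest.config.js
module.exports = {
  bail: true, // stop the run after the first failure (or set a number for N failures)
};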
It seems like there are two streams to the discussion here:
1. The issues @SimenB and others are raising about _synchronously_ skipping tests (e.g. using if (condition) it('does some test', () => { … }))
2. The issues @medikoo, @dandv, and I are raising about _asynchronously_ skipping tests
And we may be talking over each other, because the two are fundamentally separate issues.
To hopefully avoid this, I've opened a separate issue for asynchronously skipping tests: https://github.com/facebook/jest/issues/8604
> And we may be talking over each other, because the two are fundamentally separate issues.
Exactly. The use case I was trying to explain is being able to gently abort when we're in the middle of a test run, after an external resource has started to become unreliable.
It might be that at definition time the external resource appears reliable, or that it had been for most of the test run, but it suddenly starts to fail.
When we test asynchronously, the fact that something is accessible when we start the test run doesn't mean it still will be a few seconds later.
This issue is quite common when you're forced to depend on resources with poor reliability (e.g. I experienced that when working on integration with some transport service providers)
I have the same issue here: after reaching a certain point in my code I want to exit the test, but I don't know whether I can continue or not until I reach that point.
In our case, we have different teams working on different aspects of the pipeline. We use ENV flags to decide if a service is available for integration testing. For teams to work independently, we wanted to see if we could run full test suites based on a given ENV set up in the pipeline or skip them altogether.
Using your suggested approach above @medikoo , it would mean a member from team X would have to go back and touch code when team Y completes their service.
If the mocked specifications worked well and were well tested in the first place, there shouldn't be a need to do this at all.
Please consider this use case.
Please reconsider this feature request:
Unit testing frameworks are used not only for unit testing. They are also valuable in integration testing.
Obviously not all tests are suitable for all environments. Jest already provides skip() as a way to "comment out" parts of the test suite without having to actually comment out the code. Having a predicate in the skip method would make it less cumbersome to switch parts of the test suite on and off to suit the environment (Windows vs. Unix, local dev vs. build server, etc.) the tests are running on.
@elialgranti There has been more discussion on this (and it seems like core devs are in favour of it) in https://github.com/facebook/jest/issues/8604
@SimenB: I've tried the it.only structure you suggested for synchronous test skipping, but it's failing the non-auth tests outside the describe as well:
https://repl.it/repls/SillyPastDictionary
Filed #9014.
pytest permits skipping a test programmatically because of missing preconditions.
If a test cannot pass because of missing preconditions, it will give a false positive.
For example, I need to test that pressing a button switches my lamp on.
If there is no electric power, the test needs to be skipped, otherwise you will obtain a false positive.
There will be another test somewhere that tests that there is electrical power.
Right now in Jest I need to avoid the assertions that cannot succeed.
test("something", async () => {
const precondition = something
if(precondition) {
do stuff
}
})
but this is boilerplate.
Another reason to have a feature for this: We have some VERY long running tests on sections of a system that doesn't change much. We want to keep these running on CI, where we don't care if they take a long time, but devs shouldn't have to worry about them while developing. Many other test runners have ways of classifying tests (small, big, ci, etc...) and then you can pick which tests you want to run while developing.
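One userland way to approximate test classification is a file-naming convention plus path filtering; a sketch (the *.slow.test.js convention and the CI environment variable are assumptions about your setup):

// jest.config.js
module.exports = {
  // run everything in CI, but ignore long-running specs locally
  testPathIgnorePatterns: process.env.CI ? [] : ['\\.slow\\.test\\.js$'],
};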
Another use case is ability to run some tests _only_ in CI.
E.g. I have some canary integration tests that run against a production system, using secrets, which are stored in CI only. Developers, including open source devs, simply just don't have access to these keys. So the tests would fail on their systems.
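A sketch of how that can be gated today (CANARY_API_KEY is a hypothetical secret that only the CI environment exposes):

const hasCanarySecrets = Boolean(process.env.CANARY_API_KEY);
const canaryTest = hasCanarySecrets ? test : test.skip;

canaryTest('canary: production API responds', async () => {
  // ... call the production system using the secret ...
});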
Another use case similar to @moltar's is when a server may or may not have the capability to run a particular test. For example, I'm writing tests to verify that Daylight Savings Time is handled correctly, but if they run in a local timezone without DST (which I can detect programmatically) then I want to skip the test, and I want to let users know that the test was skipped.
Here's how I'm doing it now, which seems to be working pretty well.
const tz = Intl.DateTimeFormat().resolvedOptions().timeZone || process.env.tz
const dstTransitions = getDstTransitions(2020)
const dstOnly = dstTransitions.start && dstTransitions.end ? it : it.skip
dstOnly(`works across DST start & end in local timezone: ${tz}`, function() {
. . .
https://github.com/facebook/jest/issues/3652#issuecomment-385262455
const testIfCondition = mySkipCondition ? test.skip : test;
describe('Conditional Test', () => {
testIfCondition('Only test on condition', () => {
...
});
});
We ended up doing something like this:
function describe(name, callback) {
  // shadows Jest's global describe; delegate to the real one via `global`
  if (name.toLowerCase().includes("bluetooth") && BLUETOOTH_IS_DISABLED)
    return global.describe.skip(name, callback);
  else
    return global.describe(name, callback);
}
Not perfect, but it works well and is unobtrusive. This prevents the usage of describe.each, but I'm happy to get feedback on how to make this function handle those situations as well.
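One possible way to forward .each through the wrapper above, handling only the array form (a sketch; the tagged-template form of describe.each would need separate handling, and BLUETOOTH_IS_DISABLED is as above):

describe.each = (table) => (name, ...rest) =>
  name.toLowerCase().includes("bluetooth") && BLUETOOTH_IS_DISABLED
    ? global.describe.skip.each(table)(name, ...rest)
    : global.describe.each(table)(name, ...rest);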
My use case for this is tests relying on external services that we have no control over. Obviously some part of the test should be mocked, but it would also be good to actually test the request to the service.
Was this ever resolved?
I only want to run integration tests against service Foo when I've started service Foo and indicated it to my tests with $FOO_PORT. Every single other test framework makes that very convenient.
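A sketch of the kind of guard meant here, with FOO_PORT read synchronously at module load (names are illustrative):

const fooPort = process.env.FOO_PORT;
const describeFooIntegration = fooPort ? describe : describe.skip;

describeFooIntegration('Foo service integration', () => {
  // tests that talk to the Foo service on fooPort
});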
Has anyone successfully gotten tests to skip based on an async condition? I have found that Jest parses tests before my async condition is resolved, even with various assiduous logic in beforeAll or a setupFile. This seems like a trivial task but for some reason it's actually hard. This is what I did to check if $FOO_PORT was in use and run tests conditionally.
I'd still like to see this feature in core.