Jest: Feature: Purposefully failing test (test.failing)

Created on 8 Oct 2017 · 4 comments · Source: facebook/jest


Do you want to request a feature or report a bug?
Feature

What is the current behavior?
As discussed in #1605, AVA has a way of marking a test as purposefully failing. I'd like to replicate that feature in Jest. The other features mentioned in #1605 have been implemented for more than a year, so I'm opening a new issue to track test.failing.

I'll try to find the time to implement this in the coming week.

If the current behavior is a bug, please provide the steps to reproduce and either a repl.it demo through https://repl.it/languages/jest or a minimal repository on GitHub that we can yarn install and yarn test.
N/A

What is the expected behavior?
I should be able to mark a test as purposely failing (due to a bug or an unimplemented feature) so that it only fails the test suite if the test unexpectedly passes.

Link to AVA's docs on the feature: https://github.com/avajs/ava/blob/42e7c74c46756c441fe33ecdcbb76ac210e422ea/readme.md#failing-tests
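
For illustration, a minimal sketch of what the proposed API could look like, mirroring AVA's semantics (test.failing does not exist in Jest yet, and roundCurrency is a hypothetical function under test):

    // Hypothetical buggy implementation standing in for real code under test:
    // 0.615 * 100 is 61.49999999999999 in floating point, so this rounds down.
    const roundCurrency = (value) => Math.round(value * 100) / 100;

    // Proposed API: the suite stays green while this test fails,
    // and fails once the test unexpectedly starts passing.
    test.failing('rounds 0.615 up to 0.62', () => {
      expect(roundCurrency(0.615)).toBe(0.62);
    });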

Please provide your exact Jest configuration and mention your Jest, node, yarn/npm version and operating system.
N/A

New API proposal

All 4 comments

test.failing() is a very useful feature.
For example, say you encounter a bug whose root cause is in a dependency written by another team.
Marking the test with failing() allows you to track these tests and make sure they are all taken care of before release.

Another example: you encounter a bug that you cannot fix at the moment. Instead of skipping the test, marking it as failing keeps track of it and makes sure it gets fixed as soon as possible.

test.skip() only indicates the outcome (you skipped a test) but does not carry any semantic meaning (i.e. why you are skipping: is it a todo, a known issue, something you will implement in the next release, etc.).
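
To make the contrast concrete, a sketch under the proposed API (parseUpstreamDate is a hypothetical function from the buggy dependency):

    // test.skip records only that the test was skipped; when the upstream
    // bug is eventually fixed, nothing prompts you to re-enable the test.
    test.skip('parses dates from the upstream API', () => {
      expect(parseUpstreamDate('2017-10-08')).toEqual(new Date(2017, 9, 8));
    });

    // The proposed test.failing keeps the test running: the suite is green
    // while the bug persists, and it fails once the dependency is fixed,
    // reminding you to remove the marker.
    test.failing('parses dates from the upstream API', () => {
      expect(parseUpstreamDate('2017-10-08')).toEqual(new Date(2017, 9, 8));
    });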

Just wanted to comment to maybe bring back some attention to this feature. I myself find it extremely useful, especially for documentation purposes, and I think it wouldn't hurt for Jest to have it. I miss it when switching from an AVA project!

Would be great if the default reporter output the number of intentionally failing tests, similar to how we do .todo now 👍
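
For illustration, a hypothetical summary line, assuming failing tests were counted the way .todo tests are today (the "1 failing" entry is the proposal, not current Jest output):

    Tests:       1 failing, 2 todo, 5 passed, 8 total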

I've added my vote for this feature, which is also available in RSpec (https://relishapp.com/rspec/rspec-core/v/3-8/docs/pending-and-skipped-examples/pending-examples).

One additional use case is in a Test-Commit-Revert (TCR) context: I might want to write the test first without implementing the feature yet, so I mark it as failing (or pending, in RSpec's syntax). This lets me run the TCR loop and verify that my test fails as expected; once I implement the feature, I remove the failing marker and run the TCR loop again to see that the test now passes.
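
A sketch of that workflow with the proposed marker (computeDiscount is a hypothetical feature under test):

    // Step 1: write the test before the feature exists and mark it failing,
    // so the TCR loop can run with a green suite while the test still fails.
    test.failing('applies the bulk discount above 100 units', () => {
      expect(computeDiscount({ quantity: 100 })).toBe(0.1);
    });

    // Step 2: once computeDiscount is implemented and the test passes,
    // the stale marker makes the suite fail, and you drop .failing:
    test('applies the bulk discount above 100 units', () => {
      expect(computeDiscount({ quantity: 100 })).toBe(0.1);
    });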
