In my test suite, I normally ran all my tests and received some code coverage:

But after updating npm and Node, my code coverage now looks like this:

Visit the following repository: https://gitlab.com/jvanderen1/testing-react-components/-/tree/master
Then run the following commands:

```shell
npm install
npm run test:coverage
```
```
System:
  OS: macOS 10.15.2
  CPU: (8) x64 Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
Binaries:
  Node: 12.13.1 - /usr/local/bin/node
  npm: 6.12.1 - /usr/local/bin/npm
npmPackages:
  jest: ^24.9.0 => 24.9.0
```
@jeysal Any more updates on this issue?
Don't know more than you, sorry. I get lots of coverage on Node 12.13.1 so that doesn't seem broken in general.
I might be seeing a similar/related problem. My Jest config specifies a coverageThreshold, and my project suddenly went from meeting all thresholds to failing them, despite no change in my project since the last commit to master (which passed on CI).
But I, too, updated npm recently, to 6.13.6. So unlike the OP I do get non-zero coverage results, but they are different (and probably invalid) results.
I'd like to chime in: in a newly implemented project I'm getting 0% coverage, an empty lcov.info file, and empty reports.
The configuration looks like this:
```js
module.exports = {
  setupFiles: ['<rootDir>/jest.setup.js'],
  moduleNameMapper: {
    '^@/(.*)$': '<rootDir>/$1',
    '^~/(.*)$': '<rootDir>/$1',
    '^vue$': 'vue/dist/vue.common.js',
  },
  moduleFileExtensions: ['js', 'vue', 'json'],
  transform: {
    '^.+\\.js$': 'babel-jest',
    '.*\\.(vue)$': 'vue-jest',
  },
  collectCoverage: true,
  collectCoverageFrom: [
    '**/components/**/*.vue',
  ],
  reporters: ['default', 'jest-junit'],
}
```
Unfortunately I have no previous state to compare against, but another project on Jest 24.9.0 (with essentially the same config) produces correct coverage reports.
Both are on Node 12.14.1 and npm 6.13.4.
Update: Turns out the workaround I found was for a regression in Jest, so definitely not relevant. Please ignore this "solution". See https://github.com/facebook/jest/pull/9724
Just wanted to share that I was running into this issue as well in a project I work on. In my case, the issue was introduced by adding a `transform` key to the `jest.config.js` file.
The key looked like this:
```js
transform: {
  '^.+\\.(ts|tsx)$': 'ts-jest'
},
```
But the project was actually still composed entirely of plain .js files that required no transformation. Removing that key completely, or adding js to the list of transformed file extensions, resolved the issue. The project in question is part of a monorepo that is mostly TypeScript files, with a shared Jest config file.
There is a note in the Jest docs that alludes to this behavior, but I'm not sure it really applies in our case because we have never explicitly used `babel-jest`:
> Note: if you are using the babel-jest transformer and want to use an additional code preprocessor, keep in mind that when `transform` is overwritten in any way the babel-jest is not loaded automatically anymore. If you want to use it to compile JavaScript code it has to be explicitly defined. See babel-jest plugin
What you have in `transform` shouldn't prevent coverage from being collected.
This issue needs a reproduction, though, not just "it doesn't work".
@SimenB See reproduction steps above
Seems to be an issue with watch mode; running `npm run test:coverage -- --watchAll` works.

You say it broke after you updated Node and npm. Do you know which versions?
Interesting. The command `npm run test:coverage` used to work and produce what you see above. Now, the command produces:

Here are my most current versions:

```
node: v13.12.0
npm: 6.14.4
```
@SimenB Turns out the workaround I found was for a regression that was fixed in #9724. My issue was resolved by updating Jest to 25.2.4. Thanks for the message though!
First off, I want to congratulate and thank everyone working on/with jest. It's really cool to see so much positivity and energy into something so rightfully popular. And everyone is just so dang nice!
TL;DR: My comment isn't very useful. I'm looking for the right setup to get the best performance for local (watched, filtered, and full) runs and continuous integration environments, and I think the v8 coverage provider will be the best option if I can get it to actually show the coverage.
So, my team is adopting Jest and I am deep down the rabbit hole of investigating Jest's performance. I have read up on using `--ci --maxWorkers=2`, and I'm just starting to dig into the big performance difference between running the tests with and without coverage, and with babel vs. v8 as the "provider".
The reason I'm commenting is because my investigation has led me here (and to https://github.com/facebook/jest/issues/9457 and https://github.com/facebook/jest/issues/9776) to find ways to mitigate or eliminate the performance issues that don't seem to be config or operator error.
Our codebase is mostly AngularJS in the process of moving to React, and we're trying to introduce Jest as a way to make unit testing more commonplace for our developers. We use Babel and webpack to build, and prior to Jest we used Karma and Jasmine to spin up headless Chrome and execute our tests against a full webpack build running in the headless browser. I've migrated all our spec files to test files and have seen mixed results locally and in continuous integration (CircleCI) when running Jest.
We have 581 tests across 120 suites; my local results, in seconds, are further below. Due to the consistency of my local results (and laziness), I didn't run the tests more than once in our CI pipeline, but I saw about a 95% increase without coverage (40 seconds locally --> 78 seconds in CI) and roughly a 150% increase with coverage (127 seconds locally --> 320 seconds in CI).
Generally, the times are down compared to Karma without coverage, but if I want coverage (and I do!), the conundrum I'm currently facing is either:
1. collect coverage on every run and accept the roughly 4x slowdown, or
2. skip coverage in regular runs and produce coverage reports manually.
Currently, option 2 seems more appealing because coverage isn't very important at dev time, and even at PR or CI build time the 4x increase doesn't seem worth it for every CircleCI run. It seems better to just produce coverage reports manually, since this is just the beginning of our change in philosophy to "write more unit tests to maintain and increase code coverage and quality".
All that said, I believe that if I can get the v8 coverage provider to report correctly, it will yield significantly better times.
| scenario | run times (s) | avg |
| --- | --- | --- |
| no coverage, maxWorkers=50% | 40, 40, 39, 38, 43 | 40 |
| no coverage, maxWorkers=2 | 46, 39, 38, 41, 39 | 40.6 |
| babel coverage, maxWorkers=50% | 112, 97, 106, 108, 102 (86) | 105 |
| babel coverage, maxWorkers=2 | 128, 127, 127, 127, 126 (118) | 127 |
| v8 coverage, maxWorkers=50% | 37, 36, 35, 35, 35 | 35.6 |
| v8 coverage, maxWorkers=2 | 39, 39, 40, 40, 39 | 39.4 |

Note: with the babel provider, it takes another 5-20 seconds to generate the coverage report once the tests are done running.
NOTE: v8 coverage does not currently work; I get unknown% (0/0).
Here's my Jest config:
```js
// For a detailed explanation regarding each configuration property, visit:
// https://jestjs.io/docs/en/configuration.html
module.exports = {
  // Stop running tests after `n` failures
  bail: 1,
  // Automatically clear mock calls and instances between every test
  clearMocks: true,
  // Indicates whether the coverage information should be collected while executing the test
  collectCoverage: true,
  // coverageProvider: 'v8',
  // The directory where Jest should output its coverage files
  coverageDirectory: 'test_output',
  // A list of reporter names that Jest uses when writing coverage reports
  coverageReporters: [
    // 'default', // json, lcov, text, clover
    'json',
    'html',
    'text-summary',
  ],
  // An array of directory names to be searched recursively up from the requiring module's location
  moduleDirectories: [
    'app',
    'test/unit',
    'node_modules',
  ],
  // An array of file extensions your modules use
  moduleFileExtensions: [
    'js',
    'json',
    'jsx',
    'ts',
    'tsx',
  ],
  // A map from regular expressions to module names or to arrays of module names that allow to stub out resources with a single module
  moduleNameMapper: {
    // top level aliases
    '@hooks': 'hooks/_index',
    '@models': 'models/_index',
    '@selectors': 'selectors/_index',
    '@services': 'services/_index',
    '@utils': 'utils/_index',
    // mocks for webpack
    '\\.(css|scss)$': '<rootDir>/test/unit/__mocks__/styleMock.js',
    'quill-mention': '<rootDir>/test/unit/__mocks__/styleMock.js',
  },
  // Use this configuration option to add custom reporters to Jest
  // reporters: undefined,
  reporters: [
    'default',
    ['jest-junit', {
      outputDirectory: 'test_output',
      outputName: 'unit.xml',
    }],
  ],
  // A list of paths to directories that Jest should use to search for files in
  roots: [
    'app',
    'test/unit',
  ],
  // A list of paths to modules that run some code to configure or set up the testing framework before each test
  setupFilesAfterEnv: ['./jest.setup.js'],
  // The glob patterns Jest uses to detect test files
  testMatch: [
    // '**/__tests__/**/*.[jt]s?(x)',
    '**/?(*.)+(test).[tj]s?(x)',
  ],
};
```
I also just ran it with jsdom 16; it took longer and ultimately crashed at the halfway mark, siiiigh.
I also made a repo that reproduces the memory leak.

@omgoshjosh Please post this as a separate issue. What you are experiencing does not seem to be related to this issue.
@jvanderen1 apologies. I think at least one open issue is this one:
https://github.com/facebook/jest/issues/7874
So I won't make a new one, but I have linked my repo and the image above in that issue, where I think it's more appropriate.
For context, I started writing the original comment because I was not getting coverage (so it was related to begin with), but by the time I finished a draft of the comment, I realized (as I noted in my TL;DR) that it just wasn't a super helpful comment, because the scope of my issues was much larger than just not getting coverage. So rather than scrap the results, I left the comment to record my effort/activity in case my symptoms match anyone else's more completely.
All that said, it's no secret that there are a plethora of issues around coverage, performance, and various environments and dependencies, many of which are blocked by an upstream dependency, are duplicates, or are related in some other way. So as someone who just started digging into this ecosystem, it's difficult to parse what is or is not being tracked, what is or is not currently working, and what workarounds are necessary for the aforementioned myriad issues (46 open for "leak" and 57 open for "performance"). Consequently, it's hard to find the right place to record the issues I'm seeing. I'm just happy that people are responsive! Even if I'm a Jest noob and people give me thumbs down, lol.
And again, because I can't help myself, I'm seriously amazed and immensely grateful for the hard work and dedication by the 1000+ people involved. Like many others, I'm just hoping to find something in one of these threads that works for me, you know?