Jest: JavaScript heap out of memory after upgrade to Jest 26

Created on 5 May 2020 · 14 comments · Source: facebook/jest

🐛 Bug Report

I upgraded from 24.x to 26.0.0, but tests that used to pass no longer do.
The run takes a long time to complete, then I get this error:
[image: JavaScript heap out of memory]

To Reproduce

My test:

  describe('when item ids are in sessionStorage', () => {
    const itemIds = [333, 222, 111];

    beforeEach(() => {
      parseLocationToQueries.mockImplementation(() => ({
        queue_id: testQueueId
      }));
      isAdHocReviewByItemId.mockReturnValue(false);
      isAdHocReviewByObjId.mockReturnValue(false);
      setItemsToBeReviewed(itemIds);
    });

    it('initial fetch', () => {
      const wrapper = tf.render();
      expect(wrapper.state('itemIds')).toEqual([]);
      expect(axios.post).toBeCalledWith('/review/items', { item_ids: itemIds });
    });

    it('fetch more while no more', () => {
      const wrapper = tf.render();
      axios.post.mockClear();
      wrapper.instance().fetchMoreItems();
      expect(axios.post).not.toBeCalled();
    });

    it('fetch more while more', () => {
      const wrapper = tf.render();
      axios.post.mockClear();
      wrapper.setState({ itemIds: [555] });
      wrapper.instance().fetchMoreItems();
      expect(axios.post).toBeCalledWith('/review/items', { item_ids: [555] });
    });
  });

code:

export function setItemsToBeReviewed(itemIds) {
  sessionStorage.setItem(ITEMS_TO_BE_REVIEWED_KEY, JSON.stringify(itemIds));
}


  fetchMoreItems = () => {
    this.setState({ loadingMoreItems: true });
    return this.fetchItems(true)
      .then(res => {
        this.loadData(res.data);
      })
      .catch(error => {
        console.log('FetchmoreError', error);
      });
  };

  fetchItems = (excludeAssigned: boolean = false) => {
    let request;
    if (this.state.itemIds) {
      request = this.fetchItemsByIds();
    } else {
      request = this.fetchItemsFIFO(excludeAssigned);
    }
    return request;
  };

  fetchItemsFIFO = (excludeAssigned: boolean = false) => {
    const { isAlignment, queueIdFromURL } = this.state;
    const url = '/review/assign';
    const params = {
      alignment: isAlignment,
      queue_id: queueIdFromURL,
      exclude_assigned: excludeAssigned
    };
    return axios.get<any>(url, { params });
  };

  fetchItemsByIds = () => {
    if (_.isEmpty(this.state.itemIds)) {
      return Promise.resolve({ data: [] });
    }
    const url = '/review/items';
    const data = {
      item_ids: _.slice(this.state.itemIds, 0, FETCH_BATCH_SIZE)
    };
    this.setState(state => ({
      itemIds: _.slice(state.itemIds, FETCH_BATCH_SIZE)
    }));
    return axios.post<any, any>(url, data);
  };

jest.config:

module.exports = {
  timers: 'fake',
  moduleDirectories: ['node_modules'],
  moduleFileExtensions: ['js', 'jsx'],
  moduleNameMapper: {
    '\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$':
      '<rootDir>/__mocks__/fileMock.js',
    '\\.(css|less)$': '<rootDir>/__mocks__/styleMock.js',
    '^Root(.*)$': '<rootDir>$1',
    '^Utils(.*)$': '<rootDir>/src/utils$1',
    '^Hoc(.*)$': '<rootDir>/src/hoc$1',
    '^Components(.*)$': '<rootDir>/src/components$1'
  },
  testRegex: 'test\\.jsx?$',
  testURL: 'http://localhost:3000',
  collectCoverageFrom: [
    'src/**/*.js',
    'src/**/*.jsx',
    '!**/node_modules/**',
    '!src/components/bulk_review/columns/**',
    '!src/components/v2/**'
  ],
  coverageReporters: ['html', 'text'],
  coverageThreshold: {
    global: {
      branches: 90,
      functions: 90,
      lines: 90,
      statements: 90
    }
  },
  coverageDirectory: 'coverage',
  snapshotSerializers: ['enzyme-to-json/serializer'],
  testEnvironment: '<rootDir>/jest-environment.js',
  setupFilesAfterEnv: ['<rootDir>/enzyme.setup.js'],
  setupFiles: [
    '<rootDir>/__mocks__/localStorageMock.js',
    '<rootDir>/__mocks__/consoleMock.js'
  ],
  globals: {
    ENVIRONMENT: 'TESTING'
  },
  testPathIgnorePatterns: ['<rootDir>/src/components/v2'],
  reporters: [
    'default',
    [
      'jest-html-reporter',
      {
        pageTitle: 'Test Report',
        statusIgnoreFilter: 'passed',
        includeFailureMsg: 'true'
      }
    ]
  ]
};

envinfo

System:
OS: Linux 4.15 Ubuntu 18.04.4 LTS (Bionic Beaver)
CPU: (36) x64 Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz
Binaries:
Node: 14.1.0 - ~/.nvm/versions/node/v14.1.0/bin/node
Yarn: 1.22.4 - /usr/bin/yarn
npm: 6.14.4 - ~/.nvm/versions/node/v14.1.0/bin/npm
npmPackages:
jest: ^26.0.0 => 26.0.0

Labels: Bug Report, Needs Repro, Needs Triage

All 14 comments

We will need a repro that can be downloaded and analyzed.
Also, please make sure to clear the cache just in case, e.g. with jest --clearCache

Oh, --clearCache fixed it.

Thanks, that's good to know. Still weird

I spoke too soon, it seems like the issue is this helper function:

export function setItemsToBeReviewed(itemIds) {
  sessionStorage.setItem(ITEMS_TO_BE_REVIEWED_KEY, JSON.stringify(itemIds));
}

We will need a repro that can be downloaded and analyzed.

This is still the case 🙂

Also sounds like JSDOM leaking
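
If JSDOM's storage implementation is the suspect, a plain in-memory Storage mock can take it out of the picture when exercising the helper. A minimal sketch — the key constant, the mock's shape, and the wiring are assumptions, not the project's actual __mocks__/localStorageMock.js:

```javascript
// Hypothetical minimal Storage mock (sketch only; the real mock file
// referenced in setupFiles is not shown in this thread).
class StorageMock {
  constructor() {
    this.store = {};
  }
  getItem(key) {
    return Object.prototype.hasOwnProperty.call(this.store, key)
      ? this.store[key]
      : null;
  }
  setItem(key, value) {
    this.store[key] = String(value);
  }
  removeItem(key) {
    delete this.store[key];
  }
  clear() {
    this.store = {};
  }
}

const sessionStorage = new StorageMock();
const ITEMS_TO_BE_REVIEWED_KEY = 'itemsToBeReviewed'; // assumed key name

function setItemsToBeReviewed(itemIds) {
  sessionStorage.setItem(ITEMS_TO_BE_REVIEWED_KEY, JSON.stringify(itemIds));
}

setItemsToBeReviewed([333, 222, 111]);
console.log(sessionStorage.getItem(ITEMS_TO_BE_REVIEWED_KEY)); // "[333,222,111]"
```

If the heap error disappears with the mock in place, that points at JSDOM's Storage rather than the helper itself.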

Not sure if it is related, but I get a heap leak for a simple expect:

  let items = tree.root.findAllByProps({ testID: 'CrewItem.Employee' });

  expect(items).toHaveLength(8); // hangs, then throws a heap error in 30-60 seconds
  expect(items.length).toEqual(8); // works fine

Clearing cache doesn't help
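
One plausible explanation for the difference between those two assertions — consistent with the pretty-format frames in the stack trace later in this thread — is that a failing expect(items).toHaveLength(8) has to serialize the entire received object for its report, while expect(items.length).toEqual(8) only ever prints two numbers. A rough sketch of the size difference, using JSON.stringify as a stand-in for pretty-format (this is an illustration of the hypothesis, not Jest's actual code path):

```javascript
// Approximate the cost of serializing a matcher's "received" value.
// JSON.stringify stands in for pretty-format here.
function approxSerializedSize(value) {
  return JSON.stringify(value).length;
}

// A stand-in for a large rendered test tree.
const bigTree = Array.from({ length: 1000 }, (_, i) => ({
  id: i,
  props: { testID: 'CrewItem.Employee', payload: 'x'.repeat(100) },
}));

// Serializing the whole tree dwarfs serializing just its length.
console.log(approxSerializedSize(bigTree) > approxSerializedSize(bigTree.length));
```

On a real component tree with cyclic references and attached DOM nodes, that serialization can be far worse than this flat example suggests.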

I am facing similar issues.

Same issue here as well. (using ts-jest)

I got it during a full run in which some tests failed. I spent some time debugging, taking memory snapshots, and comparing, but I couldn't find any leaks.
I ran it with inspect in watch mode, ran in band, took a snapshot after the first run, then ran again and took another. Is that the best way to find leaks?

I think I'm running into the same issue. I created a new app recently with Jest 26, using Enzyme for snapshot testing. I updated a test to use mount instead of shallow, and now it gets out-of-memory errors every time I run it, even if it's the only test running. Node ends up using something like 1.5 GB. This happens with or without coverage, and I've tried clearing the cache as well. I can provide my repo as an example if needed.

I posted an issue to Enzyme https://github.com/enzymejs/enzyme/issues/2405#issuecomment-646957124

Below is the error I get on this test

Test suite failed to run

    Call retries were exceeded

      at ChildProcessWorker.initialize (node_modules/jest-runner/node_modules/jest-worker/build/workers/ChildProcessWorker.js:191:21)


<--- Last few GCs --->

[3466:0x39d1050]    32366 ms: Mark-sweep 1390.7 (1425.4) -> 1390.2 (1425.9) MB, 714.3 / 0.0 ms  (average mu = 0.110, current mu = 0.013) allocation failure scavenge might not succeed
[3466:0x39d1050]    33470 ms: Mark-sweep 1391.0 (1425.9) -> 1390.5 (1426.4) MB, 1091.8 / 0.0 ms  (average mu = 0.053, current mu = 0.010) allocation failure scavenge might not succeed


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x23bdb465be1d]
    1: StubFrame [pc: 0x23bdb465d1df]
Security context: 0x1e8e53f9e6c1 <JSObject>
    2: printBasicValue(aka printBasicValue) [0x2c6a1c7d28e1] [<root>/node_modules/jest-snapshot/node_modules/pretty-format/build/index.js:~108] [pc=0x23bdb4dcdac1](this=0x00329d2826f1 <undefined>,val=0x3125160c22e1 <String[14]: onSubMenuClick>,printFunctionName=0x00329...

I tried removing random test suites from my run, but Jest still leaks memory, so no particular test is causing the leak.

I had a similar problem where I ran into an Out of Memory error when Jest started collecting coverage on "untested files".
Using v8 as the coverage provider solved the issue for me.
However, it's an experimental feature (as per the documentation):
https://jestjs.io/blog/2020/01/21/jest-25#v8-code-coverage
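
For reference, opting in is a one-line change in the config shown earlier in this thread. A sketch (only the coverageProvider line is the actual change; everything else stays as it was):

```javascript
// jest.config.js sketch: switch to V8's built-in code coverage
// (experimental in Jest 25/26) instead of the default babel provider.
const config = {
  // ...existing options from the config shown earlier...
  coverageProvider: 'v8', // default is 'babel'
};

module.exports = config;
```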

After doing some research, it seems this memory leak has been an ongoing issue since 2019 (Jest 22), so I wanted to consolidate some notes here for posterity. Past issues have been related to graceful-fs, and some people have worked around it with a hack that removes graceful-fs before running Jest and re-adds it afterwards. One troubleshooting thread was looking at compileFunction in the vm package as a potential cause. Jest, webpack-dev-server, babel, and create-react-app all depend on graceful-fs. The memory leak was supposed to be fixed in a newer release of Jest, but there may have been a regression, since it is popping up again. I can confirm everything was working fine until a substantial number of Jest tests were created in our environment; now the heap grows larger than the allocated memory due to the leak and overflows on our CI machine. I've tried using one worker, runInBand, etc. without success.

The common causes of the issues I've seen are collecting coverage and graceful-fs. I haven't done an in-depth analysis of those issues, but seeing that they are both filesystem-related, and having solved my own issue, which was related to file imports, I suspect they are versions of the same problem I was having.

I wanted to share the solution I found so others may benefit:

The cause:

Using namespace imports of the form import * as whatever from 'whatever'

The solution:

Using the form import { whatINeed } from 'whatever' instead dramatically reduced the memory accumulation.
