Each time the refresh endpoint is hit, memory usage increases slightly and never comes back down, eventually leading to a heap out-of-memory crash after multiple refreshes.
develop process memory before a refresh:

develop process memory after a refresh:

Hitting the refresh endpoint more times just pushes it higher.
Setting --max-old-space-size to a high value only delays the out-of-memory crash.
1. Run gatsby develop
2. Hit the refresh endpoint
3. Wait until the refresh finishes
4. Hit the refresh endpoint again
5. ...
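For anyone reproducing this: the refresh endpoint in gatsby develop is opt-in and is triggered with a plain POST request (a sketch, assuming the default development port 8000):

```shell
# Enable the /__refresh webhook and start the dev server
ENABLE_GATSBY_REFRESH_ENDPOINT=true gatsby develop

# In a second terminal, trigger a data re-source (repeat to reproduce)
curl -X POST http://localhost:8000/__refresh
```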
Memory should return to normal over time instead of growing with every refresh; as it stands, this is a memory leak.
System:
OS: Windows 10
CPU: (12) x64 Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz
Binaries:
Yarn: 1.21.0 - C:\Users\apa\AppData\Roaming\npm\yarn.CMD
npm: 6.14.4 - C:\Program Files\nodejs\npm.CMD
Languages:
Python: 3.8.3 - /c/Python38/python
Browsers:
Edge: 44.17763.831.0
npmPackages:
gatsby: 2.24.2 => 2.24.2
gatsby-graphiql-explorer: ^0.2.31 => 0.2.31
gatsby-plugin-react-decorators: 0.0.4 => 0.0.4
gatsby-plugin-react-helmet: ^3.1.18 => 3.1.18
gatsby-plugin-sass: ^2.1.26 => 2.1.26
gatsby-plugin-sharp: ^2.3.9 => 2.3.9
gatsby-plugin-typescript: ^2.1.23 => 2.1.23
gatsby-source-contentful: ^2.3.15 => 2.3.15
gatsby-source-filesystem: ^2.1.42 => 2.1.42
gatsby-transformer-json: ^2.2.22 => 2.2.22
gatsby-transformer-sharp: ^2.3.9 => 2.3.9
Hi @apaniel90vp!
Sorry to hear you're running into an issue. To help us best begin debugging the underlying cause, it is incredibly helpful if you're able to create a minimal reproduction either in a public repo or on something like codesandbox. This is a simplified example of the issue that makes it clear and obvious what the issue is and how we can begin to debug it.
If you're up for it, we'd very much appreciate if you could provide a minimal reproduction and we'll be able to take another look.
Thanks for using Gatsby! 💜
@apaniel90vp It's more likely an issue with gatsby-source-contentful than with the Gatsby refresh endpoint. Do you see the same behaviour without Contentful?
@wardpeet that is a very good point; it probably doesn't happen without the Contentful plugin, as I can see the highest memory peak while the schema is being generated. I won't be able to check this today, but I will tomorrow. Thanks!
I can confirm that I saw this happening as well.
Any help to identify where it is coming from is very welcome :)
Has anyone identified the root cause of this? I've been having memory issues this week. Nothing major has changed, so the sudden error is driving me crazy :/
When running gatsby build with more memory, it stalls at "building schema":
node --max-old-space-size=5120 node_modules/.bin/gatsby build --verbose
Not sure what the best way to debug it further is.
...
In further debugging, I removed all plugins except the two entries for the Contentful sources. Current error:
<--- Last few GCs --->
[79604:0x102d59000] 175063 ms: Mark-sweep 2044.9 (2052.8) -> 2044.0 (2057.3) MB, 147.2 / 0.0 ms (average mu = 0.115, current mu = 0.048) allocation failure scavenge might not succeed
[79604:0x102d59000] 175204 ms: Mark-sweep 2045.2 (2057.3) -> 2044.4 (2051.3) MB, 44.6 / 0.0 ms (+ 70.5 ms in 15 steps since start of marking, biggest step 39.2 ms, walltime since start of marking 142 ms) (average mu = 0.149, current mu = 0.187) finaliz
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x271753bc08d1 <JSObject>
0: builtin exit frame: stringify(this=0x271753bdee79 <Object map = 0x271797183639>,0x271726c004b1 <undefined>,0x2717325c1cc9 <JSFunction (sfi = 0x271740416209)>,0x271776fd3c29 <Object map = 0x2717971a82a9>,0x271753bdee79 <Object map = 0x271797183639>)
1: stringify [0x2717b4275081] [/Users/am/Work/mm/node_modules/json-stringify-safe/stringify.js:~4] [pc=0x209a57f2ceaa](this=...
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x1011c96b5 node::Abort() (.cold.1) [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
2: 0x10009cae9 node::Abort() [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
3: 0x10009cc4f node::OnFatalError(char const*, char const*) [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
4: 0x1001ddbc7 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
5: 0x1001ddb67 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
6: 0x100365a65 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
7: 0x1003672da v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
8: 0x100363d0c v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
9: 0x100361b0e v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
10: 0x10036d9da v8::internal::Heap::AllocateRawWithLightRetry(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
11: 0x10036da61 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
12: 0x10033d8eb v8::internal::Factory::NewRawTwoByteString(int, v8::internal::AllocationType) [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
13: 0x100709c79 v8::internal::IncrementalStringBuilder::Extend() [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
14: 0x10046e97a v8::internal::JsonStringifier::SerializeString(v8::internal::Handle<v8::internal::String>) [/Users/am/.nvm/versions/node/v12.18.0/bin/node]
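For context on the trace above (an illustration, not the actual Contentful data): the frames point at json-stringify-safe serializing a large object graph. A cycle is exactly what plain JSON.stringify rejects, which is why a cycle-tolerant stringifier that still walks every reachable node can consume enormous memory on heavily cross-linked entries:

```javascript
// Two entries referencing each other — the shape that cross-linked
// Rich Text content can create (hypothetical minimal example).
const a = { id: "a" };
const b = { id: "b", ref: a };
a.ref = b; // introduce the cycle

// Plain JSON.stringify throws on the cycle; cycle-safe stringifiers
// instead traverse the whole graph, which is where the memory goes.
try {
  JSON.stringify(a);
} catch (err) {
  console.log(err.name); // prints "TypeError"
}
```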
@AnalogMemory do you use Rich Text? There is a bug in the current version of the source plugin which blows up your memory as soon as you have Rich Text and some content referenced with a circular reference. The canary version at #25249 fixes this issue.
@axe312ger Yeah, I finally figured out it was a combo of Rich Text and circular references.
One of my content editors started using more of them just before it happened. It seems to be fine with a few, but when over 100 entries were linking to each other it was killing the memory while gatsby-source-contentful was trying to build the queries.
I tried the gatsby-source-contentful@next version, but it would have been more work to rewrite things and I didn't want to push it out to production in that state.
I ended up finding a patch that @disintegrator posted and am using that to hold me over until a proper release is ready :)
https://github.com/gatsbyjs/gatsby/issues/24221#issuecomment-665198654
I do have a task to test out the next version when I'm back from vacation next week
Thanks!
@AnalogMemory Please continue the discussion in #24221, as this ticket is originally about a memory leak bug in the browser while developing, not when sourcing the nodes on bootstrap.
We're struggling with this issue as well. We are using the refresh endpoint to enable preview functionality for our content authors, so it is called frequently. Each call incrementally increases memory consumption until the application crashes.
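To quantify growth per refresh, a small harness like this can help (a sketch; `doWork` is a stand-in that deliberately retains memory — in the real scenario each iteration would be a POST to the refresh endpoint):

```javascript
// leak-check.js — sketch of measuring per-iteration heap growth.
const leaked = [];

function doWork() {
  const data = new Array(100000).fill("x");
  leaked.push(data); // simulated retained allocation
}

function mb(bytes) {
  return (bytes / 1024 / 1024).toFixed(1) + " MB";
}

for (let i = 1; i <= 5; i++) {
  doWork();
  console.log(`after call ${i}: heapUsed = ${mb(process.memoryUsage().heapUsed)}`);
}
// heapUsed that keeps climbing across iterations (even after GC runs)
// is the signature of the leak described in this thread.
```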
Docker running node:12.18.3-alpine3.11
"gatsby": "^2.23.3",
"gatsby-image": "^2.2.42",
"gatsby-plugin-compile-es6-packages": "^2.1.0",
"gatsby-plugin-create-client-paths": "^2.1.22",
"gatsby-plugin-env-variables": "^1.0.1",
"gatsby-plugin-google-tagmanager": "^2.1.25",
"gatsby-plugin-manifest": "^2.2.42",
"gatsby-plugin-material-ui": "^2.1.9",
"gatsby-plugin-offline": "^3.0.35",
"gatsby-plugin-react-helmet": "^3.1.22",
"gatsby-plugin-react-svg": "^3.0.0",
"gatsby-plugin-robots-txt": "^1.5.0",
"gatsby-plugin-sharp": "^2.4.5",
"gatsby-plugin-sitemap": "^2.4.13",
"gatsby-plugin-svgr-loader": "^0.1.0",
"gatsby-plugin-typegen": "^1.1.2",
"gatsby-plugin-typescript": "^2.2.0",
"gatsby-plugin-web-font-loader": "^1.0.4",
"gatsby-source-contentful": "^2.3.35-next.63",
"gatsby-source-filesystem": "^2.1.48",
"gatsby-transformer-sharp": "^2.3.16",
Has anybody figured out yet what data is bloating the memory, or which call is causing it? Any help with the research is very much appreciated :)
I experience the same behaviour with WordPress using gatsby-source-graphql.
Can you try [email protected]? A potential improvement was shipped in #27685 (it won't fix the leak completely, but hopefully it will make it less severe).