Current Behavior
With serverless-offline v6.8.0 my httpApi requests take 3-4x longer to execute, and memory usage continues to grow until the process eventually crashes. With v6.6.0 all of my handlers execute quickly and memory usage does not grow. v6.7.0 doesn't seem to work at all, so I've reverted to v6.6.0 for now.
I've tried different versions of serverless, serverless-webpack, and serverless-offline. The issue is consistently present with serverless-offline v6.8.0.
Sample Code
It doesn't seem to matter what handler I call, the behavior is the same.
Expected behavior/code
Environment
serverless version: v1.72.0
serverless-offline version: v6.8.0
node.js version: v12.9
OS: macOS 10.15.7
Possible Solution
Revert to v6.6.0
Additional context/Screenshots
<--- Last few GCs --->
[2193:0x103005000] 1287260 ms: Scavenge 2034.7 (2044.9) -> 2030.9 (2045.1) MB, 3.4 / 0.0 ms (average mu = 0.146, current mu = 0.113) allocation failure
[2193:0x103005000] 1287275 ms: Scavenge 2036.2 (2046.4) -> 2033.0 (2047.4) MB, 5.9 / 0.0 ms (average mu = 0.146, current mu = 0.113) allocation failure
[2193:0x103005000] 1287290 ms: Scavenge 2037.1 (2047.4) -> 2033.7 (2058.2) MB, 4.6 / 0.0 ms (average mu = 0.146, current mu = 0.113) allocation failure
<--- JS stacktrace --->
==== JS stack trace =========================================
0: ExitFrame [pc: 0x1009ce8d9]
Security context: 0x3a2f185408d1
1: String(aka String) 0x3a2f1854c399
2: /* anonymous */ [0x3a2f7b3d0281] [/Users/omarkhattab/Desktop/finley/serverless/core/node_modules/webpack-sources/node_modules/source-map/lib/source-node.js:~342] [pc=0...
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x1011bdf85 node::Abort() (.cold.1) [/usr/local/bin/node]
2: 0x10009d569 node::Abort() [/usr/local/bin/node]
3: 0x10009d6cf node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
4: 0x1001de957 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
5: 0x1001de8f7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
6: 0x100364635 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
7: 0x100365e8a v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [/usr/local/bin/node]
8: 0x10036290e v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/local/bin/node]
9: 0x1003606c0 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
10: 0x10035f711 v8::internal::Heap::HandleGCRequest() [/usr/local/bin/node]
11: 0x100324e71 v8::internal::StackGuard::HandleInterrupts() [/usr/local/bin/node]
12: 0x100688fcc v8::internal::Runtime_StackGuard(int, unsigned long, v8::internal::Isolate*) [/usr/local/bin/node]
13: 0x1009ce8d9 Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_NoBuiltinExit [/usr/local/bin/node]
14: 0x100a0ff4e Builtins_StringConstructor [/usr/local/bin/node]
[1] 2193 abort sls offline --port 5000
Significantly slower is an understatement. When I switched to 6.8.0, my requests went from taking 100ms to taking 1-2 seconds. When running with inspect / debugging in VSCode, requests were taking 7-10 seconds.
serverless version: v1.83.0
serverless-offline version: v6.8.0
node.js version: v10.19.0
OS: macOS 10.15.7
Yeah, this needs to get fixed; it slows down development a lot. It's more worthwhile to just mock data and run functions using serverless invoke, because this is too slow.
Still no movement on this? I actually had to revert back to 6.5 because 6.6 is also running extremely slow when debugging in VSCode.
most likely either #1091 or #931
How is the search going for a fix on this?
I'm still using 6.5. Not sure if the suggested #1091 or #931 fix the issue. Will try to test out sometime soon.
Oh I thought they had released a newer version since 6.8. Yeah. This is still an issue, basically stuck on 6.5 until this is resolved.
FWIW - I was stating that I suspect the performance issues came in on #1091 and/or #931 - not 100% certain though. I don't think there are any fixes going on at the moment
Same here see my comment https://github.com/serverless/serverless/issues/6503#issuecomment-789740511
I rolled back to 6.5.0 and it works without the memory leak.
WORKAROUND: lock to "serverless-offline": "6.5.0",
Yep switched back to 6.5.0 from 6.8.0 and requests sped up about 10x.
Does anyone have an example Serverless project (and some bash script which triggers repeated requests) which replicates this issue? I'd be happy to start investigating a solution in my free time.
@james-relyea The main issue is when you are running it with node --inspect.
I have multiple projects that use this and they all exhibit the same behavior. This is a typical setup when running projects using VSCode and debug.
node --inspect ./node_modules/.bin/serverless offline --noPrependStageInUrl
"scripts": {
  "serve": "serverless offline --noPrependStageInUrl",
  "test": "echo \"Error: no test specified\" && exit 1",
  "start": "./node_modules/.bin/serverless offline --noPrependStageInUrl",
  "debug": "node --inspect ./node_modules/.bin/serverless offline --noPrependStageInUrl"
}
Still no traction on this issue? Would be really nice to be able to upgrade to the latest version.
I can confirm @bryantbiggs' suspicion - the memory leak was introduced by #1050/#1091 (which is also in violation of the MIT license for the clear-module project, so we need to update the serverless-offline license to attribute that code being copied from the original author).
Regarding the leak, it looks to be caused by the clearModule function with the deleting of require.cache entries for node_modules: https://github.com/dherault/serverless-offline/blob/master/src/lambda/handler-runner/in-process-runner/InProcessRunner.js#L54
I suppose as a temporary workaround - you can use the --allowCache flag which will prevent clearModule from running and leaking memory.
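For context, the flag effectively skips the per-request cache purge. A simplified sketch of the behavior (names here are illustrative, not the actual serverless-offline source):

```javascript
// Simplified sketch of what --allowCache changes: with the flag set, the
// handler module stays in require.cache between invocations instead of
// being purged (and recompiled) on every request.
function loadHandler(handlerPath, { allowCache = false } = {}) {
  if (!allowCache) {
    // This is the purge that, in the affected versions, also swept
    // node_modules entries and leaked memory.
    delete require.cache[require.resolve(handlerPath)]
  }
  return require(handlerPath)
}
```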
Does anyone know the reasoning behind clearing out the require.cache entries for node_modules? Is that even needed for HMR? I would think we would only need to be clearing out caches for the actual serverless API handlers src. I can open a PR to remove that logic but would like to get a better understanding of why it's there to begin with...
If I alter the parameters for the invocation to set cleanup to false to prevent node_modules caches from being wiped, it seems to stop the leak: clearModule(this.#handlerPath, { cleanup: false }) (https://github.com/dherault/serverless-offline/blob/master/src/lambda/handler-runner/in-process-runner/InProcessRunner.js#L102)
@james-relyea
Does anyone know the reasoning behind clearing out the require.cache entries for node_modules?
I believe that was the case, but you cannot reliably remove modules from the cache; memory will always be allocated. You can search the issues, it has been mentioned and discussed ad nauseam.
it works reliably without a memory leak only if one starts a new process or new child process. or better: worker threads. I added those some time ago. the only thing missing was any HMR file watching + reloading functionality.
@dnalborczyk Gotcha - that makes sense. Maybe we could just have this allowCache enabled by default until we can add that functionality in?
it works reliably without a memory leak only if one starts a new process or new child process. or better: worker threads. I added those some time ago. the only thing missing was any HMR file watching + reloading functionality.
👍 I'd be willing to spend some time looking into adding that in.