Recently I've been experiencing Node memory issues during AOT compilation. As described on the angular-cli repo, setting --max_old_space_size helps if I build the project like
$ node --max_old_space_size=4096 ./node_modules/.bin/ng build ...
However, if I apply the same flag to the affected:build script, it doesn't seem to be picked up. Is that possible?
I have:
"affected:build": "node --max_old_space_size=4096 ./node_modules/.bin/nx affected:build"
and then I invoke it with
$ yarn affected:build --prod --base=origin/master --parallel
It happens in my monorepo, which I unfortunately cannot upload anywhere.
I think the important thing is to understand whether node params are passed along in this case, or whether Nx forks node without passing them along.
This is how I do it and it works perfectly:
"build:prod": "node --max-old-space-size=8192 ./node_modules/@angular/cli/bin/ng build --configuration=production"
Make special note of the dashes in max-old-space-size, as opposed to underscores.
I think this could be an issue with how the affected actions are spawned. This may change in the near, but not immediate, future.
Passing node --max_old_space_size=4096 ./node_modules/.bin/nx to Nx does not propagate the flag to the spawned commands. The short-term solution may be to add some sort of flag for the max heap size of the spawned processes. I don't know if it's possible to read how much space is allocated to the Nx process in order to pass that along to the CLI's process. If the process were properly forked, I would hope it would share the same max size.
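For anyone who wants to verify that behavior, a quick sketch (a hypothetical check.js, not part of Nx):

// check.js -- run with: node --max_old_space_size=4096 check.js
const { execFileSync } = require('child_process');

// Flags passed to *this* node process on the command line:
console.log('parent execArgv:', process.execArgv); // ['--max_old_space_size=4096']

// A plainly spawned node child does not inherit them:
const out = execFileSync(process.execPath, ['-e', 'console.log(process.execArgv)']);
console.log('child execArgv:', out.toString().trim()); // []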
There is always the possibility of setting the NODE_OPTIONS="--max-old-space-size=4096" environment variable, which should propagate to all node processes that are launched. That does have the drawback, though, of propagating to all node processes launched, whether intentionally or not.
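For illustration, a minimal sketch of how that could look with the scripts from the original post (the flag value is just the example from above):

# for a single invocation only
NODE_OPTIONS="--max-old-space-size=4096" yarn affected:build --prod --base=origin/master --parallel

# or exported for the whole shell / CI session
export NODE_OPTIONS="--max-old-space-size=4096"
yarn affected:build --prod --base=origin/master --parallel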
I am also having memory issues lately. NODE_OPTIONS sadly isn't an option for me for e2e tests using Cypress, due to this issue (though I do use NODE_OPTIONS for the build on CI):
To be a bit more specific:
- ng serve runs prior to executing the tests
- NODE_OPTIONS fixes the ng serve memory issues
- once ng serve is done, the Cypress launch process quits with an error because of NODE_OPTIONS, as described in the linked issue above
- my workaround was to tweak the serve config I use for the e2e tests to consume less memory (disabling optimizations, ...), but I am not sure how long I can survive that way; I don't like my current solution
Hopefully Cypress will allow NODE_OPTIONS soon, or is there another option I did not think of? I don't want to spawn ng serve manually prior to Cypress; having Nx do it is a quite handy feature. Being able to configure the processes that Nx spawns would be great 👍
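For anyone trying the same serve-config workaround, a minimal sketch of the relevant part of a project's entry in angular.json (assuming Angular CLI 6+; my-app and the e2e configuration name are placeholders):

{
  "architect": {
    "build": {
      "configurations": {
        "e2e": {
          "optimization": false,
          "buildOptimizer": false,
          "sourceMap": false
        }
      }
    },
    "serve": {
      "configurations": {
        "e2e": {
          "browserTarget": "my-app:build:e2e"
        }
      }
    }
  }
}

ng serve my-app --configuration=e2e then serves the unoptimized build, which needs considerably less memory at the cost of larger bundles.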
I think this could be an issue with how the affected actions are spawned.
@FrozenPandaz in fact that's what I assumed; I didn't have time to dig in this afternoon though.
There is always the possibility to set the NODE_OPTIONS="--max-old-space-size=4096" environment variable
@bridzius yep, I just wanted to try that to see whether it helps 😄, thx 👍
We cannot run Cypress tests with the affected command. As @skydever explained, setting the env variable during the command stops Cypress due to the bug, and providing node --max_old_space_size=4096 is also not working. Any other workarounds?
@FrozenPandaz I think what we should do is spawn an npm script instead of ng directly, so folks can override "ng": "ng" to pass whatever options they need. What do you think?
@juristr would you like to work with me on fixing this? I can help you get your dev env up and running.
This is also a problem for me in my CI environment. My CircleCI account has a 4 GB memory limit, and my builds always fail; e2e fails only sometimes. I'm using the latest nx monorepo versions and Cypress.
Passing by after fixing a similar issue, in case it helps someone: adding "maxWorkers": 2 to my angular.json configuration fixed the issue (it looks like a mix of CircleCI exposing a huge amount of RAM and CPU and fork-ts-checker-webpack-plugin trying to use it all).
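A minimal sketch of where that option could live, assuming the builder in use accepts a maxWorkers option as the comment above suggests (my-app is a placeholder):

{
  "projects": {
    "my-app": {
      "architect": {
        "build": {
          "options": {
            "maxWorkers": 2
          }
        }
      }
    }
  }
}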
@juristr would you like to work with me on fixing this?
Hey @vsavkin sorry, totally missed your message and just saw it now. Sure, I might find some time to work on this next week. Meanwhile I'm going to check whether I can still reproduce the issue, as I've worked around it for now via the NODE_OPTIONS env variable.
I can help you get your dev env up and running.
That'd be great if you have some suggestions 👍
@juristr
Currently, we can invoke commands (say affected:build) serially or in parallel.
When we invoke things serially, we directly invoke ng, and we use spawnChild to do that.
When we invoke things in parallel, we use npm-run-all, which invokes the ng npm script.
The options I see are like this:
- Always spawn the ng npm script defined in package.json, for both serial and parallel execution. This should preserve the parallel invocation.
- Keep directly invoking ng.

I personally prefer the first one, as it gives the developer the ability to customize what they want to customize in the same fashion for serial and parallel execution
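To illustrate the difference between the two spawning styles (a hypothetical sketch, not Nx's actual code; my-app is a placeholder):

const { spawn } = require('child_process');

// Direct invocation: flags like --max_old_space_size set on the parent
// node process are not inherited by this child.
spawn('./node_modules/.bin/ng', ['build', 'my-app'], { stdio: 'inherit' });

// Via the npm script: whatever the user puts behind "ng" in package.json
// (e.g. node --max_old_space_size=8000 ./node_modules/.bin/ng) is what runs.
spawn('npm', ['run', 'ng', '--', 'build', 'my-app'], { stdio: 'inherit' });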
Folks, as a temporary workaround you can do the following.
In your package.json, update the ng script to set the max heap size:
{
"scripts": {
"ng": "node --max_old_space_size=8000 ./node_modules/.bin/ng"
}
}
And then run:
yarn affected:build --parallel --maxParallel=1
On Windows it only works with:
"ng": "node --max_old_space_size=8000 ./node_modules/@angular/cli/bin/ng",
@vsavkin
"ng": "node --max_old_space_size=8000 ./node_modules/@angular/cli/bin/ng",
"affected:apps": "./node_modules/.bin/nx affected:apps",
"affected:libs": "./node_modules/.bin/nx affected:libs",
"affected:build": "./node_modules/.bin/nx affected:build --base=origin/master --head=HEAD --aot --parallel",
"affected:e2e": "./node_modules/.bin/nx affected:e2e --base=origin/master --head=HEAD --browser chrome",
still get JavaScript heap out of memory when running npm run affected:e2e
I personally prefer the first one as it gives the developer the ability to customize what they want to customize in the same fashion for serial and parallel execution
@vsavkin Agree, gonna give it a look 👍
Fixed here: 82ee4f10bab6d00dc7aaa60e09d1ae29390eb683. Thank you @juristr!