Nixpkgs: The tarball job doesn't fit memory anymore

Created on 4 Jul 2018 · 11 comments · Source: NixOS/nixpkgs

The tarball job fails, reproducibly:

...
checking eval-release.nix
Too many heap sections: Increase MAXHINCR or MAX_HEAP_SECTS

This now blocks unstable and unstable-small channels. I assume we've just been increasing demands slowly until hitting a threshold; it's possible there was some particular change doing a little larger increase, but I expect we need something better than just reverting such a change.
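For anyone trying to reproduce this locally, the failing job can be built from a nixpkgs checkout roughly as follows. This is a hedged sketch: the `tarball` attribute name and the release file path are assumptions from memory, not taken from the Hydra configuration.

```shell
# Hedged sketch: build the Hydra "tarball" job (which runs the
# eval-release.nix check) locally. Attribute/path names are assumptions.
if command -v nix-build >/dev/null 2>&1; then
  # Run from the root of a nixpkgs checkout:
  nix-build pkgs/top-level/release.nix -A tarball
fi
```

On an affected commit this should die with the same "Too many heap sections" message quoted above, regardless of how much free RAM the machine has.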

Labels: regression, blocker

Most helpful comment

Any better ideas than #43021? I've occasionally seen the error message for Hydra's jobset evaluations, so I'd hope this would get rid of those as well.

@vcunat Not a final solution, just a way to reduce Nix memory consumption by 10% and set the problem aside for another couple of months: https://github.com/NixOS/nix/pull/2275#issuecomment-402772037

All 11 comments

We should upgrade Hydra, since it has some recent improvements to reduce memory consumption.

I believe this particular problem is _not_ about having enough free memory (in the system). It fails on my idle machine with 16 GiB RAM.

BTW, the Darwin queue is really exploding this week. If someone knows how to set up a Mac mini, I can add one to Hydra.

For me the first failing commit is 0b36a94ed4 (it fails and its parent doesn't), but off the top of my head the change doesn't seem significantly expensive. Still, /cc @peti.

Hmm, that commit is certainly bound to increase memory use; overrideScope is a somewhat expensive operation. It's not going to increase memory use by a lot though.

Any better ideas than https://github.com/NixOS/nixpkgs/pull/43021 ? I've occasionally seen the error message for Hydra's jobset evaluations, so I'd hope this would get rid of those as well. (On master/unstable; I probably wouldn't backport this change, but the problem always seems worse on newer nixpkgs anyway.)

From my IRC log in 2014:

17:00 < niksnut> Lethalman: btw, I did a build of Nix/Hydra against boehm-gc with --enable-large-config
17:01 < niksnut> it made the warning messages go away, but it made memory use go up from 3.8 to 4.9 GB 

However that could have been a random fluctuation.
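The build niksnut describes could be sketched as a nixpkgs overlay. This is an illustrative reconstruction, not the actual 2014 configuration; the flag is bdwgc's documented `--enable-large-config` option, which raises limits such as MAX_HEAP_SECTS, and the assumption is that the `nix` derivation accepts a `boehmgc` argument to override.

```nix
# Illustrative overlay sketch (assumptions noted in the lead-in):
self: super: {
  boehmgc = super.boehmgc.overrideAttrs (old: {
    # Raise bdwgc's compile-time heap limits:
    configureFlags = (old.configureFlags or [ ]) ++ [ "--enable-large-config" ];
  });
  # Link Nix (and thus Hydra's evaluator) against the enlarged collector:
  nix = super.nix.override { boehmgc = self.boehmgc; };
}
```

Per the IRC log above, the trade-off would be that the warnings disappear while steady-state memory use goes up.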

Any better ideas than #43021? I've occasionally seen the error message for Hydra's jobset evaluations, so I'd hope this would get rid of those as well.

@vcunat Not a final solution, just a way to reduce Nix memory consumption by 10% and set the problem aside for another couple of months: https://github.com/NixOS/nix/pull/2275#issuecomment-402772037

Also https://github.com/NixOS/nix/pull/2278, but it may need thorough testing.

Increased resource consumption was not confirmed, so I went ahead and merged rather quickly. I see Hydra's evaluator occasionally running into a similar error message, so I'd recommend applying the change there as well (unless it really runs out of RAM).

Decreasing RAM consumption can _and should_ continue nevertheless, on both the nix and nixpkgs fronts. The evaluation checks are an extreme case, but we were apparently hitting an 8 GiB threshold, and I can imagine that in general the waste of RAM/network/disk may be a deal-breaker for some potential nix* use cases...

Well, now instead of

Too many heap sections: Increase MAXHINCR or MAX_HEAP_SECTS

we get

GC Warning: Failed to expand heap by 65536 bytes
GC Warning: Out of Memory! Heap size: 9382 MiB. Returning NULL!
error: out of memory

https://gist.github.com/GrahamcOfBorg/7e4e2eb4ee96208b8db38e3b38e9c4fc

Could that be the machine not having enough RAM?
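One knob worth noting here: boehm-gc reads a few environment variables at process start-up, so the evaluator's initial heap can be pre-sized without rebuilding anything. A hedged sketch follows; `GC_INITIAL_HEAP_SIZE` is bdwgc's documented variable, the 4 GiB value is an arbitrary example, and the script path is an assumption.

```shell
# Pre-size the collector's heap so it doesn't grow through many small
# heap sections; bdwgc reads this variable when the process starts.
export GC_INITIAL_HEAP_SIZE=$((4 * 1024 * 1024 * 1024))   # 4 GiB, in bytes
if command -v nix-instantiate >/dev/null 2>&1; then
  # Re-run the failing evaluation check from a nixpkgs checkout:
  nix-instantiate --strict --eval maintainers/scripts/eval-release.nix
fi
```

This only moves the starting point of the heap, of course; if the evaluation genuinely needs more memory than the machine has, the "Out of Memory" failure above will still occur.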
