Node: build: investigate jumbo builds

Created on 13 Feb 2018 · 11 comments · Source: nodejs/node

A jumbo build concatenates compilation units into a single file before compiling. E.g.:

$ cat $all_sources > all.cc && c++ all.cc

It trades build parallelism and memory consumption for:

  1. Usually better generated code; kind of a poor man's LTO.
  2. CPU time. Because the compiler has to parse headers only once, it's often dramatically faster.

V8 in particular is a good candidate and already supports jumbo builds. To illustrate:

  1. V8, clean normal build, make -j8: 6:30m wall clock time, 47:48m cpu time
  2. V8, clean jumbo build, make -j1: 4:37m wall clock time, 4:34m cpu time

That's not a typo! On my machine it's 33% faster in human time and a whopping 10x faster in cpu time.

The one downside is that it needs a lot of memory. Without sharding you probably shouldn't try this on a machine with less than 8 GB RAM.

Labels: V8 Engine · build · help wanted

All 11 comments

Out of curiosity: where does the benefit to the generated code come from? Is it that compiling more code at a time allows improved escape analysis, or better method inlinability, or something else? And who benefits more by and large - C or C++?

@gireeshpunathil Since the compiler has full visibility it can do whole-program optimization (like escape analysis, inlining, pruning, etc.) that it normally only does on a per-compilation unit basis without LTO.

Is there any noticeable improvement in any benchmarks after building this way?

I would not bet on any performance win through jumbo builds. It's a crutch to improve build time. It's not officially supported and can break in V8 at any time. And it did on several occasions, only to be fixed by friendly contributors from Opera. The V8 team at Google does not use jumbo builds because we use goma.

> I would not bet on any performance win through jumbo builds.

I didn't see any on my big brawny Intel desktop but it seems to make node start up a little faster on a Raspberry Pi 1 (closer to 400 than 500 ms now.)

Working hypothesis: better code density and/or less duplication in the parser and compiler. Perhaps we could also get that by turning on -Wl,--gc-sections; didn't test.

https://bugs.chromium.org/p/v8/issues/detail?id=7339 - upstream now tests jumbo builds regularly.

http://lists.llvm.org/pipermail/cfe-dev/2018-April/057579.html - there seems to be some movement on supporting this in clang, which is great.

Any updates on this? We should be able to at least enable use_jumbo_build for v8 right?

V8's GN build knows about jumbo builds but the *.gyp files we maintain don't. It's now probably easier to hack it into our gyp fork than anything else. I did a prototype for the Makefile generator that I could PR.

Ping @bnoordhuis ... is there reason to keep this open?

Sorry, I forgot to close this. V8 removed support for jumbo builds in v8/v8@e6f62a41f5ee1524eb6fefb4bbb699373f613b1e so there's no longer any reason for us to pursue this.

