Graal 1.0.0-CE-RC13 runs node (graaljs) noticeably slower than node (v8)

Created on 6 Mar 2019 · 6 comments · Source: oracle/graal

Discovered when running a hexo-based blog. You can reproduce it by installing hexo and running hexo s or hexo g on a hexo init repo.

With node (v8), it serves or generates the site within 1 second; with node (graaljs), it takes about 30 seconds, which is a serious performance shortcoming.

javascript performance

All 6 comments

Anyway, glad to see that node (graaljs) is able to run npm apps :-)

Hi @TisonKun,

thanks for your observations.

On my machine:

  • hexo s first results in a rendered webpage on localhost:4000 after 25 seconds
  • hexo g finishes after 15 seconds wall clock time (28 files generated in 12 s)

It is true that this time is longer than on node (V8). Note however that this is a warmup issue: being based on the JVM, our engine needs longer to compile and optimize all relevant code. And hexo is a significant codebase; I count 280 modules in node_modules (not all of them are loaded for hexo s, but many are). The actual loading time of a server started with hexo s will go down significantly if you reload the webpage often enough. This is because Graal.js is compiling the code in the background.

I will inspect the compilations of hexo s and see if there is anything we can optimize immediately. In general, we are investing in performance, and especially warmup performance, so the gap to node (V8) will shrink over time; we will likely never match it exactly, though, due to the inherent architectural difference.

Best,
Christian

Note however that this is a warmup issue: being based on the JVM, our engine needs longer to compile and optimize all relevant code.

I guess so. Users always want shorter startup time LOL

The actual loading time of a server started with hexo s will go down significantly if you reload the webpage often enough. This is because Graal.js is compiling the code in the background.

Observed. Thanks for your explanation!

@wirthi do you have any tips to improve startup performance? I could think of an AppCDS-like feature that caches the compiled scripts, so the whole load/parse/compile step would be avoided, or some way of combining native-image AOT with script caching?

I guess so. Users always want shorter startup time LOL

If you're running a server application then you usually care more about average performance over a long period than about initial performance. That also applies to JITed Java: at the beginning, the HotSpot VM interprets Java bytecode, which is super slow compared to running the native code that HotSpot's JIT compilers produce later.

@pmlopes Yes, we have been exploring options in this direction. What we can already do for native-image AOT is cache parsed scripts. But we might need to add some support for caching the compiled code for this to be really effective.

Generally, we have several efforts underway to improve those warmup times for Truffle-based languages like JavaScript, Ruby, R, or Python. One of them is the possible introduction of a lower compilation tier, so execution moves off the interpreter faster as an intermediate step before the final, highly optimized compilation. Another is better optimization when compiling the interpreter itself, so the initial execution before JIT compilation is faster.

So, expect to see incremental improvements in this area.
