Hyper: Renderer lags, sometimes crashes on large command output

Created on 5 Aug 2016 · 8 comments · Source: vercel/hyper

Commands that produce a lot of output tend to crash the renderer, or at least make it very slow. A minimal example is `seq 1 1000000`, though I've hit the issue in practice a few times (e.g. `cat`ing an unexpectedly large file).

Looks like this when it crashes:

[screenshot: hyperterm]

For reference this is how Terminal.app behaves:

[screenshot: Terminal.app]

I can reproduce this inside tmux as well.

Bug

All 8 comments

This will need some magic; maybe limit/batch the output in some way. I've been investigating some other ways to communicate between the renderer and the main process.

Related: #94, #555
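
Not how Hyper actually does it, just a rough sketch of what batching in the main process could look like, assuming a node-pty-style data callback and Electron's `webContents.send`; the channel name and helper are made up for illustration.

```typescript
// Hypothetical sketch: coalesce PTY output in Electron's main process so a
// flood of tiny chunks becomes a few larger IPC messages per frame.
// `win`, `onData`, and the 'pty-data' channel are placeholders, not Hyper's real API.
import { BrowserWindow } from 'electron';

function forwardPtyData(
  win: BrowserWindow,
  onData: (cb: (chunk: string) => void) => void,
  flushIntervalMs = 16 // roughly once per frame
): void {
  let buffer = '';
  let timer: NodeJS.Timeout | null = null;

  onData((chunk) => {
    buffer += chunk;
    if (timer === null) {
      timer = setTimeout(() => {
        win.webContents.send('pty-data', buffer); // one message per interval
        buffer = '';
        timer = null;
      }, flushIntervalMs);
    }
  });
}
```

The renderer would then write each batched message to the terminal in one go instead of once per data event.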

This is a real killer if you're working with zips; `unzip` output alone will do it.

As an aside, it doesn't seem to crash the actual process, as the zip does get completely extracted.

Matt

I did some tests with throttling the data that is written to hterm: the data is collected in a buffer and written only once the buffer size exceeds a configured limit or enough time has passed between two data events. This improves performance quite a bit (I believe because it no longer forces hterm to re-render on every chunk received), but it only seems to be part of the solution. I feel the proper fix would be to offload the hterm.VT parser (https://chromium.googlesource.com/apps/libapps/+/master/hterm/js/hterm_vt.js) to a web worker or the main process, so the heavy input parsing can happen in a separate thread without blocking the UI thread.
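
A minimal sketch of that throttling idea, assuming a generic `write` callback standing in for whatever actually feeds hterm; both thresholds are arbitrary numbers, not values tested against Hyper.

```typescript
// Collect incoming chunks and only hand them to the terminal once the buffer
// is large enough or the stream has been quiet for a moment.
function createThrottledWriter(
  write: (data: string) => void,
  maxBufferSize = 64 * 1024, // flush once ~64 KiB is queued...
  quietMs = 10               // ...or after 10 ms without new data
): (chunk: string) => void {
  let buffer = '';
  let timer: ReturnType<typeof setTimeout> | null = null;

  const flush = () => {
    if (timer !== null) {
      clearTimeout(timer);
      timer = null;
    }
    if (buffer.length > 0) {
      write(buffer);
      buffer = '';
    }
  };

  return (chunk: string) => {
    buffer += chunk;
    if (buffer.length >= maxBufferSize) {
      flush(); // size threshold reached
    } else {
      if (timer !== null) clearTimeout(timer);
      timer = setTimeout(flush, quietMs); // quiet-period threshold
    }
  };
}
```

Each PTY data event would go through the returned function instead of straight into hterm.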

As a general improvement, we could wrap hterm into a webview, which internally spawns a new thread and prevents one busy hterm instance from blocking Hyper's entire UI thread.
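
And a bare-bones shape of the web-worker handoff, just to illustrate the structure: raw PTY data is posted to a worker, and only the parsed result comes back to the UI thread. `parseVt` is a stand-in for the real hterm.VT parsing, which is the hard part to move off-thread.

```typescript
// --- vt-worker.ts (runs off the UI thread) ---
function parseVt(data: string): string[] {
  return data.split('\n'); // placeholder for real escape-sequence parsing
}
self.onmessage = (event: MessageEvent<string>) => {
  (self as unknown as Worker).postMessage(parseVt(event.data));
};

// --- renderer side ---
const vtWorker = new Worker('vt-worker.js');
vtWorker.onmessage = (event: MessageEvent<string[]>) => {
  for (const line of event.data) {
    console.log(line); // placeholder for updating the visible terminal
  }
};
function onPtyData(chunk: string): void {
  vtWorker.postMessage(chunk); // heavy parsing never blocks the UI thread
}
```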

Oh, and thanks for bringing us hyper, love it!

I have a similar problem when a large amount of output goes to the console, for example when watching the logs of some running app with docker logs:

$ docker logs -f <container-id>

I can do this normally in other terminals.

See comment here about using the xterm branch: https://github.com/zeit/hyper/issues/687#issuecomment-322203680

We can probably close this one as a duplicate of #687.

An easy way to reproduce this is:

$ du /

This _should_ be fixed now!

Let us know if it's not.

Still having this problem on Hyper 2.0.0; running `seq 1 1000000` freezes the terminal.

