Hyper: Slow for big logs

Created on 2 Sep 2016 · 48 comments · Source: vercel/hyper

Problem
I am using HyperTerm together with SSH and tmux, in version 0.7.1 (0.7.1.36). When using cat or docker logs <xyz>, HyperTerm becomes extremely slow and then hangs completely. HyperTerm is then unusable.

As I can still attach to the existing tmux session with macOS's Terminal or iTerm2, I know for sure that the tmux session is fine. HyperTerm seems to have problems rendering large amounts of output.

Can anyone else reproduce this, or is it just me?

Some data:
HyperTerm version: 0.7.1 (0.7.1.36)
OS X version: 10.11.2 (15C50)

Labels: help wanted, Performance


All 48 comments

Can confirm ... using docker logs on a bigger Rails app running migrations hangs HyperTerm completely for me. No issues with the native macOS Terminal, though.

We may need to fork hterm and add some kind of buffering/caching

@MrRio That would be a good idea. Almost all my PRs are for patching functionality that hterm lacks.

I also get this hang issue when I push a lot of text at HyperTerm. I use tmux and the underlying tmux session seems fine; I can open iTerm and connect just fine and everything still works. I just can't type and get a response via HyperTerm once it locks up.

Same thing happens without tmux, just by displaying a file with very long content via cat. I noticed that memory usage goes towards my machine's maximum when the heavy slowdowns start (so that's probably why).

Related issue https://github.com/zeit/hyperterm/issues/571

For macOS, looking at Terminal and iTerm, there is a notion of a scrollback buffer that limits how many lines you can scroll back to view. Performance would likely improve if Hyper only had to render the most recent _n_ lines rather than _all_ of them.
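For illustration, here is a minimal sketch of that idea in JavaScript (hypothetical names, not Hyper's or hterm's actual data structures): cap the buffer at a fixed number of lines and only ever hand the visible tail to the renderer.

```js
// Minimal sketch of a capped scrollback buffer (hypothetical, for illustration
// only): keep at most `limit` lines and let the renderer work on the tail
// instead of the entire history.
class Scrollback {
  constructor(limit = 10000) {
    this.limit = limit;
    this.lines = [];
  }

  push(line) {
    this.lines.push(line);
    // Drop the oldest lines once the limit is exceeded.
    if (this.lines.length > this.limit) {
      this.lines.splice(0, this.lines.length - this.limit);
    }
  }

  // Only the visible tail ever needs to be rendered.
  visible(rows) {
    return this.lines.slice(-rows);
  }
}
```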

I did a little test and ran both Hyper and Terminal.app with long outputs, mostly to test whether it's the number of lines, the line length, or the total number of characters that slows Hyper down, as well as to test different characters (I thought it might be parsing the output for highlighting or similar).

Results seem to indicate that it's mostly a function of the total number of characters. But I'm pretty sure there's something fundamentally wrong, as I now have a Helper process using 15 GB of RAM.

The times are surprisingly close to a linear relationship. Digging through a profile with Instruments didn't get me to anything specific, but it appeared as if there was an awful lot of notification subscribing and calling going on. I bet a strategically placed debounce would improve performance dramatically.

Also: emojis seem to count as about 10 characters each.

| Lines | Char | Chars/Line | Total Chars | Terminal time (s) | Hyper time (s) | Hyper CPU (%) |
|------:|:----:|-----------:|------------:|------------------:|---------------:|--------------:|
| 1 | . | 1 | 1 | 1.42 | 1.20 | |
| 1 | . | 10 | 10 | 1.41 | 1.19 | |
| 1 | . | 100 | 100 | 1.42 | 1.20 | |
| 1 | . | 1000 | 1000 | 1.42 | 1.20 | |
| 1 | . | 10000 | 10000 | 1.41 | 1.20 | |
| 10 | . | 1 | 10 | 1.41 | 1.20 | |
| 10 | . | 10 | 100 | 1.42 | 1.20 | |
| 10 | . | 100 | 1000 | 1.41 | 1.20 | |
| 10 | . | 1000 | 10000 | 1.41 | 1.20 | |
| 10 | . | 10000 | 100000 | 1.41 | 1.20 | |
| 100 | . | 1 | 100 | 1.41 | 1.20 | |
| 100 | . | 10 | 1000 | 1.41 | 1.20 | |
| 100 | . | 100 | 10000 | 1.41 | 1.20 | |
| 100 | . | 1000 | 100000 | 1.42 | 4.42 | 17 |
| 100 | . | 10000 | 1000000 | 1.41 | 4.40 | 47 |
| 1000 | . | 1 | 1000 | 1.41 | 1.20 | |
| 1000 | . | 10 | 10000 | 1.41 | 1.21 | |
| 1000 | . | 100 | 100000 | 1.42 | 1.20 | |
| 1000 | . | 1000 | 1000000 | 1.42 | 4.40 | 47 |
| 1000 | . | 10000 | 10000000 | 1.42 | 20.45 | 79 |
| 10000 | . | 1 | 10000 | 1.41 | 1.20 | |
| 10000 | . | 10 | 100000 | 1.41 | 4.41 | 13 |
| 10000 | . | 100 | 1000000 | 1.41 | 4.41 | 47 |
| 10000 | . | 1000 | 10000000 | 1.41 | 20.44 | 80 |
| 10000 | . | 10000 | 100000000 | 1.42 | 196.92 | 91 |
| 1 | { | 1 | 1 | 1.41 | 1.19 | |
| 1 | { | 10 | 10 | 1.41 | 1.19 | |
| 1 | { | 100 | 100 | 1.41 | 1.20 | |
| 1 | { | 1000 | 1000 | 1.42 | 1.19 | |
| 1 | { | 10000 | 10000 | 1.41 | 1.20 | |
| 10 | { | 1 | 10 | 1.41 | 1.19 | |
| 10 | { | 10 | 100 | 1.41 | 1.19 | |
| 10 | { | 100 | 1000 | 1.41 | 1.20 | |
| 10 | { | 1000 | 10000 | 1.42 | 1.19 | |
| 10 | { | 10000 | 100000 | 1.42 | 1.19 | |
| 100 | { | 1 | 100 | 1.42 | 1.19 | |
| 100 | { | 10 | 1000 | 1.41 | 1.19 | |
| 100 | { | 100 | 10000 | 1.42 | 1.19 | |
| 100 | { | 1000 | 100000 | 1.42 | 1.20 | |
| 100 | { | 10000 | 1000000 | 1.41 | 4.39 | 46 |
| 1000 | { | 1 | 1000 | 1.41 | 1.20 | |
| 1000 | { | 10 | 10000 | 1.42 | 1.21 | |
| 1000 | { | 100 | 100000 | 1.42 | 1.21 | |
| 1000 | { | 1000 | 1000000 | 1.42 | 4.40 | 47 |
| 1000 | { | 10000 | 10000000 | 1.41 | 20.45 | 79 |
| 10000 | { | 1 | 10000 | 1.41 | 1.20 | |
| 10000 | { | 10 | 100000 | 1.43 | 1.20 | |
| 10000 | { | 100 | 1000000 | 1.42 | 4.40 | 46 |
| 10000 | { | 1000 | 10000000 | 1.41 | 20.45 | 74 |
| 10000 | { | 10000 | 100000000 | 1.41 | 193.56 | 92 |
| 1 | emoji | 1 | 1 | 1.42 | 1.20 | |
| 1 | emoji | 10 | 10 | 1.41 | 1.19 | |
| 1 | emoji | 100 | 100 | 1.41 | 1.19 | |
| 1 | emoji | 1000 | 1000 | 1.42 | 1.19 | |
| 1 | emoji | 10000 | 10000 | 1.42 | 1.19 | |
| 10 | emoji | 1 | 10 | 1.41 | 1.20 | |
| 10 | emoji | 10 | 100 | 1.41 | 1.19 | |
| 10 | emoji | 100 | 1000 | 1.41 | 1.19 | |
| 10 | emoji | 1000 | 10000 | 1.41 | 1.20 | |
| 10 | emoji | 10000 | 100000 | 1.42 | 4.39 | 48 |
| 100 | emoji | 1 | 100 | 1.41 | 1.20 | |
| 100 | emoji | 10 | 1000 | 1.41 | 1.19 | |
| 100 | emoji | 100 | 10000 | 1.42 | 1.19 | |
| 100 | emoji | 1000 | 100000 | 1.41 | 4.40 | 37 |
| 100 | emoji | 10000 | 1000000 | 1.41 | 10.81 | 70 |
| 1000 | emoji | 1 | 1000 | 1.41 | 1.20 | |
| 1000 | emoji | 10 | 10000 | 1.41 | 1.20 | |
| 1000 | emoji | 100 | 100000 | 1.41 | 4.41 | 48 |
| 1000 | emoji | 1000 | 1000000 | 1.41 | 14.04 | 75 |
| 1000 | emoji | 10000 | 10000000 | 1.41 | 136.29 | 90 |
| 10000 | emoji | 1 | 10000 | 1.42 | 1.22 | |
| 10000 | emoji | 10 | 100000 | 1.42 | 4.44 | 32 |
| 10000 | emoji | 100 | 1000000 | 1.42 | 14.12 | 74 |
| 10000 | emoji | 1000 | 10000000 | 1.41 | 137.11 | 90 |
| 10000 | emoji | 10000 | 100000000 | 1.43 | 1567.05 | 86 |

Tagging #94, #555, #881 and #1169 as probably identical issues.

Thanks a lot @MatthiasWinkelmann. This will certainly allow us to improve. CCing @dotcypress

I've investigated this a bit further. Doing a cat bigfile.manylines.txt creates a stream of about 1,000 actions per second:

screen shot 2017-01-09 at 18 10 46

I'm pretty sure these should be buffered to <= 60FPS as early as possible, but definitely before they hit the renderer. The following is a CPU profile captured with the developer tools.

screen shot 2017-01-09 at 20 21 56

I tried to implement some sort of debouncing in https://github.com/zeit/hyper/blob/master/app/session.js#L56 but unfortunately couldn't get it to work reliably.
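For what it's worth, here is a rough sketch of the kind of batching meant above (hypothetical helper and names, not the code that was actually tried in session.js): collect incoming PTY data and flush it downstream at most once every ~16 ms, so the renderer sees tens of updates per second instead of thousands.

```js
// Rough sketch of batching PTY output before it reaches the renderer
// (hypothetical helper, for illustration only): accumulate chunks and flush
// them at most once per `interval` milliseconds.
function createBufferedEmitter(emit, interval = 16) {
  let pending = '';
  let timer = null;

  return function write(data) {
    pending += data;
    if (timer === null) {
      timer = setTimeout(() => {
        timer = null;
        const out = pending;
        pending = '';
        emit(out); // one message/action per flush instead of one per chunk
      }, interval);
    }
  };
}

// Usage sketch (names are placeholders):
// const buffered = createBufferedEmitter(data => win.rpc.emit('session data', {uid, data}));
// pty.on('data', buffered);
```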

Another possibly easy performance win may be to switch the latter two terms in https://github.com/zeit/hyper/blob/1b6d925524f30148ead6c46326a0d47964d120b5/lib/hterm.js#L158: the runes(text) call takes 15 to 200 times as much time as the regex in containsNonLatinCodepoints(text) in a short test I did (35650 ms vs. 150 ms for one very long line, 45 ms vs. 3 ms for 10,000 iterations on a 20-character string). As can be seen in the CPU profile, it represents about half of the CPU time of echoing text to the terminal.
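To make the idea concrete, here is a hypothetical illustration of that kind of reordering (not the actual expression in hterm.js, and needsSpecialHandling is a made-up name): in a short-circuiting check, putting the cheap regex test first means the expensive runes() call only runs for text that actually contains non-Latin codepoints.

```js
// Hypothetical illustration only, not the real hterm.js code: swapping the
// operands of && keeps the result identical, but lets plain Latin output skip
// the expensive runes() call entirely.
const runes = require('runes'); // the grapheme-splitting module Hyper uses

const nonLatinRe = /[^\u0000-\u00ff]/;
const containsNonLatinCodepoints = text => nonLatinRe.test(text);

function needsSpecialHandling(text) {
  // cheap regex test first; runes() only runs when the regex matches
  return containsNonLatinCodepoints(text) && runes(text).length !== text.length;
}

console.log(needsSpecialHandling('plain ascii')); // false, runes() never called
```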

In a CPU profile I created with the macOS Activity Monitor, I also saw a lot of activity related to memory allocation and garbage collection, which is possibly caused by runes as well, since it creates an array on each call. But I'm unsure whether that's already included in the Chrome profile.

Probable duplicates of this are #474, #1040, #1044, #571, #574, #1237, and #1221, as well as the ones tagged two messages up, and possibly #1157. cc @dotcypress

Awesome work @MatthiasWinkelmann. I was just discussing with @nw that another performance win will be to bypass the Redux reducer (which creates a temporary Write object).

Instead, we can emit as part of a side effect and then subscribe from the Term directly, pub/sub style.
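Something along these lines, as a hedged sketch (hypothetical names and helpers, not Hyper's actual wiring): publish incoming session data on an emitter keyed by session uid, and let each Term subscribe directly instead of going through a Redux reducer that allocates a Write object per chunk.

```js
// Hedged sketch of the pub/sub idea, for illustration only.
const {EventEmitter} = require('events');

const sessionBus = new EventEmitter();

// Producer side: wherever session data arrives from the pty process.
function publishSessionData(uid, data) {
  sessionBus.emit(`data:${uid}`, data);
}

// Consumer side: a Term instance subscribes only to its own session.
function subscribeTerm(uid, write) {
  sessionBus.on(`data:${uid}`, write);
  return () => sessionBus.removeListener(`data:${uid}`, write);
}

// Usage sketch:
const unsubscribe = subscribeTerm('abc123', chunk => process.stdout.write(chunk));
publishSessionData('abc123', 'hello\n');
unsubscribe();
```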

Also copying @hharnisc who was looking at the debouncing problem

Thanks for the heads-up on this thread @rauchg, digging into some hterm source code this evening.

I tried buffering at a couple of different points in the write pipeline, but this iteration has been the most successful.

```js
// call at most ~60 times per second to avoid overloading renderer, queue if overloading
const rateLimitedDispatch = rateLimit(store_.dispatch, 16);
rpc.on('session data', ({uid, data}) => {
  rateLimitedDispatch(sessionActions.addSessionData(uid, data));
});
```

It's super simple: if the function isn't rate limited, call it immediately; otherwise, queue up the call.
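For reference, here is a hedged sketch of what a rateLimit helper matching that description could look like (hypothetical implementation, not necessarily the one in the fork/branch):

```js
// Hedged sketch of a rateLimit helper: call through immediately when idle,
// otherwise queue the call and drain the queue one entry per interval.
function rateLimit(fn, interval) {
  const queue = [];
  let busy = false;

  const drain = () => {
    if (queue.length === 0) {
      busy = false;
      return;
    }
    fn(...queue.shift());
    setTimeout(drain, interval);
  };

  return (...args) => {
    if (!busy) {
      busy = true;
      fn(...args); // not rate limited: call immediately
      setTimeout(drain, interval);
    } else {
      queue.push(args); // rate limited: queue for later
    }
  };
}
```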

I tested this with cat largefile.txt, and it didn't lock up the UI for me (the current release did). It takes a bit for the text to stream out when buffering, so I'm not sure if this is good enough. There's still more that could be done in terms of optimization.

@MatthiasWinkelmann would you be up for trying this again with this fork/branch?

https://github.com/hharnisc/hyper/tree/buffer-session-data

cc/ @rauchg

@hharnisc as another iteration of this, it'd be really cool to _merge_ the actions into one object, and merge the payload?

Just tested this. Way better. The only issue is, I don't know if the 16 ms approximation is good enough, because I still can't interrupt yes with Ctrl+C (I can in Terminal.app).

Would be very nice to merge events. Using some of the information above, I think we could merge into 10K character chunks. Or, if the input string is large, break it up into 10K chunks.
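A hedged sketch of that merging idea (hypothetical helper, not code from the branch): coalesce queued writes for the same session and split anything larger than ~10K characters into separate chunks.

```js
// Hedged sketch: merge queued {uid, data} writes into chunks of at most
// CHUNK_SIZE characters before dispatching them.
const CHUNK_SIZE = 10000;

function mergeSessionData(queue) {
  // queue: [{uid, data}, ...] in arrival order
  const merged = [];
  for (const {uid, data} of queue) {
    const last = merged[merged.length - 1];
    if (last && last.uid === uid && last.data.length + data.length <= CHUNK_SIZE) {
      last.data += data; // merge into the previous chunk for the same session
    } else {
      // start new chunks, splitting oversized payloads into CHUNK_SIZE pieces
      for (let i = 0; i < data.length; i += CHUNK_SIZE) {
        merged.push({uid, data: data.slice(i, i + CHUNK_SIZE)});
      }
    }
  }
  return merged;
}
```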

Looks like the approximation isn't quite good enough. I'll use yes + Ctrl+C as a test.

For me the performance on Hyper 1.3.0 is worse than it was on 1.2.1

It froze again completely on a large input. This made me step away from Hyper again because I cannot use it anymore. Hope this can be fixed.

I am using Ubuntu 16.10 and Hyper 1.3.0

+1, no issue with version 1.2.1. Versions 1.3.0 and 1.3.1 keep freezing on macOS 10.12.3.

Having the same issue. Verified everything is fine on 1.2.x, but breaks on 1.3.x.

1.3.1 here on macOS 10.12.4, same issue. Slow, or even frozen.

Someone, please. This provides a terrible experience to anyone who tries out Hyper and finds it unable to cat a file because of a freeze.

It's been here for months with no fix.

@Marahin: In yesterday's intro to the live coding on Twitch, @rauchg said they were going to tackle this issue in the session and that he already knew what needed to be done to fix it. I didn't follow the whole session, so I'm unsure whether they actually managed to fix it.

@philippbosch thank you for pointing that out. I had no knowledge of this, as I did not follow the live coding session.

Fingers crossed then. I hope this will finally get fixed, as the history of this issue (and related issues) goes back almost a year.

Bump. It's been another couple of days and we've had two minor releases, but no fix for this breaking bug. Is anyone even working on it...?

Performance is a subject we are trying to address. Our attempts so far have been unsuccessful.
Is your breaking bug the same as #1770 (Uncaught TypeError: Cannot read property 'wcNode' of undefined)?

Hyper 1.3.3
Electron 1.4.16
darwin x64 16.6.0

This issue still exists. I can't even print moderately sized text files. My test was 26,845 characters; Hyper froze at 9,386 characters. That is a lot less than @MatthiasWinkelmann's test of 1,000,000 dots in 100 lines, which took 4.40 seconds in his case.

The developer tools show the error bundle.js:47 Uncaught TypeError: Cannot read property 'wcNode' of undefined. Here is my stack trace if that helps.

```
R.Terminal.print                  @bundle.js:47
print                             @bundle.js:47
R.VT.parseUnknown_                @bundle.js:47
R.VT.interpret                    @bundle.js:47
R.Terminal.interpret              @bundle.js:47
P.hterm.Terminal.IO.writeUTF8     @bundle.js:41
write                             @bundle.js:5
P                                 @bundle.js:4
T                                 @bundle.js:4
(anonymous function)              @index.js:46
ue                                @bundle.js:1
effect                            @bundle.js:1
T                                 @bundle.js:4
(anonymous function)              @index.js:46
ue                                @bundle.js:1
(anonymous function)              @bundle.js:1
requestAnimationFrame             @bundle.js:47
```

@chabou I'm sorry, but as far as I can remember, it was not. It just took half a minute to unfreeze (or more, for even more logs); that's it. I'm not retrying Hyper until the issue is fixed, as this makes it unusable for my job.

What's interesting is, the history command doesn't seem to freeze it. I have 5,500 lines in my bash history, each with a command of about 25 characters on average. That would make it 137,500 characters in total [not counting the line number and the space after it, which would add about 26,300 more characters].

The UI froze for about half a second and then completely recovered. But it froze after printing a mere 9,386 characters, as in my last message.

I don't know the internals of Hyper, but this feels really out of place. Did anyone else try the history command?

It's not specific to history or any other source: Hyper is dog-slow at rendering long strings. This should take about 8 seconds:

```sh
ruby -e 'print ("." * 2000 + "\n") * 10000 ';
```

Add a zero to one of the numbers, and you'll wait 80 seconds. I've experimented a bit and it appears to be a linear effect, which rules out a number of easy-to-fix problems.

Note that the program generating the output (ruby in this case) will finish very quickly. The output is sent to a buffer somewhere, and Hyper will still be working on it long after the ruby process is gone. That makes it a bit difficult to measure programmatically; otherwise I would have included timing above.

One may think that rendering a million characters is useless enough not to happen too often. But I had to abandon Hyper with a heavy heart when it would get stuck around ten times during a usual workday. It became a constant, nagging thought in the back of my head: 'could this trigger the bug and require me to restart Hyper and recreate all these open panels?' That's just not worth it for nice transparency effects.

> One may think that rendering a million characters is useless enough not to happen too often

This is the worst. If you're working as a programmer, or in any role where you sometimes have to access logs, it's just unusable. cat log/development.log and you're done; you have to take another 2 minutes to get everything going as it was before. :-1:

Performance will be improved by replacing hterm with xterm.js, but maybe not as much as expected for this type of extreme benchmarking.
I tried @MatthiasWinkelmann's example with a zero added (200M characters!) on our WIP xterm branch:

```sh
ruby -e 'print ("." * 2000 + "\n") * 100000 ';
```

It took 48 s to show all lines, versus 8 s for the native Terminal app.

But it is robust: Hyper didn't hang like the current release does.

@Marahin I understand your concerns, but cat-ing over 1M characters in a terminal is generally a user's mistake. In that case, it should be robust, not necessarily performant.

IMO, a real-life example is cat /etc/services (almost 14k lines):

  • _Native Terminal app_: 88ms
  • _Current v1.3.3 Hyper release_: 4019ms
  • _WIP Hyper xterm branch_: 154ms

Totally acceptable and promising.

Thank you for your patience.
Rest assured that we are taking this issue seriously.

> a user's mistake

> Current v1.3.3 Hyper release: 4019ms

I'm very sorry to ask this @chabou, but are you out of your mind? A little over 4 (literally: FOUR) seconds compared to Terminal's 88 milliseconds? That is about 45x slower than the default terminal. Do you really blame the user for whining about that? :open_mouth:

The xterm branch seems way better and closer to what you would expect in 2017 with modern computers, but I believe the whole issue was about the [then] current version (which I suppose was NOT the xterm one).

I do NOT blame anybody for anything.

I really understand this whole issue with the current release, and that is why I wrote my comment. Current performance (due to hterm) is so bad that we are moving to xterm.js, and this is really promising.

I have performance in mind for real-world use cases and robustness for edge cases (what I clumsily called a user's mistake).

As has been said, hterm gives us a lot of trouble when making performance improvements, since we can't always monkey-patch its rendering problems. Using xterm will give us much more flexibility, while at the same time being maintained on a more suitable base.

We do understand the need for better performance and we take it seriously. After the xterm switch, we will be able to make further performance improvements.

Has there been any progress on this? I'm encountering this issue frequently and it's rather disruptive to my workflow. Looks like there hasn't been any motion in the xterm branch in about 2 months.

@insanityfarm I haven't run into this issue anymore with v2, which AFAIK uses xterm now.

@insanityfarm xterm has been merged into our v2 branch, which was renamed canary. You don't have to build Hyper from source anymore to use xterm; you only have to set canary as the update channel in your config.
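For anyone looking for the exact setting, this is roughly what it looks like in ~/.hyper.js (the updateChannel key name is assumed from memory here; check the comments in your own config file):

```js
// Sketch of the relevant part of ~/.hyper.js (assumed key name, rest of the
// config omitted).
module.exports = {
  config: {
    updateChannel: 'canary', // switch from 'stable' to the canary channel
    // ...your existing settings
  },
  plugins: [],
};
```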

Thank you for the information! I wasn't up to speed on the canary branch. Just updated and ran a quick test, and (so far) the issue seems to be resolved. Appreciate your help.

Hi guys, this issue is still present in 1.4.8

@cesarferreira see https://github.com/zeit/hyper/issues/687#issuecomment-341543935 above; you should be able to switch to the canary version. I've been using it without major issues for a while.

Thanks for the reply @dennisroethig. How do I change to the canary version? There is no reference to a stable version or anything similar in my config.

You can find instructions here: https://zeit.co/blog/canary#hyper

This is an ongoing issue even in v2.0-2.1...

@danielkalen I've just tested it with the current canary version, and the situation seems much improved.

The test above with 10000 lines of 1000 emoji each, which previously took 137 seconds, is now done in 15 seconds, a roughly 10-fold increase in performance.

iTerm does the same in 25 seconds, whereas Terminal.app is done almost instantly. So there's obvious room for improvement, but it's far less likely to actually interfere with work now.

iTerm does, however, handle interruptions (Ctrl+C) better: it quits instantly, while Hyper finishes the output before handling the interruption. But that's tracked in #555, #1121, and #1484.

@MatthiasWinkelmann https://github.com/zeit/hyper/issues/2449#issuecomment-417783187

Take a look at the GIF I posted comparing Terminal and Hyper. This was running on 2.1.0 canary.

I opened this issue and it's still an ongoing issue for me. Am I right in the assumption that this is now tracked in #2449?

I don't know why this was closed and is now tracked in another issue, but so be it.

I have the same problem with docker logs on the latest canary.
[image]
