React-devtools: Large amount of data in component props causes devtools to peg CPU

Created on 28 Oct 2018  ·  21 Comments  ·  Source: facebook/react-devtools

Problem:
If a React component has a large amount of data passed in props, even by reference, the CPU gets pegged after the component renders.

Repro

  • Create an array or object structure with a large amount of data, for example 10-20MB.
  • Pass the array variable as the props of a component.
  • Render the component.
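The repro above can be sketched roughly like this (the helper `makeLargePayload` and the `<Chart>` component are made up for illustration, not taken from any real repro):

```javascript
// Build a payload in the 10-20MB range; the exact shape doesn't matter,
// only the total amount of data reachable from the props.
function makeLargePayload(rowCount) {
  const rows = [];
  for (let i = 0; i < rowCount; i++) {
    rows.push({ id: i, label: "item-" + i, value: i * 0.5 });
  }
  return rows;
}

const data = makeLargePayload(200_000); // several MB once serialized

// In the app, only a reference is passed:
//   <Chart data={data} />
// Rendering itself is cheap; the trouble starts when DevTools tries to
// serialize the props to ship them across the extension bridge.
console.log(JSON.stringify(data).length);
```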

This produces the following error:

Uncaught Error: Message length exceeded maximum allowed length.
    at PortImpl.postMessage (VM27 extensions::messaging:121)
    at Port.publicClassPrototype.(anonymous function) [as postMessage] (extensions::utils:138:26)
    at handleMessageFromPage (contentScript.js:24)

The bottom function is this:

    function handleMessageFromPage(evt) {
        evt.source === window && evt.data && "react-devtools-bridge" === evt.data.source && port.postMessage(evt.data.payload);
    }

Here's the current implementation of postMessage, which stringifies all the props for the component. Even if the props are passed by reference, JSON.stringify still has to traverse all the data.

  PortImpl.prototype.postMessage = function(msg) {
    if (!$Object.hasOwnProperty(ports, this.portId_))
      throw new Error(kPortClosedError);

    // JSON.stringify doesn't support a root object which is undefined.
    if (msg === undefined)
      msg = null;
    msg = $JSON.stringify(msg);
    if (msg === undefined) {
      // JSON.stringify can fail with unserializable objects. Log an error and
      // drop the message.
      //
      // TODO(kalman/mpcomplete): it would be better to do the same validation
      // here that we do for runtime.sendMessage (and variants), i.e. throw an
      // schema validation Error, but just maintain the old behavior until
      // there's a good reason not to (http://crbug.com/263077).
      console.error('Illegal argument to Port.postMessage');
      return;
    }
    var error = messagingNatives.PostMessage(this.portId_, msg);
    if (error)
      throw new Error(error);
  };
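As a minimal sketch of why pass-by-reference doesn't help here: the stringify cost (and message size) tracks the amount of data reachable from the message, not how the app passed it around.

```javascript
// The app passes props by reference, but JSON.stringify walks the whole
// structure, so serialized size (and CPU time) grows with the data.
const makeRows = (n) => Array.from({ length: n }, (_, i) => ({ i }));

const small = JSON.stringify(makeRows(1_000));
const large = JSON.stringify(makeRows(100_000));

// 100x more data reachable from the "reference" means roughly 100x more
// bytes to serialize and push through port.postMessage.
console.log(large.length / small.length);
```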

Notes/Thoughts

  • A large amount of data in props can be a legitimate use case, for example using Vega for dataviz. Usually this isn't a problem for the app itself, since the data is passed by reference.
  • Are there any other ways to serialize arbitrary JS objects instead of JSON.stringify, for example using Apache Arrow?
  • If the props are only used for display in the inspector or profiler UI, would it make sense to just send the first 10-20 elements of an array and tell the user that the data was clipped? Not sure how you'd handle this with arbitrarily large objects though.
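A rough sketch of the clipping idea for arrays (the name `clipForInspection` and the shape of the clipped value are invented; this is not how DevTools actually serializes props):

```javascript
// Hypothetical: cap arrays at a small preview size and mark them as
// truncated so the UI can tell the user the data was clipped.
const MAX_PREVIEW = 20;

function clipForInspection(value) {
  if (Array.isArray(value) && value.length > MAX_PREVIEW) {
    return {
      truncated: true,
      totalLength: value.length,
      preview: value.slice(0, MAX_PREVIEW),
    };
  }
  // Small values (and, in this sketch, non-arrays) pass through as-is;
  // arbitrarily large objects would need a similar depth/size budget.
  return value;
}
```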
performance

Most helpful comment

Thanks for the update.

I think my answer to the existing performance problems will end up being a DevTools rewrite https://github.com/bvaughn/react-devtools-experimental

All 21 comments

This might be a good option to serialize instead: https://github.com/google/flatbuffers
Arrow uses flatbuffers under the hood.

Hi :-)

This issue sounds like what is preventing us from using react-devtools with mozilla/treeherder.
(Happy to file as a separate issue if preferred?)

STR:

  1. Clone https://github.com/mozilla/treeherder (tested using 3ad320d)
  2. yarn
  3. yarn start
  4. In the new browser tab opened by webpack-dev-server (which should end up redirecting to http://localhost:5000/#/jobs?repo=mozilla-inbound), open devtools and switch to the React tab
  5. Switch to the "Profiler" sub-section within the React tab
  6. Click "Start profiling" and then 2-3 seconds later, click "Stop profiling"

Expected:

  • Browser behaviour / responsiveness is roughly the same as when not using react-devtools
  • No console errors from react-devtools
  • A recording is created and its results can be viewed

Actual:

  • "Connecting to React…" takes 30+ seconds (and sometimes doesn't complete at all)
  • Browser extremely unresponsive (eg scroll action takes 10-20 seconds to respond)
  • High Chrome CPU/RAM usage
  • No recording shown after pressing "Stop profiling"
  • Console errors:
Uncaught Error: Message length exceeded maximum allowed length.
    at PortImpl.postMessage (VM1326 extensions::messaging:121)
    at Port.publicClassPrototype.(:5000/anonymous function) [as postMessage] (extensions::utils:138:26)
    at handleMessageFromPage (contentScript.js:24)

Versions:

  • React 16.6.1
  • Chrome beta 71.0.3578.44
  • Chrome React Developer Tools 3.4.2 (from store)
    (Also reproduces using Firefox Nightly + Firefox React Developer tools)

The app needs some performance work (it's a legacy app that has just been converted from AngularJS 1.x to React; we want to explore things like react-window next), which would presumably also help avoid this issue -- but it's hard to do that while unable to use react-devtools in the first place :-)

cc @camd

This might be a good option to serialize instead: https://github.com/google/flatbuffers
Arrow uses flatbuffers under the hood.

@vaughnkoch – I'm not sure how flatbuffers could be used with React DevTools. I _think_ they require a schema to share data, where as the data passed across the bridge with DevTools doesn't really have a schema. (It varies by React version at a high level, and by application at a deeper level.)

Maybe I'm missing or misunderstanding something though? 😁


we're wanting to explore things like react-window next

@edmorley – That's neat to hear 😄 I'd love to hear how those explorations go.


@edmorley – I can reproduce the very long "Connecting to React…" message. I did a little profiling and it looks like there are some really long-running functions causing the delay:
(screenshot, 2018-12-04: performance profile showing long-running functions)

Most of this seems to be due to lots of GC:
(screenshot, 2018-12-04: profile showing heavy garbage collection)

I updated react and react-dom to the latest canary release to see if this recent change (facebook/react/pull/14383) improved things. Although I did see a noticeable performance improvement, I would still consider it pretty much unusably slow.

I'm also seeing an "_Uncaught Error: Message length exceeded maximum allowed length._" as reported after a few seconds of using the app (when DevTools is enabled). Note that this doesn't always happen, but I have seen it happen.

Even with React DevTools _disabled_ I'm still seeing a lot of long-running script jank up front:
(screenshot, 2018-12-04: profile with React DevTools disabled, still showing long-running script up front)

But there seems to be a lot less GC thrashing.


Anyway, this is mostly just notes for future me ^ I'm going to take a step back and look at what we might be able to do to reduce DevTools overhead. I'll revisit this example app to test out anything I come up with.

I'm experiencing this when moving between routes. Is there a quick hack possible so the devtools are somewhat usable? My poor MBP 2015 is not even capable of running a performance trace without imploding.

Is there a quick hack possible so the devtools are somewhat usable?

Not that I'm aware of.

Seems unlikely.

However, unlike pass-by-reference, the 'version' from the calling context is no longer available once transferred to the new context.

Also not sure if this type would be flexible enough for things DevTools needs to send/receive.

Hmmm - maybe first do a deep copy of evt.data.payload before sending? It should still be a lot faster than JSON.stringify.

I think that if devtools data can survive a JSON round-trip, it should be able to handle anything?

It should still be a lot faster than JSON.stringify.

Any interest in prototyping it and coming up with some real perf numbers?

DevTools uses different serialization techniques depending on how it's being used. The standalone version used for React Native uses JSON.stringify but the web extension in Chrome/Firefox uses postMessage (which the browser serializes itself).
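The difference between the two paths shows up with a value that JSON can't round-trip. The sketch below uses structuredClone, which exposes the same structured clone algorithm that postMessage uses under the hood (available in modern browsers and Node 17+):

```javascript
// A Map survives the structured clone algorithm but not a JSON cycle.
const payload = { tag: "state", lookup: new Map([["a", 1]]) };

// JSON path (standalone DevTools): the Map degrades to an empty object.
const jsonCopy = JSON.parse(JSON.stringify(payload));

// Structured clone path (what the browser does for postMessage).
const cloneCopy = structuredClone(payload);

console.log(Object.keys(jsonCopy.lookup).length); // 0
console.log(cloneCopy.lookup.get("a")); // 1
```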

I have been thinking about this more: even if there were a zero-copy interface, it wouldn't help much.

Thanks to single-responsibility components and the new Context API, my React tree has a depth of 200+ (I just sampled a random field and counted its parents). In production this works just fine, and the maintainability of the code is wonderful.

However, each component will have some shared value or context, and if I understand correctly, all of this data is copied to the extension. Since it's being copied, it loses referential integrity.

So unless I'm misunderstanding how this works, the only way to make it efficient is to only provide the values when requested by the extension?

Potentially. I had two ideas that might be worth trying, but I haven't tried either (or thought about either in much depth).

  1. We could serialize/send less data for each React commit by making props and state objects lazy (like we already do in some cases for nested values) and, rather than sending them automatically, wait until the extension requests them.
  2. We could send data less often by throttling the updates (rather than serializing for each commit). This would probably be fine for the Elements view, since eventual consistency is probably fine for that kind of UI. This might have implications for the Profiler that would require more thought.
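Idea 2 could be sketched like this (the names and the injectable clock are invented for illustration; a real implementation would also flush from a timer rather than only on the next update):

```javascript
// Coalesce per-commit updates: keep only the latest snapshot and send it
// at most once per interval ("eventual consistency" for the Elements view).
function createThrottledSender(send, intervalMs, now = Date.now) {
  let lastSent = -Infinity;
  let pending = null;

  function maybeFlush() {
    if (pending !== null && now() - lastSent >= intervalMs) {
      lastSent = now();
      send(pending);
      pending = null;
    }
  }

  // Called on every React commit; cheap even when commits are frequent,
  // because intermediate snapshots are simply overwritten.
  function queueUpdate(update) {
    pending = update;
    maybeFlush();
  }

  return { queueUpdate, maybeFlush };
}
```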

I tried a small change to make the bridge more aggressively limit the amount of data it sends by default. Perf still sucks overall for the treeherder use case, to be clear, but I think it may have improved a good bit. Interested in trying it out?

I'll upload a prerelease build of the extension, along with instructions on how to install it, here:
https://react-devtools.now.sh/

Right, it seems faster generally, but it still crawls when navigating a list/detail view with lots of nodes and data…

@wmertens is your comment regarding the treeherder app? I assume so.

@bvaughn no, my own app that probably has very similar characteristics.

Good news: I found that I was inadvertently re-rendering way too much; I was using react-deep-force-update to achieve what the new Context API now achieves, and switching to that fixed the slowness for my use case.

But the proposed ideas seem like they would help performance anyway.

I posted the serialization threshold tweak, along with some perf profile data, as PR #1258. In my testing, this drastically improves performance for the treeherder repro (though perf is still not great overall).

Unfortunately I think I'm going to have to back the above change out because of #1262, at least until/unless I can think of a way to serialize more aggressively for profiling data, which would require architectural changes since all bridge traffic is treated equally. 😦

@bvaughn sorry for not replying sooner, I got pulled away onto a few other things. Thank you for looking into this previously - we've since updated to latest React 16.7.x which includes facebook/react/pull/14383 and are working on improving the overall app performance.

Thanks for the update.

I think my answer to the existing performance problems will end up being a DevTools rewrite https://github.com/bvaughn/react-devtools-experimental

React DevTools has been rewritten and recently launched a new version 4 UI. The rewrite was done in a separate repository, and its source code now lives in the main React repo (github.com/facebook/react).

Because version 4 was a total rewrite, and all issues in this repository are related to the old version 3 of the extension, I am closing all issues in this repository. If you can still reproduce this issue, or believe this feature request is still relevant, please open a new issue in the React repo: https://github.com/facebook/react/issues/new?labels=Component:%20Developer%20Tools
