React: Streaming renderToStaticMarkup

Created on 1 Feb 2015 · 63 Comments · Source: facebook/react

I've been messing around with trying to get my current project to cater to the critical rendering path (i.e. the first ~14kb of data) ASAP. Our landing page is quite heavy in content and takes around 200ms for React.renderToStaticMarkup() to process, which is longer than we want users to wait before they get something on their screens. Currently I res.write() everything up to the <body> tag first, then the content, finishing with a bundled JSON dump of the data the server used to render the page and the ~1kb of JS for webpack bootstrapping. This gets me somewhere, as in you get the background color and title in the first roundtrip, but no content.
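A minimal sketch of that manual early-flush approach, assuming an Express-style `res`; `sendEarlyFlush`, `renderBody`, and the other names here are illustrative, not the actual project code:

```javascript
// Hypothetical sketch of the manual early-flush approach described above.
// `res` is any Express-style response; `renderBody` stands in for the
// expensive render call (e.g. React.renderToStaticMarkup).
function sendEarlyFlush(res, opts) {
  // First roundtrip: everything up to <body>, so the browser gets the
  // background color and title immediately.
  res.write('<!DOCTYPE html><html>' + opts.head + '<body>');
  // Then the heavy content.
  res.write(opts.renderBody());
  // Finally the JSON dump of the data the server rendered with, plus the
  // bootstrapping JS.
  res.write('<script>window.__DATA__=' + JSON.stringify(opts.data) + '</script>');
  res.end('<script>' + opts.bootstrapJs + '</script></body></html>');
}
```

The first `res.write()` is what buys the early paint; everything after it is still blocked on the render itself.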

It would be neat if there were a version of React.renderToStaticMarkup() that returned a Node stream, along with an option to specify a function that tells it how many bytes to render before pushing downstream, for example:

var chunksWritten = 0;
React.renderToStaticMarkupStream({
    getChunkSize: function () {
        if ( chunksWritten++ === 0 ) {
            // first push the first 13kb
            return 13 * 1024;
        } else {
            // push the rest in one big chunk
            return -1;
        }
    },
}).pipe(res);

Most helpful comment

It's implemented.

import { renderToStaticStream } from 'react-dom/server';

works in 16 beta 3 (and later).

Try 16 beta: https://github.com/facebook/react/issues/10294#issuecomment-320113497

All 63 comments

I'm pretty sure this would require a huge rewrite for minimal gain. If you're not already caching the output and serving it from a cdn, you'll get much better results from that.

Yea, this would be a pain BUT I don't want to write it off completely. Streaming markup to a browser is definitely a thing that people do and our server-side rendering story isn't great.

It might turn out that it's not realistic with the way React is architected but if somebody wants to investigate, I'd be interested in hearing how it goes.

If you're not already caching the output and serving it from a cdn, you'll get much better results from that.

Unfortunately we're already using caches and hitting a CDN for pretty much everything we can, however the views have too much variation per country, screen size (we are estimating screen sizes based on UA and session on the server side) and user-specific information to benefit from caching.

Would be interesting to see experiments about this at least, even if they never made it to core. :)

@jussi-kalliokoski @zpao This should be quite simple for now, #1542... but it might not be in the future...?

@syranide yea, I was thinking of that PR. But even that has the big assumption that it's all flushed at once. I haven't worked with streaming markup and don't know what might have to change. Though if we're only doing it for renderToStaticMarkup it might not actually matter too much.

@zpao Well it's built like that, but replace array.push() with something internal and voila...?

I made something based on what @syranide did in #1542 to try it out, and as a quick benchmark I fed res.write in as the callback in my application; on my MBP the first ~13kb of the landing view was pushed downstream within <100ms, whereas when rendering the whole document at once it would be flushed downstream at ~280ms.

Seems like this would be great for renderToString too, not just renderToStaticMarkup.

a synchronous .renderToString() that takes > 50ms for trivial view hierarchies seems super-problematic.

are people just not hitting this or is there an obvious work-around (besides caching) that i'm missing?

a synchronous .renderToString() that takes > 50ms for trivial view hierarchies seems super-problematic.

@busticated I don't have any hard facts at the moment, but that makes no sense to me. Performance definitely is not that slow; they're probably running the DEV version of React (which is far slower) and there's probably something else that is funky as well. I'm guessing he's not counting the DOM components either and generally has really large components.

Also, caching is and always has been the answer to this, it's not any different with React.

@syranide thanks. i should have noted that i'm actually seeing similar numbers. could very well be a result of something silly - config, etc - but afaik all that is required to disable 'DEV' mode server-side is NODE_ENV=production so at least that should be ruled out (side-note: docs here would be nice).

as for component size & count, i need to collect hard data but _my_ views are certainly no more complex than something like --> https://instagram.com/rei/

Performance definitely is not that slow, they're probably running the DEV-version av React (which is far slower)

Actually was not, but...

and there's probably something else that is funky as well.

Yes, my benchmark was way off, for multiple reasons - on our production server the render times even for the biggest pages are around 12ms (small sample size with manual test setup on one node just to see, will post more results once I have proper data from monitoring in production). I'm suspecting the main reason my tests were that skewed is the lack of JIT warmup.
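For reference, a warmup-aware micro-benchmark avoids that kind of skew by discarding the first runs; this harness is illustrative, not what was actually used above:

```javascript
// Illustrative warmup-aware micro-benchmark: discard the first `warmup`
// iterations so V8 has a chance to JIT-optimize before timing starts.
function bench(fn, warmup, runs) {
  for (let i = 0; i < warmup; i++) fn();
  const start = process.hrtime();
  for (let i = 0; i < runs; i++) fn();
  const [s, ns] = process.hrtime(start);
  return (s * 1e3 + ns / 1e6) / runs; // mean milliseconds per run
}
```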

I'm guessing he's not counting the DOM components either and generally have really large components.

Yes, there are some pretty large components and when I posted the initial numbers, there were also some string replaces that ran against massive (100kb-500kb) strings, contributing a big chunk of the time spent.

As a side note, I did some more experimentation on the streaming and it caused more trouble than it was worth and I ended up reverting the manual document splitting I was doing (turned out that connect-gzip doesn't like the machines in our environment).

@jussi-kalliokoski Ah, thanks for the update!

Why not use @syranide's proposal at #1542 with a renderToStream method:

function renderToStream(component) {
  ("production" !== process.env.NODE_ENV ? invariant(
    ReactDescriptor.isValidDescriptor(component),
    'renderToStream(): You must pass a valid ReactComponent.'
  ) : invariant(ReactDescriptor.isValidDescriptor(component)));

  var transaction,
    Readable = require('stream').Readable,
    stream = new Readable(),
    processedFragments = 0,
    batchSize = 5;

  try {
    var id = ReactInstanceHandles.createReactRootID();
    transaction = ReactServerRenderingTransaction.getPooled(true);

    transaction.perform(function() {
      var componentInstance = instantiateReactComponent(component);
      componentInstance.mountComponent(id, transaction, 0);
    }, null);

    stream._read = function() {
      var fragments,
        end = processedFragments + batchSize,
        finish = false;

      if (end >= transaction.markupFragments.length) {
        end = transaction.markupFragments.length;
        finish = true;
      }

      fragments = transaction.markupFragments.slice(processedFragments, end);
      processedFragments = end;
      stream.push(fragments.join(''));
      if (finish) {
        stream.push(null);
        // Release only after the stream is fully consumed; _read still
        // needs transaction.markupFragments until then.
        ReactServerRenderingTransaction.release(transaction);
      }
    };

    return stream;
  } catch (e) {
    ReactServerRenderingTransaction.release(transaction);
    throw e;
  }
}

The idea is to only concat the string in chunks as the stream needs them. This prevents the event loop from being blocked even for a massive string (100kb-500kb). A more elaborate strategy would take into consideration the buffer size requested by the stream.
@jussi-kalliokoski, does this strategy match your experiments?

I show the benefits of streaming responses at https://m.youtube.com/watch?v=d5_6yHixpsQ&feature=youtu.be @ 4:05.

Third party APIs or heavy database requests can cause this.

I used dust.js to work around this, but it'd be great if react could do it.

Also, streaming is coming to the browser (https://streams.spec.whatwg.org/), so this could become a benefit on the client too.

@jakearchibald Can you help me understand how streams could help on the client for us? Would there be some way to set innerHTML to a stream or were you thinking of something else?

@spicyj once streams land in the browser, stream-supporting APIs will follow. I'm pretty certain, given the browser can already do it but it's not currently exposed, we'll be able to get a streaming document fragment. Maybe @domenic has discussion links on that?

In the video above I write to innerHTML twice, once with a partial result, and again with the full result. Horribly hacky, but it did improve render time.

It's pretty tentative, but the current thoughts are along these lines.

Thanks for the references.

I did some stress tests on my isomorphic application and found another issue related to this topic.
By default (on my laptop) express handles 2000~2200 requests/second, but my app only handles 70~100 requests/second.

After profiling it I found that .renderToString()/renderToStaticMarkup() blocks the event loop for 26-34ms, and for this reason express dispatches 20x fewer requests/second.

@AbrahamAlcaina Dynamically generating anything complex on-demand is obviously going to be costly; you'll need to implement caching or pre-build static responses yourself.

@syranide caching is always a good idea, but in this case I'm rendering a timeline for a user.
Each user has their own timeline.

It would be a good idea to improve the performance of this method to less than 5 ms.

Keep in mind that rendering to a string doesn't require keeping track of changes to state, props, etc. Removing those checks would improve performance. (idea)

@AbrahamAlcaina Yes, we hope to implement some optimizations like that.

thanks guys @spicyj @syranide and good job.
:+1:

hey guys, any ideas on how to improve the renderToStaticMarkup method? Here it's always taking 50ms and we don't see how it could be more than 5ms. The problem with such a slow CPU-bound method is that it makes all express requests queue up and become super slow.

just to be more clear: React rendering pages at 50ms is a limitation and our team is considering moving to marko before our React codebase becomes too big to move. If anyone here lives in palo alto, we'll be happy to give more feedback in person.

@mufumbo We don't have any easy wins to suggest right now, sorry. We're planning to spend time working on performance over the next six months or so and hope to improve this but it sounds like React might be too slow for server rendering for you right now.

@spicyj do you have any quick pointers on where to look in the source code, if we want to try an optimization? Low hanging fruit kinda stuff?

React is unviable as a server-side renderer. It accumulates CPU-bound requests and puts node into a huge delay. Example: every request takes 50ms to load (weirdly it doesn't seem to depend on CPU speed; is there a Thread.sleep in the code?), which means the next request will take 100ms, the next 140ms, the next 250ms. At 5 req/s it will quickly become a 1s delay.

I would highly advise putting a large warning on the React homepage saying it's unviable for server-side rendering and that there might be a solution only in 6 months. We love React, and this would ensure the project's reputation is kept intact for people working on real projects and not only on demo tutorials. Also, it would make sure people refer back to that warning when they write blog posts about using react as an isomorphic solution.

@mufumbo: As @spicyj said, we're not ready to give actionable advice on ways to make rendering faster, but it is something we're actively working on. I've literally spent the last two weeks poring over flame graphs (not my idea of a vacation).

The lowest hanging fruit:

  • Don't keep any internal state when generating the initial mount images. Currently React*Component.mountComponent has side effects and is not idempotent. All the state it creates is unnecessary when rendering to static markup, so tracking that state is a waste of cpu cycles.
  • Don't generate a string. Streaming the output is not only good for sending the partial renders; it's also good globally and is measurably cheaper than returning a string.
  • Flatten the call tree. You're paying approximately one thousandth of a millisecond for every function call. Call them thousands of times, and the milliseconds start to add up. All the injected plugins are hurting performance.
  • Turn off sanity checks. Our security/sanitization protection is there to keep you safe, but if you're 100% confident in your code, turning it off has very measurable performance benefits.
  • Turn off checksumming (https://github.com/facebook/react/issues/4401). Again, it's there for your protection, but if you're ready to take off the guards, you can save a few cpu cycles.

But realistically, none of that is actionable advice for you because the changes either require large code refactors that are extraordinarily complex (work in progress), or require disabling features that we believe are critical to the practicality of React.

We will continue to work on this, and we'll continue to post on github when we have useful information to disseminate. In the mean time, just hold tight and know that we're doing everything we can to make React even faster!

thanks a lot @jimfb @spicyj ! We love to work with react and will try to hold on.

We'll try the tips. My feedback is: UX and performance are a priority for us. Unfortunately the current architecture is unreliable for our users because we can't predict the req/sec a given server can handle.

Would you be kind enough to share how people are serving react requests through server-side rendering? I can't see it being reliable if it's served through a simple nodejs + express config. Are people making sure each server handles only one request at a time?

@jimfb does renderToStaticMarkup also avoid checksumming, or does it only exclude the checksum data from the html?
I also can't seem to find anything on how to disable sanity checks in the current docs; I suspect it's not an option yet.
Thanks!

@mufumbo Are you using require('react') package directly from npm? If so, there is an unfortunate perf problem due to how NODE_ENV is checked (see #812). If you require react/dist/react.min.js instead (and react/dist/react.js in development for better warnings), you may have vastly improved perf. One way to do this is to make node_modules/react.js a file like this:

// node_modules/react.js
if (process.env.NODE_ENV === 'production') {
  module.exports = require('react/dist/react.min.js');
} else {
  module.exports = require('react/dist/react.js');
}

Let me know if that helps at all.

@spicyj yes! From our preliminary tests it seems that this decreased the mean render time from 50ms to ~38ms!

Please, let me know if you have any other ideas like this. We're trying everything we can to unblock the event loop. Thanks a lot!!

That's probably the only easy win I have to suggest, sorry. :)

@spicyj I did some tests using react.min and the performance increased 2x.
That's good news, but it's still far from 2000 req/sec.

TL;DR:

One concurrent user

ab -n5000 -c1 http://localhost:3000/
5000 requests, 1 concurrent user

Before change

Requests per second: 86.08 #/sec
Time per request: 11.617 ms
Time per request: 11.617 ms
Transfer rate: 12802.78 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 1
Processing: 9 11 2.6 11 115
Waiting: 9 11 2.6 11 115
Total: 10 12 2.6 11 116

Percentage of the requests served within a certain time (ms)
50% 11
66% 12
75% 12
80% 13
90% 14
95% 15
98% 16
99% 19
100% 116 (longest request)

With react.min

Requests per second: 169.22 #/sec
Time per request: 5.909 ms
Time per request: 5.909 ms
Transfer rate: 25168.62 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 3.6 0 151
Processing: 5 6 1.1 5 23
Waiting: 5 6 1.1 5 23
Total: 5 6 3.8 5 159

Percentage of the requests served within a certain time (ms)
50% 5
66% 6
75% 6
80% 6
90% 7
95% 8
98% 9
99% 11
100% 159 (longest request)

50 concurrent users

ab -n5000 -c50 http://localhost:3000/
5000 requests, 50 concurrent users

Before

Requests per second: 88.87 #/sec
Time per request: 562.625 ms
Time per request: 11.252 ms
Transfer rate: 13217.72 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 15.1 0 1065
Processing: 30 562 30.1 555 1071
Waiting: 16 309 153.8 309 646
Total: 33 562 33.5 555 1603

Percentage of the requests served within a certain time (ms)
50% 555
66% 565
75% 568
80% 570
90% 607
95% 617
98% 638
99% 646
100% 1603 (longest request)

Changed

Time per request: 254.816 ms
Time per request: 5.096 ms
Transfer rate: 29184.01 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.2 0 2
Processing: 125 254 15.7 251 327
Waiting: 19 196 36.4 196 325
Total: 127 254 15.6 251 327

Percentage of the requests served within a certain time (ms)
50% 251
66% 255
75% 256
80% 258
90% 261
95% 274
98% 317
99% 325
100% 327 (longest request)

100 concurrent users

ab -n5000 -c100 http://localhost:3000/
5000 requests, 100 concurrent users

Before

Requests per second: 88.73 #/sec
Time per request: 1127.068 ms
Time per request: 11.271 ms
Transfer rate: 13196.31 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.5 1 4
Processing: 90 1125 42.1 1115 1200
Waiting: 48 632 304.9 632 1193
Total: 94 1126 42.0 1116 1200

Percentage of the requests served within a certain time (ms)
50% 1116
66% 1130
75% 1147
80% 1156
90% 1178
95% 1191
98% 1198
99% 1199
100% 1200 (longest request)

Changed

Requests per second: 194.63 #/sec
Time per request: 513.796 ms
Time per request: 5.138 ms
Transfer rate: 28947.57 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 3.5 0 112
Processing: 154 511 27.0 506 751
Waiting: 55 369 101.5 393 589
Total: 154 512 27.3 506 861

Percentage of the requests served within a certain time (ms)
50% 506
66% 511
75% 516
80% 516
90% 541
95% 572
98% 591
99% 593
100% 861 (longest request)

BTW, Netflix changed their render pipeline to react: http://techblog.netflix.com/2015/01/netflix-likes-react.html
They decreased render time by 70%.

@spicyj Are you in contact with Netflix? Maybe they made some change/improvement to the server render engine.

hey guys, shouldn't using renderToString instead of renderToStaticMarkup improve client-side performance? Here it's taking a full 1 second from the moment react starts parsing to its completion. Any ideas on how to improve this? It comes with all kinds of bad user experience, like blocking scroll and phone heat.

A general solution to the blocking concurrent requests is to break your app into two applications: the web server, and the rendering service. You can communicate between them via message queues which are non-blocking.

For a fair test, run it on a server with at least 16 cores, try different numbers of worker processes (e.g. 16, 32, 64, 128) with a web server for each core, and pre-run the tests several times to warm the JIT.

@brigand if the render has to be done with node, wouldn't that just move the problem to the service that is processing the queue?

@mufumbo it changes the problem from blocking the web server event loop to how well the cpu parallelizes work from multiple processes. If you use a finite pool of processes (not sure if this is the way to go), the worst case is that you develop a backlog. If you do, it means that you need more servers. Because the event loop isn't blocked, you can send the necessary messages to cause autoscaling to kick in.

Maybe I'm wrong about this; I'm mainly a frontend developer.

Edit: to clarify, I'm not saying that the performance is fine, I'm suggesting a way for the performance to not cripple your web servers.

@brigand that would be the same as using node clusters, but with node cluster you don't pay the price of data transfer between the web server and the queue. And you should be fine to scale, since the web server is stateless.
One solution is to break your page into personalized and non-personalized parts. You can cache the rendered HTML of the non-personalized parts and send it to the client right away, plus the data for the personalized parts. On the client you show the non-personalized parts and render the personalized parts as soon as you finish receiving the data they need. That should give your servers a little bit of breathing room.
Of course, the requirements of each product should be taken into consideration. It is not a silver bullet.
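That split can be sketched as a per-variant shell cache; everything here (the names, the `</body>` splice) is hypothetical:

```javascript
// Hypothetical sketch: render the non-personalized shell once per variant
// (country, screen size, ...) and splice the per-user data in as JSON for
// the client to render.
const shellCache = new Map();

function renderShell(variantKey, renderFn) {
  if (!shellCache.has(variantKey)) {
    shellCache.set(variantKey, renderFn(variantKey)); // expensive, done once per variant
  }
  return shellCache.get(variantKey);
}

function renderPage(variantKey, userData, renderFn) {
  const shell = renderShell(variantKey, renderFn);
  // Personalized data ships as JSON; the client renders it after hydration.
  const payload = '<script>window.__USER__=' + JSON.stringify(userData) + '</script>';
  return shell.replace('</body>', payload + '</body>');
}
```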

While I think this is a great discussion, we've wandered way off the topic of "streaming markup". I encourage you to use an alternate venue for this discussion so that we can make sure people watching the repo can maintain a better signal to noise ratio. We set up https://discuss.reactjs.org precisely for discussions like this :)

@jimfb can you help us figure out how to disable checksum? We can't find anywhere in the documentation. Also sanity checks?

On the topic of streaming renderToString and renderToStaticMarkup, I wrote a library over the last couple weeks to do that; I'm calling it react-dom-stream:

https://github.com/aickin/react-dom-stream

It's very new, and probably has some issues, but it passes all of the rendering tests that react-dom currently passes. Some preliminary performance microbenchmarks have shown it to massively reduce TTFB for large responses, and in real world conditions, TTLB goes down as well: https://github.com/aickin/react-dom-stream-example

I intend to work more on the library over the coming weeks, hopefully including adding the ability for the server-side render to yield control back to the Node event loop. This should help with an operational difficulty of SSR where long pages can starve out small pages for CPU.

Any feedback would be welcome. Thanks!

ETA: the renderer is actually a fork of react-dom, which I'd be happy to give back to the react project, but I have no idea if they'd want it.

@aickin excellent! Is there a drawback to using your solution at the moment? (Other than it being alpha.) I'm happy to help test it. Also it might help to document what kind of changes you made to the standard ReactDOM to support streamable components.

Regarding yielding control to the event loop, does that mean when streaming a component down, node is able to handle other requests?

@aickin This is super awesome! Also curious about @geekyme's questions above. I've got a server-rendered app with average response times of 100-200ms that I'd love to try this on when it's stable enough!

@geekyme I think the core drawback is that it hasn't been battle tested. The core of it is the ReactDOM code, and it passes all the ReactDOM tests, so it's in OK shape, but I'm sure that there are wrinkles to find. Also, if your site only serves up tiny pages (say, 10K), it probably won't do much good, as renderToString probably isn't a big time sink.

Most of the changes are in this commit, and it mostly involves changing methods in the render path to take in a stream rather than return a string. What kind of documentation would you like?

If I implement yielding (which, to be clear, I haven't yet!), yes, it would allow node to switch in between requests. This should be especially good for sites that have a mix of large, slow pages and small, quick pages. On those sites, small pages can get starved for CPU by the big pages.
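The yielding idea can be approximated with `setImmediate` between chunks; this is a hypothetical sketch, not react-dom-stream's actual implementation:

```javascript
// Hypothetical sketch of yielding to the event loop between chunks so a big
// page doesn't starve other requests of CPU.
function writeChunksYielding(res, chunks) {
  return new Promise(function (resolve) {
    let i = 0;
    function next() {
      if (i >= chunks.length) {
        res.end();
        resolve();
        return;
      }
      res.write(chunks[i++]);
      setImmediate(next); // let other requests run before the next chunk
    }
    next();
  });
}
```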

@spicydonuts Thanks! And is there any way I could convince you to give it a try and send me some feedback? ;)

@aickin I may just do that : ] Just need to make sure I'm watching for the right issues while it runs. Any recommendations other than watching latency and general exceptions?

@spicydonuts Great questions! Off the top of my head, I'd say:

  • latency, both TTFB of the React component and TTLB
  • exceptions
  • making sure that you don't get console errors when connecting React on the client side
  • time to first paint in the browser

Might be worth moving discussion about your specific project out of here and into that project's issues or a discussion forum like the one @spicyj linked to above to avoid notifying a bunch of people here. The project is relevant to this issue but the ensuing discussion isn't quite. Definitely keep us in the loop when you feel it's stable though.

@zpao Apologies (and thanks for the gentle guidance!).

There is a thread at https://discuss.reactjs.org/t/react-dom-stream-a-streaming-server-side-rendering-library-for-react-as-much-as-47-faster-than-rendertostring/2286 for anyone who might want to continue to chat about react-dom-stream.

I wonder how this is not yet implemented in one of the main trending technologies. The idea of an MVP is all around us in the well-known libraries and frameworks we use.
Streaming rendering is the natural solution for Node. How could anyone demand it any other way?

Are there any plans to have a working renderToStream() included in React? In 6 months? In a year?

Hi there!
Are there any updates on this?

We will update issues when we have updates. If you don't see anything from us, it's safe to assume there are no updates.

That's a bit snarky. Many people are interested in this and don't want to risk it getting deferred due to a presumed lack of interest. And three months is probably a reasonable amount of time to wait before checking whether it's still being worked on. At least it wasn't a +1.

At least it wasn't a +1.

haha true that. :smile:

If you were on a team that had the popularity and weight of this public backlog of work to do, how would you respond?

(not meant to be criticizing any parties at all; I suspect I would have a similar response to @zpao if I were in his shoes.)

And three months is probably a reasonable amount of time to wait before checking to see whether it's still be worked on.

Nowhere in this thread did anybody from React team say they worked on this. react-dom-stream is a third party project by @aickin, so feel free to try it and assess its stability and performance.

It is great that people explore this space but I want to clarify that streaming renderToStaticMarkup() is not currently being worked on by anyone on the React team to the best of my knowledge. The core React team has limited resources, and other issues are currently a bigger priority to Facebook.

Subscribe to this issue if you are interested in the updates but please consider other people subscribed to this issue who don't want to receive this kind of back-and-forth talk. We will definitely post updates when and if there are any. In the meantime please feel free to contribute to the relevant discussion thread. Thank you!

@gaearon thank you, Dan! Appreciate the truth. Hope they'll find time for this or someone writes a good PR. When things are different, I hope they'll announce the changes here in this issue.

It's implemented.

import { renderToStaticStream } from 'react-dom/server';

works in 16 beta 3 (and later).

Try 16 beta: https://github.com/facebook/react/issues/10294#issuecomment-320113497

Yay, lovely! Thank you all for the great work!
