Lighthouse: Something went wrong with recording the trace over your page load. Please run Lighthouse again. (NO_FCP)

Created on 6 Jun 2020 · 3 comments · Source: GoogleChrome/lighthouse

Provide the steps to reproduce

  1. Run Lighthouse audit from Chrome Dev Tools or Command Line for https://yerbba.com/

(P.S.: on localhost it runs just fine)

What is the current behavior?

Error – "Something went wrong with recording the trace over your page load. Please run Lighthouse again. (NO_FCP)"

What is the expected behavior?

A normal audit report

Environment Information

  • Affected Channels: CLI and DevTools (I suspect all channels are affected, but I haven't tested the others)
  • Lighthouse version: latest (just installed via npm)
  • Chrome version: 83.0.4103.97
  • Node.js version:
  • Operating System: MacOS 10.15.5

All 3 comments

Thanks for filing, @belfigue! This site serves an empty page to Lighthouse.

<!DOCTYPE html><html><head>
  <link rel="stylesheet" type="text/css" class="__meteor-css__" href="/merged-stylesheets.css?hash=5fd81aec5c40955a6fa98878b0a95b6484008b81">
<meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <meta name="theme-color" content="#000000">
</head>
<body>
</body></html>

It seems to specifically detect the Chrome-Lighthouse user agent and not render the real page. You can verify this yourself with curl...

curl -vvvv -H 'User-Agent: Mozilla/5.0 (Linux; Android 7.0; Moto G (4)) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4143.7 Mobile Safari/537.36 Chrome-Lighthouse' https://yerbba.com/

vs.

curl -vvvv -H 'User-Agent: Mozilla/5.0 (Linux; Android 7.0; Moto G (4)) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4143.7 Mobile Safari/537.36' https://yerbba.com/

We strongly discourage any UA sniffing of Lighthouse to alter the page.
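If you'd rather script the comparison than eyeball the curl output, here is a minimal sketch that does the same thing: it fetches the page with and without the Chrome-Lighthouse token and compares the payload sizes. The URL and user-agent strings come from the curl commands above; everything else (the helper name, the size heuristic) is just illustrative.

```python
import urllib.request

URL = "https://yerbba.com/"
BASE_UA = ("Mozilla/5.0 (Linux; Android 7.0; Moto G (4)) AppleWebKit/537.36 "
           "(KHTML, like Gecko) Chrome/84.0.4143.7 Mobile Safari/537.36")
LIGHTHOUSE_UA = BASE_UA + " Chrome-Lighthouse"

def fetch(user_agent):
    """Fetch the page with the given User-Agent and return the body bytes."""
    req = urllib.request.Request(URL, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

normal = fetch(BASE_UA)
lighthouse = fetch(LIGHTHOUSE_UA)

print(f"normal UA:     {len(normal)} bytes")
print(f"Lighthouse UA: {len(lighthouse)} bytes")

# If the Lighthouse response is dramatically smaller, it is almost certainly
# being served the empty shell shown above instead of the real page.
if len(lighthouse) < len(normal) // 2:
    print("Lighthouse is likely being served a stripped-down/empty page.")
```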

Thank you @patrickhulce.

We don't know what is causing this behavior. Do you have any recommendations on what might be responsible, or on how we could go about identifying the culprit?

We've noticed that Google and Bing can't crawl our page either. Maybe they too are being blocked by something in our code. In fact, their cached versions show exactly the same snippet you posted above. Do you think these issues could be related?

Yes, they're almost certainly related. We've seen a few cases where a company hired a firm to handle website "security", and whatever strategy that firm used added overzealous User-Agent sniffing and rejection techniques like this, which prevent bots from fetching the site.

I'd start at the earliest possible link in your deployment chain (hit the server directly on the same box first, then with your reverse proxy/CDN in front, then with the next layer added, and so on), testing with curl and the user agent above to make sure you get the expected payload, until you find the layer that is rejecting the requests.
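Something like the sketch below can automate that walk-through. The endpoint list is entirely placeholder (you'd substitute the real origin server, reverse proxy, CDN, etc.); it just sends the Lighthouse user agent to each layer and flags the first one that returns a near-empty response.

```python
import urllib.request

# Placeholder endpoints for each layer of the deployment chain --
# replace these with your real origin server, reverse proxy, CDN, etc.
LAYERS = [
    ("origin server",  "http://localhost:3000/"),
    ("reverse proxy",  "http://127.0.0.1:8080/"),
    ("public CDN/WAF", "https://yerbba.com/"),
]

LIGHTHOUSE_UA = ("Mozilla/5.0 (Linux; Android 7.0; Moto G (4)) AppleWebKit/537.36 "
                 "(KHTML, like Gecko) Chrome/84.0.4143.7 Mobile Safari/537.36 "
                 "Chrome-Lighthouse")

for name, url in LAYERS:
    req = urllib.request.Request(url, headers={"User-Agent": LIGHTHOUSE_UA})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            status = resp.status
            body = resp.read()
        # The real page should contain substantial markup; the blocked response
        # is a near-empty HTML shell, so a tiny body is the tell-tale sign.
        verdict = "OK" if len(body) > 2000 else "SUSPICIOUS (near-empty response)"
        print(f"{name:15s} {status}  {len(body):7d} bytes  {verdict}")
    except Exception as exc:
        print(f"{name:15s} request failed: {exc}")
```

Whichever layer first flips from "OK" to a near-empty response is where the User-Agent filtering is happening.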
