After running Lighthouse (the lighthouse@next npm package) from a Node.js script against https://d13z.dev, the performance score result is null.
Other categories have their score calculated properly.
If the score cannot be calculated, Lighthouse should throw an error or a warning; otherwise, it should calculate and save the performance score properly.
Result of the report: https://api.jsonbin.io/b/5e7a162079d7e24dd30e365d
report: https://googlechrome.github.io/lighthouse/viewer/?gist=d3b79a47e01590171bd7536f2980c145
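For context, a minimal Node.js script along the following lines reproduces the setup. This is a sketch of typical programmatic Lighthouse + puppeteer usage, not the reporter's actual script; the URL, port, and flags are assumptions.

// run-lighthouse.js — sketch: run Lighthouse against puppeteer's bundled Chromium.
const puppeteer = require('puppeteer');
const lighthouse = require('lighthouse');

(async () => {
  // Expose a known remote-debugging port so Lighthouse can attach to this Chromium.
  const browser = await puppeteer.launch({ args: ['--remote-debugging-port=9222'] });

  const { lhr } = await lighthouse('https://d13z.dev', {
    port: 9222,
    output: 'json',
    onlyCategories: ['performance'],
  });

  // With an older bundled Chromium, this value can come back as null.
  console.log('performance score:', lhr.categories.performance.score);

  await browser.close();
})();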
package.json
{
  "version": "1.1.1",
  "main": "build/server.js",
  "private": true,
  "license": "MIT",
  "scripts": {},
  "engines": {
    "node": "^12.9.1"
  },
  "lint-staged": {
    "*.{js,jsx,ts,tsx}": "eslint"
  },
  "husky": {
    "hooks": {
      "pre-commit": "pretty-quick --staged && lint-staged"
    }
  },
  "devDependencies": {
    "@typescript-eslint/eslint-plugin": "^2.15.0",
    "@typescript-eslint/parser": "^2.15.0",
    "cross-env": "^5.2.1",
    "devtools-protocol": "0.0.692805",
    "eslint": "^6.8.0",
    "eslint-config-prettier": "^6.9.0",
    "eslint-config-standard": "^14.1.0",
    "eslint-plugin-import": "^2.19.1",
    "eslint-plugin-node": "^10.0.0",
    "eslint-plugin-prettier": "^3.1.2",
    "eslint-plugin-promise": "^4.2.1",
    "eslint-plugin-standard": "^4.0.1",
    "husky": "^3.1.0",
    "lint-staged": "^9.5.0",
    "prettier": "^1.19.1",
    "pretty-quick": "^1.11.1",
    "ts-node": "^8.5.4",
    "typescript": "^3.7.4"
  },
  "dependencies": {
    "@types/bluebird": "^3.5.29",
    "@types/puppeteer": "^1.20.3",
    "@types/request": "^2.48.4",
    "@types/request-promise": "^4.1.45",
    "aws-sdk": "^2.600.0",
    "bluebird": "^3.7.2",
    "express": "^4.17.1",
    "fs-extra": "^8.1.0",
    "interval-promise": "^1.3.0",
    "lighthouse": "^6.0.0-beta.0",
    "puppeteer": "^1.20.0",
    "request": "latest",
    "request-promise": "^4.2.5"
  }
}
Thanks for filing @dvelasquez! I'm not able to reproduce this issue, so is it possible it only happens intermittently?
The root issue here is that the browser did not emit a Largest Contentful Paint event in your run. When we can't compute one of the metrics, the performance score is expected to be null (a key component of the score is missing, so any number we returned would be meaningless, and we don't return one). We don't throw a fatal error because, as you note, the rest of the categories and the other audits within the performance category are working fine.
Lighthouse core: I wonder if we should come up with an LCP fallback mechanism like we did in the early days of FMP. It's definitely a rough situation to fail the entire performance category while this new metric implementation still has some kinks to work out.
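In the meantime, a consuming script can surface the null case explicitly instead of saving it silently. Here is a sketch of such a post-run guard; the audit id and field names follow the standard Lighthouse result (LHR) format, but treat the helper itself as illustrative rather than an existing API.

// Sketch: warn when the performance score is null and show the LCP audit state.
function warnOnNullPerformanceScore(lhr) {
  if (lhr.categories.performance.score !== null) return;

  const lcp = lhr.audits['largest-contentful-paint'];
  console.warn(
    'Performance score is null; LCP audit state:',
    lcp ? lcp.scoreDisplayMode : 'missing', // "error" when the LCP event was never emitted
    (lcp && lcp.errorMessage) || '(no error message)'
  );
}

Calling warnOnNullPerformanceScore(lhr) right after the lighthouse(...) call would have flagged this run instead of storing a null score without explanation.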
So it seems that the problem was in the other dependencies (maybe puppeteer, most likely devtools-protocol); after upgrading all the dependencies, the performance score was working again.
Thanks @patrickhulce for your time!
maybe puppeteer,
Ah if you were using an older version of Chromium that ships with puppeteer then that would absolutely explain it :)
I think a null value will always be due to older versions of Chrome (needs citation). We could emit a warning specifying that.
I think a null value will always be due to older versions of Chrome (needs citation). We could emit a warning specifying that.
Well, null performance scores can happen for lots of different runtime-error reasons, but I like the warning for an old version of Chrome. We could check the version we fetch and emit an error if it doesn't meet our minimum version?
Yeah, let's do that.
Just to clarify: is this just for NO_LCP due to old Chrome or for old Chrome in general?
A new LHError along the lines of "your Chrome is too old to support this metric/audit" could also be reused by future new audits. OTOH, a top-level warning if Chrome is below some minimum version seems like a good idea too. Maybe whynotboth.gif?
a top level warning if Chrome is less than some minimum seems like a good idea too.
I thought this was what we were doing. But a new LHError for "Chrome is too old" sounds good to me too :)
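For reference, the proposed check could look roughly like the sketch below. The helper name, the MIN_CHROME_MILESTONE constant, and the milestone threshold are assumptions for illustration, not shipped Lighthouse behavior; lhr.environment.hostUserAgent and lhr.runWarnings are standard LHR fields.

const MIN_CHROME_MILESTONE = 77; // assumed minimum for LCP support; needs confirmation

// Sketch: parse the host browser's Chrome milestone and add a top-level run warning
// when it is below the assumed minimum, so a null score is at least explained.
function warnIfChromeTooOld(lhr) {
  const match = /Chrome\/(\d+)/.exec(lhr.environment.hostUserAgent || '');
  const milestone = match ? Number(match[1]) : null;

  if (milestone !== null && milestone < MIN_CHROME_MILESTONE) {
    lhr.runWarnings.push(
      `Chrome ${milestone} is older than the assumed minimum (${MIN_CHROME_MILESTONE}); ` +
      'newer metrics such as Largest Contentful Paint may not be reported.'
    );
  }
}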