I'm not super familiar with the release management workflow of Gatsby but was just wondering if the team has considered integrating Lighthouse as part of the CI process somehow, maybe for the starter / sample projects?
In keeping with being blazing fast ™️, enforcing performance metric thresholds in an automated fashion would help sustain that.
Although I haven't personally implemented Lighthouse via the CLI, my thought would essentially be something along the lines of the following sketch:
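(A rough sketch only; the serve port, the wait, and the 0.9 threshold are illustrative, and it assumes Lighthouse v3's JSON output shape plus `jq` on the CI image.)

```sh
# Build a starter and serve the production output locally.
gatsby build
gatsby serve &   # defaults to http://localhost:9000
sleep 5          # crude wait for the server to come up

# Audit it headlessly and write the JSON report.
lighthouse http://localhost:9000 \
  --chrome-flags="--headless" \
  --output=json --output-path=./report.json

# Lighthouse v3 reports category scores in the 0-1 range;
# fail the build if performance dips below the threshold.
score=$(jq '.categories.performance.score' report.json)
if (( $(echo "$score < 0.9" | bc -l) )); then
  echo "Performance score $score is below 0.9"
  exit 1
fi
```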
I'm aware of some changes to the CI process already being discussed, so I wanted to make sure those are taken into account here.
I should add I am happy to help with any effort towards this if it is desired. 👍
Definitely interested in this. Not sure right now what the best approach is, as adding this to TravisCI sounds like it'd slow things down a lot, which we don't want.
Another thing to note is that Lighthouse scores aren't always the same given the same site. Maybe we'd have to keep track of a running average for scores? But it'd be great to know about PRs that reduce performance scores across all the example sites.
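One way to smooth out that run-to-run variance could be to audit the same URL a few times and average the scores. A minimal sketch (again assuming Lighthouse v3's JSON shape and `jq`):

```sh
# Run the audit a few times against the same served site...
for i in 1 2 3; do
  lighthouse http://localhost:9000 \
    --chrome-flags="--headless" --quiet \
    --output=json --output-path="run-$i.json"
done

# ...then average the performance scores across the runs.
jq -s 'map(.categories.performance.score) | add / length' run-*.json
```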
Good points @KyleAMathews / @m-allanson!
I'm going to review #6534 and make sure I am all caught up on the current build process. I haven't worked with a monorepo before, so I figure I'll need to familiarize myself with the technical details on my end as well.
Thanks for supporting this as an idea at least, and I'm looking forward to bringing more info back to this thread!
I could help with this if you like. It's best to run it in a Docker container as a separate Travis stage. The scores might vary, but you could test for >0.8, and we can also assert on individual audits so we don't look only at the overall scores — see the sketch below.
Note that Lighthouse startup and a full run might take 2–5 minutes on Travis inside a container.
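For illustration, the container approach might look roughly like this in a dedicated stage (the image name is a placeholder for any image bundling Chrome plus the Lighthouse CLI, and `--network=host` lets the container reach the site served on the Travis host):

```sh
# Dedicated CI stage: audit the already-served site from a container.
# "some-lighthouse-image" is a placeholder, not a real published image.
docker run --rm --cap-add=SYS_ADMIN --network=host \
  -v "$PWD/reports:/reports" \
  some-lighthouse-image \
  lighthouse http://localhost:9000 \
    --chrome-flags="--headless --no-sandbox" \
    --output=json --output-path=/reports/report.json

# Gate on the overall score (>0.8 as suggested above)...
jq -e '.categories.performance.score > 0.8' reports/report.json

# ...and on individual audits too, e.g. time-to-interactive.
jq -e '.audits["interactive"].score > 0.8' reports/report.json
```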
Hey @wardpeet, that sounds good! Admittedly I have let this one slide and probably won't have time to pick it up again until mid-October.
Feel free to give it a go! 👍
@m-allanson
> Another thing to note is that Lighthouse scores aren't always the same given the same site. Maybe we'd have to keep track of a running average for scores?
That's a good point. I suppose it would have to target a couple of the key starter repos, and/or maybe a standalone Gatsby repo that adopts a "kitchen sink" approach: one that really exercises the core with a handful of common plugins, so scores stay healthy and consistent across a representative sample.
> But it'd be great to know about PRs that reduce performance scores across all the example sites.
This task could be seen as a "canary in the coal mine" step in an overall testing plan for the Gatsby core. Always building a few sites and having their scores out in the open, as it were, can provide verification at any given time that Gatsby is indeed _blazing fast_ ™️ 🔥
Old issues will be closed after 30 days of inactivity. This issue has been quiet for 20 days and is being marked as stale. Reply here or add the label "not stale" to keep this issue open!
This issue is being closed due to inactivity. Is this a mistake? Please re-open this issue or create a new issue.