
Access to the complete Lighthouse metrics

Open dschmidtadv opened this issue 3 years ago • 9 comments

💡 For general support requests and bug reports, please go to checklyhq.com/support

Is your feature request related to a problem? Please describe. We are not able to access Lighthouse report details from the API when executing browser checks.

Describe the solution you'd like We would like to be able to access Lighthouse report details so we can fail tests when performance degrades.

Describe alternatives you've considered Some data is available through the performance.getEntriesByName() method; we are using this as a workaround.
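For reference, the workaround looks roughly like this (a sketch; `checkBudget` and the budget value are hypothetical names for illustration, not part of any Checkly API):

```javascript
// Inside a browser check, page.evaluate() can pull Navigation Timing
// data out of the page:
//
//   const [nav] = await page.evaluate(() =>
//     performance.getEntriesByName(location.href).map((e) => e.toJSON()));
//
// The check can then fail on a simple performance budget:
function checkBudget(navEntry, maxDurationMs) {
  if (navEntry.duration > maxDurationMs) {
    throw new Error(
      `Page load took ${navEntry.duration}ms, budget is ${maxDurationMs}ms`
    );
  }
}

// Example with a mocked navigation entry:
checkBudget({ duration: 1200 }, 3000); // within budget, does not throw
```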

dschmidtadv avatar Sep 29 '20 13:09 dschmidtadv

@dschmidtadv thanks for reporting this. We will have a look at whether we can enable this. Our main concern is how we can make this a nice experience for the user. Any extra insight on your use case would be very valuable.

tnolet avatar Sep 30 '20 09:09 tnolet

This would be a really valuable addition (I've noticed it being unavailable from the API recently), but I guess the issue is how verbose the JSON response might be if it included that information.

You're able to console.log accessibility info at the moment that is seemingly unavailable via the API otherwise, e.g.

// `page` is the Puppeteer page object available in a browser check
const snapshot = await page.accessibility.snapshot();
console.log(JSON.stringify(snapshot, null, 2));

coderkind avatar Dec 28 '20 11:12 coderkind

Ah man I'd love this.

Though one option is just surfacing the information for us to use in our Puppeteer scripts. A first-party solution that integrates with dashboards/screenshots etc. would be incredible.

E.g. the status pages feature v2 could include rolling averages for first paint etc. Pew pew.
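A rolling average like the one suggested above could be as simple as the following sketch (nothing Checkly-specific; `rollingAverage` is a made-up helper name):

```javascript
// Hypothetical sketch: average the last `windowSize` first-paint
// samples, e.g. for a status-page widget.
function rollingAverage(samples, windowSize) {
  const window = samples.slice(-windowSize);
  if (window.length === 0) return 0;
  return window.reduce((sum, v) => sum + v, 0) / window.length;
}

rollingAverage([100, 200, 300, 400], 2); // averages the last two samples: 350
```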

StanLindsey avatar Feb 28 '21 22:02 StanLindsey

@StanLindsey @coderkind I hear you and we are looking into this! Our biggest concern regarding offering this service is the problem of "variability". See https://developers.google.com/web/tools/lighthouse/variability

This means we have to first find an infrastructure solution that gives dependable, stable metrics without making it ridiculously expensive: this is a problem with many services that run Lighthouse in Lambda or equivalent FaaS infrastructure.

tnolet avatar Mar 02 '21 09:03 tnolet

This means we have to first find an infrastructure solution that gives dependable, stable metrics without making it ridiculously expensive

@tnolet does the infrastructure need to be more dependable and stable than the one that currently allows you to use Puppeteer/Playwright to take screenshots? I appreciate there's variability between running Lighthouse tests (even run off a local machine).

Regarding cost: is there a top level of functionality from Lighthouse you might expose (kinda like how you're just allowing Chromium in Playwright right now)? I see options in the npm docs to limit certain checks, e.g.

--only-audits
--only-categories

coderkind avatar Apr 09 '21 01:04 coderkind

This means we have to first find an infrastructure solution that gives dependable, stable metrics without making it ridiculously expensive

@tnolet does the infrastructure need to be more dependable and stable than the one that currently allows you to use Puppeteer/Playwright to take screenshots? I appreciate there's variability between running Lighthouse tests (even run off a local machine).

Regarding cost: is there a top level of functionality from Lighthouse you might expose (kinda like how you're just allowing Chromium in Playwright right now)? I see options in the npm docs to limit certain checks, e.g.

--only-audits
--only-categories

@coderkind those are great suggestions and we are considering all options. The workloads are pretty different though, because of the strong emphasis on performance vs. the strong emphasis on functionality we have right now.

tnolet avatar Apr 12 '21 09:04 tnolet

@dschmidtadv @coderkind @StanLindsey we are taking more and more steps in the direction of supporting performance metrics. I'm sure you will already be somewhat satisfied with some features we are rolling out soon, but I would love to get your thoughts on "next steps" and how we can do better here. Would it be cool if I contacted you for a short chat about this?

tnolet avatar Sep 17 '21 18:09 tnolet

@tnolet do you have any updates on the status of this work? I can see on https://www.checklyhq.com/docs/browser-checks/tracing-web-vitals/ that, for example, TTI (time to interactive) is missing. Potentially, TBT lacks context without a TTI measurement also.

Additionally, CLS is listed as one of the 5 metrics offered, but it is also listed under a section describing what cannot be measured. Let me know if I missed something here.

ZainVirani avatar Feb 25 '22 20:02 ZainVirani

@ZainVirani

  1. we don't have the full Lighthouse tests available because we aren't currently set up to reliably and consistently get the full range of results. Lighthouse is very CPU and memory intensive and not recommended to run on the typical infrastructure we use right now.

  2. we only measure TBT — which is an indicator for TTI — because TTI will always require user interaction, something we cannot 100% rely on being part of your scripts. This is the reason TTI is more useful in a RUM situation (like Vercel provides) where actual users are interacting with your page. For a synthetic solution like ours, TBT is the more dependable metric.

  3. The section on CLS just addresses the fact that in some cases we cannot detect CLS. This is just due to the nature of CLS being a measure over time. However, in the vast majority of cases we can detect CLS. https://www.checklyhq.com/docs/browser-checks/tracing-web-vitals/#why-are-some-web-vitals-not-reported
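To illustrate why CLS is a measure over time: it accumulates layout-shift entries for as long as the page is observed, so a shift after measurement stops is simply never counted. A sketch of the standard pattern (field names follow the Layout Instability API; the observer wiring is shown as a comment):

```javascript
// Sum layout-shift entries into a CLS value, skipping shifts that
// happen right after user input (per the Layout Instability API).
function cumulativeLayoutShift(entries) {
  return entries
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}

// In the page, the entries would come from a PerformanceObserver:
//   new PerformanceObserver((list) => {
//     cls = cumulativeLayoutShift(list.getEntries());
//   }).observe({ type: 'layout-shift', buffered: true });

cumulativeLayoutShift([
  { value: 0.05, hadRecentInput: false },
  { value: 0.2, hadRecentInput: true }, // ignored: caused by user input
  { value: 0.1, hadRecentInput: false },
]); // ≈ 0.15
```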

Hope this helps!

tnolet avatar Mar 03 '22 12:03 tnolet