
Lighthouse Punishes Good Asset Preloading

Open iamakulov opened this issue 6 months ago • 16 comments

Imagine you have a page with fast real-world FCP and LCP. A page like this one:

Image

It doesn’t have any render-blocking resources. Its only contentful element above the fold is a text block (which uses system fonts). Its real-world loading speed metrics (FCP and LCP) are around 1.0-1.1s.

Now, run this page through Lighthouse:

Image

Suddenly:

  • the FCP of the page is yellow or red
  • the LCP of the page is much higher than the FCP, even though the FCP and the LCP are triggered by the same element

Can you guess why?

Simulated Throttling

Lighthouse (and, therefore, PageSpeed Insights), by default, uses simulated throttling. The idea behind it is simple: instead of actually throttling the network while loading the page (which is slow), let’s load the page on a fast connection → look at the network graph and real FCP/LCP → simulate how the network graph would behave on a slow connection → derive slow FCP/LCP from that graph.

The challenge? That last step is far from perfect. For example:

  • To simulate slow LCP, Lighthouse looks at all requests that happened before real LCP – and assumes that LCP requires all of them:

    Image

    This causes a bunch of issues.

    • Preloaded a font? Well, now simulated LCP will be delayed by that font load time, even if the LCP element is actually an image.
    • Fetched a tiny, non-blocking script that just so happened to load before real LCP? Too bad, now that script will delay simulated LCP as well.
  • To simulate slow FCP, Lighthouse does the same thing. (The code path is literally the same!) The only difference is that for FCP, nodes with a low fetch priority are ignored. This means offscreen images and non-blocking scripts won’t delay FCP (yay!), but @font-face requests (meh) or modulepreloads (eh?) still will.
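
To make the effect concrete, here’s a rough sketch of the behavior described above (purely illustrative; the names and shapes are not Lighthouse’s actual internals):

  // Illustrative only: not Lighthouse's real code or types.
  interface ObservedRequest {
    url: string;
    endTime: number; // ms, observed on the fast, unthrottled connection
    fetchPriority: 'Low' | 'Medium' | 'High';
  }

  // Simulated LCP treats every request observed before the real LCP as a dependency,
  // whether or not the paint actually needed it.
  function requestsPulledIntoLcpGraph(requests: ObservedRequest[], observedLcpMs: number) {
    return requests.filter(r => r.endTime < observedLcpMs);
  }

  // Simulated FCP does the same, except low-priority nodes are dropped.
  // Preloaded fonts and modulepreloads have a High priority, so they stay.
  function requestsPulledIntoFcpGraph(requests: ObservedRequest[], observedFcpMs: number) {
    return requests
      .filter(r => r.endTime < observedFcpMs)
      .filter(r => r.fetchPriority !== 'Low');
  }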

This Punishes (Good) Preloading

(and, in general, any kind of good early-loaded assets)

Framer is a no-code website platform that deeply cares about performance. On Framer sites, we try to help the browser load critical resources ASAP. To do this, we:

  • emit <link rel="modulepreload">s for all JS modules that the current page needs
  • inline the page CSS, including @font-faces, straight into the HTML

These optimizations directly improve real-world performance, making the site visible and interactive sooner. These optimizations also dramatically worsen the Lighthouse score.

Demo

Above, you saw a loading trace of a demo page (URL). For this trace, lighthouse -A computes the following FCP and LCP:

Image

Now, here’s the same page, modified to not have any <link rel="modulepreload"> elements (URL):

Image

We removed <link rel="modulepreload">s, so:

  • the real-world LCP stays roughly the same (200-300 ms), because <link rel="modulepreload">s don’t affect it
  • the website hydrates much later (at ~2000 ms instead of ~500 ms), making the real user experience worse

However, lighthouse -A now simulates a much better LCP:

Image

This is Bad

This is bad because it creates bad incentives:

  • Developers are forced to pick between doing deep research (and convincing stakeholders that Lighthouse isn’t accurate) – or making the site slower for real users just to get the score higher
  • Companies (web platforms like Framer, agencies, etc.) are forced to pick between shipping fast sites – or retaining the business of customers who look at PageSpeed Insights scores

This is Serious

In Nov 2024, Google Amsterdam hosted WebPerfDays Unconference, an informal discussion event for Googlers, GDEs, and web perf specialists. At the event, the mismatch between Lighthouse and Core Web Vitals was (to my memory) one of the most-discussed points.

See also other people being struck by this issue: WebPerf Slack one, two, https://github.com/GoogleChrome/lighthouse/issues/11460

Solutions

There are easy solutions, and there are hard ones.

  • Easy solution 1: Fine-tune the simulation algo.

    • Ignore modulepreloads when computing FCP and LCP
    • Ignore fonts when computing FCP and LCP if 1) the FCP/LCP element is not text, or 2) the FCP/LCP element is text that uses a system font, or 3) @font-face uses font-display: swap [or similar]

    At least some of those changes would have to be upstreamed to @paulirish/trace_engine which is not on GitHub.

    This will reduce the punishment that Lighthouse gives to early requests.

  • Easy solution 2: For simulated FCP and LCP, pick min(simulated FCP value, simulated LCP value) when 1) their real-world values are the same, and 2) they were triggered by the same element.

    This will avoid an artificial mismatch between FCP and LCP when they are actually the same in the real world.
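
A rough sketch of what Easy solution 2 could look like (illustrative only, not the actual Lantern API):

  // Illustrative: clamp both simulated values to their minimum when the
  // observed FCP and LCP came from the same element in the real trace.
  interface SimulatedMetrics {
    fcp: number; // ms
    lcp: number; // ms
  }

  function clampSimulatedMetrics(
    simulated: SimulatedMetrics,
    observedFcpMs: number,
    observedLcpMs: number,
    sameElement: boolean,
  ): SimulatedMetrics {
    if (observedFcpMs !== observedLcpMs || !sameElement) return simulated;
    const clamped = Math.min(simulated.fcp, simulated.lcp);
    return {fcp: clamped, lcp: clamped};
  }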

I would be happy to contribute these changes if you’re open to accepting them.

The harder solution would be to get smarter about figuring out what assets are actually render-blocking. We might have to look at 1) where a script is positioned in the document, 2) whether a script applies an anti-flicker effect, etc. This is harder and much less defined, but perhaps this issue could be a start of a discussion.

iamakulov avatar Jun 12 '25 08:06 iamakulov

Edit: Seems like lighthouse 12.7 fixed the issue 👀


Yes! We use Angular with SSR (server-side rendering), incremental hydration, and event replay. This means customers see a static DOM after ~400 ms that is 100% visually complete and 90% functional (most interactive elements are real hyperlinks that work without Angular). Thanks to incremental hydration and event replay, Angular then picks up the entire DOM and replays any events that happened beforehand for the remaining 10%. E.g. if a customer clicks the burger menu and Angular bootstraps one second later, the burger menu will open right that second.
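
For reference, a minimal sketch of this kind of setup (API names assume Angular 19’s hydration features and may differ in other versions; incremental hydration also relies on @defer hydrate triggers in the templates):

  // Illustrative bootstrap; assumes Angular 19+ hydration APIs.
  import {
    bootstrapApplication,
    provideClientHydration,
    withEventReplay,
    withIncrementalHydration,
  } from '@angular/platform-browser';
  import {AppComponent} from './app/app.component'; // hypothetical app entry point

  bootstrapApplication(AppComponent, {
    providers: [
      // SSR hydration + replay of events captured before Angular bootstraps
      provideClientHydration(withEventReplay(), withIncrementalHydration()),
    ],
  }).catch(err => console.error(err));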

But due to the behavior you described, Lighthouse treats our static HTML from SSR as if it wasn't there and treats our page as if it were blank until all scripts have finished loading.

Maybe it was just an oversight, as when you run the lighthouse CLI you see these two errors specifically about FCP and LCP:

  LH:lh:computed:TimingSummary:error Error: FCP All Frames not implemented in lantern
  LH:lh:computed:TimingSummary:error     at FirstContentfulPaintAllFrames.computeSimulatedMetric (file:///opt/homebrew/lib/node_modules/lighthouse/core/computed/metrics/first-contentful-paint-all-frames.js:16:11)
  LH:lh:computed:TimingSummary:error     at FirstContentfulPaintAllFrames.compute_ (file:///opt/homebrew/lib/node_modules/lighthouse/core/computed/metrics/metric.js:90:21) +1ms
  LH:lh:computed:TimingSummary:error Error: LCP All Frames not implemented in lantern
  LH:lh:computed:TimingSummary:error     at LargestContentfulPaintAllFrames.computeSimulatedMetric (file:///opt/homebrew/lib/node_modules/lighthouse/core/computed/metrics/largest-contentful-paint-all-frames.js:21:11)
  LH:lh:computed:TimingSummary:error     at LargestContentfulPaintAllFrames.compute_ (file:///opt/homebrew/lib/node_modules/lighthouse/core/computed/metrics/metric.js:90:21) +0ms

from https://github.com/GoogleChrome/lighthouse/blob/main/core/computed/metrics/first-contentful-paint-all-frames.js#L14-L17 and https://github.com/GoogleChrome/lighthouse/blob/main/core/computed/metrics/largest-contentful-paint-all-frames.js#L20-L22

sod avatar Jun 24 '25 12:06 sod

Edit: Seems like lighthouse 12.7 fixed the issue 👀

Glad that fixed your issue! But I’m guessing that means it had a different root cause. What I described above still stands, and none of the algos responsible for simulating FCP/LCP have changed.


In the meantime, I talked to @tunetheweb (ty!!) and learned a bit more about contributing to the Lighthouse repo.

I want to reiterate: I’m happy to work on the proposed ideas (see “Solutions” ↑) myself, and I’m happy to dedicate a significant chunk of my work time (20%?) to this. This issue has been a huge pet peeve of mine, and I’d love to see Lighthouse thrive while also giving more accurate test scores 🖤

But with changes that big, the Lighthouse team will likely need to trial them for a while before rolling them out to everyone. So before I embark on this quest, I’d love to hear whether you folks are interested in experimenting with changes to the core FCP/LCP algos. (Because if you don’t have capacity to support that right now – which is valid! – then my efforts will be in vain!)

cc maybe @jackfranklin because we talked about this very issue on WebPerfDays Unconference in Amsterdam last year 🤞

iamakulov avatar Jul 11 '25 10:07 iamakulov

Hi Ivan! Really appreciate the deep investigation and thorough report! We're lucky to have such an invested community member and contributor.

You're right. And I really like your solutions.

I'd be happy to move ahead with your suggestions for tweaking the LCP simulation algo. Those adjustments are entirely reasonable. 👍

I also like your "LCP clamp when the observed values are the same" proposal. The same-DOM-node condition is a good, safe call. (Nice.) Yeah, we can definitely do that. 👍

We would definitely love that and appreciate your time. We can trial things out to make sure they don't raise any red flags, but.. as of right now, I feel pretty optimistic.

Below, some added background, context, and guidance…

Simulation and graphs tldr

For LCP (and our other metrics) we need two graphs. The graphs are built of nodes which represent either CPU tasks or Network requests, though.. it's mostly network. Why two? One is a best-case scenario, the other a worst-case. We then simulate() each graph and get an estimate (in ms) from that. We "blend" those two estimates, though in this case it ends up just being an average of the two. FWIW, I'm open to tweaking that blend if your future self is interested.
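
In very rough pseudocode, with hypothetical names:

  // Illustrative sketch of the two-graph estimate; not the real lantern interfaces.
  type Graph = unknown; // nodes are CPU tasks or network requests
  interface Simulator {
    simulate(graph: Graph): {timeInMs: number};
  }

  function estimateMetric(optimistic: Graph, pessimistic: Graph, simulator: Simulator): number {
    const best = simulator.simulate(optimistic).timeInMs;    // best-case scenario
    const worst = simulator.simulate(pessimistic).timeInMs;  // worst-case scenario
    // "Blend" the two estimates; for LCP this currently amounts to a plain average.
    return (best + worst) / 2;
  }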

Related, there's https://github.com/GoogleChrome/lighthouse/issues/15737 which explored similar things and called out that our optimistic graph is including more network requests than it should. (Basically same as what you were saying.)

Code locations

FYI, a year ago we moved lantern (the simulation engine) from the LH repo into the DevTools repo.

It sounds like you've already peeked at the code but.. yeah the relevant files are

@paulirish/trace_engine is built from the DevTools repo.

Quick hack-mode

There's a bit of complexity ahead so.. the easiest way to develop would probably be to check out the LH repo, get it built/running, and then just make edits to the files within node_modules/@paulirish/trace_engine .. and I can help land them.

Or.. below is the full flow with DevTools changes -> trace_engine -> LH.

DevTools contribution.

https://source.chromium.org/chromium/chromium/src/+/main:third_party/devtools-frontend/src/docs/contributing/README.md and https://source.chromium.org/chromium/chromium/src/+/main:third_party/devtools-frontend/src/docs/get_the_code.md cover how to check out that repo. As it's Chromium, the contribution process doesn't use github, but gerrit and depot_tools and stuff. It's not terrible but.. it's definitely friction. :/

The trace_engine is built from a living branch of devtools, which lives here. Every few weeks we merge devtools' origin/main into it. (It's not ideal that it's a branch, but.. anyway). Per this you can run scripts/trace/prep-trace-engine-package.sh and it'll build the files for the NPM package. Then, you could npm link that new folder so that a local LH build can use it.

So in this world, you have a devtools branch like trace-engine-lib-plus-lcptweaks, make your edits, run the .sh, and then your local LH will use your updated changes. Once done, your CL to devtools would be from an only-lcptweaks kind of branch. Apologies for the verbosity and/or annoyingness here. :)


Also, I just sent you a message so you can easily msg me if/when the above throws you any curveballs.

Hope this helps, and thanks again!

paulirish avatar Jul 11 '25 20:07 paulirish

This is incredibly helpful, tysm @paulirish! Great to hear you’re interested in exploring those. I’ll start with this:

I also like your "LCP clamp when the observed values are the same" proposal. The same-DOM-node condition is a good, safe call. (Nice.) Yeah, we can definitely do that. 👍

as it seems to be the simplest optimization. I’ll try to follow the full DevTools flow to reduce the friction for you folks, but I’ll reach out if it’s too complicated and I need your help landing this.

Full disclosure: I’m on a holiday for the next two weeks, but it is my plan to pick this up as soon as I’m back.

iamakulov avatar Jul 11 '25 20:07 iamakulov

Hey @iamakulov thanks so much for your suggestions and pushing on this; feel free to ping me if you have any issues with the DevTools repository or working on it. Happy to have a quick meeting or video call to talk you through anything. The docs are pretty thorough so they should help you out, and none of it is Googler-specific.

jackfranklin avatar Jul 23 '25 07:07 jackfranklin

Quick status update: I’m back from the holiday, and I’m (still) working on this:

I also like your "LCP clamp when the observed values are the same" proposal. The same-DOM-node condition is a good, safe call. (Nice.) Yeah, we can definitely do that. 👍

The actual code change is small, but I’m taking extra time to make sure it looks correct (by running it against ~200 sites and checking if score changes make sense). Sadly, I ran out of time this week, so I’m going to continue next week.


Important implementation note: We talked about using “same DOM node” as a condition for clamping. However, that ended up being a) pretty hard to implement, and b) rather unnecessary (?). So, instead, I’ll be PRing a change where the algo simply checks whether FCP and LCP happen within the same paint.

Why both hard to implement and unnecessary? If you look at how Chromium FCP/LCP detection works, you’ll see that:

  • for every paint, Chrome detects whether the paint is the FCP and whether it may be an LCP candidate
  • if the paint is FCP, Chrome adds the firstContentfulPaint trace mark
  • if the paint is an LCP candidate, Chrome determines what images or text rendered within this paint – and adds the largestContentfulPaint::Candidate trace mark

This algo has two consequences:

  1. FCP doesn’t have a clear DOM node associated with it. There’s no nodeId in the FCP trace, unlike with LCP, so we can’t clearly compare whether FCP and LCP got triggered by the same DOM node.

    Screenshots:
    Image
    Image

    This makes clamping hard to implement. And we can ofc do the work to add nodeId for the FCP trace (I was ready to clone Chromium!), but this brings me to the second consequence:

  2. If we associate a nodeId with FCP, it would always be the same node as LCP. Look at the algo again: first, Chromium looks at each paint and determines whether it’s FCP/LCP. Then, it uses ImagePaintTimingDetector and TextPaintTimingDetector to determine what image or text rendered within that paint. And then, it associates that image or text with LCP.

    If we were to implement node detection for FCP, we’d use the same ImagePaintTimingDetector and TextPaintTimingDetector. And for the same paint, they would always return the same nodeIds.

    This makes comparing DOM nodes at the Lighthouse level unnecessary. If, within the same paint, the attributed nodeId is always the same, then we don’t need to compare paint IDs and node IDs. We can simply compare paint IDs only.

iamakulov avatar Aug 04 '25 09:08 iamakulov

So, instead, I’ll be PRing a change where the algo is simply checking whether FCP and LCP happen within the same paint.

Ah! good call. Yup everything that you said is accurate and we should be fine.

Though we don't have a 'paint id' and finding the parent paint event probably is not worth it.

The exact timestamp (ts) is passed around so if FCP & LCP are the same, luckily the ts is exactly the same. So we can use that. We could even say if they're within 5ms of each other, we call it the same. (I don't see why they would be but.. easy enough)
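
Something like this (illustrative; trace ts values are in microseconds, so 5 ms is 5000 µs):

  // Illustrative: treat FCP and the LCP candidate as the same paint when their
  // trace timestamps match within a small tolerance (in microseconds).
  const SAME_PAINT_TOLERANCE_US = 5_000;

  function isSamePaint(fcpTs: number, lcpCandidateTs: number): boolean {
    return Math.abs(fcpTs - lcpCandidateTs) <= SAME_PAINT_TOLERANCE_US;
  }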

paulirish avatar Aug 28 '25 17:08 paulirish

Quick update here: I’ve been trying to get back to this, but the 20% thing is clearly not working out for me (I’ve been getting pulled aside by internal priorities). So I’m going to dedicate a couple of proper weeks to this, ~starting next week or the week after~ UPDATE Oct 2: still wrapping up the work project that’s preceding this one, ETA soon. Thank you for your patience :)

iamakulov avatar Sep 12 '25 22:09 iamakulov

Alright, after wrapping up the project that took three months (= three months longer than expected :D), I’m finally free. So here comes the first PR! https://github.com/GoogleChrome/lighthouse/pull/16782

Would appreciate your review! I’ll kick off the second PR (“Ignore modulepreloads when computing FCP and LCP”), in the meantime :)

iamakulov avatar Nov 08 '25 00:11 iamakulov

2. If we associate a nodeId with FCP, it would always be the same node as LCP. Look at the algo again: first, Chromium looks at each paint and determines whether it’s FCP/LCP. Then, it uses ImagePaintTimingDetector and TextPaintTimingDetector to determine what image or text rendered within that paint. And then, it associates that image or text with LCP. If we were to implement node detection for FCP, we’d use the same ImagePaintTimingDetector and TextPaintTimingDetector. And for the same paint, they would always return the same nodeIds.

I feel like I'm getting tripped up on this point. Having the same timestamp seems necessary but not sufficient, because many nodes can have very different critical paths but still be presented at the same time (since all the paint timing trace events set their ts to the paint time).

For example, a page with text included inline in the html document as the typical FCP but an LCP image requiring a fetch. Depending on the machine, the connection, and the rest of the page (e.g. render-blocking resources, if the LCP image is identifiable by the preload scanner, etc), the LCP image might be ready to paint in the same frame as the text. On a slower device and connection, though, the LCP image might not be ready by the time the browser can make that first contentful paint.

Am I missing something, though?

I do agree it will be difficult to impossible to annotate the FCP trace event with the node that was painted since flagging contentful paints is done in so many places, and they all only record the timestamp for use in PaintTiming::MarkPaintTimingInternal because that's all that's ever been needed. There are some other layout and painting trace events with somewhat extensive node info recorded, but it would likely be a lot of work and still not enough to establish equivalence. The effort to only evaluate nodes for timing/tracing on frames means this aliasing might be a fundamental problem for simulated throttling regardless, and that improvements will have to come entirely from the simulation side.

brendankenny avatar Nov 11 '25 00:11 brendankenny

@brendankenny I responded in https://github.com/GoogleChrome/lighthouse/pull/16782#issuecomment-3521028754! (Moved it there because I’ll be posting other qs unrelated to that PR here now.)

iamakulov avatar Nov 12 '25 09:11 iamakulov

I’ll be posting other qs unrelated to that PR here now

speaking of which! :D

I’m currently investigating the next algo change: ignoring modulepreloads when computing FCP and LCP. modulepreloads are nasty because they have a fetch priority of High and a resourceType of Script; this makes the FCP simulation treat them as render-blocking.

I noticed we already have some logic for regular preloads lying around, but it wasn’t extended when modulepreloads were added. Unfortunately, this flag is set inside Chromium internals, so I’ll need to drill modulepreloads all the way back from there, too.

Thus, could you sanity-check if this plan makes sense? 🙏

  • Drilling:
    • Add a new isLinkModulePreload field alongside isLinkPreload in the traces, and drill its value from preload_request.cc
    • Consume this value in devtools_frontend, and drill it around everywhere isLinkPreload is available
  • Actual algo changes:
    • In hasRenderBlockingPriority(), return false for preloads/modulepreloads
      • This feels safe-ish: those requests are definitely not render-blocking, although there’s probably some reason why they weren’t included in this check before?
    • In the optimistic LCP algo, exclude (module)preloads, or perhaps even all scripts if the image was discovered by the parser and not JS-inserted
      • This is a BIG change, so I’d love to make sure it passes the smell test. On the surface, it feels reasonable: on most sites, images that are parser-discovered should be visible without JS; and on the rare occasion that the image is delayed by a white curtain (A/B test or whatevs), the (unchanged) pessimistic graph should account for it

iamakulov avatar Nov 12 '25 10:11 iamakulov

I haven't been able to take a look in depth here, sorry, but I'm wondering how much work can be reduced by using the built-in renderBlocking information provided by Chrome now (#2065). @tunetheweb has been looking at Chrome's render blocking signal and preloads, so may have ideas on how suitable it is?

@connorjclark @paulirish looks like renderBlocking hasn't been plumbed through to NetworkRequest (and that might be difficult if it's only coming through in traces events, not the protocol), but it is in the lantern graph at networkNode.rawRequest.args.data.renderBlocking, so is presumably ok to use in the lantern metrics?

brendankenny avatar Nov 20 '25 17:11 brendankenny

@tunetheweb has been looking at Chrome's render blocking signal and preloads, so may have ideas on how suitable it is?

I just fixed a bug where something like this:

<link rel=preload href=styles.css as=style>
<link rel=stylesheet href=styles.css>

Would not upgrade styles.css to render-blocking (since the initial request, being a preload, is not render-blocking). As of Chrome 144 this will correctly be marked as render-blocking.
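
Roughly speaking, the fixed behavior is the following (an illustrative sketch with made-up names, not the actual Chromium or trace engine code):

  // Illustrative: a request first issued as a non-render-blocking preload gets
  // upgraded to render-blocking once a render-blocking consumer (here, a
  // stylesheet <link>) reuses the same resource.
  interface RequestInfo {
    url: string;
    initiatedAsPreload: boolean;
    renderBlocking: boolean;
  }

  function upgradePreloadedStylesheets(requests: RequestInfo[], stylesheetUrls: Set<string>): void {
    for (const request of requests) {
      if (request.initiatedAsPreload && stylesheetUrls.has(request.url)) {
        request.renderBlocking = true;
      }
    }
  }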

@jackfranklin made a similar fix to the trace engine.

We should be careful not to introduce the same bug here, which it sounds like you might do with this change:

In hasRenderBlockingPriority(), return false for preloads/modulepreloads

tunetheweb avatar Nov 20 '25 18:11 tunetheweb

@jackfranklin made a similar fix to the trace engine.

ah nice, then networkNode.rawRequest will get that fix, since it's the SyntheticNetworkRequest trace event

brendankenny avatar Nov 20 '25 18:11 brendankenny

how much work can be reduced by using the built-in renderBlocking information provided by Chrome now

Oh this is great, thank you! <3 Yea let’s use that.

I’m out this week, so I’ll send the PR hopefully next week. Here’s the updated plan (could you sanity-check the second point when you have a moment? 🙏):

  • In hasRenderBlockingPriority(), return false ~for preloads/modulepreloads~ for all requests with renderBlocking: false

    • This feels safe-ish: those requests are definitely not render-blocking, although there’s probably some reason why they weren’t included in this check before?
  • In the optimistic LCP algo, exclude ~(module)preloads, or perhaps even all scripts~ all requests with renderBlocking: false if the image was discovered by the parser and not JS-inserted

    • This is a BIG change, so I’d love to make sure it passes the smell test. On the surface, it feels reasonable: on most sites, images that are parser-discovered should be visible without JS; and on the rare occasion that the image is delayed by a white curtain (A/B test or whatevs), the (unchanged) pessimistic graph should account for it
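
In sketch form (illustrative names, not the real lantern code), the two changes would look roughly like this:

  // Illustrative sketch of the proposed changes, assuming a node shape that
  // exposes Chrome's built-in renderBlocking signal.
  interface NetworkNode {
    renderBlocking: boolean;
  }

  // 1) Never treat requests Chrome marks as non-render-blocking as render-blocking.
  function hasRenderBlockingPriority(node: NetworkNode): boolean {
    if (node.renderBlocking === false) return false;
    // ...keep the existing priority/resource-type heuristics for everything else...
    return true;
  }

  // 2) In the optimistic LCP graph, drop non-render-blocking requests when the
  //    LCP image was discovered by the parser (i.e. not JS-inserted).
  function keepInOptimisticLcpGraph(node: NetworkNode, lcpImageIsParserDiscovered: boolean): boolean {
    return !(lcpImageIsParserDiscovered && node.renderBlocking === false);
  }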

iamakulov avatar Nov 28 '25 01:11 iamakulov

ah nice, then networkNode.rawRequest will get that fix, since it's the SyntheticNetworkRequest trace event

Qq: how good/how close to production-ready is INTERNAL_LANTERN_USE_TRACE (https://github.com/GoogleChrome/lighthouse/pull/16026)?

I’m noticing that unlike some other audits, FCP/LCP audits build their network graphs from the Chrome DevTools Protocol log, not from the trace. Unlike the trace, the CDP log does not include the renderBlocking attribute. We can ofc drill it in the protocol, but the trace already exposes it, and we’ll need to re-implement the same fixes Lighthouse did earlier, so I’m not sure this is wise.

Flipping the flag to true will allow us to use renderBlocking with much less effort.

iamakulov avatar Dec 15 '25 11:12 iamakulov

In hasRenderBlockingPriority(), return false ~for preloads/modulepreloads~ for all requests with renderBlocking: false

SGTM

As you saw with INTERNAL_LANTERN_USE_TRACE, this won't yet impact the metric graphs created by Lighthouse – mostly just improved results in the Performance panel insights. I filed #16805 to track the work to make the switch to the trace in Lighthouse's Lantern usages. It's an unknown amount of work - last time I tried to chip away at it, it was pretty slow progress. I think it's close to being an accurate replacement, but the number of differences in tests (even if minor) makes it a hard call to make.

In the optimistic LCP algo, exclude ~(module)preloads, or perhaps even all scripts~ all requests with renderBlocking: false if the image was discovered by the parser and not JS-inserted

One thing I haven't seen mentioned during this discussion is the issue of bandwidth. One reason that the LCP graph included so much was to account for that. For an extreme example, if dozens of non-critical requests are preloaded, that would saturate the network and prevent the actual LCP image request from being delivered as quickly as when there were no preloaded requests. By removing those requests from the LCP graph, we lose that signal and begin to under report the value for a given throttling setting. I suspect this would be closer to the real value than what we're doing today, but it's something we should consider.
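
To put rough numbers on it (back-of-the-envelope, assuming Lighthouse's default simulated throughput of ~1.6 Mbps, i.e. roughly 200 KB/s; counts and sizes below are hypothetical):

  // Back-of-the-envelope: how much simulated bandwidth a pile of preloads eats.
  const throughputKBps = 200;   // ~1.6 Mbps, Lighthouse's default simulated throughput
  const preloadCount = 12;      // hypothetical
  const avgPreloadSizeKB = 40;  // hypothetical

  const contentionSeconds = (preloadCount * avgPreloadSizeKB) / throughputKBps;
  console.log(contentionSeconds); // 2.4 s of bandwidth the LCP request may have to share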


An extreme alternative solution would be to drop simulated throttling, and use applied throttling. We've long opted not to do that for the reason that developers want / deserve fast results. But it's always been at the cost of accuracy and trust in the results. I wonder if, given the age of AI we find ourselves in (which has proven developers don't mind waiting longer / spending more compute for accurate results), perhaps we should drop simulated throttling. Or at least, change the defaults to use applied/devtools throttling.

Do you have an opinion on that approach?

Although, that might be totally untenable for PSI (wrt cost). cc @paulirish

connorjclark avatar Dec 15 '25 22:12 connorjclark

For an extreme example, if dozens of non-critical requests are preloaded, that would saturate the network and prevent the actual LCP image request from being delivered as quickly as when there were no preloaded requests.

Good one. I guess this is especially the case with PSI which probably has really high bandwidth.

I wonder if maybe maybe maybe this is exactly what optimistic and pessimistic graphs are made for? My thinking is: with an optimistic graph, we can assume we’re on a “happy” path: the site is like 80% of other sites and doesn’t do any weird tricks. In the pessimistic graph, we can account for the remaining 20% of cases: sites with white curtains, sites that preload too much and saturate the bandwidth, etc.

Today, the optimistic graph for LCP is a slightly less pessimistic version of the pessimistic graph. If we make the optimistic graph assume the site is the typical “80%” site – one where only render-blocking nodes block rendering – we’ll make the score closer to reality while still accounting for weirdness via the other graph.

(Happy to do some work to gather some data here if you think we need it, lmk.)


Do you have an opinion on that approach?

Interesting! I don’t think I can make a call on removing simulated throttling. (Although ofc I’m always down for better accuracy.)

My first thought is: could this be a UX problem rather than a technical one? We have a less-precise-but-fast algorithm, and a more-precise-but-slow one. We chose to optimize for speed of getting results, but most people aren’t even aware of the tradeoff.

I could imagine some alternate UXes that are more explicit about this tradeoff:

  • Show the “fast” score, then replace with the “slow” score. For example, in PSI, we could do something like this. In DevTools, instead of showing a loader until the test is complete, we can show the simulated results first – and then run an applied throttling test right after.

  • In DevTools, put a “☑︎ Faster but less accurate test” checkbox. (And uncheck it by default?) Pretty much nobody outside of the web perf community knows what this ↓ select means, and the checkbox will make the tradeoff much more explicit.

    Image

iamakulov avatar Dec 15 '25 23:12 iamakulov

In hasRenderBlockingPriority(), return false for all requests with renderBlocking: false

In the meantime, I prepared a Chromium change for this one: https://chromium-review.googlesource.com/c/devtools/devtools-frontend/+/7262577

I’ll take a look at https://github.com/GoogleChrome/lighthouse/issues/16805 tomorrow (🤞), to see if I can resolve it in a day. Waiting for your feedback to proceed further with https://github.com/GoogleChrome/lighthouse/pull/16782 or the optimistic graph change we discussed above.

iamakulov avatar Dec 16 '25 00:12 iamakulov