Land experimental responsiveness metric in user flows
The new responsiveness metric has mostly settled, so we want to support it in situations where there can be (real or automated) user input.
The foreseen steps:

- [x] add the high-percentile responsiveness metric for timespans - #13917
- [x] add a metric N/A state (that may not be `notApplicable`) in the report perf category renderer for when there was no user input in the trace - #13981
- [x] add an interactions audit for debugging responsiveness, e.g. here are the keypresses with the slowest response, here's the breakdown of what was blocking the response (time in script, layout, paint, etc) - #13982
- [x] add `relevantAudits`? Would be similar to TBT's `relevantAudits` - #13982
- [ ] add to navigations if interactions are present in the trace, but as a diagnostic audit, not a metric
- [ ] timespans don't yet support simulated throttling, but navigations do. Investigate whether simple CPU multipliers are enough to get simulated numbers or if more sophisticated lantern simulations are needed
- [ ] run `yarn update:flow-sample-json --rebaseline-artifacts Trace` to update the sample flow trace for the new EventTiming trace events (see https://github.com/GoogleChrome/lighthouse/pull/13979#discussion_r868407873)
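The first step above, a high-percentile responsiveness metric, could be sketched roughly as follows. This is only an illustration of the idea (take a high percentile of interaction durations rather than the worst case, so one outlier doesn't dominate); the function name, event shape, and the 0.98 default are all hypothetical, not the actual #13917 implementation.

```javascript
/**
 * Sketch of a high-percentile responsiveness calculation.
 * @param {Array<{duration: number}>} interactionEvents EventTiming-style
 *   entries with an input-to-paint duration in milliseconds (hypothetical shape).
 * @param {number} percentile e.g. 0.98 for a p98-style metric.
 * @return {number|null} null when there was no user input (the N/A case).
 */
function highPercentileResponsiveness(interactionEvents, percentile = 0.98) {
  // No user input in the trace: the metric has no meaningful value.
  if (interactionEvents.length === 0) return null;

  // Sort durations ascending and pick the value at the requested percentile.
  const durations = interactionEvents.map(e => e.duration).sort((a, b) => a - b);
  const index = Math.min(
    durations.length - 1,
    Math.floor(durations.length * percentile)
  );
  return durations[index];
}
```

Returning `null` for the no-input case is what then raises the rendering question discussed below: the report needs some way to display a metric that has no value.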
> add a metric N/A state (that's not `notApplicable`) in the report perf category renderer for when there was no user input in the trace

What's wrong with using `notApplicable`? Seems like the perfect state for the audit to be in if there was no input.
I am extremely hyped about it! Just today I mentioned it to @adamraine.
What you should definitely consider is adding this metric to performance budgets.
Would that be possible?
> Would that be possible?

It should be possible as part of #13898.
Bright future! 🌞
> add a metric N/A state (that's not `notApplicable`) in the report perf category renderer for when there was no user input in the trace

> What's wrong with using `notApplicable`? Seems like the perfect state for the audit to be in if there was no input.
Yeah, I overstated the decision there. Semantically there's definitely an argument for it. The biggest drawback is that it closes the door on using `notApplicable` the way we do for audits everywhere else in the report (removing them from the report), which would be yet another special case for metrics/the perf category. It still might be the best choice versus signaling within the metric details or whatever.
> the biggest drawback is it closes the door to using n/a for what we do for audits everywhere else in the report (remove it from the report)
We don't remove metrics if they are N/A, though. This is what an N/A metric looks like right now:
It seems like we are missing a proper N/A state for metrics, and this would be a good opportunity to fill that void.
> We don't remove metrics if they are N/A though
Yes, but only because there's never been a `notApplicable` metric before and the display code is `valueEl.textContent = audit.result.displayValue || ''`. There was not a plan here :)
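In other words, today an N/A metric just falls through to an empty string. An explicit N/A branch in the metric value renderer could look something like this; the helper name and the `'--'` placeholder are hypothetical, this is just a sketch of the special-casing being discussed, not the actual renderer code.

```javascript
/**
 * Sketch: compute the text to show for a metric's value in the report.
 * @param {{result: {scoreDisplayMode?: string, displayValue?: string}}} audit
 *   An LHR-style audit wrapper (shape assumed for illustration).
 * @return {string}
 */
function metricDisplayValue(audit) {
  if (audit.result.scoreDisplayMode === 'notApplicable') {
    // Show a visible placeholder instead of an empty string, so metrics
    // get a real N/A state rather than being removed like other
    // notApplicable audits.
    return '--';
  }
  // Current behavior: fall back to an empty string when there's no value.
  return audit.result.displayValue || '';
}
```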
I think the argument is convincing, I just want to make sure to explore the options because it does remove basically the only way we have for an audit to remove itself from the report. If that's something we ever want to do for metrics, we'll either have to change this again (which may not be possible) or add yet another mechanism to do so.