lighthouse
Lighthouse flagging HLS video chunks as text content
FAQ
- [X] Yes, my issue is not about variability or throttling.
- [X] Yes, my issue is not about a specific accessibility audit (file with axe-core instead).
URL
https://hls-js.netlify.app/demo/?src=https://hls.enniscdn.net/ded3f0fb-0721-49bb-b761-2123d7be9ed7/master.m3u8&demoConfig=eyJlbmFibGVTdHJlYW1pbmciOnRydWUsImF1dG9SZWNvdmVyRXJyb3IiOnRydWUsInN0b3BPblN0YWxsIjpmYWxzZSwiZHVtcGZNUDQiOmZhbHNlLCJsZXZlbENhcHBpbmciOi0xLCJsaW1pdE1ldHJpY3MiOi0xfQ==
What happened?
When running Lighthouse on a page with an embedded HLS video, the video chunks are incorrectly flagged as "text content". This classification seems wrong, especially as the chunks come back with `Content-Type: application/octet-stream`.
They're also flagged as "enormous payloads". Are there any docs pointing to how best to chunk up video for playback? HLS/DASH seems to be best practice at the moment.
What did you expect?
For Lighthouse not to interpret video chunks as text content.
What have you tried?
No response
How were you running Lighthouse?
Chrome DevTools
Lighthouse Version
9.6.1
Chrome Version
Chromium: 103.0.5060.114 (arm64)
Node Version
No response
OS
macOS
Relevant log output
No response
Would need to add application/octet-stream here:
https://github.com/GoogleChrome/lighthouse/blob/c31a92a2224b8d491f22a95278f1b7a4f87acdc3/core/gather/gatherers/dobetterweb/response-compression.js#L30
Although, we may want to rethink this audit. Why only text? Sure, many binary files are already compressed, but the one in the provided example clearly isn't - the page could be serving 25% of the bytes it currently is for these resources.
Maybe the ResponseGatherer should only skip the expensive "will it gzip" check for formats that are known to always be compressed, and this audit should be renamed to not single out only text.
BTW, it seems like these files should be served with the content type `video/mp2t`. In discussing how this audit should behave for binary formats, we think it should ignore image and video resources, because those should instead be compressed using domain/filetype-specific methods, not transport-level compression. Given that `application/octet-stream` is super generic, I think we want to suggest transport-level compression for it.
Did you mean to serve what seems to be entirely uncompressed video chunks? Wondering if there is a reason to do that, or if it is just a server/video export misconfiguration.
some TODOs:
- look into reframing the "text compression audit" and purposefully include all resources that don't have content-specific compression available
- look into video compression; how could we have a video audit analogous to our `optimized-images` audit
- look into an audit for "Hmmm... looks like content type doesn't match the URL file extension..."
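The last TODO could start as a simple lookup: flag a response when its extension implies a well-known type but the served `Content-Type` disagrees. A rough sketch (the table and function name are illustrative, not a proposed Lighthouse API):

```javascript
// Expected Content-Types by URL extension; extend as needed.
const EXPECTED_TYPES = {
  '.ts': ['video/mp2t'],
  '.m3u8': ['application/vnd.apple.mpegurl', 'audio/mpegurl'],
  '.mp4': ['video/mp4'],
};

// Return true when the extension is known and the served type disagrees.
function looksMismatched(url, contentType) {
  const ext = url.slice(url.lastIndexOf('.')).toLowerCase();
  const expected = EXPECTED_TYPES[ext];
  if (!expected) return false; // unknown extension: nothing to flag
  return !expected.includes(contentType.toLowerCase());
}
```

For the chunks in this report, `looksMismatched('…/data00.ts', 'application/octet-stream')` would fire, matching the `video/mp2t` suggestion above.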
@jonlambert You're right that Lighthouse is saying these are text-based resources and they are not.
Really, the assets here fell into a super edge case because they're lossless, uncompressed video assets. And it turns out gzip would actually provide a win.
We'll consider reframing how we do this, but... it would only really apply for this sort of test asset.
Thanks for your replies @connorjclark & @paulirish.
Yep, agreed – this bug was mainly meant to target the misclassification of the content as text. We'll definitely be adding gzip compression to appease the warning.
However – whilst the chunks are missing gzip compression, it's not lossless uncompressed video. It's MP4 content with 3 different levels of h264 compression applied, chunked into separate MPEG-TS files per the HLS spec. This is a specific recreation of an issue occurring on several of our sites in the wild.
Output of `ffprobe` inspecting one of the chunks (`data00.ts`):

```
Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 24 fps, 24 tbr, 90k tbn
Stream #0:1[0x101](und): Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 69 kb/s
```
Given this is a pretty common standard, how do I avoid the "enormous payload" warning and score penalty?
Is it standard to encode video files like this when they're meant to be served over the web? We don't have experience with video-specific web performance tuning here, so we'll have to investigate, but at first glance it seems that the level of compression applied is just too low to be a reasonable thing to serve over a web connection.
> how do I avoid the "enormous payload" warning and score penalty?
FYI, the only thing that impacts scores is the actual metrics. This is just an educated guess on how to move that needle. We could delete the audit entirely and the score wouldn't change.