Samsung Tizen - Performance degradation when initiating long streams
What version of Hls.js are you using?
1.5.20
What browser (including version) are you using?
Chromium v56/63 (Tizen fork)
What OS (including version) are you using?
Tizen
Test stream
No response
Configuration
```js
{
  backBufferLength: 30,
  enableWorker: true,
  highBufferWatchdogPeriod: 5,
  liveSyncDurationCount: 5,
  maxBufferLength: 30,
  maxMaxBufferLength: 60,
  nudgeMaxRetry: 10,
  workerPath: 'path/to/hls.worker.js'
}
```
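For context, a minimal sketch of how a config like the one above gets applied. The stream URL and element selector are placeholders; `Hls`, `loadSource`, and `attachMedia` are the standard hls.js API:

```javascript
// The configuration from this report, passed to the Hls constructor.
const hlsConfig = {
  backBufferLength: 30,
  enableWorker: true,
  highBufferWatchdogPeriod: 5,
  liveSyncDurationCount: 5,
  maxBufferLength: 30,
  maxMaxBufferLength: 60,
  nudgeMaxRetry: 10,
  workerPath: 'path/to/hls.worker.js',
};

// Browser-side usage (sketch):
// const hls = new Hls(hlsConfig);
// hls.loadSource('https://example.com/stream.m3u8'); // placeholder URL
// hls.attachMedia(document.querySelector('video'));
```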
Additional player setup steps
NOTE: I originally posted this issue on the video-dev/hlsjs Slack channel. I am copying a summary here for posterity, in the event others come across this problem.
Original Slack message (edited for clarity)
We maintain an application that runs on various devices, including Samsung Tizen TVs and other smart TVs/STBs. After recently updating the HLS.js player from 1.1.5 to 1.5.x, I'm noticing a very long stream startup delay on 2018/19 devices (which use Chrome v56/63 respectively): the application locks up entirely during this time, indicating an apparent CPU bottleneck.
The issue seems to be exacerbated when playing longer streams (~3-4 hours).
I've been able to narrow this issue down to the player getting bottlenecked when:
- Parsing a variant playlist and subtitle (VTT) manifest concurrently, or
- Parsing a variant playlist, then immediately ramping to a different variant and parsing a new playlist
It seems that the m3u8-parser is blocking the main thread whenever the player needs to parse multiple variants in sequence/concurrently.
Thread summary
- The issue occurs on all versions from `1.4.0` and higher.
- I've been able to pinpoint the bottleneck to the `Fragment` object creation in the level-parsing block. The loop continues to execute, but becomes progressively slower with each iteration. This behavior, combined with the conditions above, seems to indicate a possible V8 optimization problem on these older devices.
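To make the "progressively slower" observation concrete, here is a minimal, self-contained sketch (not hls.js code; the object shape is hypothetical) of how per-batch creation latency can be logged the way the custom output described in this report was:

```javascript
// Stand-in for a Fragment-like object; fields are illustrative only.
function makeFragmentLike(i) {
  return { sn: i, start: i * 6, duration: 6, relurl: `seg${i}.ts` };
}

// Time each batch of object creations separately. On a healthy engine the
// timings stay roughly flat; on the affected Tizen devices each batch was
// observed to take progressively longer.
function timeBatches(batches, perBatch) {
  const timings = [];
  for (let b = 0; b < batches; b++) {
    const t0 = Date.now();
    const frags = [];
    for (let i = 0; i < perBatch; i++) {
      frags.push(makeFragmentLike(b * perBatch + i));
    }
    timings.push(Date.now() - t0);
  }
  return timings;
}

console.log(timeBatches(10, 10000));
```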
Findings/solution
This issue is caused by a combination of how Vite bundles our client application and older versions of the V8 engine, which exhibit sub-optimal memory management and function de-optimization when running the ESM version of this library. The solution in this case was to alias the UMD/ES5 version of the library in our Vite config:
```js
resolve: {
  alias: {
    'hls.js': 'node_modules/hls.js/dist/hls.js'
  }
}
```
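A fuller sketch of where that alias lives, assuming a standard `vite.config.js` (the exact project layout varies; `path.resolve` here resolves against the working directory):

```javascript
// vite.config.js — sketch only; adapt paths to your project.
import { defineConfig } from 'vite';
import path from 'node:path';

export default defineConfig({
  resolve: {
    alias: {
      // Point bare "hls.js" imports at the prebuilt ES5/UMD bundle
      // instead of the default ESM entry.
      'hls.js': path.resolve('node_modules/hls.js/dist/hls.js'),
    },
  },
});
```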
Checklist
- [x] The issue observed is not already reported by searching on Github under https://github.com/video-dev/hls.js/issues
- [x] The issue occurs in the stable client (latest release) on https://hlsjs.video-dev.org/demo and not just on my page
- [x] The issue occurs in the latest client (main branch) on https://hlsjs-dev.video-dev.org/demo and not just on my page
- [x] The stream has correct Access-Control-Allow-Origin headers (CORS)
- [x] There are no network errors such as 404s in the browser console when trying to play the stream
Steps to reproduce
- Start playing a stream with a length of over ~3 hours. Streams that have sidecar subtitle files (i.e. VTT) are more likely to exhibit this issue.
Expected behaviour
Stream starts and plays within ~10 seconds
What actually happened?
Observe a startup delay along with an unresponsive UI, indicating possible thread starvation. Custom log output shows that fragment-creation latency gets progressively worse over time (note the subtitle loop takes ~48 seconds to complete).
Console output
n/a
Chrome media internals output
n/a
Hi @nebutch,
We should include migration notes for 1.4 and up explaining that, with the introduction of the ESM library, you may need to pin your import to the ES5-optimized UMD export, or use "loose" presets in your own build's ES5 transpiler settings, to achieve performance similar to previous versions.
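For reference, a sketch of what the "loose" transpiler settings mentioned above might look like with Babel. The `targets` string is an assumption matching the Chrome v56 engine on the affected devices:

```javascript
// babel.config.js — sketch only. "loose" mode makes @babel/preset-env emit
// simpler ES5 output for classes and iterators, avoiding the spec-compliant
// helpers that older V8 versions tend to de-optimize.
module.exports = {
  presets: [
    [
      '@babel/preset-env',
      {
        targets: 'Chrome >= 56', // assumed target for 2018 Tizen devices
        loose: true,
      },
    ],
  ],
};
```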
ba7b067 is an experiment that replaces the Fragment, Part, and BaseSegment classes with ES5 objects. This is a draft that could be optimized further by inlining the object property definitions. It also removes the type checking provided by getters and setters. Shortcomings aside, I'd be curious to know if this improves performance in your use case.
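To illustrate the trade-off that experiment makes (the names below are illustrative, not the actual hls.js implementation): a class with accessors gives you a place to validate values, while a plain object is cheaper to construct on older engines but loses that guard:

```javascript
// Class with an accessor: constructing this on old V8 involves
// Object.defineProperty-style machinery for the getter.
class FragmentClass {
  constructor(sn) {
    this._sn = sn;
  }
  get sn() {
    return this._sn;
  }
}

// Plain ES5-style object: cheaper to create, but no accessor-based
// validation or type checking.
function createFragmentObject(sn) {
  return { sn };
}

const a = new FragmentClass(1);
const b = createFragmentObject(1);
console.log(a.sn === b.sn); // → true
```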
Hey @robwalch
I've been messing around with changing our build system to compile the ESM version of the library in loose mode. Unfortunately, that's easier said than done with Vite and the Vite Legacy Plugin without causing other problems (I suspect adding Babel presets to the Rollup options may conflict with the legacy plugin). I'm going to find time to dig a little deeper, but I'm pretty confident this issue could be mitigated by switching over to Babel as you mentioned.
I'll check out your commit and run a test (though it might be next week before I can get to it). I do also have a fork with an update to the MIGRATING.md file with a note about this - I'll submit a PR for it soon.
Thanks
I had an itch to try out the change you pushed up yesterday (https://github.com/video-dev/hls.js/commit/ba7b067c24946c2815d05693050adb274efd18bd)
Anecdotally, it seems to be helping quite a bit so far. Video starts up quickly with no apparent bottlenecking (or a very negligible amount, though IMO there's bound to be a little bit on these older devices). I can do some more in-depth testing next week.
Hi. I am using version 1.5.20 and building with webpack 5. I had the same issue and switched from loading the ESM build of hls.js to the UMD build.
However, even with the UMD build, performance degradation still occurs on the Smart TV's older browser engine when playing long live streams. If I bypass hls.js and set the HLS URL directly on the video element, there is no performance issue.
I wonder if there are any further improvements that could be made.
```js
{
  debug: false,
  lowLatencyMode: false,
  startFragPrefetch: false,
  liveDurationInfinity: true,
  liveSyncDurationCount: 3,
  backBufferLength: 10,
  maxBufferLength: 30,
  maxMaxBufferLength: 60,
  maxBufferHole: 0.5,
  highBufferWatchdogPeriod: 3,
  manifestLoadPolicy: {
    default: {
      maxTimeToFirstByteMs: 30000,
      maxLoadTimeMs: 30000,
      timeoutRetry: {
        maxNumRetry: 5,
        retryDelayMs: 2000,
        maxRetryDelayMs: 64000,
      },
      errorRetry: {
        maxNumRetry: 5,
        retryDelayMs: 2000,
        maxRetryDelayMs: 64000,
      },
    },
  },
  playlistLoadPolicy: {
    default: {
      maxLoadTimeMs: 15000,
      maxTimeToFirstByteMs: 15000,
      timeoutRetry: {
        maxNumRetry: 6,
        retryDelayMs: 1000,
        maxRetryDelayMs: 64000,
      },
      errorRetry: {
        maxNumRetry: 6,
        retryDelayMs: 1000,
        maxRetryDelayMs: 64000,
      },
    },
  },
  fragLoadPolicy: {
    default: {
      maxLoadTimeMs: 20000,
      maxTimeToFirstByteMs: 20000,
      timeoutRetry: {
        maxNumRetry: 6,
        retryDelayMs: 1000,
        maxRetryDelayMs: 64000,
      },
      errorRetry: {
        maxNumRetry: 4,
        retryDelayMs: 1000,
        maxRetryDelayMs: 64000,
      },
    },
  },
}
```
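Since the comment above notes that the stream plays fine when the HLS URL is set directly on the video element, one option worth sketching is feature-detecting native HLS support before falling back to hls.js. This uses the standard `canPlayType` API with the HLS MIME type; whether native playback is acceptable depends on your feature requirements:

```javascript
// Returns true when the element reports native HLS support
// (canPlayType returns "maybe" or "probably", never "" in that case).
function supportsNativeHls(video) {
  return video.canPlayType('application/vnd.apple.mpegurl') !== '';
}

// Browser-side usage (sketch; hlsUrl is a placeholder):
// const video = document.querySelector('video');
// if (supportsNativeHls(video)) {
//   video.src = hlsUrl; // let the platform play the stream natively
// } else {
//   // fall back to hls.js
// }
```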