obs-webrtc: Adjust RtcpNackResponder from bitrate
Description
Adjust the RtcpNackResponder cache size based on the bitrate.
Motivation and Context
At higher bitrates the cache is undersized, so we don't have enough NACK history to repair the stream.
How Has This Been Tested?
Tested against Broadcast Box with 1% packet loss.
Types of changes
- Bug fix (non-breaking change which fixes an issue)
Checklist:
- [x] My code has been run through clang-format.
- [x] I have read the contributing document.
- [x] My code is not on the master branch.
- [x] The code has been tested.
- [x] All commit messages are properly formatted and commits squashed where appropriate.
- [x] I have included updates to all appropriate documentation.
Thanks for the review @RytoEX, I made the changes.
@tt2468 @Bleuzen Do you think it would be better to set a 'sensible' higher default then?
In libdatachannel the default is 512. At 8 Mbit/s that is ~1,200 packets a second, so I was hoping to give users ~5 seconds of cache to support people with mixed workloads (a rough sizing sketch follows the list below):
- People viewing in real-time get some packet loss
- The server is also saving the video and will aggressively NACK old video to make sure saved video is perfect
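To make the sizing math above concrete, here is a minimal sketch (not from the PR; the helper name and the assumed average RTP payload size are mine) of how a packet count could be derived from a bitrate and a desired number of seconds of history:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical helper: estimate how many RTP packets cover `seconds` of video
// at `bitrate_bps`, assuming an average RTP payload of `payload_bytes`.
// The real packet rate depends on the encoder and packetizer; the ~1,200
// packets per second at 8 Mbit/s quoted above corresponds to an average
// payload of roughly 800-1,000 bytes.
static size_t estimated_nack_cache_size(uint64_t bitrate_bps, unsigned seconds,
                                        size_t payload_bytes = 1000)
{
	const uint64_t bytes_per_second = bitrate_bps / 8;
	const uint64_t packets_per_second =
		(bytes_per_second + payload_bytes - 1) / payload_bytes;
	return (size_t)(packets_per_second * seconds);
}

// Example: estimated_nack_cache_size(8000000, 5) == 5000 packets,
// roughly ten times the libdatachannel default of 512.
```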
> Do you think it would be better to set a 'sensible' higher default then?
Not sure how this translates to resource usage (memory / bandwidth?) and whether everyone would be happy with it, so I don't have a strong opinion on that currently.
However, I wonder if it would be better to make this user configurable.
The extra memory usage will be the 5 seconds of video. It shouldn't impact bandwidth; NACKs are only sent when packets are lost.
I don't have a strong opinion either. I just realized while debugging lossy networks that our current NACK setup isn't useful.
If we're just trying to determine a buffer size which has to be available but not always filled, then yeah we can probably go pretty high like 8MB without too much worry. If the behavior of the output changes depending on whether the buffer is the required size versus if it's oversized, then we might have to be more careful.
@tt2468 Yep as you described, available but not always filled.
It would be really nice to bump this, especially for users with higher bitrates and packet loss; the current value only gives them ~0.5 seconds of buffer.
Wouldn't it make more sense to adjust the buffer size based on the (configured) output bitrate? Currently OBS outputs are guaranteed to target CBR, so you could query the bitrate and set it appropriately. See the RTMP Windows send loop for example: https://github.com/obsproject/obs-studio/blob/master/plugins/obs-outputs/rtmp-stream.c#L1046-L1067
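A minimal sketch of what querying the configured bitrate could look like (not the PR's code; the helper, the 5-second window, and the assumed average payload size are mine, and the "bitrate" settings key is only meaningful for CBR-like rate control modes):

```cpp
#include <obs.h>
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Hypothetical: size the NACK cache from the output's configured bitrate,
// similar to how the RTMP send loop reads encoder settings.
static size_t nack_cache_size_from_output(obs_output_t *output,
                                          unsigned seconds = 5,
                                          size_t avg_payload_bytes = 1000)
{
	const size_t fallback = 512; /* libdatachannel's default */

	obs_encoder_t *encoder = obs_output_get_video_encoder(output);
	if (!encoder)
		return fallback;

	obs_data_t *settings = obs_encoder_get_settings(encoder);
	/* OBS encoders conventionally store "bitrate" in kbit/s */
	uint64_t kbps = (uint64_t)obs_data_get_int(settings, "bitrate");
	obs_data_release(settings);

	if (kbps == 0)
		return fallback;

	uint64_t packets_per_second = (kbps * 1000 / 8) / avg_payload_bytes;
	return std::max(fallback, (size_t)(packets_per_second * seconds));
}
```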
@derrod either works for me! The original worked that way; @tt2468 and @Bleuzen left comments around the following:
> This only works for CBR and similar modes. Is there any consideration made for if a user uses an encoder in CQP or similar modes? Additionally, I could see it being an issue if an encoder has a "bitrate" value applied to its settings object, but the encoder is not using a mode which actually references the "bitrate" value.
So it felt safer to have a high default value.
> Currently OBS outputs are guaranteed to target CBR
@derrod How exactly does this work? The UI does not enforce CBR.
I found that for WebRTC, using VBR works better anyway, because most encoders introduce big spikes in the data rate on I-frames, especially in low-motion scenes. These spikes very often cause freezes. VBR lowers the bitrate in low-motion scenes, which also makes these spikes smaller, so VBR streams freeze less often and are more stable than CBR streams. I have been using VBR for WebRTC streaming over the last months mainly for this reason (also it makes mobile viewers with limited data plans happier as a nice bonus).
As an idea, we could use the maximum bitrate (if configured, depending on the encoder) for the calculation in VBR mode. Or make this value user configurable, so it also works in case one uses other rate control modes or just has more memory to spare in their system.
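If something rate-control-aware were desired, a rough sketch could branch on the encoder's rate control setting (hypothetical code; the "rate_control" and "max_bitrate" settings keys vary by encoder and are assumptions here):

```cpp
#include <obs.h>
#include <cstdint>
#include <cstring>

// Hypothetical: pick which bitrate figure to base the NACK cache on.
// Returns 0 if no usable bitrate is configured, so the caller can fall back
// to a fixed default cache size.
static uint64_t cache_basis_kbps(obs_data_t *settings)
{
	const char *rc = obs_data_get_string(settings, "rate_control");

	if (rc && strcmp(rc, "CBR") == 0)
		return (uint64_t)obs_data_get_int(settings, "bitrate");

	/* In VBR-like modes, prefer a max bitrate if the encoder exposes one */
	if (obs_data_has_user_value(settings, "max_bitrate"))
		return (uint64_t)obs_data_get_int(settings, "max_bitrate");

	return 0; /* CQP/CRF etc.: no meaningful bitrate configured */
}
```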
I think this could be fixed in two stages. The first just unblocks general usage of this feature with an appropriately sized buffer (say 4,000 packets or 6 megabytes). This is a negligible amount of memory and could be optimized later with dynamic behavior based on bitrate or configuration.
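For stage one, a minimal sketch of what the fixed higher default could look like (the constant, the per-packet size estimate, and the surrounding setup are assumptions; libdatachannel's RtcpNackResponder constructor takes the maximum number of stored packets, defaulting to 512):

```cpp
#include <rtc/rtc.hpp>
#include <cstddef>
#include <memory>

// Hypothetical stage-one change: a fixed, generously sized NACK cache.
// 4,000 packets at ~1.5 kB each is roughly 6 MB of history.
constexpr size_t kNackCachePackets = 4000;

std::shared_ptr<rtc::RtcpNackResponder> make_nack_responder()
{
	return std::make_shared<rtc::RtcpNackResponder>(kNackCachePackets);
}

// Stage two could later replace kNackCachePackets with a value derived from
// the configured bitrate, as sketched earlier in the thread.
```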
To give people confidence that I have fixed this properly, I wrote https://github.com/sean-der/whip-nack-test. This is a WHIP server that simulates packet loss. You can start it with `go run .`, then set your WHIP server to http://localhost:8080/ and use any key.
Without packet loss, OBS will stop responding to NACKs for packets that are 550 sequence numbers in the past. This means that at most 750 kilobytes of history is available for video:

Sending NACK for Packet (15726) which is (550) in the past
With my change you will now see it go past that point:
Pkt with SequenceNumber(35500) correctly RTXed
Sending NACK for Packet (36358) which is (900) in the past
That gives us 6 megabytes of history and a much better experience on lossy networks.