StabilizedCallback should wait for stream callbacks to settle before taking timings
When you first start an audio stream, there can be significant jitter in the callback period. For example, consider a stream which, when running optimally, has a 4ms callback period:
| Callback # | Optimal time | Actual time | Early/late |
|---|---|---|---|
| 1 | 0ms | 0ms | timing model starts |
| 2 | 4ms | 6ms | 2ms late |
| 3 | 8ms | 9ms | 1ms late |
| 4 | 12ms | 13ms | 1ms late |
| 5 | 16ms | 17ms | 1ms late |
The problem with this is that as the stream jitter decreases (in callbacks 3, 4 and 5), the stabilized callback doesn't adjust its timing model and simply assumes that each callback is running late. This negates its effectiveness, since it won't generate the correct amount of load.
A solution to this would be to wait until the stream has stabilized before starting the timing model, say by waiting for 100 callbacks.
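A minimal sketch of that idea (the class, member names and the 100-callback settling constant below are illustrative assumptions, not Oboe's actual StabilizedCallback code): skip the first N callbacks entirely, then anchor the timing model's epoch on the first callback after the stream has settled.

```cpp
#include <cstdint>

// Sketch: defer the timing model until the stream has settled.
// kSettlingCallbacks and all member names are made up for this example.
class SettlingStabilizedCallback {
public:
    // Called once per audio callback with the current time in nanoseconds
    // and the number of frames requested for this burst.
    void onCallback(int64_t nowNanos, int32_t numFrames, int32_t sampleRate) {
        if (mCallbackCount++ < kSettlingCallbacks) {
            // Still settling: do the real audio work, but don't add
            // artificial load or update the timing model yet.
            return;
        }
        if (!mModelStarted) {
            // First callback after settling: anchor the model here.
            mModelStarted = true;
            mEpochNanos = nowNanos;
            mFrameCount = 0;
        }
        // Where the model expects this callback to start.
        int64_t expectedNanos = mEpochNanos +
                (mFrameCount * kNanosPerSecond) / sampleRate;
        int64_t lateNanos = nowNanos - expectedNanos;
        (void) lateNanos;  // would be used to scale the artificial load
        mFrameCount += numFrames;
    }

private:
    static constexpr int32_t kSettlingCallbacks = 100;  // proposed settling period
    static constexpr int64_t kNanosPerSecond = 1'000'000'000LL;
    bool    mModelStarted = false;
    int64_t mEpochNanos = 0;
    int64_t mFrameCount = 0;
    int32_t mCallbackCount = 0;
};
```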
The decision on how much artificial load to add should be based solely on the execution time of the application. It should not be based on the start time of the callback because that is very erratic, particularly when doing sample rate conversion or when using setFramesPerDataCallback().
> The decision on how much artificial load to add should be based solely on the execution time of the application. It should not be based on the start time of the callback because that is very erratic
This doesn't work in practice precisely because the start time of the callback is erratic.
Say you have an application with a constant execution time, and based on that time we generate a constant amount of artificial load inside each callback so that the total execution time is 80% of the callback time (calculated as numFrames / sampleRate). If the callback starts late, the artificial load may push the total execution time past the callback deadline.
By calculating how late the callback started, based on a timing model, we can reduce the artificial load time to avoid missing the deadline.
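As a sketch of that calculation (the 80% target and these helper names are assumptions for illustration, not the real StabilizedCallback API):

```cpp
#include <algorithm>
#include <cstdint>

// Time budget for one callback, derived from the burst size.
int64_t callbackPeriodNanos(int32_t numFrames, int32_t sampleRate) {
    return (static_cast<int64_t>(numFrames) * 1'000'000'000LL) / sampleRate;
}

// How long to spin generating artificial load, given how long the real
// audio work took and how late the callback started according to the model.
int64_t artificialLoadNanos(int64_t periodNanos,
                            int64_t realWorkNanos,
                            int64_t lateNanos) {
    // Target: real + artificial work fills 80% of the callback period.
    int64_t targetNanos = (periodNanos * 8) / 10;
    // A late start eats into the budget; subtracting it keeps the total
    // under the deadline instead of pushing past it.
    int64_t budgetNanos = targetNanos - realWorkNanos
                          - std::max<int64_t>(0, lateNanos);
    return std::max<int64_t>(0, budgetNanos);
}
```

If the callback starts on time, lateNanos is zero and the full 80% target is used; if it starts late, the artificial load shrinks by the same amount.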
> particularly when doing sample rate conversion or when using setFramesPerDataCallback().
I'd like to understand how these scenarios affect the callback timings. My guess is it shouldn't matter as long as the following statements hold true:
- The audio data generated inside the callback is actually being played, not discarded. If it were discarded, this would break the timing model, which assumes all data generated is actually played (`mFrameCount` is used to keep track of that, code). A rough illustration of the resulting drift is sketched below the list.
- A callback cannot occur until the previous callback has finished.
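A toy calculation of that first assumption (the numbers and the 1-in-100 discard rate are hypothetical, just to show the drift, and none of this is taken from Oboe):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const int32_t sampleRate = 48000;
    const int32_t numFrames  = 192;        // 4 ms bursts
    int64_t framesCounted  = 0;            // what the timing model accumulates
    int64_t framesConsumed = 0;            // what the device actually plays

    for (int callback = 0; callback < 1000; callback++) {
        framesCounted += numFrames;
        // Suppose 1 in 100 bursts were silently discarded downstream.
        if (callback % 100 != 0) {
            framesConsumed += numFrames;
        }
    }
    // The model's notion of elapsed audio time vs. what was really played.
    double modelMs = 1000.0 * framesCounted / sampleRate;
    double realMs  = 1000.0 * framesConsumed / sampleRate;
    printf("model: %.0f ms, device: %.0f ms, drift: %.0f ms\n",
           modelMs, realMs, modelMs - realMs);
    return 0;
}
```

Once the model's frame count no longer matches what the device consumed, its predicted callback start times drift away from the real ones, so the lateness it computes becomes meaningless.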
StabilizedCallback is no longer used in samples. Marking as closed.