Sample latency variance
When a channel is added to a mixer stream, its first byte gets processed the next time data is retrieved from the mixer. This effectively causes the start of a channel to snap to the start of the next period, which introduces 0-1 periods of variance to the latency. E.g. with a 10ms audio engine periodicity, it takes anywhere between 0 and 10ms (depending on cycle alignment) until the channel's data begins to flow towards the output.
For tracks this isn't a problem, because the audio clock is based on the number of bytes that have been processed, so any initial delay affecting the track will by extension also affect the clock, and thus everything that's timed relative to that clock. Shifting everything is the same as shifting nothing, so this is fine.
For samples, however, this means ±5ms of uncertainty relative to an ongoing track, which is a problem. For instance, fully keysounded beatmaps tend to rely on seamless transitions between adjacent samples, so that any discontinuity between them reflects only the temporal imprecision of the player's inputs. But right now, even auto-play has audible gaps/overlaps, unless the first sample's duration happens to be a perfect integer multiple of the audio update interval.
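To put a rough number on that last point (plain arithmetic for one simple case, no real API involved):

```csharp
// Worked example of the gap between two back-to-back keysounds under period snapping.
double periodMs = 10;          // audio engine periodicity
double firstSampleMs = 1234;   // duration of the first keysound, assumed to start on a period boundary

// The second keysound is triggered the moment the first one ends, but its audio
// only starts flowing at the next period boundary, leaving an audible gap:
double gapMs = (periodMs - firstSampleMs % periodMs) % periodMs;  // 6ms here; 0 only for exact multiples of 10ms
```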
I still need to check if BASS has a mechanism for scheduling samples with an offset for intra-period precision, but if not, then a possible workaround would be to prepend samples with 1 period worth of silence, and then skip (1 period - time since last update) of that silence when registering the sample for playback. Alternatively we could try to configure a stupidly fast update rate in order to reduce the variance to irrelevance.
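Roughly what the first idea would look like (a sketch only, assuming ManagedBass; actually prepending the silence to the sample data is elided, and `timeSinceLastUpdateMs` is something the engine would have to track itself, not an existing API):

```csharp
using System;
using ManagedBass;
using ManagedBass.Mix;

public static class SampleScheduler
{
    /// <summary>
    /// Plays <paramref name="channel"/> (whose data is assumed to already have one
    /// period of silence prepended) on <paramref name="mixer"/>, skipping just enough
    /// of that silence to cancel out the snap to the next period boundary.
    /// </summary>
    public static void PlayWithIntraPeriodOffset(int mixer, int channel, double periodMs, double timeSinceLastUpdateMs)
    {
        // The channel only starts producing output (periodMs - timeSinceLastUpdateMs) ms
        // from now, so skip exactly that much of the prepended silence.
        double skipMs = Math.Max(0, periodMs - timeSinceLastUpdateMs);

        long skipBytes = Bass.ChannelSeconds2Bytes(channel, skipMs / 1000.0);
        Bass.ChannelSetPosition(channel, skipBytes);

        BassMix.MixerAddChannel(mixer, channel, BassFlags.Default);
    }
}
```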
As far as I know, every BASS update you run will recalculate the buffer, so if we're calling it every audio thread frame, that should reduce this down to ~1ms, I think?
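Something like this is what I have in mind (assuming ManagedBass; `AudioThreadFrame` is just a stand-in for wherever the audio thread ticks, not a real API):

```csharp
using ManagedBass;

public static class ManualBassUpdates
{
    public static void Setup()
    {
        // BASS_CONFIG_UPDATEPERIOD = 0 disables BASS's automatic update thread,
        // so that mixing only happens when we ask for it.
        Bass.Configure(Configuration.UpdatePeriod, 0);
    }

    // Stand-in for the engine's audio thread tick.
    public static void AudioThreadFrame()
    {
        // BASS_Update: render the next ~1ms of data into the playback buffers.
        Bass.Update(1);
    }
}
```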
I will test this today after work and report back.
I have dug into this now. Unfortunately it doesn't work, for two reasons:
- The current configuration uses auto-updating, which causes all manual updates to fail. The auto-update interval is configurable via `BASS_CONFIG_UPDATEPERIOD`, but it clamps the value to 5-100ms, which isn't fast enough for our purposes.
- Even if that limit didn't exist (we could bypass it by disabling auto-updating and doing all updates ourselves), it wouldn't matter, because "updating" only applies to BASS-level buffers, which we are skipping entirely (via `StreamSystem.NoBuffer`). Population of the device buffer (which on Windows is actually the WASAPI buffer shared with the sound server) is governed by `BASS_CONFIG_DEV_PERIOD` instead, which in turn is limited by device capabilities.
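For reference, the clamping is easy to reproduce (ManagedBass, values purely for demonstration):

```csharp
using System;
using ManagedBass;

// Ask for a 1ms auto-update period...
Bass.Configure(Configuration.UpdatePeriod, 1);

// ...and it reads back as 5, i.e. clamped to the 5-100ms range.
Console.WriteLine(Bass.GetConfig(Configuration.UpdatePeriod));
```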
https://github.com/ppy/osu-framework/blob/92f210dd6fb3265528e7621c7673b7996a866863/osu.Framework/Audio/AudioManager.cs#L382-L388
The 1st and 3rd options here^ have no effect because of that. The 2nd option is functional, but limited (at least on Windows) to 30ms for (somewhat unclear) reasons detailed in the ASIO/WASAPI thread back-linked above.
There are several options now:
1. BASS-level buffers + manual updates
This would heavily increase latency, so I wouldn't recommend it.
2. Prepend silence to samples, then skip some of it based on cycle alignment
The workaround mentioned in the OP. I have no idea what the implications would be, but it feels hacky.
3. Access the underlying APIs directly
Minimal example in #6649
This would give us granular control over how the "device" buffer is populated, and would tie in well with adding support for specialized APIs (exclusive mode etc.). I am currently investigating the scope and feasibility of this.
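Very roughly, the shape of it would be something like this (not the example from #6649, just a sketch assuming ManagedBass.Wasapi; the format and buffer/period values are placeholders, and real code would query the device's mix format first):

```csharp
using System;
using ManagedBass;
using ManagedBass.Mix;
using ManagedBass.Wasapi;

public static class DirectWasapiOutput
{
    private static int mixer;

    public static void Start()
    {
        // "No sound" BASS device: BASS only does the decoding/mixing, WASAPI does the output.
        Bass.Init(0);

        // Decode-only mixer: it produces data only when we pull it.
        mixer = BassMix.CreateMixerStream(44100, 2, BassFlags.Decode | BassFlags.Float);

        // BASSWASAPI invokes the callback whenever the device buffer needs more data;
        // the requested buffer/period sizes are placeholders and still subject to device limits.
        // (Exclusive mode would be requested via the init flags instead of Shared.)
        BassWasapi.Init(-1, 44100, 2, WasapiInitFlags.Shared, 0.01f, 0.005f, Process);
        BassWasapi.Start();
    }

    // Pull mixed data straight into the device buffer.
    private static int Process(IntPtr buffer, int length, IntPtr user)
    {
        int got = Bass.ChannelGetData(mixer, buffer, length);
        return got < 0 ? 0 : got;
    }
}
```

The callback is the interesting part: it's where we would get to decide exactly when each sample's data enters the device buffer, instead of waiting on BASS's own update period.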