Source state is AL_STOPPED before reverb echo finishes playing
Hey Chris,
I recently noticed that when adding reverb to a source via an auxiliary slot (either with EAX or standard EFX reverb), querying the source state with alGetSourcei(sourceID, AL_SOURCE_STATE, &state) returns AL_STOPPED before the reverb echo finishes playing.
Is this because once the source itself finishes playing, the auxiliary slot will continue playing the reverb, meaning it's safe to dispose the source?
Yes. When the source becomes stopped, that means it finished mixing to its output buffers (the direct output, and any auxiliary slots it's connected to). There's nothing more for the source to do. The auxiliary slot may (as it does with reverb) have additional delays in its output, but this is separate from the source. Once the source stops, the auxiliary slot no longer needs it to continue processing.
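As a sketch of that disposal pattern (assuming an initialized context with EFX available, and app-side handles `src` for the source and a reverb-loaded auxiliary slot; the names here are illustrative, not part of any library):

```c
#include <AL/al.h>
#include <AL/efx.h>

/* Sketch only: `src` is an app-side source handle; the context and EFX
 * extension are assumed already set up. */
static void reap_finished_source(ALuint src)
{
    ALint state;
    alGetSourcei(src, AL_SOURCE_STATE, &state);
    if (state == AL_STOPPED) {
        /* The source has finished feeding its sends; detach it and let
         * the auxiliary slot keep playing the reverb tail on its own. */
        alSource3i(src, AL_AUXILIARY_SEND_FILTER,
                   AL_EFFECTSLOT_NULL, 0, AL_FILTER_NULL);
        alDeleteSources(1, &src);
        /* Don't delete the slot here -- keep it (or reuse it) while the
         * tail rings out. */
    }
}
```

The key point is that the slot's lifetime is managed separately from the source's: the source can be reaped as soon as it reports AL_STOPPED.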
Great, thank you!
As a follow-up question, I'm working on a 3D audio system that uses raycasting to determine which reverb properties and low-pass filters should be applied to each source. I can reuse an auxiliary slot for multiple sources if they each have similar reverb properties; however, after implementing reverb pan and reflections pan I'm limited to one auxiliary slot per source.
Is it possible to reuse one auxiliary slot for multiple sources - which are each placed in different positions around the player - and then specify a reverb pan 'override' for the source+auxiliary slot pair? I know of AL_STEREO_ANGLES but I believe this will affect the source sound itself, not just the reverb pan.
Generally you model environments around the listener rather than around each source, since the reflections you hear will be relative to the listener instead of the sound source. Environments would be split up according to how different two given regions are (a big room would be a single environment, while a room with an alcove would have one environment for the room and another for the alcove). You can prioritize which environments to use given where the sources are, but the environments themselves should be handled relative to the listener instead of the source.
For instance, if you're in a regular-shaped room and you have a source to your left, the reflections for that sound will reach the listener from all the walls (which are all around you) rather than just the left area near the sound.

If the room then has an open door to a different room (say, a carpeted room connected to a non-carpeted hallway), that other room would have its own environment properties and be pointing in the direction of the doorway, since that's where the listener would hear that environment from. Some of the energy for that sound will reach the other room and reflect back through the doorway, so you'd hear reverb of the sound from the room you're in all around you, as well as reverb from the other room in the direction of the doorway. As you move around, the panning for the other room would change according to the doorway's relative location, so the current room reverb of the sound will continue to come from the walls around you and the other room's reverb of the sound will move with the doorway.

If the sound moves, it would change how much it feeds each environment given how much of its energy reaches each environment (consequently, if the sound moves into that other room, the majority of its reflected energy will then be heard in the other room's reverb through the doorway, instead of the listener's room), while the environments themselves stay as they are if the listener stays still.
Similarly, if you're dynamically updating the environment the listener is in, you would pan the early reflections toward the nearest walls of the environment, since all sounds in the environment have their first reflections come from those walls, as they're the closest to the listener (obviously true physically modeled reflections would be more complex and nuanced, but this is designed for practical real-time processing). The late reverb would be panned toward the center of the environment, since that's where the majority of reflections come from after they've bounced around the environment a few times. The panning vector's length dictates how constrained the reflections are: at 1, all reflections are together in that direction, while at 0 they're spread all around. So the sound sources stay as they are, but the listener hears their first reflections from the direction of the nearest wall.
It's not really practical to give each source a different reverb panning. Technically, the sources do start out positioned with reverb processing, but reverb uses multiple spatialized feedback lines and with enough diffusion it quickly spreads out all over, then the effect panning itself is done at the end. All sources connected to the auxiliary slot are mixed into the effect input buffer for that slot, then the effect processes that mix. It can't separate the individual sources out again to pan them each differently. For each source to be panned independently in the reverb output, they'd each need to be processed independently, basically giving each source its own effect for each environment the listener can hear it in.
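For completeness, a sketch of that last option: giving each source its own effect and slot so its reverb can be panned independently, at the cost of one reverb instance per source. This assumes EFX is set up and the `alGenEffects`/`alGenAuxiliaryEffectSlots` entry points have been loaded (e.g. via alGetProcAddress); the struct and function names are illustrative:

```c
#include <AL/al.h>
#include <AL/efx.h>

/* Sketch: one private reverb per source. */
typedef struct { ALuint slot, effect; } SourceReverb;

static SourceReverb attach_panned_reverb(ALuint source,
                                         const ALfloat late_pan[3])
{
    SourceReverb r;
    alGenAuxiliaryEffectSlots(1, &r.slot);
    alGenEffects(1, &r.effect);
    alEffecti(r.effect, AL_EFFECT_TYPE, AL_EFFECT_EAXREVERB);
    alEffectfv(r.effect, AL_EAXREVERB_LATE_REVERB_PAN, late_pan);
    alAuxiliaryEffectSloti(r.slot, AL_EFFECTSLOT_EFFECT, (ALint)r.effect);
    /* Feed this source (send 0) into its private slot. */
    alSource3i(source, AL_AUXILIARY_SEND_FILTER,
               (ALint)r.slot, 0, AL_FILTER_NULL);
    return r;
}
```

Note this multiplies reverb processing cost by the number of sources, which is exactly why the listener-centric environment approach above is preferred.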