openal-soft
Wasapi audio backend
Any plan to add a WASAPI backend for UWP support?
As far as I understand WASAPI is pretty similar to MMDevAPI. Although there doesn't seem to be much clear information about what the differences are, particularly for C code.
There are these samples, if they're of any help:
https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/AudioCreation
https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/WindowsAudioSession
https://blogs.windows.com/buildingapps/2014/05/15/real-time-audio-in-windows-store-and-windows-phone-apps/
The first link seems to be C# code, which doesn't help with OpenAL Soft in C. The second one looks like some twisted non-standard C++, which won't compile. The third link also seems to be using the same non-conformant C++, with no explanation of how to use WASAPI in real C++ or C.
I remember seeing things like this before, which is what confuses me about what WASAPI actually is. It utilizes similar classes like IAudioClient/IAudioClient2, but none of the code samples I've run across make any sense.
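For reference, here's a minimal sketch of how those pieces usually fit together on desktop Windows: the "MMDevAPI" part is the IMMDeviceEnumerator/IMMDevice device selection, while "WASAPI" proper is the IAudioClient streaming interface activated from the device. This is only an illustration, not OpenAL Soft's actual backend code, and it assumes COM has already been initialized on the calling thread.

```cpp
#include <objbase.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

bool init_default_render_client(IAudioClient **outClient)
{
    IMMDeviceEnumerator *enumerator{};
    HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr,
        CLSCTX_INPROC_SERVER, __uuidof(IMMDeviceEnumerator),
        reinterpret_cast<void**>(&enumerator));
    if(FAILED(hr)) return false;

    // MMDevAPI: pick the default playback endpoint.
    IMMDevice *device{};
    hr = enumerator->GetDefaultAudioEndpoint(eRender, eMultimedia, &device);
    enumerator->Release();
    if(FAILED(hr)) return false;

    // WASAPI: activate the audio client on that endpoint.
    IAudioClient *client{};
    hr = device->Activate(__uuidof(IAudioClient), CLSCTX_INPROC_SERVER,
        nullptr, reinterpret_cast<void**>(&client));
    device->Release();
    if(FAILED(hr)) return false;

    WAVEFORMATEX *mixFormat{};
    hr = client->GetMixFormat(&mixFormat);
    if(FAILED(hr)) { client->Release(); return false; }

    // Shared-mode, event-driven stream using the device's mix format.
    hr = client->Initialize(AUDCLNT_SHAREMODE_SHARED,
        AUDCLNT_STREAMFLAGS_EVENTCALLBACK, 0, 0, mixFormat, nullptr);
    CoTaskMemFree(mixFormat);
    if(FAILED(hr)) { client->Release(); return false; }

    *outClient = client;
    return true;
}
```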
I see. Well, I just like to scavenge docs. Btw, what do you think about object-based audio like Dolby Atmos? Isn't that like a "window to the outside world" for everything OpenAL has been doing internally until now?
Looks like it. It seems to support a set of "static objects" for a channel bed of 7.1.4 (normal 7.1 plus an upper 4-channel quad), 7.1.4.4 (7.1.4 plus a lower 4-channel quad), or 8.1.4.4 (an 8-channel hexagon plus LFE, an upper 4-channel quad, and a lower 4-channel quad) output, which is like an OpenAL source playing a multi-channel buffer. It also provides a number of dynamic objects that can be positioned freely, like an OpenAL source playing a mono buffer.
From what I can tell, it's basically a really simplified 3D audio API for pre-designed audio scenes. The purpose is to make it simple to implement in hardware (like a Dolby receiver) without requiring content producers (e.g. movie or music creators) to completely restructure their workflow, letting them adopt it gradually instead. It could also serve as a backend to a 3D audio API like OpenAL, although without it also handling DSP effects like occlusion/obstruction and reverb in real-time, I don't imagine it'd save much on CPU costs, since those are the things that tend to cost the most (worse, that page mentions "Some previously global effects may now become instanced per dynamic sound object," so it could actually increase CPU usage).
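To put the source analogy above in concrete terms, here's a minimal OpenAL sketch of the "dynamic object" case: a mono buffer on a source that can be positioned freely. It assumes an already-created context and an already-filled mono buffer, and isn't tied to any Atmos API.

```cpp
#include <AL/al.h>

// Play an already-filled mono buffer at a given 3D position, the OpenAL
// equivalent of a freely positionable "dynamic object".
ALuint play_positioned(ALuint monoBuffer, float x, float y, float z)
{
    ALuint source{};
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, static_cast<ALint>(monoBuffer));
    alSource3f(source, AL_POSITION, x, y, z);
    alSourcePlay(source);
    return source; // reposition later with alSource3f(source, AL_POSITION, ...)
}
```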
mini_al has a WASAPI backend with a C interface (https://github.com/dr-soft/mini_al).
The WASAPI stuff looks about the same as what I do (at least, the interfaces and methods used), except that the IMMDevice* interfaces are excluded and something else is used to get the IAudioClient interface. There's an important caveat with UWP:
#error "The UWP build is currently only supported in C++."
since it seems to require inheriting from and using a templated class object.
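For what it's worth, here's a rough sketch of what that path looks like: on UWP the IAudioClient is obtained through ActivateAudioInterfaceAsync instead of MMDevAPI, and its completion handler is most conveniently implemented by inheriting from WRL's templated RuntimeClass (presumably the templated class object referred to above). Using StringFromIID on DEVINTERFACE_AUDIO_RENDER to name the default render endpoint is an assumption based on Microsoft's samples, and error handling and cleanup are trimmed down.

```cpp
#include <initguid.h>
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
#include <wrl.h>

using namespace Microsoft::WRL;

// Completion handler for ActivateAudioInterfaceAsync; RuntimeClass is the
// templated class that has to be inherited from.
class ActivateHandler final
    : public RuntimeClass<RuntimeClassFlags<ClassicCom>, FtmBase,
                          IActivateAudioInterfaceCompletionHandler>
{
public:
    STDMETHOD(ActivateCompleted)(IActivateAudioInterfaceAsyncOperation*) override
    {
        SetEvent(mDone);
        return S_OK;
    }
    HANDLE mDone{CreateEventW(nullptr, TRUE, FALSE, nullptr)};
};

IAudioClient *activate_default_client_uwp()
{
    ComPtr<ActivateHandler> handler{Make<ActivateHandler>()};

    // Assumed: DEVINTERFACE_AUDIO_RENDER selects the default render endpoint.
    LPWSTR devicePath{};
    if(FAILED(StringFromIID(DEVINTERFACE_AUDIO_RENDER, &devicePath)))
        return nullptr;

    IActivateAudioInterfaceAsyncOperation *op{};
    HRESULT hr = ActivateAudioInterfaceAsync(devicePath, __uuidof(IAudioClient),
        nullptr, handler.Get(), &op);
    CoTaskMemFree(devicePath);
    if(FAILED(hr)) return nullptr;

    // Wait for ActivateCompleted (a real backend would avoid blocking here).
    WaitForSingleObject(handler->mDone, INFINITE);

    HRESULT activateHr{E_FAIL};
    IUnknown *unknown{};
    hr = op->GetActivateResult(&activateHr, &unknown);
    op->Release();
    if(FAILED(hr) || FAILED(activateHr)) return nullptr;

    IAudioClient *client{};
    unknown->QueryInterface(__uuidof(IAudioClient),
        reinterpret_cast<void**>(&client));
    unknown->Release();
    return client;
}
```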
At the very least, this gives me some idea for what needs to be done for UWP support.
Btw: You should now be able to target UWP / WASAPI via the SDL2 backend.
Is this still an issue? I see alc/backends/wasapi.*. Should this now allow for targeting UWP?
The WASAPI backend is just the renamed MMDevAPI backend. Given the switch to C++, it should be much more practical to make it work with UWP, though it will need someone to help figure out how to make it work.
The latest commits should allow it to build for UWP targets using the WASAPI backend.