webrtc-android
Using an external audio input buffer
🎯 Goal
The current implementation assumes exclusive control over the microphone. This proposal gives a consumer application that uses WebRTC the option to control the audio source itself, whether that is the microphone or something else. My application needs to record audio from the microphone at all times and only sometimes connect to WebRTC, so it has to handle the microphone data itself: recording it and sending it over WebRTC at the same time. There is a samples-ready callback I could use to get the audio data, but it is only available in the middle of a WebRTC connection.
🛠 Implementation details
The changes are in the JavaAudioDeviceModule and WebRtcAudioRecord classes: when useExternalAudioInputBuffer is enabled, the AudioRecordThread is not started. For this to work, the consumer application has to do two things (shown in the examples below):

1. Set useExternalAudioInputBuffer to true and pass in the external buffer to use when creating the JavaAudioDeviceModule.
2. Notify the module whenever new data is ready in that buffer.
✍️ Examples
Create the module with the external buffer:

```kotlin
audioDeviceModule = JavaAudioDeviceModule.builder(ContextHelper.context)
    .setUseExternalAudioInputBuffer(true)
    .setExternalAudioInputBuffer(byteBuffer)
    .createAudioDeviceModule()
```

Then notify the module whenever a frame has been written into the buffer:

```kotlin
audioDeviceModule.notifyExternalDataIsRecorded(bytesRead, captureTimeNs)
```
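For context, here is a minimal sketch of how a consumer application might drive the proposed API end to end: the app owns its own AudioRecord, writes each frame into the shared direct buffer, can tee the same bytes into its own recorder, and then hands the frame to WebRTC. The setUseExternalAudioInputBuffer, setExternalAudioInputBuffer, and notifyExternalDataIsRecorded calls are the additions proposed in this PR; the frame size, audio source, and the use of System.nanoTime() as the capture timestamp are illustrative assumptions, not part of the proposal.

```kotlin
import android.content.Context
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder
import java.nio.ByteBuffer
import org.webrtc.audio.JavaAudioDeviceModule

// Assumption: 10 ms frames of 48 kHz mono 16-bit PCM, matching WebRTC's pacing.
private const val SAMPLE_RATE = 48_000
private const val BYTES_PER_FRAME = SAMPLE_RATE / 100 * 2 // 960 bytes per 10 ms

fun startExternalCapture(context: Context): Thread {
    // Direct buffer shared with the audio device module via the proposed setter.
    val externalBuffer = ByteBuffer.allocateDirect(BYTES_PER_FRAME)

    val audioDeviceModule = JavaAudioDeviceModule.builder(context)
        .setUseExternalAudioInputBuffer(true)   // proposed API
        .setExternalAudioInputBuffer(externalBuffer) // proposed API
        .createAudioDeviceModule()

    // The app, not WebRTC, owns the microphone (requires RECORD_AUDIO permission).
    val recorder = AudioRecord(
        MediaRecorder.AudioSource.VOICE_COMMUNICATION,
        SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        AudioRecord.getMinBufferSize(
            SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
        ),
    )

    val captureThread = Thread {
        recorder.startRecording()
        while (!Thread.interrupted()) {
            externalBuffer.clear()
            val bytesRead = recorder.read(externalBuffer, BYTES_PER_FRAME)
            if (bytesRead > 0) {
                // The app can copy the same bytes into its own recorder here,
                // then tell the module a frame is ready (proposed API).
                audioDeviceModule.notifyExternalDataIsRecorded(bytesRead, System.nanoTime())
            }
        }
        recorder.stop()
        recorder.release()
    }
    captureThread.start()
    return captureThread
}
```

Because the AudioRecordThread is never started in this mode, the module reads only from the external buffer, so the loop above is the single point where the app decides what audio WebRTC sends.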
Hi @Hector-IoT, sorry for the delayed response. Could you open this PR against the GetStream/webrtc repository instead? This repository mirrors the native code from it. Once you re-create the PR there, we will be happy to review it. Thank you!