Publish audio from stream or file to room
Hello LiveKit SDK Team,
I've been using this SDK for a while, and I'd like to implement a feature that publishes audio from a local audio file (e.g., an .opus, .wav, or .mp3 file) or a stream buffer to a room. Could you provide a solution or example code?
I'm looking forward to your prompt response.
Thank you very much.
Any update on this? Recently, a similar issue for iOS (https://github.com/livekit/client-sdk-swift/issues/721) saw some progress, and I'm hoping the Android SDK will make the same or similar changes.
+1, I'm in a similar situation. I have a PCM stream that I'd like to send to LiveKit, but not the one from the microphone (I'd just have it set to mute). What would be the best way to achieve this? I also see that there's the MixerAudioBufferCallback class, although I'm not sure if it's appropriate for my use case.
I’m facing the same issue. My application continuously records audio and video from the device’s microphone and camera. When a livestream request arrives, we want to push the recorded video frames (YUV) and audio packets (PCM) to LiveKit.
Pushing video frames is already supported, but we need similar support for audio packets. Currently, LiveKit attempts to open the microphone directly, which causes problems in our setup.
Please let me know if there is some way to achieve this functionality.
I urgently need this feature.
Have you found any solution?
Hi,
Currently, we are using the native WebRTC library directly instead of LiveKit. To achieve this, we modified the Google WebRTC source code—primarily the following class:
sdk/android/src/java/org/webrtc/audio/WebRtcAudioRecord.java
For LiveKit integration, a similar approach will be required:
- Modify the WebRtcAudioRecord class to add support for passing custom audio packets.
- Build the modified WebRTC library.
- Include this custom WebRTC library in LiveKit’s source code.
- Modify LiveKit’s source code to integrate the custom WebRTC library.
- Build LiveKit and integrate it into your project.
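As a rough illustration of the first step above, the core idea is to replace the AudioRecord read inside WebRtcAudioRecord with reads from an app-supplied PCM source. The real class is Java and lives at the path mentioned earlier; the Kotlin sketch below only shows the shape of such a modification, and the ExternalPcmSource interface and the patched class name are hypothetical, not actual WebRTC API:

```kotlin
import java.nio.ByteBuffer

// Hypothetical hook: the app supplies PCM instead of the microphone.
interface ExternalPcmSource {
    /** Fill [buffer] with up to [sizeInBytes] bytes of 16-bit PCM; return bytes written. */
    fun read(buffer: ByteBuffer, sizeInBytes: Int): Int
}

// Sketch of the modified capture path; names other than the original
// WebRtcAudioRecord file are illustrative.
class WebRtcAudioRecordPatched(capacityBytes: Int) {
    // Set by the app before the capture thread starts (hypothetical setter).
    @Volatile
    var externalSource: ExternalPcmSource? = null

    private val byteBuffer: ByteBuffer = ByteBuffer.allocateDirect(capacityBytes)

    // Inside the original AudioRecordThread.run() loop, the call to
    // audioRecord.read(byteBuffer, byteBuffer.capacity()) would be replaced by:
    fun readFrame(): Int {
        val source = externalSource
        return if (source != null) {
            // Pull PCM from the app instead of the device microphone;
            // the thread then hands the buffer to WebRTC as before.
            source.read(byteBuffer, byteBuffer.capacity())
        } else {
            // Fall back to the real AudioRecord read (elided in this sketch).
            0
        }
    }
}
```

Note this approach means maintaining a WebRTC fork and custom LiveKit build, which is why the callback-based approach described below is preferable when it fits.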
This can be achieved with the MixerAudioBufferCallback class.
- Extend it and implement the onBufferRequest to pass in your desired audio bytes.
- Once you've got your subclass, call audioTrack.setAudioBufferCallback(audioCapturer!!) on the existing microphone track to mix the audio in.
An example implementation can be found here:
Extending MixerAudioBufferCallback: https://github.com/livekit/client-sdk-android/blob/main/livekit-android-sdk/src/main/java/io/livekit/android/audio/ScreenAudioCapturer.kt
Using the callback subclass: https://github.com/livekit/client-sdk-android/blob/d809d4e9c19bd00e5c183a5197a9abc95b50e6db/examples/screenshare-audio/src/main/java/io/livekit/android/example/screenshareaudio/MainViewModel.kt#L65-L77
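Under the same assumptions as the linked ScreenAudioCapturer, a minimal subclass that mixes in PCM chunks from an app-side queue might look like the sketch below. The onBufferRequest signature and BufferResponse usage are paraphrased from that file and may differ slightly between SDK versions, and the queue plumbing is illustrative, so check the linked source against your SDK version:

```kotlin
import io.livekit.android.audio.MixerAudioBufferCallback
import java.nio.ByteBuffer
import java.util.concurrent.ConcurrentLinkedQueue

// Sketch: feeds app-queued PCM into the microphone track's mixer.
// Chunk sizes and sample format must match what the mixer requests.
class PcmQueueCapturer : MixerAudioBufferCallback() {
    private val queue = ConcurrentLinkedQueue<ByteArray>()

    /** Called from the app's recording pipeline with raw PCM bytes. */
    fun offer(pcm: ByteArray) {
        queue.add(pcm)
    }

    override fun onBufferRequest(
        originalBuffer: ByteBuffer,
        audioFormat: Int,
        channelCount: Int,
        sampleRate: Int,
        bytesRead: Int,
        captureTimeNs: Long
    ): BufferResponse? {
        // Nothing queued: return null to leave the original audio untouched.
        val chunk = queue.poll() ?: return null
        val buffer = ByteBuffer.allocateDirect(chunk.size)
        buffer.put(chunk)
        buffer.flip()
        return BufferResponse(buffer)
    }
}
```

You would then attach an instance via audioTrack.setAudioBufferCallback(...) on the microphone track, as in the linked MainViewModel example.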
Doesn't this still require a microphone track to exist? Our use case involves explicitly not using the system microphone, to avoid everything that comes with it.