Documentation regarding audio data handling
I have a few questions regarding audio handling.
-
From my limited testing on two MacBooks, it seems received audio frames are essentially packets of 10 ms worth of samples at a 48 kHz sample rate with 2 channels. Is the 10 ms interval standard in libwebrtc, or should a client-side app be prepared to handle any interval, any number of channels, and any sample rate - basically resample each packet to the output device's sample rate and mix channels as well? (A sketch of what I mean by that follows below.)
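To make the second half of that question concrete, here is the kind of client-side normalization I have in mind. This is a naive sketch of my own: the Frame struct just mirrors the fields I see on the SDK's AudioFrame, and the linear resampler and downmix are purely illustrative, not anything from LiveKit or libwebrtc.

```rust
/// Hypothetical interleaved i16 frame, mirroring the fields I see on the
/// SDK's AudioFrame (data, sample_rate, num_channels).
struct Frame {
    data: Vec<i16>,
    sample_rate: u32,
    num_channels: u32,
}

/// Downmix interleaved multi-channel samples to mono by averaging channels.
fn downmix_to_mono(data: &[i16], num_channels: usize) -> Vec<i16> {
    data.chunks_exact(num_channels)
        .map(|frame| {
            let sum: i32 = frame.iter().map(|&s| s as i32).sum();
            (sum / num_channels as i32) as i16
        })
        .collect()
}

/// Naive linear-interpolation resampler (mono), purely illustrative.
fn resample_linear(input: &[i16], from_rate: u32, to_rate: u32) -> Vec<i16> {
    if from_rate == to_rate || input.is_empty() {
        return input.to_vec();
    }
    let out_len = (input.len() as u64 * to_rate as u64 / from_rate as u64) as usize;
    (0..out_len)
        .map(|i| {
            let pos = i as f64 * from_rate as f64 / to_rate as f64;
            let idx = pos as usize;
            let frac = pos - idx as f64;
            let a = input[idx] as f64;
            let b = input[(idx + 1).min(input.len() - 1)] as f64;
            (a + (b - a) * frac) as i16
        })
        .collect()
}

/// Normalize any incoming frame to the output device's (mono) rate.
fn normalize_for_device(frame: &Frame, device_rate: u32) -> Vec<i16> {
    let mono = downmix_to_mono(&frame.data, frame.num_channels as usize);
    resample_linear(&mono, frame.sample_rate, device_rate)
}

fn main() {
    // 10 ms of 48 kHz stereo = 960 interleaved samples.
    let frame = Frame { data: vec![0i16; 960], sample_rate: 48_000, num_channels: 2 };
    let out = normalize_for_device(&frame, 44_100);
    println!("output samples: {}", out.len()); // 441
}
```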
-
If it's always fixed at 10 ms intervals, is that because of how libwebrtc itself processes audio data, and is there any documentation on it? Curious.
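For reference, the frame sizes I observe line up exactly with the 10 ms assumption. A quick sanity check of the arithmetic (my own numbers, not from any LiveKit docs):

```rust
fn main() {
    let sample_rate = 48_000u32; // Hz
    let num_channels = 2u32;
    let frame_ms = 10u32;

    // Samples per channel in one 10 ms frame: 48_000 * 10 / 1000 = 480.
    let samples_per_channel = sample_rate * frame_ms / 1000;
    // Total interleaved i16 samples per frame: 480 * 2 = 960.
    let total_samples = samples_per_channel * num_channels;

    assert_eq!(samples_per_channel, 480);
    assert_eq!(total_samples, 960);
    println!("{samples_per_channel} samples/channel, {total_samples} total");
}
```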
-
What about voice capture? Does LiveKit's server side normalize each AudioFrame sent to it via
native_audio_source.capture_frame(&audio_frame) to 2 channels and a 48 kHz sample rate before sending it back to subscribed clients?
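For context, this is roughly how I'm capturing frames today. It's a sketch: the AudioFrame field names, the import paths, and capture_frame being async all reflect my reading of the Rust SDK and should be treated as my assumptions.

```rust
use std::borrow::Cow;
use livekit::webrtc::audio_source::native::NativeAudioSource;
use livekit::webrtc::prelude::{AudioFrame, RtcError};

/// Push one 10 ms stereo frame at 48 kHz into the source.
/// Field names follow my reading of the Rust SDK's AudioFrame and may be off.
async fn push_frame(
    native_audio_source: &NativeAudioSource,
    samples: Vec<i16>, // interleaved L/R, 960 values = 480 per channel
) -> Result<(), RtcError> {
    let audio_frame = AudioFrame {
        data: Cow::Owned(samples),
        sample_rate: 48_000,
        num_channels: 2,
        samples_per_channel: 480, // 48_000 Hz * 0.010 s
    };
    native_audio_source.capture_frame(&audio_frame).await
}
```

My underlying question is whether I need to resample/remix before calling this, or whether the server accepts other rates and channel counts and normalizes them itself.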