How do I enable echo cancellation and noise suppression on Android?
Android's AcousticEchoCanceler and NoiseSuppressor only work with an audio session ID, but I can't get the audio session ID.
How do I get the audio session ID? Or is there another suggestion?
@sdroege @Rugvip ?
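For reference, on stock Android (API 16+) the session ID comes from the `AudioRecord` itself. A minimal sketch, assuming you own the capture path (the class and method names are real Android APIs; the sample rate and mono/16-bit format are illustrative choices):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.media.audiofx.AcousticEchoCanceler;
import android.media.audiofx.NoiseSuppressor;

public final class AecCapture {
    /** Create a recorder and attach AEC/NS via its audio session ID. */
    public static AudioRecord createWithEffects(int sampleRate) {
        int minBuf = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.VOICE_COMMUNICATION, sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
                minBuf);

        // This is the session ID that AcousticEchoCanceler/NoiseSuppressor need.
        int sessionId = recorder.getAudioSessionId();
        if (AcousticEchoCanceler.isAvailable()) {
            AcousticEchoCanceler aec = AcousticEchoCanceler.create(sessionId);
            if (aec != null) aec.setEnabled(true);
        }
        if (NoiseSuppressor.isAvailable()) {
            NoiseSuppressor ns = NoiseSuppressor.create(sessionId);
            if (ns != null) ns.setEnabled(true);
        }
        return recorder;
    }
}
```

Note that when capture happens inside GStreamer's openslessrc you never see the `AudioRecord`, so this only applies if you control the capture yourself; that is why the preset-based approach discussed below exists.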
Another audio question: I checked the Opus sample rate:

```
01-15 01:58:25.250 1058-1592/? D/audio_hw_primary: IN: sample_rate(16000), channel(1), format(1)
01-15 01:58:25.260 1058-10968/? I/audio_hw_primary: adev->in_device(4), val(4)
01-15 01:58:25.260 1058-10968/? V/audio_hw_primary: stat : /proc/asound/card1 : -1 type: 1
01-15 01:58:25.260 1058-10968/? V/audio_hw_primary: stat : /proc/asound/card2 : -1 type: 1
01-15 01:58:25.290 1058-10968/? I/audio_hw_primary: start_input_stream, channel(2), samplerate(48000)
```

Where is the mic input sample rate set? Is the default 16000? I want to set it to 8000.
Thanks.
To set up things correctly for AEC/NS, you need something like my work at (rebased on current master, needs testing):
https://github.com/ford-prefect/openwebrtc/tree/aec
You will also need to do a setMode() on the AudioManager using https://developer.android.com/reference/android/media/AudioManager.html#setMode%28int%29
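For completeness, a minimal sketch of that `setMode()` call. `MODE_IN_COMMUNICATION` is the usual choice for VoIP-style duplex audio; whether it is the intended mode here is my assumption, not something stated above:

```java
import android.content.Context;
import android.media.AudioManager;

public final class CallAudioMode {
    /** Switch the platform audio path into voice-communication mode. */
    public static void enterCall(Context context) {
        AudioManager am =
                (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        am.setMode(AudioManager.MODE_IN_COMMUNICATION);
    }

    /** Restore the default mode when the call ends. */
    public static void leaveCall(Context context) {
        AudioManager am =
                (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        am.setMode(AudioManager.MODE_NORMAL);
    }
}
```

Call `enterCall()` before starting capture/playback and `leaveCall()` when tearing down, so the mode is not left set globally.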
I should also mention that the AEC branch should be easily fixable for OS X/iOS as well. @ikonst mentioned that it should just be a matter of replacing kAudioUnitSubType_HALOutput or kAudioUnitSubType_RemoteIO with kAudioUnitSubType_VoiceProcessingIO in the osxaudiosrc element (controlled by a property, of course).
@ford-prefect I tested the aec branch, but it is not working.
In this case, AudioManager setMode() sets a global mode, which is already in use.
But AEC/NS are attached per audio session, I think since API level 16:
> public static AcousticEchoCanceler create (int audioSession)
>
> Added in API level 16. Creates an AcousticEchoCanceler and attaches it to the AudioRecord on the audio session specified.
>
> Parameters: audioSession: system-wide unique audio session identifier. The AcousticEchoCanceler will be applied to the AudioRecord with the same audio session.
>
> Returns: the AcousticEchoCanceler created, or null if the device does not implement AEC.
Does anyone have a good idea?
I'm not super familiar with this, but my overall understanding is that using the stream type setting and AudioManager mode will effectively set up the appropriate filtering for you based on the inputs/outputs and their expected echo path.
Using AcousticEchoCanceler and NoiseSuppressor is useful when you want to attach these effects manually and gain more control over the effect chain.
@ford-prefect What version of GStreamer are you using?
I'm using https://github.com/EricssonResearch/cerbero/commit/c64ec4f440f3eadc64661023b5cec26fd2b161b0
I found this in the log:

```
I/g_log: value "((GstOpenSLESRecordingPreset) 1612188603)" of type 'GstOpenSLESRecordingPreset' is invalid or out of range for property 'preset' of type 'GstOpenSLESRecordingPreset'
```
Ugh, that must've been from an older version of the code. I've force pushed out an update (slightly hacky). @nakyup see if that works?
@sdroege, others: what's the preferred way for setting enum properties? Hard code the value (what size is expected, then?), or set it up as a string GValue and do a g_object_set_property()?
@ford-prefect It works now, but not perfectly.
I changed local/owr_audio_renderer.c:

```c
#if defined(__APPLE__) && TARGET_OS_IPHONE
#define SINK_BUFFER_TIME G_GINT64_CONSTANT(80000)
#else
#define SINK_BUFFER_TIME G_GINT64_CONSTANT(80000) /* orig 20000 */
#endif
```
I have tested on 15 different phones (Korean versions: Samsung, LG, Pantech, etc.).
Some devices work and some do not.
I am still checking the conditions under which it fails.
I will test and then report back.
In some cases the audio fails:

```
01-26 14:24:38.390 19091-19143/kr.co.netseason.myclebot E/g_printerr: ==== Error message start ====
01-26 14:24:38.390 19091-19143/kr.co.netseason.myclebot E/g_printerr: Error in element audio-renderer-sink.
01-26 14:24:38.390 19091-19143/kr.co.netseason.myclebot E/g_printerr: Error: The stream is in the wrong format.
01-26 14:24:38.390 19091-19143/kr.co.netseason.myclebot E/g_printerr: Debugging info: gstaudiobasesink.c(1139): gst_audio_base_sink_wait_event (): /GstPipeline:media-renderer-0/GstBin:audio-renderer-bin-0/GstOpenSLESSink:audio-renderer-sink: Sink not negotiated before gap event.
01-26 14:24:38.390 19091-19143/kr.co.netseason.myclebot E/g_printerr: ==== Error message stop ====
```

(The same error message is printed twice.)
The fundamental problem seems to be buffer overflows caused by the buffer-time and latency-time settings; that appears to be the source of the noise and echo in the voice.
What is the best buffer-time and latency-time setup for Android?
We'll probably need to pick a large number defensively, so it works across the most devices. Did you find that 80ms worked across everything you tried, or did you have to tweak it further?
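The "pick a large number defensively" idea could also be made dynamic. This is not something openwebrtc does; it is purely an illustrative sketch. On a device, Android's `AudioRecord.getMinBufferSize()` would supply the minimum buffer size in bytes; converting that to a defensive buffer-time is plain arithmetic. The 4x headroom factor and the 80 ms floor below are assumptions, not tuned values:

```java
public final class BufferTime {
    /**
     * Convert a platform minimum capture buffer size (in bytes) into a
     * defensive buffer-time in microseconds: take 4x the platform minimum
     * for headroom, but never go below a fixed floor.
     */
    public static long bufferTimeUs(int minBufferBytes, int bytesPerFrame,
                                    int sampleRate, long floorUs) {
        long minFrames = minBufferBytes / bytesPerFrame;
        long minUs = minFrames * 1_000_000L / sampleRate;
        return Math.max(4 * minUs, floorUs);
    }

    public static void main(String[] args) {
        // A hypothetical device reporting a 3840-byte minimum at 48 kHz,
        // 16-bit mono: 3840 bytes = 1920 frames = 40 ms, times 4 = 160 ms.
        System.out.println(bufferTimeUs(3840, 2, 48000, 80_000L));
    }
}
```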
I found this in gst-plugins-bad-1.0-1.7/sys/opensles/openslessrc.c:

```c
gst_opensles_src_init (GstOpenSLESSrc * src)
{
  /* Override some default values to fit on the AudioFlinger behaviour of
   * processing 20ms buffers as minimum buffer size. */
  GST_AUDIO_BASE_SRC (src)->buffer_time = 200000;
  GST_AUDIO_BASE_SRC (src)->latency_time = 20000;

  src->preset = DEFAULT_PRESET;
}
```
And in openwebrtc/local/owr_audio_renderer.c:

```c
#if defined(__APPLE__) && TARGET_OS_IPHONE
#define SINK_BUFFER_TIME G_GINT64_CONSTANT(80000)
#else
#define SINK_BUFFER_TIME G_GINT64_CONSTANT(80000)
#endif

g_object_set(sink, "buffer-time", SINK_BUFFER_TIME,
             "latency-time", G_GINT64_CONSTANT(20000), NULL);
```

And there is this comment in openwebrtc/local/owr_local_media_source.c:

```c
case OWR_MEDIA_TYPE_AUDIO:
    /* Default values for buffer-time and latency-time on android are
     * 200ms and 20ms. The minimum latency-time that can be used on
     * Android is 20ms, and using a 40ms buffer-time with a 20ms
     * latency-time causes crackling audio. So let's just stick with
     * the defaults. */
```
I checked buffer-time from 40000 to 200000 and latency-time from 80 to 20000.
Just to make sure I understand what you're saying correctly -- buffer-time = 80000 and latency-time = 20000 works?
In the original openwebrtc, the Android buffer-time is 20000:

```c
#if defined(__APPLE__) && TARGET_OS_IPHONE
#define SINK_BUFFER_TIME G_GINT64_CONSTANT(80000)
#else
#define SINK_BUFFER_TIME G_GINT64_CONSTANT(20000)
#endif
```

When the voice quality was bad, I changed 20000 to 80000 and the voice became good, but in that case I found buffer overflows in the Android log, and voice echo occurred.
So I tried buffer-time from 40000 to 200000 and latency-time from 80 to 20000.
latency-time = 20000: working, but with the same buffer overflow.
@superdump @ford-prefect I'm trying to build a karaoke Android app that records vocals while simultaneously playing a karaoke beat from the phone's speaker. The resulting audio file has the karaoke beat part too loud. Is there any way your library can help solve this problem by controlling the beat part's volume properly? Thanks a lot!