AVSpeechSynthesizer does not speak after using SFSpeechRecognizer
Hi,
When I use your TTS library, then use a different library whose SFSpeechRecognizer code calls audioSession setCategory:AVAudioSessionCategoryRecord..., and then try to use your TTS library again, it no longer works.
It seems as though the audioSession's Category is still set to AVAudioSessionCategoryRecord.
Do you know if there is a way to reset the audioSession's category to AVAudioSessionCategoryPlayback before running your Tts.speak method, or any other way to get your library to work after/before using SFSpeechRecognizer?
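For context, the kind of native-side reset being asked about would look roughly like this (a sketch only; the method name here is hypothetical and not part of react-native-tts, and it would have to run on the native side, e.g. from a custom bridge method, before speaking):

```objc
#import <AVFoundation/AVFoundation.h>

// Hypothetical helper; not part of the library's API.
- (void)resetAudioSessionCategory {
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;
    // Restore a playback-oriented category after recording has finished.
    [session setCategory:AVAudioSessionCategoryPlayback error:&error];
    if (error != nil) {
        NSLog(@"Failed to reset audio session category: %@", error);
    }
}
```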
Thanks.
I don't have a clean solution for this right now, but you might try a workaround. The Tts.setDucking(true) call, added in the new 1.3.0 release, sets the category to AVAudioSessionCategoryPlayback, so you might try calling it after using SFSpeechRecognizer. Let me know if it works for you.
Hi. Sorry for the late reply.
I've been testing it out. The new setDucking method is great, although AVAudioSessionCategoryPlayback doesn't seem to work.
However, after a few tests, AVAudioSessionCategorySoloAmbient works great, and looking at the Apple docs, it appears to be the 'default' category.
Would you be able to update the library to use AVAudioSessionCategorySoloAmbient, instead of AVAudioSessionCategoryPlayback, in the setDucking method, please?
// TextToSpeech.m
[session setCategory:AVAudioSessionCategorySoloAmbient error:nil];
For those who are interested, I'm calling setDucking before the speak method:
Tts.setDucking(true).then(() => {
  Tts.speak('hello');
});
PR: https://github.com/ak1394/react-native-tts/pull/23