
Module stops listening to voice on Android

Open subhendukundu opened this issue 6 years ago • 18 comments

I am not sure whether this is the correct behaviour, but the events work a bit differently on iOS and Android. On Android the module stops listening to voice after the following events fire:

Voice.android.js:70 onSpeechPartialResults {value: Array(1)}
Voice.android.js:70 onSpeechPartialResults {value: Array(1)}
Voice.android.js:70 onSpeechPartialResults {value: Array(1)}
Voice.android.js:70 onSpeechPartialResults {value: Array(1)}
Voice.android.js:74 onSpeechEnd {error: false}
Voice.android.js:78 onSpeechResults {value: Array(5)}

Is it possible to keep the module listening without it stopping?

Tested Android versions: 9.0.5, 8.0
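
For context, a minimal listener setup that produces a log like the one above looks roughly like this (a sketch assuming the usual react-native-voice API; the locale is arbitrary):

import Voice from 'react-native-voice';

// Log each event as it arrives, mirroring the output above.
Voice.onSpeechPartialResults = (e) => console.log('onSpeechPartialResults', e);
Voice.onSpeechEnd = (e) => console.log('onSpeechEnd', e);
Voice.onSpeechResults = (e) => console.log('onSpeechResults', e);

// Start listening; on Android the recognizer ends the session on its own
// once it decides the utterance is complete.
Voice.start('en-US');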

subhendukundu avatar Jun 04 '19 07:06 subhendukundu

Me too: onSpeechEnd is called automatically after voice recognition on Android.

rpf5573 avatar Jun 10 '19 13:06 rpf5573

+1, I really need this feature; the time before onSpeechEnd fires is too short.

ducpt2 avatar Jun 15 '19 04:06 ducpt2

Hey Guys,

Is this a new issue introduced in the latest build? We had the Android version of our app working fine during development, but now in the beta version it's breaking just like @subhendukundu described above.

Please help, we need to make the app live ASAP.

Thanks Abhi

abhishekarora1028 avatar Jun 24 '19 14:06 abhishekarora1028

Hi Guys,

Any update on this? We are completely stuck with the Android version of our app. Could you please get this resolved ASAP?

Thanks Abhi

abhishekarora1028 avatar Jul 24 '19 09:07 abhishekarora1028

I just ran the example code. The app is able to recognise speech even if the results come back as errors, and it's able to run for a long time.

Jithinqw avatar Sep 18 '19 05:09 Jithinqw

I just ran the example code. The app is able to recognise speech even if the results come back as errors, and it's able to run for a long time.

@Jithinqw do you mean that it continues listening after a long period of silence?

teslavitas avatar Sep 24 '19 12:09 teslavitas

Sorry, the process does stop. I have to start it again!

Jithinqw avatar Sep 25 '19 06:09 Jithinqw

@subhendukundu Did you solve your problem?

lfoliveir4 avatar Dec 16 '19 12:12 lfoliveir4

Still no update on this? I really need this fixed for Android!

StarryFire avatar Jul 01 '20 08:07 StarryFire

I don't think this problem will be resolved, but I tried with the config below:

try {
  await Voice.start('es_US', {
    EXTRA_SPEECH_INPUT_COMPLETE_SILENCE_LENGTH_MILLIS: 30000,
    EXTRA_SPEECH_INPUT_MINIMUM_LENGTH_MILLIS: 30000,
    EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS: 30000,
  });
} catch (exception) {
  console.log(exception, 'exception');
}

It works for about 5-6 seconds before auto-stopping on Android.

evtuw avatar Aug 27 '20 03:08 evtuw

Does anyone have insight on how to achieve this natively?

safaiyeh avatar Aug 27 '20 16:08 safaiyeh

Did anyone solve this? For me, it stops listening right after onSpeechResults. If I call Voice.start at the end of onSpeechResults, there is a lag and part of the words spoken during that gap gets missed. It would be great if someone could help. I have tried "EXTRA_SPEECH_INPUT_MINIMUM_LENGTH_MILLIS: 30000" but it did not work.
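
Roughly, the restart pattern described here looks like this (a sketch assuming the standard Voice API, not a fix; the gap between the recognizer stopping and the new start is exactly where words get lost):

// Restart recognition as soon as final results arrive.
Voice.onSpeechResults = async (e) => {
  console.log('results', e.value);
  try {
    // Anything spoken before this resolves is missed.
    await Voice.start('en-US');
  } catch (err) {
    console.log(err);
  }
};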

nikhilbhawsinka avatar Sep 25 '20 06:09 nikhilbhawsinka

These options don't work in my project either. Is there any solution for this issue?

EXTRA_SPEECH_INPUT_COMPLETE_SILENCE_LENGTH_MILLIS: 30000,
EXTRA_SPEECH_INPUT_MINIMUM_LENGTH_MILLIS: 30000,
EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS: 30000

burakharun avatar Sep 28 '20 09:09 burakharun

@nikhilbhawsinka @safaiyeh @brkhrn I resolved this with a SpeechToText native module on Android; you can try this solution.

Create a file SpeechToTextModule.java at android/app/src/../apptest/SpeechToTextModule.java like this:

package com.test.apptest;

import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import android.util.Log;
import android.widget.Toast;

import androidx.annotation.NonNull;

import com.facebook.react.bridge.ActivityEventListener;
import com.facebook.react.bridge.BaseActivityEventListener;
import com.facebook.react.bridge.Promise;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
import com.facebook.react.bridge.ReactMethod;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class SpeechToTextModule extends ReactContextBaseJavaModule {
    private static final String DURATION_SHORT_KEY = "SHORT";
    private static final String DURATION_LONG_KEY = "LONG";
    private final int SPEECH_REQUEST_CODE = 123;
    private Promise mPickerPromise;

    public SpeechToTextModule(ReactApplicationContext reactContext) {
        super(reactContext);
        reactContext.addActivityEventListener(mActivityEventListener);
    }

    @Override
    public String getName() {
        return "SpeechToText";  // name to  export native module
    }

    @Override
    public Map<String, Object> getConstants() {
        final Map<String, Object> constants = new HashMap<>();
        constants.put(DURATION_SHORT_KEY, Toast.LENGTH_SHORT);
        constants.put(DURATION_LONG_KEY, Toast.LENGTH_LONG);
        return constants;
    }

    private final ActivityEventListener mActivityEventListener = new BaseActivityEventListener() {

        @Override
        public void onActivityResult(Activity activity, int requestCode, int resultCode, Intent data) {
            if (requestCode != SPEECH_REQUEST_CODE || mPickerPromise == null) {
                return;
            }
            if (resultCode == Activity.RESULT_OK && data != null) {
                ArrayList<String> result = data
                        .getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                mPickerPromise.resolve(result.get(0));
            } else {
                // Settle the promise so the JS side is not left waiting forever.
                mPickerPromise.reject("E_NO_RESULT", "Speech recognition was cancelled or returned no result.");
            }
            mPickerPromise = null;
        }
    };

    @ReactMethod
    public void speak(final Promise promise) {
        Activity currentActivity = getCurrentActivity();

        if (currentActivity == null) {
            // Reject the incoming promise directly; mPickerPromise has not been assigned yet.
            promise.reject("E_NO_ACTIVITY", "No foreground activity to launch the recognizer from.");
            return;
        }

        mPickerPromise = promise;

        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en_US");
        try {
            // The activity event listener is already registered in the constructor.
            currentActivity.startActivityForResult(intent, SPEECH_REQUEST_CODE);
        } catch (Exception e) {
            mPickerPromise.reject("E_NOT_SUPPORTED", "Your device does not support speech recognition.", e);
            mPickerPromise = null;
        }
    }
}

Then create a file ModuleSTT.java at android/app/src/../apptest/ModuleSTT.java like this:

package com.test.apptest;

import com.facebook.react.ReactPackage;
import com.facebook.react.bridge.NativeModule;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.uimanager.ViewManager;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ModuleSTT implements ReactPackage {
    @Override
    public List<ViewManager> createViewManagers(ReactApplicationContext reactContext) {
        return Collections.emptyList();
    }

    @Override
    public List<NativeModule> createNativeModules(
            ReactApplicationContext reactContext) {
        List<NativeModule> modules = new ArrayList<>();

        modules.add(new SpeechToTextModule(reactContext));

        return modules;
    }
}

Next, import ModuleSTT in MainApplication.java:

import com.test.apptest.ModuleSTT;

In the getPackages function, add:

packages.add(new ModuleSTT());

Then run react-native run-android.

Next, create a file SpeechToText.js in your project with this content:

import { NativeModules } from 'react-native';
module.exports = NativeModules.SpeechToText;

Then use it from your JS code:

import { Platform } from 'react-native';
import SpeechToText from './SpeechToText.js';

if (Platform.OS === 'android') {
  try {
    const response = await SpeechToText.speak();
    console.log(response, 'response speech');
    this.setState({ result: response, keySearch: response });
    // do anything you want with the recognised text
  } catch (error) {
    console.log(error);
  }
}

Sorry about my English; I hope this helps you resolve the issue.

evtuw avatar Sep 30 '20 09:09 evtuw

@anhnd11, could you create a PR with these changes?

safaiyeh avatar Oct 01 '20 03:10 safaiyeh

Hi @anhnd11, I tried but the recogniser still shuts off after a couple of seconds. Can you please let me know what needs to be done here?

nikhilbhawsinka avatar Oct 16 '20 07:10 nikhilbhawsinka

I have hacked together a solution; I'm not sure it fits every case. What you can do is call _startRecognizing in a loop; this stops it from stopping, but creates an irritating beeping noise. To mute that noise, use:

AudioManager mAudioManager = (AudioManager) this.reactContext.getSystemService(Context.AUDIO_SERVICE);
mAudioManager.adjustStreamVolume(AudioManager.STREAM_NOTIFICATION, AudioManager.ADJUST_MUTE, 0);
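
The JS side of that loop is roughly the following (a sketch assuming the standard Voice API; the AudioManager mute above has to live in the module's native Android code, since the beep is played on every restart):

// Keep listening by restarting the recognizer every time it ends.
Voice.onSpeechEnd = async () => {
  try {
    await Voice.start('en-US'); // each restart plays the system beep unless it is muted natively
  } catch (err) {
    console.log(err);
  }
};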

nikhilbhawsinka avatar Nov 13 '20 09:11 nikhilbhawsinka

I have hacked together a solution; I'm not sure it fits every case. What you can do is call _startRecognizing in a loop; this stops it from stopping, but creates an irritating beeping noise. To mute that noise, use:

AudioManager mAudioManager = (AudioManager) this.reactContext.getSystemService(Context.AUDIO_SERVICE);
mAudioManager.adjustStreamVolume(AudioManager.STREAM_NOTIFICATION, AudioManager.ADJUST_MUTE, 0);

Where do we write this, @nikhilbhawsinka?

aliraza96 avatar Aug 22 '23 10:08 aliraza96