
[iOS] Not working after the first run

nscharrenberg opened this issue · 5 comments

This package doesn't seem to be working correctly with the latest react-native and iOS versions. It works properly on the first run, but once it has been stopped, it won't run again. No exceptions are thrown either.

Does anybody experience the same issue, or know what the problem is (on my end, or something with the package) and how to solve it?

note: It is working as expected for Android.

Expected Behavior

iOS voice recognition should return results for what I said any time I call await Voice.start('en-US');, stop whenever I call await Voice.stop();, and fire the corresponding listeners.

Actual Behavior

The first time I call await Voice.start('en-US') (i.e. when clicking a button), it returns partial results and fires the corresponding listeners.

However, the second time (and third, and so forth) that I call await Voice.start('en-US') (i.e. pressing the button again), nothing happens. The listeners aren't fired either (so no console output appears), and the start method itself doesn't throw an error but instead resolves to undefined.

Steps to Reproduce the Problem

  1. npm i @react-native-community/voice --save
  2. npx pod-install
  3. Add NSMicrophoneUsageDescription and NSSpeechRecognitionUsageDescription to Info.plist, as the example has it.
  4. Create a component:
import React, { useState, useEffect } from 'react'
import PropTypes from 'prop-types'
import { connect } from 'react-redux'
import { View } from 'react-native' // needed for the <View> used in the render below
import { BodyText, Button } from 'theme-components'
import Voice from '@react-native-community/voice';
import { clearTimeout } from '../../utils/Timer'

const Chatbot = (props) => {
  const [messages, setMessages] = useState("");
  const [answer, setAnswer] = useState("");
  const [error, setError] = useState("")
  const [isListening, setIsListening] = useState(false);

  ...

  useEffect(() => {
    Voice.onSpeechStart = _onSpeechStart;
    Voice.onSpeechEnd = _onSpeechEnd;
    Voice.onSpeechResults = _onSpeechResults;
    Voice.onSpeechError = _onSpeechError;

    return () => {
      Voice.destroy().then(Voice.removeAllListeners).catch(e => {
        console.log("UNABLE TO DESTROY");
        console.log(e.error);
      });
    }
  }, []);

  const _onSpeechStart = () => {
    console.log("_onSpeechStart");
    setMessages("");
    setError("");
  }

  const _onSpeechEnd = () => {
    console.log("_onSpeechEnd");
  }

  const _onSpeechResults = (e) => {
    console.log("_onSpeechResults");

    setMessages(e.value[0]);

    if(timeout) {
      clearTimeout(timeout);
    }

    timeout = setTimeout(handleTimeout, continueDelay);
  }

  const _onSpeechError = (e) => {
    console.log("_onSpeechError");
    console.log(e.error);
    setError(e.error);
  }

  const _stopListening = () => {
    Voice.stop().then(res => {
      console.log("Voice Stopped");

      if(messages !== "") {
        setAnswer(ask(messages, options));
      }
    }).catch(e => {
      console.log(e.error);
    });

    setIsListening(false);
  }

  let timeout;
  const initDelay = 3000;
  const continueDelay = 300;

  const handleTimeout = () => {
    _stopListening();
  }

  const _startListening = () => {
    setMessages("");
    setError("");

    Voice.start('en-US').then(res => {
      timeout = setTimeout(handleTimeout, initDelay);
    }).catch(e => {
      console.log(e.error);
    });

    setIsListening(true);
  }

  const _initSpeech = () => {
    if(isListening) {
      _stopListening();
    } else {
      _startListening();
    }
  }
   ...

  return (
    <View>
      <BodyText>Question: {JSON.stringify(messages)}</BodyText>
      <BodyText>Answer: {JSON.stringify(answer)}</BodyText>
      <BodyText>Error: {JSON.stringify(error)}</BodyText>
      <BodyText>{isListening ? 'listening...' : 'Not Listening...'}</BodyText>
      <Button
        style={styles.button}
        text="Ask"
        onPress={_initSpeech}
      />
    </View>
  )
}

const mapStateToProps = () => ({
  ...
})

const mapDispatchToProps = {
  ...
}

export default connect(mapStateToProps, mapDispatchToProps)(Chatbot)
  5. Build and run the app (per the react-native documentation) on an iPad Air (3rd gen) simulator.
  6. Go to the component and press the "Ask" button.
  7. Say something; "message" should display what you said, and listening should stop.
  8. Press the "Ask" button again. It now does nothing (besides trying to stop listening when the timeout hits); neither the _onSpeechStart nor the _onSpeechResults listener is fired.
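
For reference, the two usage-description entries from step 3 take this shape inside the top-level dict of Info.plist (the description strings below are placeholders — use your own user-facing text):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to capture your voice.</string>
<key>NSSpeechRecognitionUsageDescription</key>
<string>This app uses speech recognition to transcribe what you say.</string>
```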

Specifications

Development Info:

  • Laptop: MacBook Pro (2015)
  • Platform: macOS Catalina 10.15.6
  • Xcode: 11.7
  • Node: v10.20.1
  • npm: 6.14.4
  • @react-native-community/voice version 1.1.9
  • react 16.11.0
  • react-native 0.62.2

Simulator info:

  • Model: iPad Air (3rd gen)
  • Software version: 13.7

— nscharrenberg, Sep 16 '20 13:09

I had a similar issue and was able to resolve it recently. Just to clarify, are you sure the listener is being stopped before you hit "Ask" to start it again?

— svm1, Oct 22 '20 19:10
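
svm1's question — whether the previous session was actually stopped before "Ask" starts it again — can be checked against the recognizer's own state rather than local component state. Below is a sketch: `makeFakeVoice` and `toggleListening` are hypothetical helper names (not part of the library), and the fake object merely mirrors the shape of the library's `isRecognizing`/`start`/`stop` API so the guard logic can run standalone:

```javascript
// Sketch: guard start/stop on the recognizer's actual state instead of
// a local isListening flag that can drift out of sync with the native side.
async function toggleListening(voice, locale = 'en-US') {
  const recognizing = await voice.isRecognizing();
  if (recognizing) {
    // A previous session is still live: stop it instead of starting anew.
    await voice.stop();
    return 'stopped';
  }
  await voice.start(locale);
  return 'started';
}

// Stand-in with the same method shapes as the real module, for demonstration.
function makeFakeVoice() {
  let active = false;
  return {
    isRecognizing: async () => active,
    start: async () => { active = true; },
    stop: async () => { active = false; },
  };
}

const fakeVoice = makeFakeVoice();
toggleListening(fakeVoice)
  .then((first) => {
    console.log(first); // 'started' — nothing was recognizing yet
    return toggleListening(fakeVoice);
  })
  .then((second) => {
    console.log(second); // 'stopped' — the live session is shut down first
  });
```

With the real module, `Voice.isRecognizing()` would replace the fake, and the same guard prevents calling start on top of a session that never ended.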

I had exactly the same problem, and it took me half a day to figure out. Thanks to @svm1 for the hint.

I was following several tutorials where the listeners were assigned in a useEffect on component mount, like in the code sample above.

useEffect(() => {
    Voice.onSpeechStart = _onSpeechStart;
    Voice.onSpeechEnd = _onSpeechEnd;
    Voice.onSpeechResults = _onSpeechResults;
    Voice.onSpeechError = _onSpeechError;

    return () => {
      Voice.destroy().then(Voice.removeAllListeners).catch(e => {
        console.log("UNABLE TO DESTROY");
        console.log(e.error);
      });
    }
  }, []);

But when I unmounted my voice-recognition component, every subsequent mount with a new voice recognition only triggered the onSpeechStart event and nothing more. So something odd is going on. Normally I would say it's a problem with the useEffect dependencies, but in that case it should work only once per mount, with the second and later recognition starts staying deaf due to a rerender — am I right? When I load my component I get three rerenders, and it works the first time and every time after. But if I unmount the voice-recognition component and then mount it again, it stays deaf.

I solved this issue by reassigning the listeners before each start of voice recognition:

const startRecognition = () => {
    console.log('startRecognition')
    Voice.onSpeechEnd = onSpeechEnd
    Voice.onSpeechResults = onSpeechResults
    Voice.onSpeechError = onSpeechError
    Voice.onSpeechPartialResults = onSpeechPartialResults
    Voice.onSpeechVolumeChanged = onSpeechVolumeChanged
    Voice.start('cs-CZ').catch((e) => console.log('ERROR start: ' + e))
}

I am not sure this is the correct approach. Could someone take a look and explain where I went wrong, so we can learn from our mistakes?

import Voice, { SpeechEndEvent, SpeechErrorEvent, SpeechResultsEvent } from '@react-native-community/voice'
import * as Permissions from 'expo-permissions'
import { usePermissions } from 'expo-permissions'
import React, { useEffect, useState } from 'react'
import { Button, StyleSheet, Text, TouchableWithoutFeedback, View } from 'react-native'

const VoicedInput = (): JSX.Element => {
    const [index, setIndex] = useState(0)
    const [uin, setUin] = useState<string[]>([])
    const [speachResult, setSpeachResult] = useState<string[]>(['init', 'value'])
    const [isVoiceAvailable, setIsVoiceAvailable] = useState(false)
    const [isRecognizing, setIsRecognizing] = useState(false)
    const [intervalx, setIntervalx] = useState<NodeJS.Timer | null>(null)

    const [permission, askForPermission] = usePermissions(Permissions.AUDIO_RECORDING, { ask: true })

    const int = (enabled: boolean) => {
        if (enabled) {
            const x = setInterval(() => {
                console.log('Interval')
                Voice.isRecognizing().then((state) => {
                    setIsRecognizing(!!state)
                    if (state == 0) {
                        console.log('here')
                        clearInterval(x)
                    }
                    console.log('state ' + state)
                })
            }, 1000)
            setIntervalx(x)
        } else {
            if (isRecognizing && intervalx !== null)
                clearInterval(intervalx)
        }
    }


    useEffect(() => {
        console.log('loading...')
        // Voice.onSpeechEnd = onSpeechEnd
        // Voice.onSpeechResults = onSpeechResults
        // Voice.onSpeechError = onSpeechError
        // Voice.onSpeechPartialResults = onSpeechPartialResults
        // Voice.onSpeechVolumeChanged = onSpeechVolumeChanged

        // Check availability once on mount rather than on every render
        Voice.isAvailable().then(() => setIsVoiceAvailable(true)).catch(() => console.log('ERROR isAvailable'))

        return () => {
            Voice.destroy().then(Voice.removeAllListeners).catch(() => console.log('ERROR Destroy'))
            console.log('destroyed')
        }
    }, [])

    if (!permission || permission.status !== 'granted') {
        return (
            <View>
                <Text>Permission is not granted</Text>
                <Button title="Grant permission" onPress={askForPermission} />
            </View>
        )
    }

    const startRecognition = () => {
        console.log('startRecognition')
        Voice.onSpeechEnd = onSpeechEnd
        Voice.onSpeechResults = onSpeechResults
        Voice.onSpeechError = onSpeechError
        Voice.onSpeechPartialResults = onSpeechPartialResults
        Voice.onSpeechVolumeChanged = onSpeechVolumeChanged
        Voice.start('cs-CZ').catch((e) => console.log('ERROR start: ' + e))
        int(true)
    }

    const stopRecognition = () => {
        Voice.stop()
        int(false)
        setIsRecognizing(false)
    }

    const onSpeechVolumeChanged = (event) => {
        console.log(event.value)
    }

    const onSpeechResults = (event: SpeechResultsEvent) => {
        console.log('onSpeechResults: ' + event.value)
    }

    const onSpeechPartialResults = (event: SpeechResultsEvent) => {
        console.log('onSpeechPartialResults')
        if (event.value) {
            setSpeachResult(event.value)
        }
    }

    const onSpeechEnd = (event: SpeechEndEvent) => {
        console.log('onSpeechEnd')
    }

    const onSpeechError = (event: SpeechErrorEvent) => {
        console.log('onSpeechError' + event.error?.message)
    }

    console.log('I have rendered')
    return (
        <View style={{ flex: 1 }}>
            {isVoiceAvailable ? <Text style={{ color: 'green' }}>Voice service is available</Text> : <Text style={{ color: 'red' }}>Voice service is unavailable</Text>}
            <Text>Result: {speachResult.map((res) => res + ' ')}</Text>
            <Text>{'isRecognizing: ' + isRecognizing}</Text>
            <Text>{'permission: ' + permission.status}</Text>
            {!isRecognizing && isVoiceAvailable && <Button onPress={startRecognition} title={'Start Voice Recognition'} />}
            {isRecognizing && isVoiceAvailable && <Button onPress={stopRecognition} title={'Stop'} />}
        </View>
    )
}

export default VoicedInput

Update: removed unnecessary stuff from example.

— RambousekTomas, Dec 02 '20 20:12
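
The failure mode described above can be reproduced with a plain-object stand-in for the module's assignable handler slots (`makeVoiceStub` below is hypothetical, not the real native module): once the unmount cleanup clears the handlers, later events have nowhere to go, and reassigning the handlers right before each start restores delivery.

```javascript
// Stand-in for the module's handler slots. The real library exposes
// onSpeechStart/onSpeechResults/etc. as assignable properties that its
// event emitter calls into.
function makeVoiceStub() {
  return {
    onSpeechResults: null,
    removeAllListeners() { this.onSpeechResults = null; },
    emitResults(value) {
      if (this.onSpeechResults) this.onSpeechResults({ value });
    },
  };
}

const stub = makeVoiceStub();
const heard = [];

// First mount: handler assigned once, useEffect-style.
stub.onSpeechResults = (e) => heard.push(e.value[0]);
stub.emitResults(['hello']);

// Unmount cleanup clears the slot (as the destroy + removeAllListeners cleanup does).
stub.removeAllListeners();

// Second mount WITHOUT reassignment: the event is silently dropped.
stub.emitResults(['lost']);

// The workaround: reassign the handler right before starting recognition.
stub.onSpeechResults = (e) => heard.push(e.value[0]);
stub.emitResults(['world']);

console.log(heard); // ['hello', 'world'] — the unhandled event never arrived
```

This matches the observed symptom: no errors, no output, just events falling into cleared handler slots.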

Dear @Infrus, which unnecessary stuff do you mean I should remove?

— amerllica, Jun 21 '22 12:06

@RambousekTomas you are my fucking hero, I was about to throw my laptop out of my seventh floor window.

— Guuri11, Jul 28 '23 09:07

@nscharrenberg check this, please https://github.com/react-native-voice/voice/issues/299#issuecomment-1988651342

— barghi, Mar 11 '24 15:03