Add voice commands dynamically in Unity
Filing on behalf of another, reference #21759184
To use voice commands in the MRTK, you have to define the actions and keywords up front (i.e. in the profile/editor). These are then fed to the dictation/speech recognizers as an array of words. The request here is to be able to change these at runtime, so that dynamically created buttons can also get voice commands.
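For context, a minimal sketch of the underlying Unity API the Windows speech provider builds on (not MRTK-specific code): the keyword array is passed into the `KeywordRecognizer` constructor, so the set of recognizable phrases is fixed at the moment the recognizer is created. The class name `StaticSpeechHandler` and the sample keywords are just placeholders.

```csharp
// Sketch only: shows why keywords must be known up front.
// UnityEngine.Windows.Speech is available on Windows platforms.
using UnityEngine;
using UnityEngine.Windows.Speech;

public class StaticSpeechHandler : MonoBehaviour // hypothetical name
{
    // Keyword set defined ahead of time, mirroring the MRTK profile setup.
    private readonly string[] keywords = { "open menu", "close menu" };
    private KeywordRecognizer recognizer;

    private void Start()
    {
        // The keyword array is baked in at construction time and
        // cannot be extended afterwards.
        recognizer = new KeywordRecognizer(keywords);
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        Debug.Log($"Heard keyword: {args.text}");
    }

    private void OnDestroy()
    {
        if (recognizer != null)
        {
            recognizer.Dispose();
        }
    }
}
```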
The tricky part about this one is the re-initialization of the speech providers needed to accomplish it (i.e. the set of keywords/voice commands is fixed at startup, when the keyword recognizer is newed up).
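As a stopgap, one way to approximate the feature is to do that re-initialization by hand: stop and dispose the running recognizer, then new one up with the extended keyword list and re-register the handler. This is a hedged workaround sketch against the plain Unity API, not an MRTK API; `DynamicSpeechHandler` and `AddKeyword` are hypothetical names.

```csharp
// Workaround sketch: rebuild the recognizer whenever a dynamically
// created button needs a new voice command.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class DynamicSpeechHandler : MonoBehaviour // hypothetical name
{
    private readonly List<string> keywords = new List<string> { "open menu" };
    private KeywordRecognizer recognizer;

    private void Start()
    {
        RebuildRecognizer();
    }

    // Call this from the code that spawns a new button at runtime.
    public void AddKeyword(string keyword)
    {
        keywords.Add(keyword);
        RebuildRecognizer();
    }

    private void RebuildRecognizer()
    {
        // Tear down the old recognizer; its keyword set is immutable.
        if (recognizer != null)
        {
            recognizer.OnPhraseRecognized -= OnPhraseRecognized;
            if (recognizer.IsRunning)
            {
                recognizer.Stop();
            }
            recognizer.Dispose();
        }

        recognizer = new KeywordRecognizer(keywords.ToArray());
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        Debug.Log($"Heard keyword: {args.text}");
    }

    private void OnDestroy()
    {
        if (recognizer != null)
        {
            recognizer.Dispose();
        }
    }
}
```

Rebuilding the recognizer on every addition is heavy-handed, which is presumably why proper runtime registration is being requested as a toolkit feature rather than left to user code.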
Is there any update on this feature?
This should be covered by #8310