David Zurow

Results: 141 comments by David Zurow

@ileben Oh, I forgot to mention another easy thing to do: add a global catch-all Dictation rule with a no-op Action.
```
grammar.add_rule(MappingRule(
    name = 'noise sink',
    mapping = {...
```
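A minimal sketch of what that truncated snippet might look like in full, assuming the dragonfly library; the grammar name, the `<noise>` extra, and the lambda are illustrative choices, not taken from the original comment:

```
from dragonfly import Grammar, MappingRule, Dictation, Function

grammar = Grammar("catch all")
grammar.add_rule(MappingRule(
    name="noise sink",
    # Match any free-form dictation and do nothing with it,
    # so stray speech doesn't trigger other rules.
    mapping={"<noise>": Function(lambda noise: None)},
    extras=[Dictation("noise")],
))
grammar.load()
```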

kaldi-active-grammar doesn't depend on those, since it is supposed to be agnostic as to the source. If using dragonfly, I think they should be installed with `pip install dragonfly2[kaldi]`.

I should definitely make this clearer. Thanks for pointing it out!

Thanks for the info! FYI, the alternate dictation interface currently supports performing recognition on just part of an utterance: for example, "say hello world", where only "hello world"...

@Danesprite Thanks for trying to get it to work! 16 kHz 16-bit mono is perfect, since that is what Kaldi is using/recording already.

I think this may be related to this Dragonfly issue https://github.com/dictation-toolbox/dragonfly/issues/182. Try adding the following line somewhere early and see if it resolves the problem: `logging.getLogger('action.exec').setLevel(10)`
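For context, that workaround uses only the standard-library `logging` module; `action.exec` is the name of dragonfly's action-execution logger, and the comment text is my gloss on the linked issue:

```python
import logging

# Workaround discussed in dragonfly issue #182: set the 'action.exec'
# logger to DEBUG (numeric level 10) early in the loading script.
# Raising this logger's verbosity has been observed to make action
# execution behave correctly.
logging.getLogger('action.exec').setLevel(10)  # 10 == logging.DEBUG
print(logging.getLevelName(logging.getLogger('action.exec').level))  # prints "DEBUG"
```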

Great to hear! I've been having to use that line for a while to "fix" this issue. Hopefully eventually we can narrow down the root cause.

I have been meaning to add a simple example of direct usage, but haven't gotten around to it yet. You basically just need to build a graph describing your grammar...

That would be easy to encode in an FST. But in general, it would probably be easier to go through the dragonfly backend, once I get wave file reading implemented.