Ryein Goddard


> Thanks for your response. When using this will this stop the audio stream from asterisk server to my websocket server from ending before the call ends?

I see. Because of that limitation, the alternative I am working on uses a different mechanism: this plugin - https://github.com/nadirhamid/asterisk-audiofork - which provides a continuous audio...
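As a rough illustration of what "continuous audio" over a websocket implies on the receiving side, here is a sketch that slices a raw PCM byte stream into fixed-size frames for a recognizer. The 16 kHz / 16-bit mono format and the 20 ms frame size are assumptions for the example, not values taken from the plugin's documentation.

```python
# Assumption: the fork streams raw 16 kHz, 16-bit mono signed-linear PCM.
SAMPLE_RATE = 16000      # samples per second (assumed)
BYTES_PER_SAMPLE = 2     # 16-bit PCM
FRAME_MS = 20            # chunk duration handed to the recognizer

FRAME_BYTES = SAMPLE_RATE * BYTES_PER_SAMPLE * FRAME_MS // 1000  # 640 bytes

def frames(pcm: bytes):
    """Split a raw PCM byte stream into fixed-size frames."""
    for off in range(0, len(pcm) - FRAME_BYTES + 1, FRAME_BYTES):
        yield pcm[off:off + FRAME_BYTES]

one_second = bytes(SAMPLE_RATE * BYTES_PER_SAMPLE)  # 1 s of silence
print(sum(1 for _ in frames(one_second)))            # 50 frames per second
```

Each websocket message (or batch of messages) would be buffered and cut into frames like these before being fed to the speech engine.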

The audiofork transcribe demo uses Google's closed-source transcription. How would I adapt it, especially if I wanted to use an open-source option?

Ok, I made a script, but I am getting significant slowdowns. I've tried configuring the beam and the other settings you suggested, but the results still lag behind. This is...
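For context on what "configuring beam" typically means here: Vosk models ship a Kaldi decoder config (usually `conf/model.conf` inside the model directory), and the latency/accuracy trade-off is driven by options like these. The values below are illustrative assumptions, not recommended settings.

```
# conf/model.conf (typical Kaldi decoder options; values are examples only)
--min-active=200
--max-active=3000     # lower this to cap decoding work per frame
--beam=10.0           # smaller beam => faster but less accurate decoding
--lattice-beam=2.0
```

Reducing `--beam` and `--max-active` speeds up decoding at some cost in accuracy, which is the usual first lever when transcription lags behind real time.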

In my tests I am only running one stream. The results are okay, but the transcription reports processing times of about 0.2 to 0.9 ms. The time it takes...

64 gigs. That is what the timing reports, but I was thinking the process would be asynchronous between transcriptions, so they wouldn't build up to take longer than the person...
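The "build up" described above is what happens when ingestion and recognition share one loop: each slow decode delays the next chunk. A minimal sketch of the asynchronous shape being described — a queue feeding a background worker, so the ingest side never waits on recognition. The recognizer here is a stand-in that just counts bytes, not a real engine.

```python
import queue
import threading

audio_q = queue.Queue(maxsize=100)   # bounded so memory can't grow unchecked
results = []

def worker():
    # Background thread: drains audio chunks; ingestion never blocks on decoding.
    while True:
        chunk = audio_q.get()
        if chunk is None:            # end-of-call sentinel
            break
        results.append(len(chunk))   # stand-in for recognizer.AcceptWaveform(chunk)

t = threading.Thread(target=worker)
t.start()

for _ in range(10):                  # pretend these arrive from the websocket
    audio_q.put(b"\x00" * 640)
audio_q.put(None)
t.join()
print(len(results), sum(results))    # 10 chunks, 6400 bytes
```

With this shape, a slow transcription only grows the queue temporarily instead of stalling the incoming audio stream; a real version would swap the stand-in for an actual recognizer call.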

Even when running entirely locally I get poor results. For example, with https://github.com/alphacep/vosk-server/tree/master/websocket-microphone, Python will claim the transcription process only took milliseconds, but really it takes a few seconds...

For example, using the Boost.Beast websocket example provided, it takes approximately 4 seconds for the speech recognition result to print. I used the websocket microphone example connected to a remote Boost...
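The gap described here — "claims milliseconds, really takes seconds" — usually means the reported number is only the decode's compute time, while queueing and network delay sit outside the measured span. A sketch of measuring the full round trip with a wall clock instead; the 50 ms sleep is a stand-in for the unmeasured delay, not a real measurement.

```python
import time

def fake_recognize(chunk):
    # Inner timer: only covers the "decode", like the self-reported figure.
    start = time.perf_counter()
    text = "hello"                   # stand-in for the real decode step
    compute_s = time.perf_counter() - start
    time.sleep(0.05)                 # stand-in for network/queue delay
    return text, compute_s

t0 = time.perf_counter()
text, compute_s = fake_recognize(b"\x00" * 640)
end_to_end_s = time.perf_counter() - t0

# compute_s is near zero, but end_to_end_s includes the hidden 50 ms:
print(end_to_end_s > compute_s)
```

Wrapping the whole send/receive cycle this way shows where the seconds actually go, which is the first step to telling decoder slowness apart from transport or buffering delay.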

1. What is the plugin doing, exactly? Which parts use Asterisk features, and which parts are optional?
2. Why are the various modules being used?
3. What pieces are needed...