LocalAIVoiceChat

Local AI voice chat with a custom voice, based on the Zephyr 7B model. Uses RealtimeSTT with faster_whisper for transcription and RealtimeTTS with Coqui XTTS for synthesis.
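The description above amounts to a three-stage loop: transcribe speech, generate a reply with the local LLM, then synthesize the reply. Below is a minimal sketch of one such turn. The method names (`recorder.text()`, `stream.feed()`/`play()`) follow the RealtimeSTT and RealtimeTTS APIs, but the stub classes are hypothetical stand-ins so the sketch runs without a microphone, GPU, or downloaded models.

```python
# Sketch of one transcribe -> generate -> synthesize turn, as described above.
# recorder.text() and stream.feed()/play() mirror the RealtimeSTT/RealtimeTTS
# APIs; the Fake* classes below are illustrative stubs, not real audio objects.

def chat_turn(recorder, llm, tts_stream):
    user_text = recorder.text()   # STT: block until user speech is transcribed
    reply = llm(user_text)        # LLM: e.g. local Zephyr 7B via llama.cpp
    tts_stream.feed(reply)        # TTS: hand the reply text to the engine
    tts_stream.play()             # play the synthesized audio
    return reply

# Hypothetical stubs so the sketch is self-contained:
class FakeRecorder:
    def text(self):
        return "hello"

class FakeStream:
    def feed(self, text):
        self.last = text
    def play(self):
        pass

print(chat_turn(FakeRecorder(), str.upper, FakeStream()))  # prints "HELLO"
```

In the real script, the `llm` callable would be a llama.cpp-backed model and the stream a `TextToAudioStream(CoquiEngine())`; the wiring above is the same either way.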

Results: 10 LocalAIVoiceChat issues, sorted by recently updated

SeamlessM4Tv2, released today, seems to have all this plus translation with streaming support? Will it be better than Whisper and Coqui?

Love this project! Was playing around with it. The voice works fine but stutters: it starts correctly ("This is how ..."), then stops ("voice x"), stops again ("sounds like"). What would...

I've been using llama.cpp for quite a while (M1 Mac). Is there a way I can get ai_voicetalk_local.py to point to that installation instead of reinstalling it here? Sorry, newbie...

Also, why not stream the responses from the model as they come, instead of waiting for the entire response before TTS starts?

CoquiEngine takes breaks mid-sentence to load. It happens sometimes between words or even in the middle of saying a word. I tried to adjust settings but nothing works....

All libraries seem to have been installed properly. However, I get this error when trying to run start.bat: `C:\Users\USER\code\LocalAIVoiceChat-main>start.bat cuda not available llama_cpp_lib: return llama_cpp Initializing LLM llama.cpp model ... llama.cpp...`

log:
```
Traceback (most recent call last):
  File "C:\Users\f1am3d\miniconda3\envs\localchat\lib\multiprocessing\managers.py", line 802, in _callmethod
    conn = self._tls.connection
AttributeError: 'ForkAwareLocal' object has no attribute 'connection'
During handling of the above exception, another...
```

First of all, awesome repo. I've tried all possible installation combinations; all failed. Any suggestions? @KoljaB Machine: Mac M2. Terminal output: > Using model: xtts Initializing STT AudioToTextRecorder ... [2024-06-05...

Thanks for this good work.
```
/home/mypc/miniconda3/envs/VoiceAgent/bin/python /home/mypc/Downloads/LocalAIVoiceChat-main/ai_voicetalk_local.py
try to import llama_cpp_cuda
llama_cpp_cuda import failed
llama_cpp_lib: return llama_cpp
Initializing LLM llama.cpp model ...
llama.cpp model initialized
Initializing TTS CoquiEngine ...
```
...

Hello author, when I was testing, this error occurred. How can I solve it? ...