LocalAIVoiceChat
Error: CUDA with multiprocessing
Thanks for this good work.
/home/mypc/miniconda3/envs/VoiceAgent/bin/python /home/mypc/Downloads/LocalAIVoiceChat-main/ai_voicetalk_local.py
try to import llama_cpp_cuda
llama_cpp_cuda import failed
llama_cpp_lib: return llama_cpp
Initializing LLM llama.cpp model ...
llama.cpp model initialized
Initializing TTS CoquiEngine ...
Downloading config.json to /home/mypc/Downloads/LocalAIVoiceChat-main/models/v2.0.2/config.json...
Downloading model.pth to /home/mypc/Downloads/LocalAIVoiceChat-main/models/v2.0.2/model.pth...
100%|██████████| 4.36k/4.36k [00:00<00:00, 21.9MiB/s]
100%|██████████| 1.86G/1.86G [03:03<00:00, 10.2MiB/s]
Downloading vocab.json to /home/mypc/Downloads/LocalAIVoiceChat-main/models/v2.0.2/vocab.json...
100%|██████████| 335k/335k [00:00<00:00, 579kiB/s]
Downloading speakers_xtts.pth to /home/mypc/Downloads/LocalAIVoiceChat-main/models/v2.0.2/speakers_xtts.pth...
100%|██████████| 7.75M/7.75M [00:00<00:00, 9.87MiB/s]
> Using model: xtts
Error loading model for checkpoint /home/mypc/Downloads/LocalAIVoiceChat-main/models/v2.0.2: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
Process Process-1:
Traceback (most recent call last):
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/RealtimeTTS/engines/coqui_engine.py", line 501, in _synthesize_worker
tts = load_model(checkpoint, tts)
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/RealtimeTTS/engines/coqui_engine.py", line 485, in load_model
tts.to(device)
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1152, in to
return self._apply(convert)
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/nn/modules/module.py", line 802, in _apply
module._apply(fn)
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/nn/modules/module.py", line 802, in _apply
module._apply(fn)
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/nn/modules/module.py", line 802, in _apply
module._apply(fn)
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/nn/modules/module.py", line 825, in _apply
param_applied = fn(param)
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1150, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/torch/cuda/__init__.py", line 288, in _lazy_init
raise RuntimeError(
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/site-packages/RealtimeTTS/engines/coqui_engine.py", line 506, in _synthesize_worker
logging.exception(f"Error initializing main coqui engine model: {e}")
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/logging/__init__.py", line 2113, in exception
error(msg, *args, exc_info=exc_info, **kwargs)
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/logging/__init__.py", line 2105, in error
root.error(msg, *args, **kwargs)
File "/home/mypc/miniconda3/envs/VoiceAgent/lib/python3.10/logging/__init__.py", line 1506, in error
self._log(ERROR, msg, args, **kwargs)
TypeError: Log._log() got an unexpected keyword argument 'exc_info'
While running the test script, I am getting the above error. Environment: Ubuntu, Python 3.10, with the latest STT and TTS code.
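
For reference, the RuntimeError itself suggests switching the multiprocessing start method to 'spawn'. A minimal sketch of that workaround is below, assuming it is applied at the top of ai_voicetalk_local.py before the LLM and CoquiEngine are initialized; whether this actually resolves the issue depends on where RealtimeTTS forks its worker process, so treat it as an experiment rather than a confirmed fix.

```python
# Hedged workaround sketch: force the 'spawn' start method so the TTS worker
# process does not inherit an already-initialized CUDA context from a fork.
import multiprocessing as mp

if __name__ == "__main__":
    # Must run before any child process that touches CUDA is created.
    mp.set_start_method("spawn", force=True)

    # ... then initialize the llama.cpp model and the CoquiEngine TTS
    # exactly as ai_voicetalk_local.py already does.
```

Note the secondary TypeError (Log._log() got an unexpected keyword argument 'exc_info') is raised while CoquiEngine tries to log the original CUDA failure, so the CUDA/fork error above is the root cause to address first.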