Linly-Talker

Digital Avatar Conversational System - Linly-Talker. 😄✨ Linly-Talker is an intelligent AI system that combines large language models (LLMs) with visual models to create a novel human-AI interaction...

23 Linly-Talker issues, sorted by most recently updated

To the author: thank you very much for this excellent project, but I want to ask why nothing in the WebUI's intelligent multi-turn dialogue system works for me: text conversation fails, videos cannot be generated, and everything throws errors.

For example: what are the minimum requirements for running the system? GPU model? RAM size? VRAM size? CPU model? Disk size? Operating system? And so on.

```
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "/root/miniconda3/lib/python3.8/site-packages/gradio/route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "/root/miniconda3/lib/python3.8/site-packages/gradio/blocks.py", line 1570, in...
```

Integrate the QAnything API as a way to obtain an LLM response to the input question. Assumes QAnything is set up and running on the same machine (localhost), thus did not include support...
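A minimal sketch of what such a localhost integration could look like is below. The port, route, payload fields, and response key are assumptions for illustration, not the confirmed QAnything API; check them against the QAnything documentation and your deployment before use.

```python
import requests

# Hypothetical local QAnything chat endpoint; verify the actual port and route
# against your running QAnything service before use.
QANYTHING_URL = "http://localhost:8777/api/local_doc_qa/local_doc_chat"


def ask_qanything(question: str, kb_ids: list[str], user_id: str = "local_user") -> str:
    """Send a question to a locally running QAnything service and return the answer text."""
    payload = {
        "user_id": user_id,   # assumed field name
        "kb_ids": kb_ids,     # knowledge-base IDs to query (assumed field name)
        "question": question,
    }
    resp = requests.post(QANYTHING_URL, json=payload, timeout=60)
    resp.raise_for_status()
    data = resp.json()
    # Response schema is assumed; adjust the key to match the real API.
    return data.get("response", "")


if __name__ == "__main__":
    print(ask_qanything("What is Linly-Talker?", kb_ids=["KB1234"]))
```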

Is there an API for integrating with other systems? webui.py and app.py only launch the interface. Is there an endpoint where I can pass in an image and text and get the generated video back?
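Since webui.py launches a Gradio app, one way to drive it from another system is through gradio_client, sketched below. The server address, api_name, and argument list are placeholders, not the project's documented interface; inspect what the running app actually exposes (e.g. with client.view_api()) before relying on them.

```python
from gradio_client import Client

# Assumes the Linly-Talker WebUI is running locally on the default Gradio port.
client = Client("http://localhost:7860/")

# List the endpoints and their signatures exposed by the running app.
print(client.view_api())

# Hypothetical call: the api_name and parameters depend on how the Gradio
# blocks are defined in webui.py and will likely differ in practice.
result = client.predict(
    "examples/source_image.png",          # source portrait image (placeholder path)
    "Hello, welcome to Linly-Talker",     # text to speak (placeholder)
    api_name="/generate",                 # placeholder endpoint name
)
print(result)  # typically a path to the generated video file
```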

The following error occurs when running the MuseTalk example:

```
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "/root/miniconda3/lib/python3.8/site-packages/gradio/route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "/root/miniconda3/lib/python3.8/site-packages/gradio/blocks.py", line 1561,...
```

I am using GPT-SoVITS V2 and get the following error when loading the model. Any guidance would be appreciated, thanks!

```
Loading model...
/root/Linly-Talker/GPT_SoVITS/pretrained_models/zhiqiang2-e15.ckpt
/root/Linly-Talker/GPT_SoVITS/pretrained_models/zhiqiang2_e8_s72.pth
Number of parameter: 77.61M
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "/root/miniconda3/lib/python3.8/site-packages/gradio/route_utils.py", line 232, in...
```

I pulled the v4 image via the CG client and get an error when running webui.py in JupyterLab. Online sources say this kind of error is usually caused by mismatched CUDA/PyTorch versions, but can that happen inside the image? On Windows 11 I only installed the graphics driver and Docker Desktop; in WSL Ubuntu I installed cg-client and nvidia-docker following the CG client tutorial. Inside the image, nvcc reports CUDA version 11.8.89.

```
PaddleTTS Error: No module named 'paddlespeech'
If you want to use PaddleTTS, please install the PaddleTTS environment first: pip install -r requirements_paddle.txt
By default, no LLM model is used and questions are answered directly, which also reduces VRAM usage!
GPT_SoVITS import failed, reason: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions...
```
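When cudaGetDeviceCount() fails inside a container, a quick way to narrow down whether the GPU is visible to PyTorch at all is a diagnostic check like the one below. This is a generic sketch, not part of Linly-Talker itself; if nvidia-smi already fails here, the problem is the Docker/WSL GPU passthrough rather than the CUDA/PyTorch version pairing.

```python
import subprocess

import torch

# Does the NVIDIA driver reach the container at all?
print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)

# Which CUDA version was this PyTorch build compiled against?
print("torch:", torch.__version__, "built with CUDA:", torch.version.cuda)

# Can PyTorch actually initialize CUDA and enumerate devices?
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device count:", torch.cuda.device_count())
    print("device 0:", torch.cuda.get_device_name(0))
```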