futurewin
**Describe the bug** After starting, it can connect to Ollama and to llama3:latest. Sending the first message ("hi") to the model works fine, but sending the second message...
**Describe the bug** I used Docker to install the backend, but after running "docker run perplexica-perplexica-backend", it shows this message: "yarn run v1.22.19 $ node dist/app.js info: WebSocket server started on port...
The model cannot tell the characters apart; an expression gets applied to every character in the same frame at once.
**Describe the bug** The first character of each response appears repeatedly. **To Reproduce** Steps to reproduce the behavior: 1. Go to '...' 2. Click on '....' 3. Scroll down...
Something like Nvidia ChatRTX, which combines speech-to-text with LLMs to do more; I think you could improve it along those lines.