ai123

143 comments by ai123

How should I troubleshoot this? (Email reply to [labring/FastGPT] "Page crashes when viewing logs in the chat history" (Issue #3607); quoted reply: "Not much can be determined from this.")

Hi @rick-github, I have analyzed the relevant logs and attached the log files to this issue; I have also copied them to issue #7146. Please close either one of...

With OLLAMA_SCHED_SPREAD=1 set, GPU resources on all cards are not released in a timely manner, so other requests fail due to a lack of GPU resources for a...
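
A quick way to verify whether GPU memory is actually still being held is to ask the Ollama server which models remain loaded. The sketch below is a minimal check, assuming a local server on the default port 11434 and a recent Ollama build that exposes the /api/ps endpoint.

```python
# Minimal sketch: list models currently loaded by Ollama and their VRAM usage,
# to see whether GPU memory has been released after a request has finished.
# Assumes a local Ollama server on the default port; /api/ps reports loaded
# models in recent Ollama versions.
import requests

resp = requests.get("http://localhost:11434/api/ps", timeout=10)
resp.raise_for_status()
for m in resp.json().get("models", []):
    print(m.get("name"), "VRAM bytes:", m.get("size_vram"), "expires:", m.get("expires_at"))
```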

What is your question? (Email reply to [ollama/ollama] "The issue regarding concurrent processing with multiple GPU cards...")

OK, I will close this issue. Thanks, @rick-github @dhiltgen.

When I changed the model to glm4:9b (--ctx-size 128001), there is no information output.

### AI debug logs
[Info] 2024-10-09 05:08:00 [Vector Queue] Done
[Info] 2024-10-09 05:08:00 [QA Queue] Done
[Warn]...

Here is a successful example with short text content using glm4:9b (--ctx-size 128001).
![image](https://github.com/user-attachments/assets/ee2c4e6c-8fc9-46ca-a0fd-ab2d63239489)

### AI debug logs
![image](https://github.com/user-attachments/assets/18aee46b-3c44-4ec0-bc43-870fe4f82ebf)

### ollama debug logs
10月 09 13:24:02 gpu ollama[44027]:...

The ollama version is 0.3.11, and I used [FastGPT](https://github.com/labring/FastGPT). I'm certain that this issue is not related to the PDF format: if I use a Word file with long text content...
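
To rule FastGPT and the file format out entirely, the same long-text request can be sent straight to Ollama. The sketch below is a hypothetical reproduction, assuming a local server on the default port; the num_ctx option stands in for the --ctx-size value mentioned above, and the prompt text is only a placeholder.

```python
# Hypothetical repro outside FastGPT: send a long text directly to the Ollama
# generate API with a large context window and check whether any output comes back.
import requests

payload = {
    "model": "glm4:9b",
    "prompt": "Summarize the following text:\n" + ("long test content " * 2000),  # placeholder long input
    "stream": False,
    "options": {"num_ctx": 128001},  # mirrors the --ctx-size value from the report
}
resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=600)
resp.raise_for_status()
print(resp.json().get("response", "")[:500] or "(empty response)")
```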

@rick-github Hi, any feedback please?

Thank you very much for your reply. Could you please take a look at the ollama log information I sent earlier? It's strange that short texts can be recognized...