RNGMARTIN

Results: 16 comments by RNGMARTIN

@KevinHuSh It's quite urgent. I want to use the features of 0.19, but I can't use it while this problem exists.

> ```
> OLLAMA_KEEP_ALIVE=-1 ollama serve
> ```
>
> OR,
>
> ```
> ollama run llama3.1:70b --keepalive=-1m
> ```
>
> OR, alter the code back from [5b62687](https://github.com/infiniflow/ragflow/commit/5b626870d05167026049c6ccf122494971459ee5)...
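For readers running Ollama in Docker alongside RAGFlow, the first quoted option can also be made persistent by passing the environment variable to the container. This is a sketch, not a verified fix for this issue: the container name and volume are illustrative, and it assumes the standard `ollama/ollama` image listening on its default port 11434.

```shell
# Sketch: persist the keep-alive workaround for a Dockerized Ollama.
# OLLAMA_KEEP_ALIVE=-1 tells the server to keep loaded models in memory
# indefinitely instead of unloading them after the default idle timeout.
docker run -d --name ollama \
  -e OLLAMA_KEEP_ALIVE=-1 \
  -p 11434:11434 \
  -v ollama:/root/.ollama \
  ollama/ollama
```

For a bare-metal install, the equivalent is `OLLAMA_KEEP_ALIVE=-1 ollama serve`, as in the quote above.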

@KevinHuSh I have tried all three ways you suggested; none of them works. The problem still exists. I wonder why this problem occurs at all? For version 0.17.0 it was completely...

@KevinHuSh Is there any other plan to fix this problem? Sorry for taking up so much of your time. It's really urgent for me.

@KevinHuSh I have updated to the latest code. For some agents using a single LLM in Ollama it was fine. However, for this agent, which I use most frequently, the problem...

@KevinHuSh Is there any other plan to solve this problem? Thanks.

> [@KevinHuSh](https://github.com/KevinHuSh) [@RNGMARTIN](https://github.com/RNGMARTIN) I think part of this problem is that Ollama, for whatever reason, doesn't release the connection. I am starting to think that this is why GraphRAG fails...

> Once the KB has parsed files, it needs to re-do parsing after switching the embedding model. It's a mathematical restriction. So, if you delete all the existing chunks, you can...

> After setting OLLAMA_NUM_GPU=1 and updating Ollama to the latest version, my problem was resolved.

Mine doesn't work. My Ollama is 0.9.0, and I cannot use RAGFlow 0.19.
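For anyone who wants to try the workaround quoted above, it amounts to two steps: update Ollama, then start the server with the environment variable set. A sketch, assuming a Linux host with the official install script; whether `OLLAMA_NUM_GPU=1` actually helps depends on your Ollama version and GPU setup, as the replies in this thread disagree.

```shell
# Step 1 (as suggested): update Ollama to the latest release
# using the official install script.
curl -fsSL https://ollama.com/install.sh | sh

# Step 2: start the server with the suggested setting applied
# for this process only.
OLLAMA_NUM_GPU=1 ollama serve
```

If Ollama runs as a systemd service instead, the variable would need to go into the service's environment rather than the shell.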

![Image](https://github.com/user-attachments/assets/294a029e-5067-4232-9c38-1f142d6ae1e7)

I tested this myself and it works; you can give it a try.