kakalong136

3 comments by kakalong136

> > Same here. I am using gemma 9b, which only has an 8k context window. I set local search to 5000 max tokens, and it is back to normal. Otherwise...
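The workaround above can be sketched as a config change (a sketch assuming GraphRAG's `settings.yaml` layout; the exact key names may differ between versions):

```yaml
# settings.yaml: cap the local search prompt budget so it fits
# within the model's 8k context window
local_search:
  max_tokens: 5000
```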

> When I try to run create_base_entity_graph, this error occurs: {"type": "error", "data": "Error Invoking LLM", "stack": " | ExceptionGroup: multiple connection attempts failed (2 sub-exceptions)\n +-+---------------- 1 ----------------\n | Traceback...

> > The JSON your LLM returns is probably malformed. > > Yes, because I launched the model with ollama and set the temperature too high when creating it, so the replies were not in the right format. About setting the temperature too high when creating the model: is this temperature set in the RAG settings, or on the large-model side?
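To the question in the last comment: since the model was created with ollama, the temperature can be pinned at model-creation time via a Modelfile (a sketch; `gemma2:9b` and the custom model name are examples, not from the thread):

```
# Modelfile: derive a low-temperature variant for structured JSON output
FROM gemma2:9b
PARAMETER temperature 0
```

Then build it with `ollama create gemma2-json -f Modelfile` and point the RAG config at `gemma2-json`. GraphRAG's `settings.yaml` also exposes a `temperature` field under its `llm` section, so either side can control it; the setting closest to the request that actually reaches the model wins.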