aiChatGPT35User123

7 comments by aiChatGPT35User123

> Hello, I am trying to replicate the GraphRAG demo on an Intel Arc GPU 770, but I am getting the issue below: > > I am facing an issue with mistral: > >...

{"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n File \"/root/graphrag/graphrag/llm/base/base_llm.py\", line 53, in _invoke\n output = await self._execute_llm(input, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/root/graphrag/graphrag/llm/openai/openai_chat_llm.py\", line 58, in _execute_llm\n...

> > {"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n File "/root/graphrag/graphrag/llm/base/base_llm.py", line 53, in _invoke\n output = await self._execute_llm(input, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/root/graphrag/graphrag/llm/openai/openai_chat_llm.py", line 58,...

> > > > {"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n File "/root/graphrag/graphrag/llm/base/base_llm.py", line 53, in _invoke\n output = await self._execute_llm(input, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/root/graphrag/graphrag/llm/openai/openai_chat_llm.py",...

> > When I try to run create_base_entity_graph, this error occurs: {"type": "error", "data": "Error Invoking LLM", "stack": " | ExceptionGroup: multiple connection attempts failed (2 sub-exceptions)\n +-+---------------- 1 ----------------\n |...
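The "Error Invoking LLM" and "multiple connection attempts failed" traces above suggest GraphRAG cannot reach the OpenAI-compatible endpoint it was configured with. One way to narrow this down is to call the same endpoint directly with the openai client, outside the GraphRAG pipeline. This is only a diagnostic sketch: the base_url and model name below are assumptions and should be replaced with the values from your own GraphRAG settings.

```python
# Connectivity check outside GraphRAG (a sketch, not GraphRAG code):
# call the same OpenAI-compatible endpoint your settings point at and see
# whether the "connection attempts failed" error reproduces here as well.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # assumed local endpoint; use your configured api_base
    api_key="dummy-key",                   # local servers usually ignore the key, but the client requires one
)

resp = client.chat.completions.create(
    model="mistral",  # assumed model name; use the model from your settings
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```

If this call also fails to connect, the problem is with the local LLM server or its address rather than with GraphRAG itself.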

> It's bm25's fault. In /chatchat-server/chatchat/server/file_rag/retrievers/ensemble.py, modify the from_vectorstore method and temporarily comment out the code that initializes bm25_retriever; using only faiss_retriever gives a noticeable speedup.

It did not improve things much for me.
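For reference, here is a minimal sketch of the workaround described in the quoted comment, assuming from_vectorstore assembles a retriever from a FAISS vector store (and, in the original code, a BM25 retriever as well). This is not the actual chatchat-server implementation; the docs and top_k names are illustrative.

```python
# Illustrative sketch of the workaround, not the real chatchat-server ensemble.py:
# skip the BM25 branch and hand back only the FAISS retriever.
from langchain_community.vectorstores import FAISS


def from_vectorstore(vectorstore: FAISS, top_k: int = 4):
    # Dense retriever backed by the existing FAISS index.
    faiss_retriever = vectorstore.as_retriever(search_kwargs={"k": top_k})

    # Original (slower) path, disabled as the comment suggests:
    # bm25_retriever = BM25Retriever.from_documents(docs, k=top_k)
    # return EnsembleRetriever(
    #     retrievers=[bm25_retriever, faiss_retriever],
    #     weights=[0.5, 0.5],
    # )

    # Workaround: FAISS-only retrieval.
    return faiss_retriever
```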

> > bm25
> > I just tried it and it can reach the efficiency of the original 0.2 version.

That is probably because my data set is small. Back on version 0.2, knowledge-base Q&A responded within 1 s; now it is about 600 ms slower.