
Errors occurred during the pipeline run, see logs for more details.

Open hanxu49 opened this issue 1 year ago • 8 comments

⠏ GraphRAG Indexer
├── Loading Input (text) - 1 files loaded (0 filtered) ━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:00
├── create_base_text_units
├── create_base_extracted_entities
└── create_summarized_entities
├── create_summarized_entities ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╸━━━━━━━ 82% 0:00:01 0:00:00
❌ Errors occurred during the pipeline run, see logs for more details.

hanxu49 avatar Jul 05 '24 01:07 hanxu49

If you've followed the getting started guide, you will see that some folders were created where you ran the indexing pipeline. There should be a file at output\{run_id}\reports\logs.json that contains more information.
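If you only want to skim the errors, something like this works (a rough sketch; it assumes logs.json holds one JSON object per line, and the run id below is a placeholder):

import json
from pathlib import Path

# Replace the date-stamped folder with your own {run_id}.
log_path = Path("output/20240705-000000/reports/logs.json")
for line in log_path.read_text(encoding="utf-8").splitlines():
    line = line.strip()
    if not line:
        continue
    entry = json.loads(line)
    if entry.get("type") == "error":
        print(entry["data"], "->", entry.get("source"))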

eyast avatar Jul 05 '24 02:07 eyast

Hi @hanxu49

Could you please add more details? Perhaps the configuration you used and the logs that @eyast mentions :)

AlonsoGuevara avatar Jul 09 '24 21:07 AlonsoGuevara

I met the same issue. And here is the logs: {"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n File "/root/miniconda3/envs/graphrag/lib/python3.11/site-packages/graphrag/llm/base/base_llm.py", line 53, in _invoke\n output = await self._execute_llm(input, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/root/miniconda3/envs/graphrag/lib/python3.11/site-packages/graphrag/llm/openai/openai_embeddings_llm.py", line 36, in _execute_llm\n embedding = await self.client.embeddings.create(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/root/miniconda3/envs/graphrag/lib/python3.11/site-packages/openai/resources/embeddings.py", line 215, in create\n return await self._post(\n ^^^^^^^^^^^^^^^^^\n File "/root/miniconda3/envs/graphrag/lib/python3.11/site-packages/openai/_base_client.py", line 1816, in post\n return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/root/miniconda3/envs/graphrag/lib/python3.11/site-packages/openai/_base_client.py", line 1514, in request\n return await self._request(\n ^^^^^^^^^^^^^^^^^^^^\n File "/root/miniconda3/envs/graphrag/lib/python3.11/site-packages/openai/_base_client.py", line 1610, in _request\n raise self._make_status_error_from_response(err.response) from None\nopenai.NotFoundError: Error code: 404 - {'detail': 'Not Found'}\n", "source": "Error code: 404 - {'detail': 'Not Found'}", "details": {"input": [""OLD GENTLEMAN":The Old Gentleman, a notable character with a distinctive appearance, .......

And my config is as follows:

encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: Not Empty
  type: openai_chat # or azure_openai_chat
  model: Qwen1.5-14B-Chat
  model_supports_json: true # recommended if this is available for your model.
  max_tokens: 24000
  api_base: http://localhost:6006/v1

parallelization:
  stagger: 0.3

async_mode: threaded # or asyncio

embeddings:
  async_mode: threaded # or asyncio
  llm:
    api_key: Not Empty
    type: openai_embedding # or azure_openai_embedding
    model: bge-large-zh-v15
    api_base: http://localhost:6006

Is the embedding type incorrect? If so, what type should I use? This is my local embedding service and I run it in Xinference. Thank you!
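One detail that stands out: the chat api_base ends in /v1 but the embeddings api_base does not, and the 404 comes from the embeddings call. A quick way to test the embedding endpoint outside of GraphRAG (a rough sketch, not a confirmed fix):

# Sketch: call the local embedding endpoint the same way GraphRAG does.
# Assumes an OpenAI-compatible server (Xinference here) on localhost:6006.
from openai import OpenAI

client = OpenAI(api_key="not-empty", base_url="http://localhost:6006/v1")  # note the /v1
resp = client.embeddings.create(model="bge-large-zh-v15", input=["test sentence"])
print(len(resp.data[0].embedding))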

cdg1921 avatar Jul 10 '24 14:07 cdg1921

the error log is like this: [ {"type": "error", "data": "Community Report Extraction Error", "stack": "Traceback (most recent call last):\n File "/usr/local/lib/python3.10/dist-packages/graphrag/index/graph/extractors/community_reports/community_reports_extractor.py", line 58, in call\n await self._llm(\n File "/usr/local/lib/python3.10/dist-packages/graphrag/llm/openai/json_parsing_llm.py", line 34, in call\n result = await self._delegate(input, **kwargs)\n File "/usr/local/lib/python3.10/dist-packages/graphrag/llm/openai/openai_token_replacing_llm.py", line 37, in call\n return await self._delegate(input, **kwargs)\n File "/usr/local/lib/python3.10/dist-packages/graphrag/llm/openai/openai_history_tracking_llm.py", line 33, in call\n output = await self._delegate(input, **kwargs)\n File "/usr/local/lib/python3.10/dist-packages/graphrag/llm/base/caching_llm.py", line 104, in call\n result = await self._delegate(input, **kwargs)\n File "/usr/local/lib/python3.10/dist-packages/graphrag/llm/base/rate_limiting_llm.py", line 177, in call\n result, start = await execute_with_retry()\n File "/usr/local/lib/python3.10/dist-packages/graphrag/llm/base/rate_limiting_llm.py", line 159, in execute_with_retry\n async for attempt in retryer:\n File "/usr/local/lib/python3.10/dist-packages/tenacity/asyncio/init.py", line 166, in anext\n do = await self.iter(retry_state=self._retry_state)\n File "/usr/local/lib/python3.10/dist-packages/tenacity/asyncio/init.py", line 153, in iter\n result = await action(retry_state)\n File "/usr/local/lib/python3.10/dist-packages/tenacity/_utils.py", line 99, in inner\n return call(*args, **kwargs)\n File "/usr/local/lib/python3.10/dist-packages/tenacity/init.py", line 398, in \n self._add_action_func(lambda rs: rs.outcome.result())\n File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result\n return self.__get_result()\n File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result\n raise self._exception\n File "/usr/local/lib/python3.10/dist-packages/graphrag/llm/base/rate_limiting_llm.py", line 165, in execute_with_retry\n return await do_attempt(), start\n File "/usr/local/lib/python3.10/dist-packages/graphrag/llm/base/rate_limiting_llm.py", line 147, in do_attempt\n return await self._delegate(input, **kwargs)\n File "/usr/local/lib/python3.10/dist-packages/graphrag/llm/base/base_llm.py", line 48, in call\n return await self._invoke_json(input, **kwargs)\n File "/usr/local/lib/python3.10/dist-packages/graphrag/llm/openai/openai_chat_llm.py", line 82, in _invoke_json\n result = await generate()\n File "/usr/local/lib/python3.10/dist-packages/graphrag/llm/openai/openai_chat_llm.py", line 74, in generate\n await self._native_json(input, **{**kwargs, "name": call_name})\n File "/usr/local/lib/python3.10/dist-packages/graphrag/llm/openai/openai_chat_llm.py", line 108, in _native_json\n json_output = try_parse_json_object(raw_output)\n File "/usr/local/lib/python3.10/dist-packages/graphrag/llm/openai/utils.py", line 93, in try_parse_json_object\n result = json.loads(input)\n File "/usr/lib/python3.10/json/init.py", line 346, in loads\n return _default_decoder.decode(s)\n File "/usr/lib/python3.10/json/decoder.py", line 337, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode\n raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n", "source": "Expecting value: line 1 column 
1 (char 0)", "details": null}, {"type": "error", "data": "Error executing verb "window" in create_final_community_reports: 'community'", "stack": "Traceback (most recent call last):\n File "/usr/local/lib/python3.10/dist-packages/datashaper/workflow/workflow.py", line 410, in _execute_verb\n result = node.verb.func(**verb_args)\n File "/usr/local/lib/python3.10/dist-packages/datashaper/engine/verbs/window.py", line 73, in window\n window = __window_function_mapwindow_operation\n File "/usr/local/lib/python3.10/dist-packages/pandas/core/frame.py", line 4102, in getitem\n indexer = self.columns.get_loc(key)\n File "/usr/local/lib/python3.10/dist-packages/pandas/core/indexes/range.py", line 417, in get_loc\n raise KeyError(key)\nKeyError: 'community'\n", "source": "'community'", "details": null}, {"type": "error", "data": "Error running pipeline!", "stack": "Traceback (most recent call last):\n File "/usr/local/lib/python3.10/dist-packages/graphrag/index/run.py", line 323, in run_pipeline\n result = await workflow.run(context, callbacks)\n File "/usr/local/lib/python3.10/dist-packages/datashaper/workflow/workflow.py", line 369, in run\n timing = await self._execute_verb(node, context, callbacks)\n File "/usr/local/lib/python3.10/dist-packages/datashaper/workflow/workflow.py", line 410, in _execute_verb\n result = node.verb.func(**verb_args)\n File "/usr/local/lib/python3.10/dist-packages/datashaper/engine/verbs/window.py", line 73, in window\n window = __window_function_mapwindow_operation\n File "/usr/local/lib/python3.10/dist-packages/pandas/core/frame.py", line 4102, in getitem\n indexer = self.columns.get_loc(key)\n File "/usr/local/lib/python3.10/dist-packages/pandas/core/indexes/range.py", line 417, in get_loc\n raise KeyError(key)\nKeyError: 'community'\n", "source": "'community'", "details": null}

]

hanxu49 avatar Jul 11 '24 02:07 hanxu49

I am also getting the same error. When I look into indexing-engine.log I see the following error. " raise self._make_status_error_from_response(err.response) from None openai.AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'}"

Any idea why it is failing on a non-existent URL, https://cognitiveservices.azure.com?

sashgeorge avatar Jul 12 '24 22:07 sashgeorge

{"type": "error", "data": "Error executing verb "orderby" in create_base_text_units: 'id'", "stack": "Traceback (most recent call last):\n File "D:\ProgramData\anaconda3\envs\graphrag\Lib\site-packages\datashaper\workflow\workflow.py", line 410, in _execute_verb\n result = node.verb.func(**verb_args)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "D:\ProgramData\anaconda3\envs\graphrag\Lib\site-packages\datashaper\engine\verbs\orderby.py", line 32, in orderby\n output = input_table.sort_values(by=columns, ascending=ascending)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "D:\ProgramData\anaconda3\envs\graphrag\Lib\site-packages\pandas\core\frame.py", line 7189, in sort_values\n k = self._get_label_or_level_values(by[0], axis=axis)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "D:\ProgramData\anaconda3\envs\graphrag\Lib\site-packages\pandas\core\generic.py", line 1911, in _get_label_or_level_values\n raise KeyError(key)\nKeyError: 'id'\n", "source": "'id'", "details": null} {"type": "error", "data": "Error running pipeline!", "stack": "Traceback (most recent call last):\n File "D:\ai\git\graphrag\graphrag\index\run.py", line 323, in run_pipeline\n result = await workflow.run(context, callbacks)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "D:\ProgramData\anaconda3\envs\graphrag\Lib\site-packages\datashaper\workflow\workflow.py", line 369, in run\n timing = await self._execute_verb(node, context, callbacks)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "D:\ProgramData\anaconda3\envs\graphrag\Lib\site-packages\datashaper\workflow\workflow.py", line 410, in _execute_verb\n result = node.verb.func(**verb_args)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "D:\ProgramData\anaconda3\envs\graphrag\Lib\site-packages\datashaper\engine\verbs\orderby.py", line 32, in orderby\n output = input_table.sort_values(by=columns, ascending=ascending)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "D:\ProgramData\anaconda3\envs\graphrag\Lib\site-packages\pandas\core\frame.py", line 7189, in sort_values\n k = self._get_label_or_level_values(by[0], axis=axis)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "D:\ProgramData\anaconda3\envs\graphrag\Lib\site-packages\pandas\core\generic.py", line 1911, in _get_label_or_level_values\n raise KeyError(key)\nKeyError: 'id'\n", "source": "'id'", "details": null}

zhoufan1978 avatar Jul 13 '24 09:07 zhoufan1978

create_base_entity_graph fails with: "D:\ProgramData\anaconda3\envs\graphrag\Lib\site-packages\graphrag\llm\openai\openai_chat_llm.py", line 58, in _execute_llm\n return completion.choices[0].message.content\n ^^^^^^^^^^^^^^^^^^\nAttributeError: 'str' object has no attribute 'choices'\n", "source": "'str' object has no attribute 'choices'",

zhoufan1978 avatar Jul 13 '24 11:07 zhoufan1978

when I try to create_base_entity_graph, the error occurs: {"type": "error", "data": "Error Invoking LLM", "stack": " | ExceptionGroup: multiple connection attempts failed (2 sub-exceptions)\n +-+---------------- 1 ----------------\n | Traceback (most recent call last):\n | File "D:\Anaconda\Lib\site-packages\anyio\_core\_sockets.py", line 170, in try_connect\n | stream = await asynclib.connect_tcp(remote_host, remote_port, local_address)\n | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n | File "D:\Anaconda\Lib\site-packages\anyio\_backends\_asyncio.py", line 2258, in connect_tcp\n | await get_running_loop().create_connection(\n | File "D:\Anaconda\Lib\asyncio\base_events.py", line 1122, in create_connection\n | raise exceptions[0]\n | File "D:\Anaconda\Lib\asyncio\base_events.py", line 1104, in create_connection\n | sock = await self._connect_sock(\n | ^^^^^^^^^^^^^^^^^^^^^^^^^\n | File "D:\Anaconda\Lib\asyncio\base_events.py", line 1007, in _connect_sock\n | await self.sock_connect(sock, address)\n | File "D:\Anaconda\Lib\asyncio\proactor_events.py", line 729, in sock_connect\n | return await self._proactor.connect(sock, address)\n | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n | File "D:\Anaconda\Lib\asyncio\tasks.py", line 385, in __wakeup\n | future.result()\n | File "D:\Anaconda\Lib\asyncio\windows_events.py", line 803, in _poll\n | value = callback(transferred, key, ov)\n | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n | .....File "D:\Anaconda\Lib\site-packages\pandas\core\indexers\utils.py", line 390, in check_key_length\n raise ValueError("Columns must be same length as key")\nValueError: Columns must be same length as key\n", "source": "Columns must be same length as key", "details": null} how can I fix this???

DANHONG-stack avatar Jul 18 '24 14:07 DANHONG-stack

Same problem here; I used the Ollama method.

yurochang avatar Jul 20 '24 16:07 yurochang

{"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n File "/root/graphrag/graphrag/llm/base/base_llm.py", line 53, in _invoke\n output = await self._execute_llm(input, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/root/graphrag/graphrag/llm/openai/openai_chat_llm.py", line 58, in _execute_llm\n return completion.choices[0].message.content\n ~~~~~~~~~~~~~~~~~~^^^\nTypeError: 'NoneType' object is not subscriptable\n", "source": "'NoneType' object is not subscriptable", "details": {"input": "\n-Goal-\nGiven a text document that is potentially relevant to this activity and a list of entity types, identify all entities of those types from the text and all relationships among the identified entities.\n\n-Steps-\n1. Identify all entities. For each identified entity, extract the following information:\n- entity_name: Name of the entity, capitalized\n- entity_type: One of the following types: [organization,person,geo,event]\n- entity_description: Comprehensive description of the entity's attributes and activities\nFormat each entity as ("entity"<|><entity_name><|><entity_type><|><entity_description>\n\n2. From the entities identified in step 1, identify all pairs of (source_entity, target_entity) that are clearly related to each other.\nFor each pair of related entities, extract the following information:\n- source_entity: name of the source entity, as identified in step 1\n- target_entity: name of the target entity, as identified in step 1\n- relationship_description: explanation as to why you think the source entity and the target entity are related to each other\n- relationship_strength: a numeric score indicating strength of the relationship between the source entity and target entity\n Format each relationship as ("relationship"<|><source_entity><|><target_entity><|><relationship_description><|><relationship_strength>)\n\n3. Return output in English as a single list of all the entities and relationships identified in steps 1 and 2. Use ## as the list delimiter.\n\n4. When finished, output <|COMPLETE|>\n\n######################\n-Examples-\n######################\nExample 1:\n\nEntity_types: [person, technology, mission, organization, location]\nText:\nwhile Alex clenched his jaw, the buzz of frustration dull against the backdrop of Taylor's authoritarian certainty. It was this competitive undercurrent that kept him alert, the sense that his and Jordan's shared commitment to discovery was an unspoken rebellion against Cruz's narrowing vision of control and order.\n\nThen Taylor did something unexpected. They paused beside Jordan and, for a moment, observed the device with something akin to reverence. \u201cIf this tech can be understood..." Taylor said, their voice quieter, "It could change the game for us. For all of us.\u201d\n\nThe underlying dismissal earlier seemed to falter, replaced by a glimpse of reluctant respect for the gravity of what lay in their hands. Jordan looked up, and for a fleeting heartbeat, their eyes locked with Taylor's, a wordless clash of wills softening into an uneasy truce.\n\nIt was a small transformation, barely perceptible, but one that Alex noted with an inward nod. 
They had all been brought here by different paths\n################\nOutput:\n("entity"<|>"Alex"<|>"person"<|>"Alex is a character who experiences frustration and is observant of the dynamics among other characters.")##\n("entity"<|>"Taylor"<|>"person"<|>"Taylor is portrayed with authoritarian certainty and shows a moment of reverence towards a device, indicating a change in perspective.")##\n("entity"<|>"Jordan"<|>"person"<|>"Jordan shares a commitment to discovery and has a significant interaction with Taylor regarding a device.")##\n("entity"<|>"Cruz"<|>"person"<|>"Cruz is associated with a vision of control and order, influencing the dynamics among other characters.")##\n("entity"<|>"The Device"<|>"technology"<|>"The Device is central to the story, with potential game-changing implications, and is revered by Taylor.")##\n("relationship"<|>"Alex"<|>"Taylor"<|>"Alex is affected by Taylor's authoritarian certainty and observes changes in Taylor's attitude towards the device."<|>7)##\n("relationship"<|>"Alex"<|>"Jordan"<|>"Alex and Jordan share a commitment to discovery, which contrasts with Cruz's vision."<|>6)##\n("relationship"<|>"Taylor"<|>"Jordan"<|>"Taylor and Jordan interact directly regarding the device, leading to a moment of mutual respect and an uneasy truce."<|>8)##\n("relationship"<|>"Jordan"<|>"Cruz"<|>"Jordan's commitment to discovery is in rebellion against Cruz's vision of control and order."<|>5)##\n("relationship"<|>"Taylor"<|>"The Device"<|>"Taylor shows reverence towards the device, indicating its importance and potential impact."<|>9)

The same mistake

aiChatGPT35User123 avatar Jul 23 '24 01:07 aiChatGPT35User123

Consolidating alternate model issues here: #657

natoverse avatar Jul 25 '24 00:07 natoverse

(Quoting @cdg1921's earlier comment above: the 404 "Error Invoking LLM" log and the local Qwen / bge embedding config.)

Has this been solved?

chongchongaikubao avatar Jul 25 '24 06:07 chongchongaikubao

{"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n File "/root/graphrag/graphrag/llm/base/base_llm.py", line 53, in _invoke\n output = await self._execute_llm(input, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/root/graphrag/graphrag/llm/openai/openai_chat_llm.py", line 58, in _execute_llm\n return completion.choices[0].message.content\n ~~~~~~~~~~~~~~~~~~^^^\nTypeError: 'NoneType' object is not subscriptable\n", "source": "'NoneType' object is not subscriptable", "details": {"input": "\n-Goal-\nGiven a text document that is potentially relevant to this activity and a list of entity types, identify all entities of those types from the text and all relationships among the identified entities.\n\n-Steps-\n1. Identify all entities. For each identified entity, extract the following information:\n- entity_name: Name of the entity, capitalized\n- entity_type: One of the following types: [organization,person,geo,event]\n- entity_description: Comprehensive description of the entity's attributes and activities\nFormat each entity as ("entity"<|><entity_name><|><entity_type><|><entity_description>\n\n2. From the entities identified in step 1, identify all pairs of (source_entity, target_entity) that are clearly related to each other.\nFor each pair of related entities, extract the following information:\n- source_entity: name of the source entity, as identified in step 1\n- target_entity: name of the target entity, as identified in step 1\n- relationship_description: explanation as to why you think the source entity and the target entity are related to each other\n- relationship_strength: a numeric score indicating strength of the relationship between the source entity and target entity\n Format each relationship as ("relationship"<|><source_entity><|><target_entity><|><relationship_description><|><relationship_strength>)\n\n3. Return output in English as a single list of all the entities and relationships identified in steps 1 and 2. Use ## as the list delimiter.\n\n4. When finished, output <|COMPLETE|>\n\n######################\n-Examples-\n######################\nExample 1:\n\nEntity_types: [person, technology, mission, organization, location]\nText:\nwhile Alex clenched his jaw, the buzz of frustration dull against the backdrop of Taylor's authoritarian certainty. It was this competitive undercurrent that kept him alert, the sense that his and Jordan's shared commitment to discovery was an unspoken rebellion against Cruz's narrowing vision of control and order.\n\nThen Taylor did something unexpected. They paused beside Jordan and, for a moment, observed the device with something akin to reverence. \u201cIf this tech can be understood..." Taylor said, their voice quieter, "It could change the game for us. For all of us.\u201d\n\nThe underlying dismissal earlier seemed to falter, replaced by a glimpse of reluctant respect for the gravity of what lay in their hands. Jordan looked up, and for a fleeting heartbeat, their eyes locked with Taylor's, a wordless clash of wills softening into an uneasy truce.\n\nIt was a small transformation, barely perceptible, but one that Alex noted with an inward nod. 
They had all been brought here by different paths\n################\nOutput:\n("entity"<|>"Alex"<|>"person"<|>"Alex is a character who experiences frustration and is observant of the dynamics among other characters.")##\n("entity"<|>"Taylor"<|>"person"<|>"Taylor is portrayed with authoritarian certainty and shows a moment of reverence towards a device, indicating a change in perspective.")##\n("entity"<|>"Jordan"<|>"person"<|>"Jordan shares a commitment to discovery and has a significant interaction with Taylor regarding a device.")##\n("entity"<|>"Cruz"<|>"person"<|>"Cruz is associated with a vision of control and order, influencing the dynamics among other characters.")##\n("entity"<|>"The Device"<|>"technology"<|>"The Device is central to the story, with potential game-changing implications, and is revered by Taylor.")##\n("relationship"<|>"Alex"<|>"Taylor"<|>"Alex is affected by Taylor's authoritarian certainty and observes changes in Taylor's attitude towards the device."<|>7)##\n("relationship"<|>"Alex"<|>"Jordan"<|>"Alex and Jordan share a commitment to discovery, which contrasts with Cruz's vision."<|>6)##\n("relationship"<|>"Taylor"<|>"Jordan"<|>"Taylor and Jordan interact directly regarding the device, leading to a moment of mutual respect and an uneasy truce."<|>8)##\n("relationship"<|>"Jordan"<|>"Cruz"<|>"Jordan's commitment to discovery is in rebellion against Cruz's vision of control and order."<|>5)##\n("relationship"<|>"Taylor"<|>"The Device"<|>"Taylor shows reverence towards the device, indicating its importance and potential impact."<|>9)

The same mistake

Has this been solved?

chongchongaikubao avatar Jul 25 '24 06:07 chongchongaikubao

{"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n File "/root/graphrag/graphrag/llm/base/base_llm.py", line 53, in _invoke\n output = await self._execute_llm(input, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/root/graphrag/graphrag/llm/openai/openai_chat_llm.py", line 58, in _execute_llm\n return completion.choices[0].message.content\n ~~~~~~~~~~~~~~~~~~^^^\nTypeError: 'NoneType' object is not subscriptable\n", "source": "'NoneType' object is not subscriptable", "details": {"input": "\n-Goal-\nGiven a text document that is potentially relevant to this activity and a list of entity types, identify all entities of those types from the text and all relationships among the identified entities.\n\n-Steps-\n1. Identify all entities. For each identified entity, extract the following information:\n- entity_name: Name of the entity, capitalized\n- entity_type: One of the following types: [organization,person,geo,event]\n- entity_description: Comprehensive description of the entity's attributes and activities\nFormat each entity as ("entity"<|><entity_name><|><entity_type><|><entity_description>\n\n2. From the entities identified in step 1, identify all pairs of (source_entity, target_entity) that are clearly related to each other.\nFor each pair of related entities, extract the following information:\n- source_entity: name of the source entity, as identified in step 1\n- target_entity: name of the target entity, as identified in step 1\n- relationship_description: explanation as to why you think the source entity and the target entity are related to each other\n- relationship_strength: a numeric score indicating strength of the relationship between the source entity and target entity\n Format each relationship as ("relationship"<|><source_entity><|><target_entity><|><relationship_description><|><relationship_strength>)\n\n3. Return output in English as a single list of all the entities and relationships identified in steps 1 and 2. Use ## as the list delimiter.\n\n4. When finished, output <|COMPLETE|>\n\n######################\n-Examples-\n######################\nExample 1:\n\nEntity_types: [person, technology, mission, organization, location]\nText:\nwhile Alex clenched his jaw, the buzz of frustration dull against the backdrop of Taylor's authoritarian certainty. It was this competitive undercurrent that kept him alert, the sense that his and Jordan's shared commitment to discovery was an unspoken rebellion against Cruz's narrowing vision of control and order.\n\nThen Taylor did something unexpected. They paused beside Jordan and, for a moment, observed the device with something akin to reverence. \u201cIf this tech can be understood..." Taylor said, their voice quieter, "It could change the game for us. For all of us.\u201d\n\nThe underlying dismissal earlier seemed to falter, replaced by a glimpse of reluctant respect for the gravity of what lay in their hands. Jordan looked up, and for a fleeting heartbeat, their eyes locked with Taylor's, a wordless clash of wills softening into an uneasy truce.\n\nIt was a small transformation, barely perceptible, but one that Alex noted with an inward nod. 
They had all been brought here by different paths\n################\nOutput:\n("entity"<|>"Alex"<|>"person"<|>"Alex is a character who experiences frustration and is observant of the dynamics among other characters.")##\n("entity"<|>"Taylor"<|>"person"<|>"Taylor is portrayed with authoritarian certainty and shows a moment of reverence towards a device, indicating a change in perspective.")##\n("entity"<|>"Jordan"<|>"person"<|>"Jordan shares a commitment to discovery and has a significant interaction with Taylor regarding a device.")##\n("entity"<|>"Cruz"<|>"person"<|>"Cruz is associated with a vision of control and order, influencing the dynamics among other characters.")##\n("entity"<|>"The Device"<|>"technology"<|>"The Device is central to the story, with potential game-changing implications, and is revered by Taylor.")##\n("relationship"<|>"Alex"<|>"Taylor"<|>"Alex is affected by Taylor's authoritarian certainty and observes changes in Taylor's attitude towards the device."<|>7)##\n("relationship"<|>"Alex"<|>"Jordan"<|>"Alex and Jordan share a commitment to discovery, which contrasts with Cruz's vision."<|>6)##\n("relationship"<|>"Taylor"<|>"Jordan"<|>"Taylor and Jordan interact directly regarding the device, leading to a moment of mutual respect and an uneasy truce."<|>8)##\n("relationship"<|>"Jordan"<|>"Cruz"<|>"Jordan's commitment to discovery is in rebellion against Cruz's vision of control and order."<|>5)##\n("relationship"<|>"Taylor"<|>"The Device"<|>"Taylor shows reverence towards the device, indicating its importance and potential impact."<|>9) The same mistake

解决了吗

Solved; then I ran into another problem.

aiChatGPT35User123 avatar Jul 25 '24 07:07 aiChatGPT35User123

{"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n File "/root/graphrag/graphrag/llm/base/base_llm.py", line 53, in _invoke\n output = await self._execute_llm(input, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/root/graphrag/graphrag/llm/openai/openai_chat_llm.py", line 58, in _execute_llm\n return completion.choices[0].message.content\n ~~~~~~~~~~~~~~~~~~^^^\nTypeError: 'NoneType' object is not subscriptable\n", "source": "'NoneType' object is not subscriptable", "details": {"input": "\n-Goal-\nGiven a text document that is potentially relevant to this activity and a list of entity types, identify all entities of those types from the text and all relationships among the identified entities.\n\n-Steps-\n1. Identify all entities. For each identified entity, extract the following information:\n- entity_name: Name of the entity, capitalized\n- entity_type: One of the following types: [organization,person,geo,event]\n- entity_description: Comprehensive description of the entity's attributes and activities\nFormat each entity as ("entity"<|><entity_name><|><entity_type><|><entity_description>\n\n2. From the entities identified in step 1, identify all pairs of (source_entity, target_entity) that are clearly related to each other.\nFor each pair of related entities, extract the following information:\n- source_entity: name of the source entity, as identified in step 1\n- target_entity: name of the target entity, as identified in step 1\n- relationship_description: explanation as to why you think the source entity and the target entity are related to each other\n- relationship_strength: a numeric score indicating strength of the relationship between the source entity and target entity\n Format each relationship as ("relationship"<|><source_entity><|><target_entity><|><relationship_description><|><relationship_strength>)\n\n3. Return output in English as a single list of all the entities and relationships identified in steps 1 and 2. Use ## as the list delimiter.\n\n4. When finished, output <|COMPLETE|>\n\n######################\n-Examples-\n######################\nExample 1:\n\nEntity_types: [person, technology, mission, organization, location]\nText:\nwhile Alex clenched his jaw, the buzz of frustration dull against the backdrop of Taylor's authoritarian certainty. It was this competitive undercurrent that kept him alert, the sense that his and Jordan's shared commitment to discovery was an unspoken rebellion against Cruz's narrowing vision of control and order.\n\nThen Taylor did something unexpected. They paused beside Jordan and, for a moment, observed the device with something akin to reverence. \u201cIf this tech can be understood..." Taylor said, their voice quieter, "It could change the game for us. For all of us.\u201d\n\nThe underlying dismissal earlier seemed to falter, replaced by a glimpse of reluctant respect for the gravity of what lay in their hands. Jordan looked up, and for a fleeting heartbeat, their eyes locked with Taylor's, a wordless clash of wills softening into an uneasy truce.\n\nIt was a small transformation, barely perceptible, but one that Alex noted with an inward nod. 
They had all been brought here by different paths\n################\nOutput:\n("entity"<|>"Alex"<|>"person"<|>"Alex is a character who experiences frustration and is observant of the dynamics among other characters.")##\n("entity"<|>"Taylor"<|>"person"<|>"Taylor is portrayed with authoritarian certainty and shows a moment of reverence towards a device, indicating a change in perspective.")##\n("entity"<|>"Jordan"<|>"person"<|>"Jordan shares a commitment to discovery and has a significant interaction with Taylor regarding a device.")##\n("entity"<|>"Cruz"<|>"person"<|>"Cruz is associated with a vision of control and order, influencing the dynamics among other characters.")##\n("entity"<|>"The Device"<|>"technology"<|>"The Device is central to the story, with potential game-changing implications, and is revered by Taylor.")##\n("relationship"<|>"Alex"<|>"Taylor"<|>"Alex is affected by Taylor's authoritarian certainty and observes changes in Taylor's attitude towards the device."<|>7)##\n("relationship"<|>"Alex"<|>"Jordan"<|>"Alex and Jordan share a commitment to discovery, which contrasts with Cruz's vision."<|>6)##\n("relationship"<|>"Taylor"<|>"Jordan"<|>"Taylor and Jordan interact directly regarding the device, leading to a moment of mutual respect and an uneasy truce."<|>8)##\n("relationship"<|>"Jordan"<|>"Cruz"<|>"Jordan's commitment to discovery is in rebellion against Cruz's vision of control and order."<|>5)##\n("relationship"<|>"Taylor"<|>"The Device"<|>"Taylor shows reverence towards the device, indicating its importance and potential impact."<|>9) The same mistake

解决了吗

解决了,又碰到了另一个问题

How did you solve it?

yuanzhiyong1999 avatar Jul 25 '24 07:07 yuanzhiyong1999

{"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n File "/root/graphrag/graphrag/llm/base/base_llm.py", line 53, in _invoke\n output = await self._execute_llm(input, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/root/graphrag/graphrag/llm/openai/openai_chat_llm.py", line 58, in _execute_llm\n return completion.choices[0].message.content\n ~~~~~~~~~~~~~~~~~~^^^\nTypeError: 'NoneType' object is not subscriptable\n", "source": "'NoneType' object is not subscriptable", "details": {"input": "\n-Goal-\nGiven a text document that is potentially relevant to this activity and a list of entity types, identify all entities of those types from the text and all relationships among the identified entities.\n\n-Steps-\n1. Identify all entities. For each identified entity, extract the following information:\n- entity_name: Name of the entity, capitalized\n- entity_type: One of the following types: [organization,person,geo,event]\n- entity_description: Comprehensive description of the entity's attributes and activities\nFormat each entity as ("entity"<|><entity_name><|><entity_type><|><entity_description>\n\n2. From the entities identified in step 1, identify all pairs of (source_entity, target_entity) that are clearly related to each other.\nFor each pair of related entities, extract the following information:\n- source_entity: name of the source entity, as identified in step 1\n- target_entity: name of the target entity, as identified in step 1\n- relationship_description: explanation as to why you think the source entity and the target entity are related to each other\n- relationship_strength: a numeric score indicating strength of the relationship between the source entity and target entity\n Format each relationship as ("relationship"<|><source_entity><|><target_entity><|><relationship_description><|><relationship_strength>)\n\n3. Return output in English as a single list of all the entities and relationships identified in steps 1 and 2. Use ## as the list delimiter.\n\n4. When finished, output <|COMPLETE|>\n\n######################\n-Examples-\n######################\nExample 1:\n\nEntity_types: [person, technology, mission, organization, location]\nText:\nwhile Alex clenched his jaw, the buzz of frustration dull against the backdrop of Taylor's authoritarian certainty. It was this competitive undercurrent that kept him alert, the sense that his and Jordan's shared commitment to discovery was an unspoken rebellion against Cruz's narrowing vision of control and order.\n\nThen Taylor did something unexpected. They paused beside Jordan and, for a moment, observed the device with something akin to reverence. \u201cIf this tech can be understood..." Taylor said, their voice quieter, "It could change the game for us. For all of us.\u201d\n\nThe underlying dismissal earlier seemed to falter, replaced by a glimpse of reluctant respect for the gravity of what lay in their hands. Jordan looked up, and for a fleeting heartbeat, their eyes locked with Taylor's, a wordless clash of wills softening into an uneasy truce.\n\nIt was a small transformation, barely perceptible, but one that Alex noted with an inward nod. 
They had all been brought here by different paths\n################\nOutput:\n("entity"<|>"Alex"<|>"person"<|>"Alex is a character who experiences frustration and is observant of the dynamics among other characters.")##\n("entity"<|>"Taylor"<|>"person"<|>"Taylor is portrayed with authoritarian certainty and shows a moment of reverence towards a device, indicating a change in perspective.")##\n("entity"<|>"Jordan"<|>"person"<|>"Jordan shares a commitment to discovery and has a significant interaction with Taylor regarding a device.")##\n("entity"<|>"Cruz"<|>"person"<|>"Cruz is associated with a vision of control and order, influencing the dynamics among other characters.")##\n("entity"<|>"The Device"<|>"technology"<|>"The Device is central to the story, with potential game-changing implications, and is revered by Taylor.")##\n("relationship"<|>"Alex"<|>"Taylor"<|>"Alex is affected by Taylor's authoritarian certainty and observes changes in Taylor's attitude towards the device."<|>7)##\n("relationship"<|>"Alex"<|>"Jordan"<|>"Alex and Jordan share a commitment to discovery, which contrasts with Cruz's vision."<|>6)##\n("relationship"<|>"Taylor"<|>"Jordan"<|>"Taylor and Jordan interact directly regarding the device, leading to a moment of mutual respect and an uneasy truce."<|>8)##\n("relationship"<|>"Jordan"<|>"Cruz"<|>"Jordan's commitment to discovery is in rebellion against Cruz's vision of control and order."<|>5)##\n("relationship"<|>"Taylor"<|>"The Device"<|>"Taylor shows reverence towards the device, indicating its importance and potential impact."<|>9) The same mistake

解决了吗

解决了,又碰到了另一个问题

请问是怎么解决的呢?

In my case it was because the model name and the model URL didn't match.
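(A minimal sketch of how to check the match, assuming an OpenAI-compatible endpoint; the base_url below is only an example.)

# Sketch: list the model names the endpoint actually serves, so the
# `model:` value in settings.yaml can be matched against them.
from openai import OpenAI

client = OpenAI(api_key="not-empty", base_url="http://localhost:11434/v1")  # example URL
for m in client.models.list():
    print(m.id)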

aiChatGPT35User123 avatar Jul 25 '24 07:07 aiChatGPT35User123

(Quoting @DANHONG-stack's earlier "multiple connection attempts failed" / "Columns must be same length as key" comment above.)

Has this been resolved?

kakalong136 avatar Jul 27 '24 08:07 kakalong136

(Quoting the same "Columns must be same length as key" comment and the question "Has this been resolved?")

First, check whether your LLM is actually running. If it is, replace the file graphrag-local-ollama\graphrag\index\verbs\graph\clustering\cluster_graph.py with the following:

# Copyright (c) 2024 Microsoft Corporation.
# Licensed under the MIT License

"""A module containing cluster_graph, apply_clustering and run_layout methods definition."""

import logging
from enum import Enum
from random import Random
from typing import Any, cast

import networkx as nx
import pandas as pd
from datashaper import TableContainer, VerbCallbacks, VerbInput, progress_iterable, verb

from graphrag.index.utils import gen_uuid, load_graph

from .typing import Communities

log = logging.getLogger(__name__)


@verb(name="cluster_graph")
def cluster_graph(
    input: VerbInput,
    callbacks: VerbCallbacks,
    strategy: dict[str, Any],
    column: str,
    to: str,
    level_to: str | None = None,
    **_kwargs,
) -> TableContainer:
    output_df = cast(pd.DataFrame, input.get_input())
    results = output_df[column].apply(lambda graph: run_layout(strategy, graph))

    community_map_to = "communities"
    output_df[community_map_to] = results

    level_to = level_to or f"{to}_level"
    output_df[level_to] = output_df.apply(
        lambda x: list({level for level, _, _ in x[community_map_to]}), axis=1
    )
    output_df[to] = [None] * len(output_df)

    num_total = len(output_df)

    graph_level_pairs_column: list[list[tuple[int, str]]] = []
    for _, row in progress_iterable(
        output_df.iterrows(), callbacks.progress, num_total
    ):
        levels = row[level_to]
        graph_level_pairs: list[tuple[int, str]] = []

        for level in levels:
            graph = "\n".join(
                nx.generate_graphml(
                    apply_clustering(
                        cast(str, row[column]),
                        cast(Communities, row[community_map_to]),
                        level,
                    )
                )
            )
            graph_level_pairs.append((level, graph))
        graph_level_pairs_column.append(graph_level_pairs)

    output_df[to] = graph_level_pairs_column

    # explode the list of (level, graph) pairs into separate rows
    output_df = output_df.explode(to, ignore_index=True)

    graph_level_pairs = output_df[to].tolist()

    # Debug output
    log.debug("graph_level_pairs: %s", graph_level_pairs)
    log.debug("output_df: %s", output_df)
    log.debug("Length of graph_level_pairs: %d", len(graph_level_pairs))
    log.debug("Length of output_df: %d", len(output_df))

    # Make sure the lengths match before splitting into columns
    if len(graph_level_pairs) != len(output_df):
        raise ValueError(
            "Data length mismatch: {} != {}".format(len(graph_level_pairs), len(output_df))
        )

    # Split the (level, graph) pairs into separate columns
    output_df[[level_to, to]] = pd.DataFrame(graph_level_pairs, index=output_df.index)

    # Clean up the community mapping column
    output_df.drop(columns=[community_map_to], inplace=True)

    return TableContainer(table=output_df)


def apply_clustering(
    graphml: str, communities: Communities, level=0, seed=0xF001
) -> nx.Graph:
    random = Random(seed)  # noqa S311
    graph = nx.parse_graphml(graphml)
    for community_level, community_id, nodes in communities:
        if level == community_level:
            for node in nodes:
                graph.nodes[node]["cluster"] = community_id
                graph.nodes[node]["level"] = level

    for node_degree in graph.degree:
        graph.nodes[str(node_degree[0])]["degree"] = int(node_degree[1])

    for index, node in enumerate(graph.nodes()):
        graph.nodes[node]["human_readable_id"] = index
        graph.nodes[node]["id"] = str(gen_uuid(random))

    for index, edge in enumerate(graph.edges()):
        graph.edges[edge]["id"] = str(gen_uuid(random))
        graph.edges[edge]["human_readable_id"] = index
        graph.edges[edge]["level"] = level
    return graph


class GraphCommunityStrategyType(str, Enum):
    leiden = "leiden"

    def __repr__(self):
        return f'"{self.value}"'


def run_layout(
    strategy: dict[str, Any], graphml_or_graph: str | nx.Graph
) -> Communities:
    graph = load_graph(graphml_or_graph)
    if len(graph.nodes) == 0:
        log.warning("Graph has no nodes")
        return []

    clusters: dict[int, dict[str, list[str]]] = {}
    strategy_type = strategy.get("type", GraphCommunityStrategyType.leiden)
    match strategy_type:
        case GraphCommunityStrategyType.leiden:
            from .strategies.leiden import run as run_leiden

            clusters = run_leiden(graph, strategy)
        case _:
            msg = f"Unknown clustering strategy {strategy_type}"
            raise ValueError(msg)

    results: Communities = []
    for level in clusters:
        for cluster_id, nodes in clusters[level].items():
            if not isinstance(level, int) or not isinstance(cluster_id, str) or not isinstance(nodes, list):
                raise ValueError("Invalid data format in clustering results")
            results.append((level, cluster_id, nodes))
    return results

aiChatGPT35User123 avatar Jul 29 '24 01:07 aiChatGPT35User123

same, {"type": "error", "data": "Error running pipeline!", "stack": "Traceback (most recent call last):\n File "D:\AppGallery\miniconda\lib\site-packages\graphrag\index\run.py", line 323, in run_pipeline\n result = await workflow.run(context, callbacks)\n File "D:\AppGallery\miniconda\lib\site-packages\datashaper\workflow\workflow.py", line 369, in run\n timing = await self._execute_verb(node, context, callbacks)\n File "D:\AppGallery\miniconda\lib\site-packages\datashaper\workflow\workflow.py", line 410, in _execute_verb\n result = node.verb.func(**verb_args)\n File "D:\AppGallery\miniconda\lib\site-packages\graphrag\index\verbs\graph\clustering\cluster_graph.py", line 102, in cluster_graph\n output_df[[level_to, to]] = pd.DataFrame(\n File "D:\AppGallery\miniconda\lib\site-packages\pandas\core\frame.py", line 4299, in setitem\n self._setitem_array(key, value)\n File "D:\AppGallery\miniconda\lib\site-packages\pandas\core\frame.py", line 4341, in _setitem_array\n check_key_length(self.columns, key, value)\n File "D:\AppGallery\miniconda\lib\site-packages\pandas\core\indexers\utils.py", line 390, in check_key_length\n raise ValueError("Columns must be same length as key")\nValueError: Columns must be same length as key\n", "source": "Columns must be same length as key", "details": null}


lqbin007 avatar Jul 30 '24 05:07 lqbin007

I also encountered the 401 issue. {"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n File "B:\conda\envs\rag\lib\site-packages\graphrag\llm\base\base_llm.py", line 53, in _invoke\n output = await self._execute_llm(input, **kwargs)\n File "B:\conda\envs\rag\lib\site-packages\graphrag\llm\openai\openai_chat_llm.py", line 53, in _execute_llm\n completion = await self.client.chat.completions.create(\n File "B:\conda\envs\rag\lib\site-packages\openai\resources\chat\completions.py", line 1305, in create\n return await self._post(\n File "B:\conda\envs\rag\lib\site-packages\openai\_base_client.py", line 1815, in post\n return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)\n File "B:\conda\envs\rag\lib\site-packages\openai\_base_client.py", line 1509, in request\n return await self._request(\n File "B:\conda\envs\rag\lib\site-packages\openai\_base_client.py", line 1610, in _request\n raise self._make_status_error_from_response(err.response) from None\nopenai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: ollama. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}\n", "source": "Error code: 401 - {'error': {'message': 'Incorrect API key provided: ollama. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}", "details": {"input": "\n-Goal-\nGiven a text document that is potentially relevant to this activity and a list of entity types, identify all entities of those types from the text and all relationships among the identified entities.\n \n-Steps-\n1. Identify all entities. For each identified entity, extract the following information:\n- entity_name: Name of the entity, capitalized\n- entity_type: One of the following types: [organization,person,geo,event]\n- entity_description: Comprehensive description of the entity's attributes and activities\nFormat each entity as ("entity"<|><entity_name><|><entity_type><|><entity_description>)\n \n2. From the entities identified in step 1, identify all pairs of (source_entity, target_entity) that are clearly related to each other.\nFor each pair of related entities, extract the following information:\n- source_entity: name of the source entity, as identified in step 1\n- target_entity: name of the target entity, as identified in step 1\n- relationship_description: explanation as to why you think the source entity and the target entity are related to each other\n- relationship_strength: a numeric score indicating strength of the relationship between the source entity and target entity\n Format each relationship as ("relationship"<|><source_entity><|><target_entity><|><relationship_description><|><relationship_strength>)\n \n3. Return output in English as a single list of all the entities and relationships identified in steps 1 and 2. Use ## as the list delimiter.\n \n4. When finished, output <|COMPLETE|>\n \n######################\n-Examples-\n######################\nExample 1:\nEntity_types: ORGANIZATION,PERSON\nText:\nThe Verdantis's Central Institution is scheduled to meet on Monday and Thursday, with the institution planning to release its latest policy decision on Thursday at 1:30 p.m. PDT, followed by a press conference where Central Institution Chair Martin Smith will take questions. 
Investors expect the Market Strategy Committee to hold its benchmark interest rate steady in a range of 3.5%-3.75%.\n######################\nOutput:\n("entity"<|>CENTRAL INSTITUTION<|>ORGANIZATION<|>The Central Institution is the Federal Reserve of Verdantis, which is setting interest rates on Monday and Thursday)\n##\n("entity"<|>MARTIN SMITH<|>PERSON<|>Martin Smith is the chair of the Central Institution)\n##\n("entity"<|>MARKET STRATEGY COMMITTEE<|>ORGANIZATION<|>The Central Institution committee makes key decisions about interest rates and the growth of Verdantis's money supply)\n##\n("relationship"<|>MARTIN SMITH<|>CENTRAL INSTITUTION<|>Martin Smith is the Chair of the Central Institution and will answer questions at a press

My configuration file is as follows:

encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ollama
  type: openai_chat # or azure_openai_chat
  model: llama3.1
  model_supports_json: true # recommended if this is available for your model.
  max_tokens: 4000
  request_timeout: 180.0
  api_base: http://localhost:11434/v1
  api_version: 2024-02-15-preview
  organization: <organization_id>
  deployment_name: <azure_model_deployment_name>
  tokens_per_minute: 150_000 # set a leaky bucket throttle
  requests_per_minute: 10_000 # set a leaky bucket throttle
  max_retries: 10
  max_retry_wait: 10.0
  sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  concurrent_requests: 25 # the number of parallel inflight requests that may be made
  temperature: 0 # temperature for sampling
  top_p: 1 # top-p sampling
  n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  # parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ollama
    type: openai_embedding # or azure_openai_embedding
    model: nomic-embed-text
    # api_base: http://localhost:11434/api
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional
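As a sanity check (a minimal sketch, not something from the original run), the Ollama endpoint can be exercised directly with the same OpenAI client that GraphRAG uses; if this works but the 401 still mentions platform.openai.com, that narrows the problem down to the settings rather than the endpoint:

# Sketch: call Ollama's OpenAI-compatible chat endpoint directly.
# Assumes Ollama is serving llama3.1 at http://localhost:11434/v1.
from openai import OpenAI

client = OpenAI(api_key="ollama", base_url="http://localhost:11434/v1")
resp = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(resp.choices[0].message.content)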

chenfenzixian avatar Aug 08 '24 14:08 chenfenzixian

Thank you for your reply. I checked and the .txt file's encoding is UTF-8, and after running it again the problem is still not solved. But thanks for sharing your experience.
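For reference, this is roughly how the encoding can be double-checked (a small sketch; it assumes the documents sit in the default input folder):

# Sketch: confirm every input .txt file decodes cleanly as UTF-8.
from pathlib import Path

for path in Path("input").glob("*.txt"):
    try:
        path.read_text(encoding="utf-8")
        print(f"{path}: OK (UTF-8)")
    except UnicodeDecodeError as exc:
        print(f"{path}: not UTF-8 -> {exc}")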

bushnerd @.***> wrote on Sunday, August 11, 2024 at 17:22:

I also encountered this problem. Eventually, I found out that it was because the encoding format of the.txt file I placed in the Input folder was not UTF-8. After changing the encoding format, there was no such problem anymore.
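In case it helps anyone checking the same thing, here is a small sketch for verifying whether an input file actually decodes as UTF-8; the file path and the fallback source encoding are placeholders, not anything from this thread:

from pathlib import Path

raw = Path("input/book.txt").read_bytes()   # placeholder path for a file under ./input
try:
    raw.decode("utf-8")
    print("File decodes as UTF-8.")
except UnicodeDecodeError as exc:
    print(f"Not valid UTF-8: {exc}")
    # One possible fix, assuming the original encoding is known (e.g. gb18030):
    # Path("input/book.txt").write_text(raw.decode("gb18030"), encoding="utf-8")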


chenfenzixian avatar Aug 12 '24 06:08 chenfenzixian

https://github.com/microsoft/graphrag/issues/369#issuecomment-2226840030

@zhoufan1978 May I ask if your issue has been resolved?

tinaYA524 avatar Oct 10 '24 02:10 tinaYA524

No, it has not.


chenfenzixian avatar Oct 11 '24 01:10 chenfenzixian

Hello, how to fix it?

❌ extract_graph None
⠹ GraphRAG Indexer
├── Loading Input (InputFileType.text) - 1 files loaded (0 filtered) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:00
├── create_base_text_units ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:00
├── create_final_documents ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:00
❌ Errors occurred during the pipeline run, see logs for more details.
20:50:49,158 graphrag.index.run.run_workflows ERROR error running workflow extract_graph
Traceback (most recent call last):
  File "D:\LLM\graph_RAG\graphrag\graphrag\index\run\run_workflows.py", line 169, in _run_workflows
    result = await run_workflow(
             ^^^^^^^^^^^^^^^^^^^
  File "D:\LLM\graph_RAG\graphrag\graphrag\index\workflows\extract_graph.py", line 45, in run_workflow
    base_entity_nodes, base_relationship_edges = await extract_graph(
                                                 ^^^^^^^^^^^^^^^^^^^^
  File "D:\LLM\graph_RAG\graphrag\graphrag\index\flows\extract_graph.py", line 33, in extract_graph
    entities, relationships = await extract_entities(
                              ^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\LLM\graph_RAG\graphrag\graphrag\index\operations\extract_entities\extract_entities.py", line 136, in extract_entities
    entities = _merge_entities(entity_dfs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\LLM\graph_RAG\graphrag\graphrag\index\operations\extract_entities\extract_entities.py", line 169, in _merge_entities
    all_entities.groupby(["title", "type"], sort=False)
  File "C:\Users\huangjw\anaconda3\envs\GraphRAG\Lib\site-packages\pandas\core\frame.py", line 9183, in groupby
    return DataFrameGroupBy(
           ^^^^^^^^^^^^^^^^^
  File "C:\Users\huangjw\anaconda3\envs\GraphRAG\Lib\site-packages\pandas\core\groupby\groupby.py", line 1329, in __init__
    grouper, exclusions, obj = get_grouper(
                               ^^^^^^^^^^^^
  File "C:\Users\huangjw\anaconda3\envs\GraphRAG\Lib\site-packages\pandas\core\groupby\grouper.py", line 1043, in get_grouper
    raise KeyError(gpr)
KeyError: 'title'
20:50:49,169 graphrag.callbacks.file_workflow_callbacks INFO Error running pipeline! details=None
20:50:49,184 graphrag.cli.index ERROR Errors occurred during the pipeline run, see logs for more details.
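If it helps while debugging: the traceback shows the failure is in _merge_entities, where pandas groups the extracted entities by "title" and "type". A KeyError: 'title' there means the merged entity DataFrame has no such column, which is what you get when entity extraction produced nothing usable (for instance, when the model's responses could not be parsed). A minimal stand-alone reproduction of that pandas behaviour — the empty DataFrame is only a stand-in, not GraphRAG's real data:

import pandas as pd

# Stand-in for an extraction step that produced no entities at all.
all_entities = pd.DataFrame()

try:
    all_entities.groupby(["title", "type"], sort=False)
except KeyError as exc:
    print(f"KeyError: {exc}")  # -> KeyError: 'title', the same failure as in the log above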

huangjw2001 avatar Jan 10 '25 12:01 huangjw2001

@huangjw2001 Have you resolved it?

KDD2018 avatar Jan 20 '25 10:01 KDD2018

I have the same error. Have you resolved it? This error has been bothering me for days.

zeus-y avatar Jan 21 '25 03:01 zeus-y

No. It works when I use the Ollama model server, but when I use the SGLang model server, GraphRAG throws this error.
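In case it helps to narrow this down, one thing worth trying is hitting the SGLang server directly with the OpenAI client, outside of GraphRAG, and checking that plain chat completions come back well-formed. This is only a sketch, assuming the server exposes the OpenAI-compatible /v1 route; the port and model name are placeholders for whatever the server was actually launched with:

from openai import OpenAI

# Placeholders: adjust base_url and model to match the SGLang launch parameters.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen2.5-7B-Instruct",
    messages=[{"role": "user", "content": "Reply with a short JSON object."}],
)
print(resp.choices[0].message.content)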

KDD2018 avatar Jan 21 '25 05:01 KDD2018

@KDD2018 Hi, have you resolved the problem? I have the same issue.

Mybigwang avatar Feb 12 '25 15:02 Mybigwang

Hi, have you resolved the problem? I have the same issue.

LG-lub avatar Feb 14 '25 02:02 LG-lub