RuntimeError: Failed to generate valid JSON output
Do you need to file an issue?
- [x] I have searched the existing issues and this bug is not already filed.
- [ ] My model is hosted on OpenAI or Azure. If not, please look at the "model providers" issue and don't file a new one here.
- [ ] I believe this is a legitimate bug, not just a question. If this is a question, please use the Discussions area.
Describe the bug
Why does the error go away when I substitute the validation function as shown below?

```python
# is_response_valid = kwargs.get("is_response_valid") or (lambda _x: True)
def is_response_valid(x, **kwargs):
    is_response_valid = kwargs.get("is_response_valid") or (lambda _x: True)
    if is_response_valid(x):
        return True
    else:
        return False
```
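For context, here is a minimal sketch (hypothetical, not graphrag's actual implementation) of how a JSON-producing call site might use an `is_response_valid` hook, and why a permissive fallback validator can suppress the `RuntimeError`. The function names and retry logic here are assumptions for illustration only:

```python
import json

FAILED_TO_CREATE_JSON_ERROR = "Failed to generate valid JSON output"


def invoke_json(generate, max_retries=3, **kwargs):
    """Hypothetical sketch: retry a generation call until its output both
    parses as JSON and passes the caller-supplied is_response_valid check."""
    # When no validator is supplied, fall back to accepting anything.
    is_response_valid = kwargs.get("is_response_valid") or (lambda _x: True)
    for _ in range(max_retries):
        raw = generate()
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON: try another generation
        if is_response_valid(parsed):
            return parsed
    raise RuntimeError(FAILED_TO_CREATE_JSON_ERROR)


# A strict validator rejects output until a well-formed response appears:
outputs = iter(['not json', '{"title": "ok"}'])
result = invoke_json(lambda: next(outputs),
                     is_response_valid=lambda d: "title" in d)
```

Under this sketch, if every attempt fails parsing or validation, the loop exhausts its retries and raises the `RuntimeError` seen in the traceback; a validator that always returns `True` only masks the underlying malformed-output problem.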
Steps to reproduce
No response
Expected Behavior
No response
GraphRAG Config Used
# Paste your config here
Logs and screenshots
```
16:42:35,41 graphrag.index.graph.extractors.community_reports.community_reports_extractor ERROR error generating community report
Traceback (most recent call last):
  File "/home/chorustek/work/graphrag/graphrag/graphrag/index/graph/extractors/community_reports/community_reports_extractor.py", line 58, in __call__
    await self._llm(
  File "/home/chorustek/work/graphrag/graphrag/graphrag/llm/openai/json_parsing_llm.py", line 34, in __call__
    result = await self._delegate(input, **kwargs)
  File "/home/chorustek/work/graphrag/graphrag/graphrag/llm/openai/openai_token_replacing_llm.py", line 37, in __call__
    return await self._delegate(input, **kwargs)
  File "/home/chorustek/work/graphrag/graphrag/graphrag/llm/openai/openai_history_tracking_llm.py", line 33, in __call__
    output = await self._delegate(input, **kwargs)
  File "/home/chorustek/work/graphrag/graphrag/graphrag/llm/base/caching_llm.py", line 96, in __call__
    result = await self._delegate(input, **kwargs)
  File "/home/chorustek/work/graphrag/graphrag/graphrag/llm/base/rate_limiting_llm.py", line 177, in __call__
    result, start = await execute_with_retry()
  File "/home/chorustek/work/graphrag/graphrag/graphrag/llm/base/rate_limiting_llm.py", line 159, in execute_with_retry
    async for attempt in retryer:
  File "/home/chorustek/miniconda3/envs/graphrag/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 166, in __anext__
    do = await self.iter(retry_state=self._retry_state)
  File "/home/chorustek/miniconda3/envs/graphrag/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 153, in iter
    result = await action(retry_state)
  File "/home/chorustek/miniconda3/envs/graphrag/lib/python3.10/site-packages/tenacity/_utils.py", line 99, in inner
    return call(*args, **kwargs)
  File "/home/chorustek/miniconda3/envs/graphrag/lib/python3.10/site-packages/tenacity/__init__.py", line 398, in <lambda>
    self._add_action_func(lambda rs: rs.outcome.result())
  File "/home/chorustek/miniconda3/envs/graphrag/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/home/chorustek/miniconda3/envs/graphrag/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/home/chorustek/work/graphrag/graphrag/graphrag/llm/base/rate_limiting_llm.py", line 165, in execute_with_retry
    return await do_attempt(), start
  File "/home/chorustek/work/graphrag/graphrag/graphrag/llm/base/rate_limiting_llm.py", line 147, in do_attempt
    return await self._delegate(input, **kwargs)
  File "/home/chorustek/work/graphrag/graphrag/graphrag/llm/base/base_llm.py", line 48, in __call__
    return await self._invoke_json(input, **kwargs)
  File "/home/chorustek/work/graphrag/graphrag/graphrag/llm/openai/openai_chat_llm.py", line 94, in _invoke_json
    raise RuntimeError(FAILED_TO_CREATE_JSON_ERROR)
RuntimeError: Failed to generate valid JSON output
```
Additional Information
- GraphRAG Version:
- Operating System:
- Python Version:
- Related Issues: