BireleyX
Same for me; I encountered the same error when converting one of my docx files. The document uses the same template as other docx files that can be converted and saved as markdown...
@danielaskdd Please also include this issue in relation to supporting 'reasoning models' like o3. I'm using the lightrag-server setup:

```
File "C:\Apps\dVALi\.venv\Lib\site-packages\openai\_base_client.py", line 1562, in _request
    raise self._make_status_error_from_response(err.response) from None
```
...
**I'm using Azure OpenAI o3-mini.** I was able to add support for this by modifying these files:

lightrag_server.py

```python
async def azure_openai_model_complete(
    prompt,
    system_prompt=None,
    history_messages=None,
    keyword_extraction=False,
    **kwargs,
) -> str:
    ...
```
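For context, reasoning models in the o1/o3 family typically reject classic sampling parameters (`temperature`, `top_p`, etc.) and expect `max_completion_tokens` instead of `max_tokens`, which is a common cause of the 400 error shown above. A minimal sketch of a kwargs sanitizer; the helper name and the exact parameter set are my own assumptions, not LightRAG code:

```python
# Sampling parameters that o3-style reasoning models commonly reject
# (assumed set for illustration; check the error message for your model).
UNSUPPORTED_SAMPLING_PARAMS = {
    "temperature",
    "top_p",
    "presence_penalty",
    "frequency_penalty",
}


def sanitize_reasoning_kwargs(kwargs: dict) -> dict:
    """Return a copy of completion kwargs safe for reasoning models:
    drop unsupported sampling parameters and rename max_tokens to
    max_completion_tokens."""
    clean = {k: v for k, v in kwargs.items() if k not in UNSUPPORTED_SAMPLING_PARAMS}
    if "max_tokens" in clean:
        clean["max_completion_tokens"] = clean.pop("max_tokens")
    return clean
```

The sanitized dict can then be passed straight to the Azure OpenAI chat-completions call in place of the raw kwargs.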
One screenshot was taken for each of the query modes. From top to bottom: global, hybrid, local, mix, naive
This is taken from the WebUI document management tab. I think this may be related. The error message started appearing after updating LightRAG to 1.2.6:

And here...
I had a similar issue. I was able to reduce 'orphan' entities by using a better LLM and also by reducing the chunk size to a value between 200 and 400 and the overlap to...
What I meant by a better LLM was changing from gpt-4o-mini to gpt-4o. When possible, use the model variant with the most parameters your hardware (in the case of local AI) can...
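The chunking advice above can be sketched as a plain overlapping token-window splitter. This is illustrative only: LightRAG has its own chunker and configuration, and the function and parameter names here are hypothetical:

```python
def chunk_tokens(tokens, chunk_size=300, overlap=50):
    """Split a token sequence into overlapping chunks.

    Smaller chunks (roughly 200-400 tokens) with a modest overlap
    tend to give the extraction LLM tighter context, which can reduce
    'orphan' entities that never get linked by a relation.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # advance by chunk_size minus the overlap
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break  # last window already covers the tail
    return chunks
```

With `chunk_size=300` and `overlap=50`, consecutive chunks share their last/first 50 tokens, so entities mentioned near a boundary appear in both windows.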
This is a very good feature request! I was already planning to use a Docling service on a separate server in conjunction with LightRAG.
I don't think converting the existing API to an MCP server is necessary, or even the correct approach. MCP servers are API wrappers... meant to make interaction with AI consistent. I...
@tilleul Same here. I couldn't successfully install megaparse in my Windows project because of uvloop. I'm also new to Python and not familiar with setting up Docker (if that was a solution...
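For anyone hitting this: uvloop does not support Windows, which is why the install fails there. Where a project's loop setup can be adjusted, a common pattern is to treat uvloop as an optional, non-Windows dependency and fall back to the default asyncio loop. A small sketch (the helper name is my own, not from megaparse):

```python
import sys


def install_fast_event_loop() -> str:
    """Install uvloop where possible; otherwise keep the stock asyncio loop.

    uvloop provides no Windows wheels or support, so on win32 we skip it
    entirely instead of letting the import (or the install) fail.
    """
    if sys.platform == "win32":
        return "asyncio (default, uvloop unsupported on Windows)"
    try:
        import uvloop  # optional dependency, only on POSIX platforms
    except ImportError:
        return "asyncio (uvloop not installed)"
    uvloop.install()  # make uvloop the default event loop policy
    return "uvloop"
```

The same idea applies at packaging time: declaring uvloop with an environment marker such as `uvloop; sys_platform != "win32"` keeps Windows installs from breaking.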