
[Bug]: knowledge_graph.html cannot be displayed well in this case

Open Slian22 opened this issue 9 months ago • 4 comments

Do you need to file an issue?

  • [x] I have searched the existing issues and this bug is not already filed.
  • [x] I believe this is a legitimate bug, not just a question or feature request.

Describe the bug

I deployed LightRAG following the official steps, but running the example with Python 3.10 and local Ollama models produces an incomplete knowledge graph. What should I do?

My operating system is macOS, and the local models are gemma2:2b and nomic-embed-text:latest.

Steps to reproduce

Follow the official video

Expected Behavior

No response

LightRAG Config Used

Paste your config here

Logs and screenshots

No response

Additional Information

  • LightRAG Version:
  • Operating System:
  • Python Version:
  • Related Issues:

Slian22 avatar Mar 15 '25 17:03 Slian22

You could try using a bigger model, because LightRAG's knowledge graph generation relies heavily on the LLM's capability.

Ja1aia avatar Mar 17 '25 03:03 Ja1aia

Had a similar issue. I was able to reduce 'orphan' entities by using a better LLM and also reducing chunk size to a value between 200 and 400 and overlap to a value between 20 and 100.
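For reference, chunking can be tuned when constructing the `LightRAG` instance. This is a minimal sketch, assuming the `chunk_token_size` and `chunk_overlap_token_size` parameter names used in recent LightRAG releases; verify them against the version you have installed:

```python
# Hedged sketch: tuning chunking to reduce orphan entities.
# Parameter names assume a recent LightRAG release; check your version.
from lightrag import LightRAG

rag = LightRAG(
    working_dir="./rag_storage",
    chunk_token_size=300,          # within the suggested 200-400 range
    chunk_overlap_token_size=50,   # within the suggested 20-100 range
)
```

Smaller chunks give the LLM less text per extraction call, which tends to help weaker models produce complete entity/relation lists.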

BireleyX avatar Mar 18 '25 13:03 BireleyX

Ok, I'll try it. Thanks.

Slian22 avatar Mar 21 '25 12:03 Slian22

I am facing a similar issue. Which better LLM are you referring to, @BireleyX? I have this set up on a Mac Mini M4, 16 GB RAM (2024 model). I am also using the local models gemma2:2b and nomic-embed-text:latest.

pateldeepp avatar Mar 22 '25 17:03 pateldeepp

What I meant by a better LLM was changing from gpt-4o-mini to gpt-4o. When possible, use the model variant with the most parameters your hardware (in the case of local AI) can support.
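For a local Ollama setup, that means pulling a larger-parameter variant of the same model family and pointing LightRAG at it. The tags below are examples, not recommendations for any specific hardware; check `ollama list` and your available RAM before choosing (roughly, a 16 GB machine can run quantized models up to about 9B parameters):

```shell
# Example: replace the 2B model with a larger variant of the same family.
# Tags are illustrative; pick what your hardware supports.
ollama pull gemma2:9b
ollama pull nomic-embed-text:latest
```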

BireleyX avatar May 02 '25 15:05 BireleyX