[Question]: Knowledge Graph mode: the answer is cut off
Describe your problem
I uploaded two files as test files, using Knowledge Graph as the parsing method. The chunk token number is 1024. I also followed #1239 to disable max_token.
The LLM's answer is still cut off very often.
The context is over the length limit. Adjust these two parameters, or cut down the chunk token number.
I followed your suggestion, but it didn't improve. I set the chunk token size to 256 and reparsed the document; the chunk count increased.
I used the knowledge dataset query test function to search the chunks and found that some of them are very long, much longer than the chunk size.
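For reference, a quick way to confirm which retrieved chunks blow past the configured budget is to count tokens per chunk and flag the outliers. This is a minimal sketch using a whitespace word count as a stand-in tokenizer (RAGFlow's actual tokenizer will count differently); `CHUNK_TOKEN_NUM` and `oversized_chunks` are hypothetical names for illustration:

```python
# Flag chunks that exceed the configured chunk token budget.
# NOTE: whitespace splitting is only a rough approximation of a
# real tokenizer, but it is enough to spot order-of-magnitude outliers.
CHUNK_TOKEN_NUM = 256  # the budget configured in the dataset settings

def approx_tokens(text: str) -> int:
    """Very rough token count: number of whitespace-separated words."""
    return len(text.split())

def oversized_chunks(chunks, budget=CHUNK_TOKEN_NUM):
    """Return (index, approx_token_count) for chunks over the budget."""
    return [(i, approx_tokens(c)) for i, c in enumerate(chunks)
            if approx_tokens(c) > budget]

chunks = ["a short chunk", "word " * 300]  # second chunk ~300 "tokens"
print(oversized_chunks(chunks))  # -> [(1, 300)]
```

Running this over the chunks returned by the query test makes it easy to see that some chunks are several times the 256-token limit.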
@KevinHuSh Could you explain why the chunks become so big?
Those chunks are generated by the LLM, since you're using knowledge graph as the parsing method.
