
[Bug] Setting OverlappingTokens (via appsettings) reduces the configured MaxTokensPerParagraph


Context / Scenario

I was setting different values for MaxTokensPerParagraph and OverlappingTokens to find the optimal chunking strategy for answering a set of test questions on a document.

What happened?

When I leave MaxTokensPerParagraph as is (1000) and only increase OverlappingTokens (via appsettings) in increments of +100 between test sessions, the resulting chunks/paragraphs keep getting smaller.

I finally ended up with MaxTokensPerParagraph: 1000 and OverlappingTokens: 800, which produced paragraphs/chunks of only around 200 tokens, as counted by the OpenAI tokenizer; in other words, the effective chunk size appears to be roughly MaxTokensPerParagraph - OverlappingTokens.


What I expected was for the resulting chunk size to be either:

  • MaxTokensPerParagraph + OverlappingTokens, or
  • MaxTokensPerParagraph (with the OverlappingTokens included in the MaxTokensPerParagraph)

I tested with a single 46-page document, which I re-ingested with the exact same call to ImportDocumentAsync() between test sessions (presumably upserting/replacing the previous chunks for the same document id). For each test I left MaxTokensPerParagraph as is, increased OverlappingTokens, saved appsettings, restarted the service, and re-ingested the same document.
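For reference, the same partitioning settings can also be applied in code. Below is a minimal sketch of one test iteration, assuming the serverless client and an illustrative file name and document id; the builder and option names match the KM version I tested and may differ in other releases:

```csharp
using Microsoft.KernelMemory;
using Microsoft.KernelMemory.Configuration;

// Illustrative setup: in my tests these values come from the service's
// appsettings, but the same options can be passed to the builder directly.
var memory = new KernelMemoryBuilder()
    .WithOpenAIDefaults(Environment.GetEnvironmentVariable("OPENAI_API_KEY")!)
    .WithCustomTextPartitioningOptions(new TextPartitioningOptions
    {
        MaxTokensPerParagraph = 1000,
        OverlappingTokens = 800, // increased by +100 between test sessions
    })
    .Build<MemoryServerless>();

// Re-ingesting with the same documentId replaces the previously stored chunks
await memory.ImportDocumentAsync("test-document.pdf", documentId: "doc001");
```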


For now, the issue can be worked around by increasing both MaxTokensPerParagraph and OverlappingTokens by the same amount if you want the resulting chunk size to stay roughly equivalent to the intended MaxTokensPerParagraph: e.g. to get chunks of roughly 1000 tokens with an overlap of 300, set MaxTokensPerParagraph: 1300 and OverlappingTokens: 300.

Importance

a fix would make my life easier

Platform, Language, Versions

Windows 10, C#, Kernel Memory 0.27.240205.2

Relevant log output

No response

SeanHusmann avatar Feb 21 '24 16:02 SeanHusmann

Spent some time inspecting the chunker in KM, which we took from SK. The initial version I wrote in 2022 didn't have the overlapping-tokens logic, which was introduced later in https://github.com/microsoft/semantic-kernel/pull/1206 by @MonsterCoder. Later the code underwent considerable changes in:

  • https://github.com/microsoft/semantic-kernel/pull/1709
  • https://github.com/microsoft/semantic-kernel/pull/2574
  • https://github.com/microsoft/semantic-kernel/pull/3374

If someone wants to check all the changes...

I'm considering a complete rewrite, taking the opportunity to simplify the public methods and clarify the behavior:

  • Remove the split-by-sentence step; I don't think this is needed externally, it's just an internal implementation detail
  • Remove the "header" feature; it's not needed in KM, and it's not the best approach to tagging content
  • When choosing to overlap, define whether the overlap is an exact number of tokens that can break sentences, or a best effort that keeps sentences intact; e.g. one might choose a 10-token overlap using whole sentences, in which case the exact number depends on the length of the sentences carried over from the previous partition. Both options are sketched below.
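To make the two overlap options concrete, here is a rough sketch of the two behaviors (illustrative code, not the actual KM chunker; the token and sentence inputs are assumed to come from whatever tokenizer and sentence splitter are in use):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class OverlapSketch
{
    // Option A, exact-token overlap: carry over exactly N tokens from the end
    // of the previous partition, even if that breaks a sentence.
    public static List<string> ExactTokenOverlap(IReadOnlyList<string> prevTokens, int overlapTokens)
        => prevTokens.Skip(Math.Max(0, prevTokens.Count - overlapTokens)).ToList();

    // Option B, best-effort sentence overlap: carry over whole sentences from
    // the end of the previous partition while they fit in the token budget,
    // so the actual overlap size depends on the length of those sentences.
    public static List<string> SentenceOverlap(
        IReadOnlyList<string> prevSentences, Func<string, int> countTokens, int overlapTokens)
    {
        var carried = new List<string>();
        int used = 0;
        for (int i = prevSentences.Count - 1; i >= 0; i--)
        {
            int cost = countTokens(prevSentences[i]);
            if (used + cost > overlapTokens) break; // next sentence would exceed the budget
            carried.Insert(0, prevSentences[i]);
            used += cost;
        }
        return carried;
    }
}
```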

dluc avatar Mar 13 '24 20:03 dluc

New chunkers just merged. Note: OverlappingTokens is part of MaxTokensPerParagraph.

For instance, if MaxTokensPerParagraph = 1000 and OverlappingTokens = 300, a chunk will contain 300 tokens from the previous chunk and 700 new tokens:

Chunk 1: 1 ... 1000
Chunk 2: 701 ... 1700
Chunk 3: 1401 ... 2400
etc.
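In other words, each chunk advances by MaxTokensPerParagraph - OverlappingTokens new tokens. A minimal sketch of that arithmetic (illustrative only, not the actual chunker code):

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: compute 1-based, inclusive token ranges for each chunk,
// where the overlap is counted as part of MaxTokensPerParagraph.
static IEnumerable<(int Start, int End)> ChunkRanges(
    int totalTokens, int maxTokensPerParagraph, int overlappingTokens)
{
    int step = maxTokensPerParagraph - overlappingTokens; // new tokens per chunk
    if (step <= 0) throw new ArgumentException("Overlap must be smaller than MaxTokensPerParagraph");

    for (int start = 1; start <= totalTokens; start += step)
    {
        int end = Math.Min(start + maxTokensPerParagraph - 1, totalTokens);
        yield return (start, end);
        if (end == totalTokens) yield break;
    }
}

// ChunkRanges(2400, 1000, 300) yields (1, 1000), (701, 1700), (1401, 2400)
```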

dluc avatar Feb 06 '25 23:02 dluc