Aavas13:
This can be fixed by chunking: tokenize the text, split the token list into fixed-size slices, and decode each slice back to text. Here is example code (the original snippet called `openai.Tokenizer.tokenize`, which does not exist in the `openai` package; this version uses `tiktoken`, OpenAI's tokenizer library, instead):

```python
import tiktoken

def chunk_text(text: str, chunk_size: int = 4096) -> list[str]:
    # assumed encoding; pick the one that matches your target model
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    chunks = []
    for i in range(0, len(tokens), chunk_size):
        # decode each fixed-size slice of tokens back into text
        chunks.append(enc.decode(tokens[i:i + chunk_size]))
    return chunks
```
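A minimal usage sketch (the input file `input.txt` and the 4096-token budget are assumptions, not from the original comment); each returned piece is plain text at or under the token budget:

```python
# hypothetical usage: process a long input piece by piece
long_document = open("input.txt").read()  # assumed source file
for i, chunk in enumerate(chunk_text(long_document, chunk_size=4096)):
    print(f"chunk {i}: {len(chunk)} characters")
```

One caveat: slicing on raw token boundaries can cut mid-word or mid-sentence, so for prose you may prefer to chunk on paragraph or sentence boundaries and only use the token count as an upper limit.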