ChatPDF
This model's maximum context length is 4097 tokens, however you requested 4236 tokens
Thanks for your contribution and work! Sometimes I get an error like this: openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 4236 tokens (3980 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
How can I solve this problem?
This project ignores the fact that PDFs can be quite large and may not fit into a single GPT query. The usual workaround is to summarize recursively: split the document into chunks, summarize each chunk, then summarize the summaries, repeating until the condensed content fits within the context window.
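The recursive-summarization idea can be sketched as follows. This is a minimal illustration, not the project's actual code: the function and parameter names (`chunk_text`, `recursive_summarize`, `max_chars`) are made up for this example, chunking is by characters rather than tokens (a real implementation would count tokens, e.g. with tiktoken), and the `summarize` callable stands in for an actual GPT API call.

```python
def chunk_text(text, max_chars=2000):
    """Split text into pieces of at most max_chars characters.

    A real implementation would split on paragraph boundaries and
    measure length in tokens, not characters.
    """
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def recursive_summarize(text, summarize, max_chars=2000, max_rounds=10):
    """Condense text until it fits in one chunk.

    `summarize` is any callable str -> str; in practice it would be a
    GPT completion call that returns a short summary of its input.
    `max_rounds` guards against a summarizer that fails to shrink text.
    """
    for _ in range(max_rounds):
        if len(text) <= max_chars:
            break
        chunks = chunk_text(text, max_chars)
        # Summarize each chunk, then treat the joined summaries as the
        # new input for the next round ("summary of summaries").
        text = "\n".join(summarize(c) for c in chunks)
    return text
```

With a real GPT-backed `summarize`, each round trades detail for length, so the final text fits in a single prompt at the cost of some information loss.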
This means this ChatPDF is quite different from the well-known app of the same name. The original app performs semantic indexing and extracts only the relevant paragraphs to feed to ChatGPT, which is far more efficient than sending the whole PDF.
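The retrieval step can be illustrated with a toy sketch. Note the hedges: real apps score paragraphs with learned embeddings (e.g. OpenAI's embeddings endpoint plus a vector index), whereas this example uses simple word-count cosine similarity as a stand-in, and the names `score` and `top_paragraphs` are invented for the illustration.

```python
from collections import Counter
import math

def score(query, paragraph):
    """Cosine similarity over word counts.

    A toy stand-in for embedding similarity; a real system would
    compare dense embedding vectors instead of word counts.
    """
    q = Counter(query.lower().split())
    p = Counter(paragraph.lower().split())
    dot = sum(q[w] * p[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in p.values())))
    return dot / norm if norm else 0.0

def top_paragraphs(query, paragraphs, k=3):
    """Return the k paragraphs most similar to the query.

    Only these top-k paragraphs would be sent to ChatGPT as context,
    instead of the entire PDF.
    """
    return sorted(paragraphs, key=lambda p: score(query, p), reverse=True)[:k]
```

Because only a handful of relevant paragraphs reach the model, the prompt stays well under the context limit regardless of how long the PDF is.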
Precisely. But let's appreciate the author's effort: this project demonstrates the feasibility of GPT-based PDF chat. Ultimately, it is a great contribution to rethinking and enriching the services we all use in daily life.
You can try the 16K-context model (gpt-3.5-turbo-16k), but it will increase the cost. Here is the price list: https://openai.com/pricing