Harith Zulfaizal
You can try setting `reduce_k_below_max_tokens=True`; it is supposed to limit the number of results returned from the store based on the token limit. For some reason, though, it seems that it...
> I believe this should fix it. #1444

This works, thank you!
I tried using `reduce_k_below_max_tokens`, but it doesn't seem to work. I still get the `InvalidRequestError: This model's maximum context length is 4096 tokens` error. Code snippet:

```python
qa_chain = load_qa_with_sources_chain(OpenAIChat(temperature=0), ...
```