Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13
Issue you'd like to raise.
The blog post at https://openai.com/blog/function-calling-and-other-api-updates specifies:
- a new 16k context version of gpt-3.5-turbo (vs the standard 4k version)

The model_token_mapping in langchain/llms/openai.py should be changed to reflect this.
Suggestion: add a gpt-3.5-turbo-16k entry to model_token_mapping with the 16k context length as its value (see the sketch below).
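A minimal sketch of the proposed change (the dated -0613 snapshot name and the exact context figures are my assumptions; confirm them against OpenAI's published limits before opening a PR):

```python
# langchain/llms/openai.py -- proposed additions to model_token_mapping (sketch)
model_token_mapping = {
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-0613": 4096,
    "gpt-3.5-turbo-16k": 16384,       # new 16k-context model from the 2023-06-13 update
    "gpt-3.5-turbo-16k-0613": 16384,  # dated snapshot of the same model
    # ... existing entries unchanged ...
}
```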
Yes, this would be a very relevant pull request for all the people trying to update their code in a plug-and-play fashion.
My attempt to add the 16k models to model_token_mapping did not affect the token_max value when calling the map_reduce chain.
The combine_docs function on MapReduceDocumentsChain has the following header:

```python
def combine_docs(
    self,
    docs: List[Document],
    token_max: int = 3000,
    callbacks: Callbacks = None,
    **kwargs: Any,
) -> Tuple[str, dict]:
```
The token_max parameter has a default value (3000), which it looks like you're supposed to override depending on your model; see the sketch below. Lmk if this helps!
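For example, a rough sketch of overriding that default on a map_reduce summarize chain (the chain setup and the 12000 figure are illustrative assumptions, not the only way to wire this up):

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI
from langchain.docstore.document import Document

docs = [
    Document(page_content="first long text..."),
    Document(page_content="second long text..."),
]

llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", temperature=0)
chain = load_summarize_chain(llm, chain_type="map_reduce")  # a MapReduceDocumentsChain

# combine_docs is the method quoted above; passing token_max explicitly lifts the
# 3000-token default closer to the 16k context window (leave headroom for the
# prompt template and the generated summary).
summary, _ = chain.combine_docs(docs, token_max=12000)
print(summary)
```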
You were right. I am now fixing the problem temporarily by changing the value of token_max, but token_max needs to change depending on the model.
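One possible way to avoid hard-coding it, sketched under the assumption that the 16k entry has been added to model_token_mapping (reusing chain and docs from the sketch above; the 1000-token headroom is an arbitrary choice): modelname_to_contextsize on the OpenAI LLM class reads from that mapping, so token_max can be derived from the model name.

```python
from langchain.llms import OpenAI

llm = OpenAI()  # any BaseOpenAI instance exposes modelname_to_contextsize

# Look up the context window from model_token_mapping instead of hard-coding it,
# then reserve some headroom for the prompt and the generated output.
context_size = llm.modelname_to_contextsize("gpt-3.5-turbo-16k")
token_max = context_size - 1000

summary, _ = chain.combine_docs(docs, token_max=token_max)
```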