Hi, I'm seeing the following error when using the default model, 3.5:
Error: This model's maximum context length is 4097 tokens. However, your messages resulted in 8285 tokens. Please reduce the length of the messages.
Does the `llm` function default to using the new `gpt-3.5-turbo-1106` model? I've noticed that in the `models` section of the configuration file there are separate entries for `gpt-4` and `gpt-4-1106-preview`, but only one entry for `gpt-3.5-turbo`. Does this mean I need to add the `1106` model to the `openai` models YAML file?
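
In other words, would an entry along these lines be needed? This is only a rough sketch; I'm guessing at the schema based on the existing `gpt-4` entries, so the actual format may differ:

```yaml
models:
  openai:
    - gpt-3.5-turbo
    - gpt-3.5-turbo-1106   # hypothetical new entry for the 1106 model
    - gpt-4
    - gpt-4-1106-preview
```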
Thanks.