Context window of completion functions not accounted for
Describe the bug
It seems that some evals require a specific context window length; for example, the make-me-say eval probably requires a 32k context window. It would be nice if there were a more DX-friendly way to discover this before it fails in the API call.
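One possible shape for such a check is a pre-flight estimate of prompt size against a per-model limit, run before the API call. The sketch below is purely illustrative: the `CONTEXT_WINDOWS` table, the 4-chars-per-token heuristic, and the function names are assumptions, not part of the evals API (a real implementation would use a proper tokenizer such as tiktoken).

```python
# Illustrative sketch only: names and limits are assumptions, not evals API.
CONTEXT_WINDOWS = {
    "gpt-3.5-turbo": 4097,       # the limit quoted in the error below
    "gpt-3.5-turbo-16k": 16385,  # assumed larger-context variant
}

def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token. A real check would use
    # an actual tokenizer for accuracy.
    return max(1, len(text) // 4)

def fits_context(model: str, messages: list, reserve: int = 256) -> bool:
    """Return True if the messages likely fit the model's context window,
    leaving `reserve` tokens of headroom for the completion."""
    limit = CONTEXT_WINDOWS.get(model)
    if limit is None:
        return True  # unknown model: don't block, let the API decide
    used = sum(estimate_tokens(m.get("content", "")) for m in messages)
    return used + reserve <= limit
```

A check like this could surface a clear "this eval needs a larger context window" error up front instead of a mid-run API failure.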
To Reproduce
oaieval gpt-3.5-turbo,gpt-3.5-turbo,gpt-3.5-turbo make-me-say --debug
This model's maximum context length is 4097 tokens. However, your messages resulted in 4123 tokens. Please reduce the length of the messages.
Code snippets
No response
OS
macOS
Python version
Python v3.9.7
Library version
openai-evals 1.0.3