Ted Sanders
> Hi, it looks good to me in general. One corner case I'm thinking is that the model might output the next most probable token not in ['yes','no'] (I expect...
Looks great. Only two final recommendations:
- Use a big logit_bias (e.g., 100 instead of 1; 1 is small enough that a lot of other tokens will still come in,...
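
For concreteness, here's a minimal sketch of both suggestions, assuming the pre-1.0 `openai` Python library and using `tiktoken` to look up the token IDs rather than hardcoding them (the model and prompt are just placeholders):

```python
# Sketch: steer the answer toward "yes"/"no" with a large logit_bias,
# and cap generation at a single token with max_tokens=1.
import openai
import tiktoken

model = "gpt-3.5-turbo"
enc = tiktoken.encoding_for_model(model)

# Token IDs for the lowercase answers; 100 is the maximum bias the API accepts.
yes_token = enc.encode("yes")[0]
no_token = enc.encode("no")[0]

response = openai.ChatCompletion.create(
    model=model,
    messages=[{"role": "user", "content": "Is the sky blue? Answer yes or no."}],
    logit_bias={str(yes_token): 100, str(no_token): 100},
    max_tokens=1,  # only the single yes/no token is needed
)
print(response["choices"][0]["message"]["content"])
```

Biasing both tokens to +100 effectively makes the model pick between the two, which also addresses the corner case quoted above.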
Thanks for the contribution, and for your patience. I'll review this week. Any feedback you're looking for, in particular?
I will try to review this week. Thanks for the detailed and high-quality contribution!
By the way, really appreciate you taking the time to describe and document your improvements. Always love to see it. :)
I see a match when I run the notebook code. What do you think explains the difference between your results and mine?
One piece of good news: `max_tokens` is optional for ChatCompletion requests. https://platform.openai.com/docs/api-reference/chat/create
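
As a quick illustration (again a sketch against the pre-1.0 `openai` Python library; model and prompt are placeholders), a request can simply omit `max_tokens`, in which case the completion length is bounded only by the model's own stopping behavior and its context window:

```python
import openai

# No max_tokens argument: the model generates until it stops on its own
# or runs out of context.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}],
)
print(response["choices"][0]["message"]["content"])
```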
No offense taken. We'll update our version at some point, but no one on our team has prioritized it. In the meantime, an appreciative developer suggested we add a link...