onyx
Integration Support for GPT-4 Turbo and Updated GPT 3.5 Turbo Models
I'd like to request the integration of the new GPT-4 Turbo (`gpt-4-1106-preview`) and updated GPT-3.5 Turbo (`gpt-3.5-turbo-1106`) models, as announced in OpenAI's recent blog post.
The current `litellm` documentation mentions support for only `gpt-4-1106-preview`. It would be beneficial for us to support both models mentioned above.
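To make the request concrete, here is a minimal sketch of the OpenAI-style chat-completions payloads that would be sent for the two requested models. The `build_chat_request` helper is hypothetical and only illustrates the request shape; the model names come from OpenAI's announcement, and no network call is made.

```python
# Hypothetical helper: build an OpenAI chat-completions request body for a
# given model. This only shows the payload shape; it does not call any API.
def build_chat_request(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# The two models requested in this issue:
gpt4_turbo = build_chat_request("gpt-4-1106-preview", "Hello")
gpt35_turbo = build_chat_request("gpt-3.5-turbo-1106", "Hello")
```

Both payloads use the same chat-completions schema, so supporting the second model should mostly be a matter of allowing its name through.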
Please note, this might require updating the `openai` module to incorporate bug fixes from this PR. However, the latest `litellm` package version 0.13.2 is compatible only with `openai` package versions up to 0.28.1, which should be taken into account to ensure compatibility is maintained.
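A possible way to express the compatibility constraint above when installing, assuming the version ceiling described in this issue:

```shell
# Pin litellm and cap openai at the highest version litellm 0.13.2 supports.
pip install "litellm==0.13.2" "openai<=0.28.1"
```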
Good suggestion, will look into it and also ask the LiteLLM folks.
In Azure, `gpt-4-1106-preview` IS GPT-4 Turbo. I've been using it with Danswer without issues for a few weeks now:
https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/azure-openai-service-launches-gpt-4-turbo-and-gpt-3-5-turbo-1106/ba-p/3985962
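For anyone wanting to reproduce this setup, litellm reads Azure OpenAI credentials from environment variables along these lines. The resource name and API version below are placeholders, not values from this thread:

```shell
# Hedged sketch: typical Azure OpenAI environment configuration for litellm.
# Replace the placeholders with your own resource details.
export AZURE_API_KEY="<your-azure-openai-key>"
export AZURE_API_BASE="https://<your-resource>.openai.azure.com"
export AZURE_API_VERSION="2023-07-01-preview"
```

The model is then addressed by the Azure deployment name rather than the raw OpenAI model name.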
What's curious is that Microsoft announced a 120k token window at Ignite, but I only have 80k available. Maybe they made a mistake with the window of 3.5-turbo, which is 120k.
Yup, `gpt-4-turbo` has been supported for quite some time, so going to close this one now!