azure-search-openai-demo
OpenAI azure
Is it possible that the models work differently than those in OpenAI?
Can you be more specific about what you mean by "work differently"? Azure OpenAI adds a content safety filter on top, so you'll see that in your API responses and may get a content safety error depending on the content (which the app handles with an appropriate message to the user).
I have a context of more than 3k tokens that I pass to the system. This context comes from a specific organization whose clients are going to use the app. When I pass that context to the model deployed in Azure (GPT-3.5), it responds very differently than when I consume the OpenAI API with the same model.
Hm, interesting. Generally, there's high variability between one call to a model and the next, so I would recommend testing different temperature values and setting the "seed" parameter of the chat completions call to see if the variability can be explained by other factors. You could also check whether you are using the same version of the model on Azure OpenAI that you are using on OpenAI.com (e.g. 0125 vs 0613).
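As a rough sketch (not code from this repo; the deployment name, model version, and environment variable names are placeholders you'd swap for your own), a side-by-side comparison with temperature and seed pinned could look like this:

```python
import os
from openai import AzureOpenAI, OpenAI

# Same messages for both calls; put your >3k-token organization context in the system message.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the return policy."},
]

# Azure OpenAI: "model" is the *deployment name* you created in the Azure portal.
azure_client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)
azure_response = azure_client.chat.completions.create(
    model="my-gpt-35-turbo-deployment",  # placeholder deployment name
    messages=messages,
    temperature=0,  # minimize sampling variability
    seed=42,        # best-effort determinism across repeated calls
)

# OpenAI.com: pin the same snapshot version as your Azure deployment (e.g. 0125 vs 0613).
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
openai_response = openai_client.chat.completions.create(
    model="gpt-3.5-turbo-0125",  # match the version of the Azure deployment
    messages=messages,
    temperature=0,
    seed=42,
)

# Compare the outputs, plus system_fingerprint, which changes when the backend configuration changes.
print(azure_response.choices[0].message.content)
print(openai_response.choices[0].message.content)
print(azure_response.system_fingerprint, openai_response.system_fingerprint)
```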
Yes, it is very interesting, but the strange thing is that I am using exactly the same parameters for the call to Azure and the call to OpenAI.
Have you set a seed? We don't set one by default in this repo yet. I'm not sure whether the seed would produce consistent results across Azure and OpenAI.com; I'll look into that.