Add feature: Support for multiple models on Azure and Azure model mapper
Added support for multiple models on Azure; in practice, several models are deployed on Azure, such as gpt-4, gpt-3.5, etc.
Two changes:
- Added a new configuration item: AZURE_OPENAI_MODEL_MAPPER
- Extended the configuration item: AZURE_URL
The changes are as follows:
AZURE_URL (optional)
Example: https://{azure-resource-url}/openai/deployments/{deploy-name}
Example: https://xxx.openai.azure.com/openai/deployments/{deploy-name}
Azure deployment URL.
If `{deploy-name}` is kept as a template placeholder in the URL, the request path is automatically rewritten based on the model selected by the client.
If your model name differs from the deployment name, you also need to set the AZURE_OPENAI_MODEL_MAPPER parameter.
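For illustration, here is a minimal sketch of how the placeholder substitution could work; the function name `resolveAzureUrl` is hypothetical and not the project's actual implementation:

```ts
// Sketch: resolve the Azure deployment URL from the model chosen by the client.
// Assumes AZURE_URL may contain a literal "{deploy-name}" placeholder.
function resolveAzureUrl(azureUrl: string, deployName: string): string {
  if (azureUrl.includes("{deploy-name}")) {
    // Template mode: substitute the deployment name into the path.
    return azureUrl.replace("{deploy-name}", deployName);
  }
  // Fixed mode: the URL already points at a single deployment.
  return azureUrl;
}

// Example:
// resolveAzureUrl("https://xxx.openai.azure.com/openai/deployments/{deploy-name}", "gpt-35-turbo")
//   => "https://xxx.openai.azure.com/openai/deployments/gpt-35-turbo"
```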
AZURE_OPENAI_MODEL_MAPPER (optional)
Default: Empty
Example: `gpt-3.5-turbo=gpt-35-turbo` means map `gpt-3.5-turbo` to `gpt-35-turbo`.
If you are deploying ChatGPT using Azure OpenAI, it is recommended to set the AZURE_OPENAI_MODEL_MAPPER.
The session summarization feature relies on the gpt-3.5-turbo model, so unless your Azure deployment name matches it exactly, you need to add a mapping for it.
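As a rough sketch (not the repository's actual code), the mapper string can be parsed into a lookup table and applied before the URL substitution above; `parseModelMapper` and the fallback behavior shown here are assumptions:

```ts
// Sketch: parse comma-separated "model=deployment" pairs from
// AZURE_OPENAI_MODEL_MAPPER into a lookup table.
function parseModelMapper(raw: string | undefined): Record<string, string> {
  const mapper: Record<string, string> = {};
  for (const pair of (raw ?? "").split(",")) {
    const [model, deployment] = pair.split("=").map((s) => s.trim());
    if (model && deployment) mapper[model] = deployment;
  }
  return mapper;
}

// Example: AZURE_OPENAI_MODEL_MAPPER="gpt-3.5-turbo=gpt-35-turbo"
const mapper = parseModelMapper(process.env.AZURE_OPENAI_MODEL_MAPPER);
// Fall back to the model name itself when no mapping is configured.
const deployName = mapper["gpt-3.5-turbo"] ?? "gpt-3.5-turbo";
```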
> [!WARNING]
> The `KnowledgeCutOffDate` is intended for system prompts. Exercise caution when changing it, as it must be correctly aligned with the models.
Thank you, I have set `KnowledgeCutOffDate` to 2021-09 @H0llyW00dzZ