Azure OpenAI
Search before asking
- [X] I had searched in the issues and found no similar feature requirement.
Description
How can I use Azure OpenAI? Thanks!
Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
Duplicate issue, please refer to "Are there any models other than deepseek-chat supported?"
<class 'RuntimeError'>: invalid llm config: {'api_key': 'ce3de18xxxxxxxxxxx', 'base_url': 'https://xxxxxxxx.openai.azure.com', 'model': 'gpt-4o', 'client_type': 'maas'},
for details: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
The config for Azure OpenAI is different: Azure routes requests through /openai/deployments/<deployment>/... with an api-version query parameter, so pointing an OpenAI-style client at the bare resource URL returns 404 Resource not found.
You can test your model service's accessibility through the model Service Availability Test.
But the chat/completions postfix should be removed when you type it into the LLM configuration.
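For a quick manual check of the same thing, an OpenAI-compatible endpoint can also be probed with the openai SDK directly; the key and URL below are placeholders, and note that the base_url carries no chat/completions suffix:

```python
# Minimal availability probe for an OpenAI-compatible service.
# api_key and base_url are placeholders; the SDK appends /chat/completions itself.
from openai import OpenAI

client = OpenAI(api_key="sk-xxxx", base_url="https://api.example.com/v1")
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```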
I have the same problem. To use Azure OpenAI, it needs to support additional parameters, such as AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_DEPLOYNAME.
For more detail, see
https://learn.microsoft.com/en-us/azure/ai-services/openai/reference
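Concretely, the openai Python SDK models these extra parameters with a separate AzureOpenAI client; a minimal sketch, with the env-var names from the comment above used as placeholders:

```python
# Sketch: AzureOpenAI takes azure_endpoint/azure_deployment/api_version instead
# of a bare base_url. The env-var names are illustrative placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],      # https://<resource>.openai.azure.com
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYNAME"],  # deployment name, not model name
    api_version="2024-06-01",
)
```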
Does this mean that we need to write the linking method for this model separately?
It looks like Azure OpenAI is not compatible with the OpenAI interface, and KAG only supports model services that are OpenAI-compatible.
You can customize the LLMClient implementation of KAG in developer mode, or choose another OpenAI-compatible model service.
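For anyone taking the developer-mode route, a rough sketch follows. The LLMClient base class, method names, and registration details are assumptions about KAG's interface (check the source before relying on them); only the AzureOpenAI calls follow the openai SDK.

```python
# Hypothetical sketch of a custom KAG client for Azure OpenAI. The LLMClient
# base class and the __call__ signature are assumed, not verified against KAG;
# the AzureOpenAI usage itself follows the openai SDK.
from openai import AzureOpenAI

class AzureOpenAIClient:  # in KAG this would subclass the LLMClient base class
    def __init__(self, api_key: str, base_url: str, azure_deployment: str, api_version: str):
        self.client = AzureOpenAI(
            api_key=api_key,
            azure_endpoint=base_url,
            azure_deployment=azure_deployment,
            api_version=api_version,
        )
        self.model = azure_deployment

    def __call__(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
```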
Hello,
I tried to set up the example_config.yaml to use my Azure OpenAI API credentials, but I seem to be running into an issue:
(kag-demo) (base) user examples % knext project create --config_path ./example_config.yaml
Done initialize project config with host addr http://127.0.0.1:8887 and project_id 1
No config found.
Error: call AzureOpenAIClient.__init__ failed, details: AzureOpenAI.__init__() got an unexpected keyword argument 'model'
Here's my edited config file:
#------------project configuration start----------------#
openie_llm: &openie_llm
  api_key: "..."
  base_url: "https://....openai.azure.com/"
  model: "gpt-4o-mini"
  api_version: "2024-06-01"
  type: azure_openai

chat_llm: &chat_llm
  api_key: "..."
  base_url: "https://....openai.azure.com/"
  model: "gpt-4o-mini"
  api_version: "2024-06-01"
  type: azure_openai

vectorize_model: &vectorize_model
  api_key: "..."
  base_url: "https://.....openai.azure.com/"
  azure_deployment: "text-embedding-3-large"
  type: azure_openai
  vector_dimensions: 1536
vectorizer: *vectorize_model

log:
  level: INFO

project:
  biz_scene: default
  host_addr: http://127.0.0.1:8887
  id: "1"
  language: en
  namespace: TwoWikiTest
#------------project configuration end----------------#
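From the traceback, the `model:` key here seems to be what fails: KAG forwards it to `AzureOpenAI.__init__()`, which does not accept a `model` argument; the deployment is configured via `azure_deployment`, and `model` belongs on each `chat.completions.create` call instead.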
I was able to successfully connect and get a response from the Azure OpenAI API. I did this by tinkering with the openai_client.py file and changing self.client to:
from openai import AzureOpenAI  # added at the top of openai_client.py

self.client = AzureOpenAI(
    api_key=api_key,
    azure_endpoint=base_url,            # maps the config's base_url to the Azure endpoint
    azure_deployment=azure_deployment,  # deployment name from the Azure portal
    api_version=api_version,
)
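That fix matches the error above: `AzureOpenAI.__init__()` takes `azure_endpoint`, `azure_deployment`, and `api_version` rather than `model`. For anyone who wants to verify their Azure credentials outside KAG first, here is a self-contained version of the same call, with placeholder values:

```python
# Standalone check of Azure OpenAI credentials, mirroring the fix above.
# All values are placeholders; fill in your own resource, deployment, and key.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="...",
    azure_endpoint="https://<resource>.openai.azure.com/",
    azure_deployment="gpt-4o-mini",  # deployment name from the Azure portal
    api_version="2024-06-01",
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # with azure_deployment set, this is the deployment name
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
```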