Switched to AzureOpenAI for api_type=="azure"
With version 1 of the openai client library there's a dedicated class, AzureOpenAI, to handle Azure endpoints. The current code had a cryptic _process_for_azure workaround, likely left over from pre-v1 times, so it makes sense to remove the old code and hand responsibility for talking to Azure straight to the openai library.
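A minimal sketch of the idea (the config keys shown here are illustrative, not necessarily the exact ones autogen uses):

```python
from openai import AzureOpenAI, OpenAI

def make_client(config: dict):
    """Pick the right openai>=1 client based on api_type (illustrative sketch only)."""
    if config.get("api_type") == "azure":
        # AzureOpenAI builds the Azure-specific URLs itself, so no manual
        # endpoint rewriting (the old _process_for_azure-style workaround) is needed.
        return AzureOpenAI(
            api_key=config["api_key"],
            api_version=config["api_version"],
            azure_endpoint=config["base_url"],
        )
    return OpenAI(api_key=config["api_key"])

# Usage is then identical for both clients:
# client = make_client(cfg)
# client.chat.completions.create(model=cfg["model"], messages=[{"role": "user", "content": "Hi"}])
```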
Checks
- [N/A] I've included any doc changes needed for https://microsoft.github.io/autogen/. See https://microsoft.github.io/autogen/docs/Contribute#documentation to build and test documentation locally.
- [N/A] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [?] I've made sure all auto checks have passed. Note: I still don't have all the endpoints needed to run the OpenAI tests in full, but the few that ran before the change keep passing after it.
Codecov Report
Attention: 10 lines in your changes are missing coverage. Please review.
Comparison is base (acf81ac) 31.95% compared to head (d5caae1) 41.81%. Report is 1 commit behind head on main.
| Files | Patch % | Lines |
|---|---|---|
| autogen/oai/client.py | 41.17% | 9 Missing and 1 partial :warning: |
Additional details and impacted files
@@ Coverage Diff @@
## main #1232 +/- ##
==========================================
+ Coverage 31.95% 41.81% +9.85%
==========================================
Files 33 33
Lines 4431 4415 -16
Branches 1035 1085 +50
==========================================
+ Hits 1416 1846 +430
+ Misses 2897 2421 -476
- Partials 118 148 +30
| Flag | Coverage Δ | |
|---|---|---|
| unittests | 41.74% <41.17%> (+9.83%) | :arrow_up: |
Flags with carried forward coverage won't be shown.
:umbrella: View full report in Codecov by Sentry.
Doing this PR right requires some deep understanding of the code. #868 tried it but also didn't succeed. If my comment is hard to understand, please let me know and I can do the PR and make you a reviewer.
Will take a closer look tomorrow, thanks for the comments!
@sonichi , please check. I've uploaded changes; all client tests pass except one, which fails on the model side and isn't related to the client code (gpt-35-turbo keeps returning one tool call instead of 2 when the prompt asks for the weather in 2 locations).
This looks good, but I am concerned about testing. Maybe we should add smoke tests for OpenAI and multiple Azure endpoints (one for each API version).
Makes sense, though I think the smoke test can do one prompt with 2 models - OpenAI and Azure (using any API version) - will add it tomorrow. Which models in the GH Actions runner are 100% OpenAI and Azure that I can use in the smoke test?
Testing all Azure API versions would be an infra config nightmare and is better done separately.
That should be a separate issue/pull request. Right now we are not sure which features work on which version of Azure API. From the infra perspective, it is not as complicated. We do need to have a matrix which versions of API support which models.
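For illustration, such a matrix might be shaped roughly like this (the version strings and model names below are placeholders, not a verified support table):

```python
# Hypothetical support matrix: Azure API version -> models expected to work on it.
# The entries are examples only; the real table would come from testing/docs.
AZURE_API_SUPPORT = {
    "2023-07-01-preview": {"gpt-35-turbo"},
    "2024-02-01": {"gpt-35-turbo", "gpt-4"},
}

def supported(api_version: str, model: str) -> bool:
    """Check whether a model is expected to work on a given Azure API version."""
    return model in AZURE_API_SUPPORT.get(api_version, set())
```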
It took me a few days to find gpt-3.5-instruct in Azure and to get the tests that depended on the instruct model passing (in parallel changing the tests to support dotless names of Azure models)... And it took me some time to find the docs listing the availability of different models in different regions, only to discover that in the region listed as valid I didn't have the instruct model :)
So yeah, mixing and matching different versions of models, API versions, Azure regions and debugging 401 errors is something I can see quite clearly in the future... Glad I am not the one responsible for the infra/CI ;)
Anyway, do I need to add a new test to validate the 2 configs (which are supposed to be 100% Azure and OpenAI in GH Actions)?
It would be great if you could add a smoke test for both the OpenAI and the default Azure APIs.
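A rough sketch of what such a smoke test could look like (the config keys and environment variable names below are assumptions, not the actual CI setup):

```python
import os

import pytest
from autogen import OpenAIWrapper

# Two illustrative configs -- one pure OpenAI, one pure Azure.  The env var names
# and api_version are placeholders; the real test would load them from CI secrets.
CONFIGS = [
    {"model": "gpt-3.5-turbo", "api_key": os.getenv("OPENAI_API_KEY")},
    {
        "model": "gpt-35-turbo",
        "api_type": "azure",
        "api_key": os.getenv("AZURE_OPENAI_API_KEY"),
        "base_url": os.getenv("AZURE_OPENAI_ENDPOINT"),
        "api_version": "2023-07-01-preview",
    },
]

@pytest.mark.parametrize("config", CONFIGS)
def test_smoke(config):
    # One prompt per endpoint is enough to verify the client wiring works.
    client = OpenAIWrapper(config_list=[config])
    response = client.create(messages=[{"role": "user", "content": "Say hi in one word."}])
    assert response.choices[0].message.content
```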
I made a branch that explains what I meant. Could you give me access to push in your fork?
Done!
Could you fix the conflict? We are close to ready.
@sonichi , I had to bring back the model names workaround to fix the tests using gpt-3.5-instruct; without it the tests are not runnable with Azure models.
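For context, the workaround is roughly of this shape (a simplified sketch, not the exact code in client.py):

```python
def normalize_azure_model_name(model: str) -> str:
    """Azure deployment names cannot contain dots, so e.g. gpt-3.5-turbo is
    deployed as gpt-35-turbo.  Map back to the canonical OpenAI name so that
    lookups keyed on OpenAI model names (e.g. cost/usage tables) still work.
    Simplified illustration only."""
    return model.replace("gpt-35", "gpt-3.5")
```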
Pushed a change that should fix this.
Looks good, tests pass