Set OpenAI Model to gpt-4o-mini and Updated Example
Description
The current default model for OpenAI is gpt-4o, but gpt-4o-mini is much cheaper and should be the default. This change only updates the metadata example; the corresponding docs change from 4o to 4o-mini is applied in https://github.com/dapr/docs/blob/v1.15/daprdocs/content/en/reference/components-reference/supported-conversation/openai.md (currently on 4-turbo).
Checklist
Please make sure you've completed the relevant tasks for this PR, out of the following list:
- [X] Code compiles correctly
- [X] Created/updated tests
- [X] Extended the documentation / Created issue in the https://github.com/dapr/docs/ repo: dapr/docs#4608
This pull request has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!
Perhaps 4.1 nano or mini could also now be considered, as 4o-mini has been retired from OpenAI and will retire in September 2025 in Azure.
@giterinhub you probably want to actually change https://github.com/dapr/components-contrib/blame/main/conversation/openai/openai.go#L44, which is the "real" default; metadata.yaml is more like a reference. The description in metadata.yaml would also need to change to reflect the new default.
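To illustrate why the compiled-in constant is the "real" default, here is a minimal sketch of the fallback pattern. The names `defaultModel`, `openAIMetadata`, and `resolveModel` are assumptions for illustration, not the actual identifiers in `conversation/openai/openai.go`:

```go
package main

import "fmt"

// defaultModel stands in for the compiled-in default in openai.go.
// metadata.yaml only documents this value; the code here is what applies it.
const defaultModel = "gpt-4.1-nano"

// openAIMetadata is a simplified stand-in for the component's metadata struct.
type openAIMetadata struct {
	Model string
}

// resolveModel falls back to the compiled-in default when the user did not
// configure a model in the component metadata.
func resolveModel(m openAIMetadata) string {
	if m.Model == "" {
		return defaultModel
	}
	return m.Model
}

func main() {
	fmt.Println(resolveModel(openAIMetadata{}))                // prints the default
	fmt.Println(resolveModel(openAIMetadata{Model: "gpt-4o"})) // explicit override wins
}
```

This is why updating only metadata.yaml leaves user-visible behavior unchanged: users who omit the `model` field still get whatever constant is compiled into the component.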
Hey @giterinhub - mind fixing the conflicts?
Consistently changed all references to OpenAI models to use "gpt-4.1-nano".
This pull request has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!
This pull request has been automatically closed because it has not had activity in the last 37 days. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!
Hey @giterinhub - I just noticed the daprbot closed this... Mind fixing the conflicts again?
@giterinhub probably this could now be updated to gpt-5 nano or mini https://platform.openai.com/docs/models
@giterinhub - PTAL and update this PR
can you please try running the conformance tests as shown in https://github.com/dapr/components-contrib/blob/fa9ace4ebf8d99d3b124906ca0cf2881b9d72709/tests/config/conversation/README.md.
I checked it out and it shows issues: all of the conversation.Default... references don't work because they are defined lowercase (unexported) in the conversation package. Let me know when it runs OK so I can test again. Thanks
I'm out of credits for the openai API platform, but the tests seem to work now.
Tested locally and it did work for the providers. One thing I noticed that I think I've seen before: on gpt-5, temperature should either not be provided or be set to 1, otherwise we get an error. By default we don't provide it, but langchaingo still sends the zero-value default and we get:
Error Trace: components-contrib/tests/conformance/conversation/conversation.go:84
Error: Received unexpected error:
API returned unexpected status code: 400: Unsupported value: 'temperature' does not support 0 with this model. Only the default (1) value is supported.
Test: TestConversationConformance/openai.openai/converse/get_a_non-empty_response_without_errors
There is a fix on the langchaingo main branch, https://github.com/tmc/langchaingo/pull/1374, but it is not in a release tag yet. Not sure whether we want to wait for it. @sicoyle?
So it would be good to put a comment about this restriction somewhere: for now (until we update langchaingo), we should add Temperature: 1 to the request. For example, in the conformance test file https://github.com/dapr/components-contrib/blob/8d675f486059f77f21b2f0ad7cd303b914d85cbd/tests/conformance/conversation/conversation.go, every request:
req1 := &conversation.Request{
    Message: &messages,
    Tools:   &tools,
}
should be written as
req1 := &conversation.Request{
    Message:     &messages,
    Tools:       &tools,
    Temperature: 1,
}
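Rather than editing every request literal, the workaround could also live in one small helper. This is a hedged sketch: `Request` below is a simplified stand-in for `conversation.Request` (its real fields may differ), and `withSafeTemperature` is a hypothetical name:

```go
package main

import "fmt"

// Request is a simplified stand-in for conversation.Request; the field
// names here are assumptions for illustration only.
type Request struct {
	Message     *string
	Temperature float64
}

// withSafeTemperature works around the langchaingo zero-value issue:
// gpt-5 models reject temperature 0, and until the upstream fix lands
// we pin the only supported value, 1, whenever the caller left it unset.
func withSafeTemperature(req *Request) *Request {
	if req.Temperature == 0 {
		req.Temperature = 1
	}
	return req
}

func main() {
	msg := "hello"
	req := withSafeTemperature(&Request{Message: &msg})
	fmt.Println(req.Temperature) // 1
}
```

Centralizing the fix this way makes it easy to delete once langchaingo stops sending the zero value.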
@giterinhub Hi - thanks for being so quick to address feedback :) I am monitoring your PR in my email notifications so I can rereview asap for ya! I was curious if there were any issues with letting the conformance tests use the default vals on the model field in the test config instead of the || suffix approach?
Hi Sam, thanks for your attention. I had issues when the environment variables weren't set, and needed a way to reference both the environment variables and the default values, and this seemed the most elegant way...
Thank you for your efforts and contribution! Once the build is green we can look to merge 🙌
@holopin-bot @giterinhub Thank you! Here's a digital badge as a small token of appreciation.
Congratulations @giterinhub, the maintainer of this repository has issued you a badge! Here it is: https://holopin.io/claim/cmi4gkvtm001mif04nyt8xqqe
This badge can only be claimed by you, so make sure that your GitHub account is linked to your Holopin account. You can manage those preferences here: https://holopin.io/account. Or if you're new to Holopin, you can simply sign up with GitHub, which will do the trick!