Assert temperature in argparser
Different models support different temperature ranges, so the allowed value should be validated dynamically based on the model in use: https://github.com/google/oss-fuzz-gen/blob/main/run_all_experiments.py#L246
Related: #366
Hi @DonggeLiu ! I'd like to help with this issue. Could it be assigned to me?
> Hi @DonggeLiu ! I'd like to help with this issue. Could it be assigned to me?
Sure, do you know the temperature ranges of the models we use?
> Sure, do you know the temperature ranges of the models we use?
I've found the following temperature ranges:
| Model name | Vertex AI model name (if applicable) | Temperature range | Default temperature |
|---|---|---|---|
| vertex_ai_code-bison | code-bison | 0.0 - 1.0 | 0.2 |
| vertex_ai_code-bison-32k | code-bison-32k | 0.0 - 1.0 | 0.2 |
| vertex_ai_gemini-pro | gemini-1.0-pro | 0.0 - 1.0 | 0.9 |
| vertex_ai_gemini-ultra | gemini-ultra | ? | ? |
| vertex_ai_gemini-experimental | gemini-experimental | 0.0 - 2.0 | 1.0 |
| vertex_ai_gemini-1-5<br>vertex_ai_gemini-1-5-chat | gemini-1.5-pro-002 | 0.0 - 2.0 | 1.0 |
| vertex_ai_gemini-2-flash<br>vertex_ai_gemini-2-flash-chat | gemini-2.0-flash-001 | 0.0 - 2.0 | 1.0 |
| vertex_ai_gemini-2<br>vertex_ai_gemini-2-chat | gemini-2.0-pro-exp-02-05 | 0.0 - 2.0 | 1.0 |
| vertex_ai_gemini-2-think<br>vertex_ai_gemini-2-think-chat | gemini-2.0-flash-thinking-exp-01-21 | 0.0 - 2.0 | 0.7 |
| vertex_ai_claude-3-haiku | claude-3-haiku@20240307 | 0.0 - 1.0 | 0.5 |
| vertex_ai_claude-3-opus | claude-3-opus@20240229 | 0.0 - 1.0 | 0.5 |
| vertex_ai_claude-3-5-sonnet | claude-3-5-sonnet@20240620 | 0.0 - 1.0 | 0.5 |
| gpt-3.5-turbo<br>gpt-3.5-turbo-azure | - | 0.0 - 2.0 | 1.0 |
| gpt-4<br>gpt-4-azure | - | 0.0 - 2.0 | 1.0 |
| gpt-4o<br>gpt-4o-azure | - | 0.0 - 2.0 | 1.0 |
| gpt-4o-mini | - | 0.0 - 2.0 | 1.0 |
| gpt-4-turbo | - | 0.0 - 2.0 | 1.0 |
Unfortunately, I couldn't find this information for the Gemini Ultra model.
Wow thanks for the data table, @dberardi99 !
> Unfortunately, I couldn't find this information for the Gemini Ultra model.
That's fine, let's use range 0.0 - 1.0 and default 0.2 for now.
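A minimal sketch of what the per-model validation could look like. The `TEMPERATURE_RANGES` dict, the `parse_args` helper, and the flag names are hypothetical and only cover a few rows of the table above; the actual argument parser in `run_all_experiments.py` may differ.

```python
import argparse

# Hypothetical mapping: model name -> (min, max, default) temperature,
# taken from a few rows of the table above for illustration.
TEMPERATURE_RANGES = {
    'vertex_ai_code-bison': (0.0, 1.0, 0.2),
    'vertex_ai_gemini-1-5': (0.0, 2.0, 1.0),
    'gpt-4o': (0.0, 2.0, 1.0),
}


def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('-m', '--model', default='gpt-4o',
                        choices=sorted(TEMPERATURE_RANGES))
    parser.add_argument('-t', '--temperature', type=float, default=None)
    args = parser.parse_args(argv)

    low, high, default = TEMPERATURE_RANGES[args.model]
    if args.temperature is None:
        # No value given: fall back to the model-specific default.
        args.temperature = default
    elif not low <= args.temperature <= high:
        # Out-of-range value: reject with a model-aware error message.
        parser.error(f'--temperature must be in [{low}, {high}] for model '
                     f'{args.model}, got {args.temperature}')
    return args
```

Validating after `parse_args()` (rather than in a `type=` callback) lets the check see the chosen model, so the error message can state the range that applies to that model.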