[Bug] Issue with o3-mini Model in AzureOpenAIModel Due to max_tokens Parameter
Description:
When using o3-mini instead of gpt-4o with AzureOpenAIModel, the API call fails because the wrapper sends max_tokens, a parameter o3-mini does not accept; the model expects max_completion_tokens instead, so every request errors out.
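For reference, here is a minimal sketch of the difference at the raw SDK level, using the openai package's AzureOpenAI client (the key, endpoint, API version, and deployment name are placeholders):

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="your-api-key",
    azure_endpoint="your-endpoint",
    api_version="your-api-version",
)

# Fails for o3-mini with error code "unsupported_parameter":
# client.chat.completions.create(
#     model="o3-mini",
#     messages=[{"role": "user", "content": "Test query"}],
#     max_tokens=4000,
# )

# Works: o3-mini expects max_completion_tokens instead.
response = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": "Test query"}],
    max_completion_tokens=4000,
)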
Reproduction Steps:
- Set up a Flask API that calls AzureOpenAIModel with o3-mini.
- Use max_tokens in the request payload instead of max_completion_tokens.
- Observe that the API returns an error indicating that max_tokens is unsupported.
Flask App Code:
from flask import Flask, request, jsonify
from knowledge_storm.lm import AzureOpenAIModel

app = Flask(__name__)

@app.route('/api/test', methods=['POST'])
def test_api():
    data = request.json
    query = data.get('query', 'Test query')
    max_tokens = data.get('max_tokens', 4000)

    # Initialize model
    llm = AzureOpenAIModel(
        model="o3-mini",
        api_key="your-api-key",
        azure_endpoint="your-endpoint",
        api_version="your-api-version",
        max_tokens=max_tokens  # This will cause an error with o3-mini
    )

    response = llm.basic_request(query)
    return jsonify(response)

if __name__ == "__main__":
    app.run(debug=True)
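To trigger the error, POST to the endpoint, for example with the requests library (a sketch; it assumes the app is running locally on Flask's default port):

import requests

# Sends max_tokens through to AzureOpenAIModel, reproducing the failure.
resp = requests.post(
    "http://127.0.0.1:5000/api/test",
    json={"query": "Test query", "max_tokens": 4000},
)
print(resp.status_code, resp.json())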
Expected Behavior:
AzureOpenAIModel should recognize when o3-mini is used and switch max_tokens to max_completion_tokens dynamically.
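One possible shape for this remapping, as a minimal sketch (the helper name and the model set are hypothetical, not part of knowledge_storm):

# Hypothetical helper illustrating the requested remapping; the set of
# affected models is an assumption and would need to be kept up to date.
MODELS_REQUIRING_COMPLETION_TOKENS = {"o3-mini"}

def normalize_token_kwargs(model: str, kwargs: dict) -> dict:
    """Rename max_tokens to max_completion_tokens for models that require it."""
    if model in MODELS_REQUIRING_COMPLETION_TOKENS and "max_tokens" in kwargs:
        kwargs = dict(kwargs)  # avoid mutating the caller's dict
        kwargs["max_completion_tokens"] = kwargs.pop("max_tokens")
    return kwargs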
Actual Behavior:
- The request fails with an error:
{
  "error": {
    "code": "unsupported_parameter",
    "message": "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead."
  }
}
Additional Context:
- The issue arises because o3-mini uses a different parameter naming convention compared to gpt-4o.
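Until this is handled in the library, a possible caller-side workaround is to subclass AzureOpenAIModel and rename the kwarg before any request is made. This is only a sketch: it assumes the wrapper stores its default request parameters in self.kwargs (as dspy-style LM wrappers do), which should be verified against the installed version of knowledge_storm.

from knowledge_storm.lm import AzureOpenAIModel

class O3MiniAzureOpenAIModel(AzureOpenAIModel):
    """Hypothetical workaround: remap max_tokens for o3-mini deployments."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Assumption: default request parameters live in self.kwargs.
        if "max_tokens" in self.kwargs:
            self.kwargs["max_completion_tokens"] = self.kwargs.pop("max_tokens")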