Custom Model Verification Fails: Hardcoded 'max_output_tokens=10' Ignores Global Setting (e.g., for OpenRouter + OpenAI o1-pro)
Bug Description:
The model verification process for custom models appears to use a hardcoded max_output_tokens value of 10. This prevents verification for models requiring a higher minimum value (e.g., OpenAI's o1-pro needs max_output_tokens >= 16). This issue occurs even when the global "Token limit" in the plugin settings is set to a significantly higher value (e.g., 16000).
Steps to Reproduce:
- In Obsidian Copilot plugin settings, set the global "Token limit" (under LLM Parameters) to a high value (e.g., 16000).
- Attempt to add a new "Custom Chat Model" with the following details:
  - Model Name: openai/o1-pro
  - Provider: OpenRouter
  - Base URL: https://openrouter.ai/api/v1
  - API Key: A valid OpenRouter API key.
- Click the "Verify" button.
Expected Behavior:
The verification API call should use a max_output_tokens value derived from the global "Token limit" setting, or the plugin should offer a way to configure this value specifically for the verification call. The model should then verify successfully when the credentials are correct and the model's parameter requirements are met.
Actual Behavior:
Model verification fails. The browser's developer console (Network tab) shows a 400 Bad Request response from OpenRouter. The raw JSON error response is:
{
  "error": {
    "message": "Provider returned error",
    "code": 400,
    "metadata": {
      "raw": "{\n \"error\": {\n \"message\": \"Invalid 'max_output_tokens': integer below minimum value. Expected a value >= 16, but got 10 instead.\",\n \"type\": \"invalid_request_error\",\n \"param\": \"max_output_tokens\",\n \"code\": \"integer_below_min_value\"\n }\n}",
      "provider_name": "OpenAI"
    }
  }
}
This indicates the verification call incorrectly sent max_output_tokens: 10, despite the global plugin setting being much higher.
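For context, the same provider error can be reproduced outside the plugin with a direct request to OpenRouter. The sketch below is a minimal, hypothetical reproduction (not the plugin's code), assuming a Node 18+ environment with a global fetch and an OPENROUTER_API_KEY environment variable; OpenRouter takes the cap as max_tokens, and, as the raw error above shows, the OpenAI provider surfaces it as max_output_tokens for o1-pro:

```typescript
// Minimal, standalone reproduction of the 400 error (assumes Node 18+ and a valid OPENROUTER_API_KEY).
async function reproduce(): Promise<void> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/o1-pro",
      messages: [{ role: "user", content: "hello" }],
      // A cap below 16 triggers the provider error shown above;
      // 16 or higher lets the request through.
      max_tokens: 10,
    }),
  });

  console.log(res.status);        // 400 when the cap is too low
  console.log(await res.json());  // "integer_below_min_value" in the raw provider error
}

reproduce();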
Environment:
- Obsidian Version: V1.8.10
- Copilot Plugin Version: V2.8.9
- Operating System: Windows
Suggested Fix:
The model verification logic should be updated to respect the global "Token limit" setting for max_output_tokens or allow this parameter to be configurable for the verification step.
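A minimal sketch of what that could look like, with hypothetical identifiers (CopilotSettings, verificationMaxTokens, and the floor/ceiling constants are illustrative, not the plugin's actual code): derive the cap for the verification call from the global "Token limit", clamped so verification stays cheap while never dropping below the minimum that models such as o1-pro require.

```typescript
// Hypothetical sketch; identifiers are illustrative, not the plugin's actual code.
interface CopilotSettings {
  maxTokens: number; // the global "Token limit" under LLM Parameters
}

// Some models reject very small caps (o1-pro rejects anything below 16),
// so never go under a small floor; a ceiling keeps the verification call cheap.
const VERIFY_TOKEN_FLOOR = 16;
const VERIFY_TOKEN_CEILING = 256;

function verificationMaxTokens(settings: CopilotSettings): number {
  const capped = Math.min(settings.maxTokens, VERIFY_TOKEN_CEILING);
  return Math.max(capped, VERIFY_TOKEN_FLOOR);
}

// Example: with the global "Token limit" at 16000, the verification request
// would send 256 instead of the hardcoded 10, which satisfies o1-pro's minimum.
```

Whether to cap the value at all is a design choice; the essential change is that the verification request no longer hardcodes 10 and instead takes the global setting into account.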
Yeah, we will fix this bug. For now, you can skip the verification process and use this model (OpenRouter openai/o1-pro) directly.
cc @logancyang
@Emt-lin I assigned this one to you for now, let me know if you'd like me to do this one instead.
I will fix it.