
Custom Model Verification Fails: Hardcoded 'max_output_tokens=10' Ignores Global Setting (e.g., for OpenRouter + OpenAI o1-pro)

Open • door9747 opened this issue 7 months ago • 1 comment

Bug Description: The model verification process for custom models appears to use a hardcoded max_output_tokens value of 10. This prevents verification for models requiring a higher minimum value (e.g., OpenAI's o1-pro needs max_output_tokens >= 16). This issue occurs even when the global "Token limit" in the plugin settings is set to a significantly higher value (e.g., 16000).
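For reference, the failing request presumably looks something like the sketch below. This is a hypothetical reconstruction, not the plugin's actual source; the function name and request shape are illustrative, and only the hardcoded cap of 10 is taken from the observed error.

// Hypothetical sketch of the current verification call (TypeScript).
// Identifiers are illustrative; only the hardcoded cap of 10 is inferred
// from the error returned by the provider.
async function verifyCustomModel(baseUrl: string, apiKey: string, model: string): Promise<boolean> {
  const response = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: "hello" }],
      max_tokens: 10, // hardcoded; below o1-pro's minimum of 16
    }),
  });
  return response.ok;
}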

Steps to Reproduce:

  1. In Obsidian Copilot plugin settings, set the global "Token limit" (under LLM Parameters) to a high value (e.g., 16000).
  2. Attempt to add a new "Custom Chat Model" with the following details:
    • Model Name: openai/o1-pro
    • Provider: OpenRouter
    • Base URL: https://openrouter.ai/api/v1
    • API Key: A valid OpenRouter API key.
  3. Click the "Verify" button.
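The failure can also be reproduced outside the plugin with a direct request to OpenRouter. A minimal sketch follows (the max_tokens cap sent here is apparently translated by OpenRouter into OpenAI's max_output_tokens, judging by the error payload under "Actual Behavior"; substitute a real API key):

// Standalone reproduction against OpenRouter (sketch; replace the key).
const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer <OPENROUTER_API_KEY>",
  },
  body: JSON.stringify({
    model: "openai/o1-pro",
    messages: [{ role: "user", content: "ping" }],
    max_tokens: 10, // below o1-pro's minimum of 16 -> 400 Bad Request
  }),
});
console.log(res.status); // 400, with the error body shown below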

Expected Behavior: The verification API call should use the max_output_tokens value derived from the global "Token limit" setting, or provide a way to configure this specifically for the verification call. The model should verify successfully if all credentials are correct and the model's parameter requirements are met.

Actual Behavior: Model verification fails. The browser's developer console (Network tab) shows a 400 Bad Request response from OpenRouter. The raw JSON error response is:

{
  "error": {
    "message": "Provider returned error",
    "code": 400,
    "metadata": {
      "raw": "{\n  \"error\": {\n    \"message\": \"Invalid 'max_output_tokens': integer below minimum value. Expected a value >= 16, but got 10 instead.\",\n    \"type\": \"invalid_request_error\",\n    \"param\": \"max_output_tokens\",\n    \"code\": \"integer_below_min_value\"\n  }\n}",
      "provider_name": "OpenAI"
    }
  }
}

This indicates the verification call incorrectly sent max_output_tokens: 10, despite the global plugin setting being much higher.

Environment:

  • Obsidian Version: v1.8.10
  • Copilot Plugin Version: v2.8.9
  • Operating System: Windows

Suggested Fix: The model verification logic should be updated to respect the global "Token limit" setting for max_output_tokens or allow this parameter to be configurable for the verification step.
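A minimal sketch of such a fix, assuming a settings object that exposes the global "Token limit" value (all names here are hypothetical, not the plugin's actual identifiers):

// Sketch of the suggested fix (TypeScript; identifiers are illustrative).
const MIN_VERIFY_TOKENS = 16;  // floor that satisfies o1-pro's requirement
const MAX_VERIFY_TOKENS = 256; // keep the verification probe cheap

function verificationMaxTokens(globalTokenLimit?: number): number {
  // Respect the user's global "Token limit" where available, clamped to a
  // range that known models accept without making verification expensive.
  const limit = globalTokenLimit ?? MIN_VERIFY_TOKENS;
  return Math.min(Math.max(limit, MIN_VERIFY_TOKENS), MAX_VERIFY_TOKENS);
}

// e.g. verificationMaxTokens(16000) -> 256, verificationMaxTokens(undefined) -> 16

Clamping the value keeps the verification call inexpensive while still clearing provider minimums; alternatively, the raw global value could be passed through unchanged.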

door9747 · May 13 '25, 21:05

Yeah, we will fix this bug. You can skip the verification process and use this model (OpenRouter openai/o1-pro) directly.

cc @logancyang see:

[screenshot attached]

Emt-lin · May 15 '25, 09:05

@Emt-lin I assigned this one to you for now, let me know if you'd like me to take it instead

logancyang · Jun 01 '25, 23:06

> @Emt-lin I assigned this one to you for now, let me know if you'd like me to take it instead

I will fix it.

Emt-lin · Jun 02 '25, 14:06