Custom OpenAI-compatible provider options not being passed to API calls
Description
Summary
When using a custom provider with @ai-sdk/openai-compatible, the options (including baseURL and apiKey) configured in opencode.json are not passed to the actual API calls. This results in a NotFoundError because requests are sent without the custom endpoint configuration.
Environment
- OpenCode version: 1.0.164
- OS: Windows 10 (Git Bash)
- Provider: Custom NewAPI endpoint (OpenAI-compatible)
Configuration
~/.config/opencode/opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "my-newapi/glm-4.6",
  "provider": {
    "my-newapi": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "NewAPI Local",
      "options": {
        "baseURL": "http://localhost:3000/v1"
      },
      "models": {
        "glm-4.6": {
          "name": "GLM-4.6"
        }
      }
    }
  }
}
~/.local/share/opencode/auth.json
{
  "my-newapi": {
    "type": "api",
    "key": "sk-*********************"
  }
}
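For context, a minimal sketch (not opencode source) of how these settings are expected to map onto the provider package's factory, assuming the createOpenAICompatible API from @ai-sdk/openai-compatible; the environment variable name is hypothetical:

```ts
// Minimal sketch, assuming the current @ai-sdk/openai-compatible API:
// the configured options should end up as factory arguments like these.
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";

const newapi = createOpenAICompatible({
  name: "my-newapi",                     // provider id from opencode.json
  baseURL: "http://localhost:3000/v1",   // options.baseURL
  apiKey: process.env.MY_NEWAPI_API_KEY, // key from auth.json (env name is hypothetical)
});

const model = newapi("glm-4.6"); // model id from the config
```

If the configured options were forwarded like this, requests would hit the local endpoint; the params={"options":{}} log below suggests they never reach the factory.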
Steps to Reproduce
- Set up a local NewAPI endpoint (OpenAI-compatible) at http://localhost:3000/v1
- Add the custom provider configuration in opencode.json as shown above
- Run opencode auth login and select "Other" to add an API key for my-newapi
- Start OpenCode and select the custom provider model
- Send a message (e.g., "你好", "hello")
Expected Behavior
- The baseURL and apiKey from the configuration should be passed to the @ai-sdk/openai-compatible provider
- API requests should be sent to http://localhost:3000/v1 with the configured API key
- The custom endpoint should receive and respond to requests
Actual Behavior
Logs show empty options
INFO service=llm providerID=my-newapi modelID=glm-4.6 ... params={"options":{}} params
The options object is empty - baseURL is not being passed.
Error
ERROR service=default e=NotFoundError rejection
Requests fail because they're not being sent to the correct endpoint.
Additional Information
Verification
- The NewAPI endpoint works correctly (verified with other tools using the same endpoint)
- The same configuration pattern works with built-in providers
- OpenCode correctly loads the config file and recognizes the provider
- The bundled @ai-sdk/openai-compatible provider is being used
Attempted Solutions
- ✗ Added the options.name field (based on issue #971) - no effect
- ✗ Moved apiKey to options instead of auth.json - no effect
- ✗ Used the {env:VARIABLE_NAME} syntax for apiKey (see the config sketch below) - no effect
- ✗ Changed the provider ID to avoid conflicts - no effect
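For reference, a sketch of the config shape used in the second and third attempts (the environment variable name is illustrative):

```json
{
  "provider": {
    "my-newapi": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://localhost:3000/v1",
        "apiKey": "{env:MY_NEWAPI_API_KEY}"
      }
    }
  }
}
```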
Related Issues
- #971 - Options not forwarded for openai-compatible providers
- #5210 - Custom OpenAI-compatible provider returns no text content
- #5163 - Custom baseURL configuration returns error
Root Cause
The bundled @ai-sdk/openai-compatible provider in OpenCode appears not to forward the options from the config to the actual provider instance, so API calls are made without the custom baseURL and other options.
Workaround
None found. Custom OpenAI-compatible endpoints are currently unusable with OpenCode.
Logs
From ~/.local/share/opencode/log/:
INFO service=provider providerID=my-newapi pkg=@ai-sdk/openai-compatible using bundled provider
INFO service=provider status=completed duration=0 providerID=my-newapi getSDK
INFO service=llm providerID=my-newapi modelID=glm-4.6 sessionID=ses_*** small=false agent=build params={"options":{}} params
ERROR service=default e=NotFoundError rejection
The critical line shows params={"options":{}} - the options object that should contain the baseURL is completely empty.
This issue might be a duplicate of existing issues. Please check:
- #971: Model options are not forwarded for openai-compatible providers unless provider.name option is given
- #5210: Custom OpenAI-compatible provider returns no text content
- #5163: Custom baseURL configuration returns error
Feel free to ignore if none of these address your specific case.
i should fix this today
is this issue fixed? im on 1.0.167 and the options are still {}
Hm I can't replicate this
is there a way to get verbose logs? like where requests hit, the body, etc
@pyoif can u show me output of:
opencode debug config
and opencode auth list
@rekram1-node
opencode debug config
Output
{
"$schema": "https://opencode.ai/config.json",
"autoupdate": true,
"agent": {
"plan": {
"disable": true,
"prompt": "",
"name": "plan"
},
"build": {
"disable": true,
"prompt": "",
"name": "build"
},
"specify": {
"model": "mnnai/gemini-2.5-pro",
"temperature": 0.8,
"prompt": "## VARIABLE\n\nTEMPLATE_DIR = $XDG_CONFIG_HOME/opencode/templates\nSCRIPT_DIR = $XDG_CONFIG_HOME/opencode/scripts\nSCRIPT = {\nps: {SCRIPT_DIR}/bash/create-new-feature.sh --json \"{ARGS}\"\nbash: {SCRIPT_DIR}/powershell/create-new-feature.ps1 -Json \"{ARGS}\"\n}\n\n## OUTLINE\n\nImagine you are a project specification expert, you only has 1 job that is generating spec file. I will give you the feature description.\nGiven that feature description, do this:\n\n1. **Generate a concise short name** (2-4 words) for the branch:\n - Analyze the feature description and extract the most meaningful keywords\n - Create a 2-4 word short name that captures the essence of the feature\n - Use action-noun format when possible (e.g., \"add-user-auth\", \"fix-payment-bug\")\n - Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)\n - Keep it concise but descriptive enough to understand the feature at a glance\n - Examples:\n - \"I want to add user authentication\" → \"user-auth\"\n - \"Implement OAuth2 integration for the API\" → \"oauth2-api-integration\"\n - \"Create a dashboard for analytics\" → \"analytics-dashboard\"\n - \"Fix payment processing timeout bug\" → \"fix-payment-timeout\"\n\n2. **Check for existing branches before creating new one**:\n\n a. First, fetch all remote branches to ensure we have the latest information:\n\n ```bash\n git fetch --all --prune\n ```\n\n b. Find the highest feature number across all sources for the short-name:\n - Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-<short-name>$'`\n - Local branches: `git branch | grep -E '^[* ]*[0-9]+-<short-name>$'`\n - Specs directories: Check for directories matching `specs/[0-9]+-<short-name>`\n\n c. Determine the next available number:\n - Extract all numbers from all three sources\n - Find the highest number N\n - Use N+1 for the new branch number\n\n d. Run the script `{SCRIPT}` with the calculated number and short-name:\n - Pass `--number N+1` and `--short-name \"your-short-name\"` along with the feature description\n - Bash example: `{SCRIPT} --json --number 5 --short-name \"user-auth\" \"Add user authentication\"`\n - PowerShell example: `{SCRIPT} -Json -Number 5 -ShortName \"user-auth\" \"Add user authentication\"`\n\n **IMPORTANT**:\n - Check all three sources (remote branches, local branches, specs directories) to find the highest number\n - Only match branches/directories with the exact short-name pattern\n - If no existing branches/directories found with this short-name, start with number 1\n - You must only ever run this script once per feature\n - The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for\n - The JSON output will contain BRANCH_NAME and SPEC_FILE paths\n - For single quotes in args like \"I'm Groot\", use escape syntax: e.g 'I'\\''m Groot' (or double-quote if possible: \"I'm Groot\")\n\n3. Load `{TEMPLATE_DIR}/spec-template.md` to understand required sections.\n\n4. Follow this execution flow:\n 1. Parse user description from Input\n If empty: ERROR \"No feature description provided\"\n 2. Extract key concepts from description\n Identify: actors, actions, data, constraints\n 3. 
For unclear aspects:\n - Make informed guesses based on context and industry standards\n - Only mark with [NEEDS CLARIFICATION: specific question] if:\n - The choice significantly impacts feature scope or user experience\n - Multiple reasonable interpretations exist with different implications\n - No reasonable default exists\n - **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**\n - Prioritize clarifications by impact: scope > security/privacy > user experience > technical details\n 4. Fill User Scenarios & Testing section\n If no clear user flow: ERROR \"Cannot determine user scenarios\"\n 5. Generate Functional Requirements\n Each requirement must be testable\n Use reasonable defaults for unspecified details (document assumptions in Assumptions section)\n 6. Define Success Criteria\n Create measurable, technology-agnostic outcomes\n Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)\n Each criterion must be verifiable without implementation details\n 7. Identify Key Entities (if data involved)\n 8. Return: SUCCESS (spec ready for planning)\n\n5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.\n\n6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:\n\n a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:\n\n ```markdown\n # Specification Quality Checklist: [FEATURE NAME]\n\n **Purpose**: Validate specification completeness and quality before proceeding to planning\n **Created**: [DATE]\n **Feature**: [Link to spec.md]\n\n ## Content Quality\n\n - [ ] No implementation details (languages, frameworks, APIs)\n - [ ] Focused on user value and business needs\n - [ ] Written for non-technical stakeholders\n - [ ] All mandatory sections completed\n\n ## Requirement Completeness\n\n - [ ] No [NEEDS CLARIFICATION] markers remain\n - [ ] Requirements are testable and unambiguous\n - [ ] Success criteria are measurable\n - [ ] Success criteria are technology-agnostic (no implementation details)\n - [ ] All acceptance scenarios are defined\n - [ ] Edge cases are identified\n - [ ] Scope is clearly bounded\n - [ ] Dependencies and assumptions identified\n\n ## Feature Readiness\n\n - [ ] All functional requirements have clear acceptance criteria\n - [ ] User scenarios cover primary flows\n - [ ] Feature meets measurable outcomes defined in Success Criteria\n - [ ] No implementation details leak into specification\n\n ## Notes\n\n - Items marked incomplete require spec updates before calling plan agent\n ```\n\n b. **Run Validation Check**: Review the spec against each checklist item:\n - For each item, determine if it passes or fails\n - Document specific issues found (quote relevant spec sections)\n\n c. **Handle Validation Results**:\n - **If all items pass**: Mark checklist complete and proceed to step 6\n\n - **If items fail (excluding [NEEDS CLARIFICATION])**:\n 1. List the failing items and specific issues\n 2. Update the spec to address each issue\n 3. Re-run validation until all items pass (max 3 iterations)\n 4. If still failing after 3 iterations, document remaining issues in checklist notes and warn user\n\n - **If [NEEDS CLARIFICATION] markers remain**:\n 1. 
Extract all [NEEDS CLARIFICATION: ...] markers from the spec\n 2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest\n 3. For each clarification needed (max 3), present options to user in this format:\n\n ```markdown\n ## Question [N]: [Topic]\n\n **Context**: [Quote relevant spec section]\n\n **What we need to know**: [Specific question from NEEDS CLARIFICATION marker]\n\n **Suggested Answers**:\n\n | Option | Answer | Implications |\n | ------ | ------------------------- | ------------------------------------- |\n | A | [First suggested answer] | [What this means for the feature] |\n | B | [Second suggested answer] | [What this means for the feature] |\n | C | [Third suggested answer] | [What this means for the feature] |\n | Custom | Provide your own answer | [Explain how to provide custom input] |\n\n **Your choice**: _[Wait for user response]_\n ```\n\n 4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:\n - Use consistent spacing with pipes aligned\n - Each cell should have spaces around content: `| Content |` not `|Content|`\n - Header separator must have at least 3 dashes: `|--------|`\n - Test that the table renders correctly in markdown preview\n 5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)\n 6. Present all questions together before waiting for responses\n 7. Wait for user to respond with their choices for all questions (e.g., \"Q1: A, Q2: Custom - [details], Q3: B\")\n 8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer\n 9. Re-run validation after all clarifications are resolved\n\n d. **Update Checklist**: After each validation iteration, update the checklist file with current pass/fail status\n\n7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (@plan agent).\n\n**NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.\n\n## General Guidelines\n\n## Quick Guidelines\n\n- Focus on **WHAT** users need and **WHY**.\n- Avoid HOW to implement (no tech stack, APIs, code structure).\n- Written for business stakeholders, not developers.\n- DO NOT create any checklists that are embedded in the spec. That will be a separate command.\n\n### Section Requirements\n\n- **Mandatory sections**: Must be completed for every feature\n- **Optional sections**: Include only when relevant to the feature\n- When a section doesn't apply, remove it entirely (don't leave as \"N/A\")\n\n### For AI Generation\n\nWhen creating this spec from a user prompt:\n\n1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps\n2. **Document assumptions**: Record reasonable defaults in the Assumptions section\n3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions that:\n - Significantly impact feature scope or user experience\n - Have multiple reasonable interpretations with different implications\n - Lack any reasonable default\n4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details\n5. **Think like a tester**: Every vague requirement should fail the \"testable and unambiguous\" checklist item\n6. 
**Common areas needing clarification** (only if no reasonable default exists):\n - Feature scope and boundaries (include/exclude specific use cases)\n - User types and permissions (if multiple conflicting interpretations possible)\n - Security/compliance requirements (when legally/financially significant)\n\n**Examples of reasonable defaults** (don't ask about these):\n\n- Data retention: Industry-standard practices for the domain\n- Performance targets: Standard web/mobile app expectations unless specified\n- Error handling: User-friendly messages with appropriate fallbacks\n- Authentication method: Standard session-based or OAuth2 for web apps\n- Integration patterns: RESTful APIs unless specified otherwise\n\n### Success Criteria Guidelines\n\nSuccess criteria must be:\n\n1. **Measurable**: Include specific metrics (time, percentage, count, rate)\n2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools\n3. **User-focused**: Describe outcomes from user/business perspective, not system internals\n4. **Verifiable**: Can be tested/validated without knowing implementation details\n\n**Good examples**:\n\n- \"Users can complete checkout in under 3 minutes\"\n- \"System supports 10,000 concurrent users\"\n- \"95% of searches return results in under 1 second\"\n- \"Task completion rate improves by 40%\"\n\n**Bad examples** (implementation-focused):\n\n- \"API response time is under 200ms\" (too technical, use \"Users see results instantly\")\n- \"Database can handle 1000 TPS\" (implementation detail, use user-facing metric)\n- \"React components render efficiently\" (framework-specific)\n- \"Redis cache hit rate above 80%\" (technology-specific)",
"tools": {
"write": true,
"edit": true,
"bash": true
},
"description": "Create or update the feature specification from a natural language feature description.",
"mode": "primary",
"name": "specify"
}
},
"provider": {
"openrouter": {
"models": {
"nex-agi/deepseek-v3.1-nex-n1:free": {},
"moonshotai/kimi-k2:free": {}
}
},
"mnnai": {
"name": "MNN AI",
"npm": "@ai-sdk/openai-compatible",
"models": {
"grok-4.1-fast": {},
"grok-4-fast": {},
"gemini-2.5-pro": {
"name": "gemini-2.5-pro"
}
},
"options": {
"baseURL": "https://api.mnnai.ru/v1/"
}
}
},
"mode": {},
"plugin": [],
"command": {},
"username": "hiyurigi",
"keybinds": {
"leader": "ctrl+x",
"app_exit": "ctrl+c,ctrl+d,<leader>q",
"editor_open": "<leader>e",
"theme_list": "<leader>t",
"sidebar_toggle": "<leader>b",
"scrollbar_toggle": "none",
"username_toggle": "none",
"status_view": "<leader>s",
"session_export": "<leader>x",
"session_new": "<leader>n",
"session_list": "<leader>l",
"session_timeline": "<leader>g",
"session_share": "none",
"session_unshare": "none",
"session_interrupt": "escape",
"session_compact": "<leader>c",
"messages_page_up": "pageup",
"messages_page_down": "pagedown",
"messages_half_page_up": "ctrl+alt+u",
"messages_half_page_down": "ctrl+alt+d",
"messages_first": "ctrl+g,home",
"messages_last": "ctrl+alt+g,end",
"messages_last_user": "none",
"messages_copy": "<leader>y",
"messages_undo": "<leader>u",
"messages_redo": "<leader>r",
"messages_toggle_conceal": "<leader>h",
"tool_details": "none",
"model_list": "<leader>m",
"model_cycle_recent": "f2",
"model_cycle_recent_reverse": "shift+f2",
"model_cycle_favorite": "none",
"model_cycle_favorite_reverse": "none",
"command_list": "ctrl+p",
"agent_list": "<leader>a",
"agent_cycle": "tab",
"agent_cycle_reverse": "shift+tab",
"input_clear": "ctrl+c",
"input_paste": "ctrl+v",
"input_submit": "return",
"input_newline": "shift+return,ctrl+return,alt+return,ctrl+j",
"input_move_left": "left,ctrl+b",
"input_move_right": "right,ctrl+f",
"input_move_up": "up",
"input_move_down": "down",
"input_select_left": "shift+left",
"input_select_right": "shift+right",
"input_select_up": "shift+up",
"input_select_down": "shift+down",
"input_line_home": "ctrl+a",
"input_line_end": "ctrl+e",
"input_select_line_home": "ctrl+shift+a",
"input_select_line_end": "ctrl+shift+e",
"input_visual_line_home": "alt+a",
"input_visual_line_end": "alt+e",
"input_select_visual_line_home": "alt+shift+a",
"input_select_visual_line_end": "alt+shift+e",
"input_buffer_home": "home",
"input_buffer_end": "end",
"input_select_buffer_home": "shift+home",
"input_select_buffer_end": "shift+end",
"input_delete_line": "ctrl+shift+d",
"input_delete_to_line_end": "ctrl+k",
"input_delete_to_line_start": "ctrl+u",
"input_backspace": "backspace,shift+backspace",
"input_delete": "ctrl+d,delete,shift+delete",
"input_undo": "ctrl+-,super+z",
"input_redo": "ctrl+.,super+shift+z",
"input_word_forward": "alt+f,alt+right,ctrl+right",
"input_word_backward": "alt+b,alt+left,ctrl+left",
"input_select_word_forward": "alt+shift+f,alt+shift+right",
"input_select_word_backward": "alt+shift+b,alt+shift+left",
"input_delete_word_forward": "alt+d,alt+delete,ctrl+delete",
"input_delete_word_backward": "ctrl+w,ctrl+backspace,alt+backspace",
"history_previous": "up",
"history_next": "down",
"session_child_cycle": "<leader>right",
"session_child_cycle_reverse": "<leader>left",
"terminal_suspend": "ctrl+z",
"terminal_title_toggle": "none"
}
}
opencode auth list
Output
┌ Credentials ~/.local/share/opencode/auth.json
│
● OpenRouter api
│
● mnnai api
│
└ 2 credentials
I already tried hitting the provider using curl; it's working without a problem
curl https://api.mnnai.ru/v1/chat/completions \
-H "Authorization: Bearer xxx" \
-H "Content-Type: application/json" \
-d '{"model": "gemini-2.5-pro", "max_tokens": 32000, "messages": [{"role": "user", "content": "How can I integrate OpenAI-compatible APIs?"}], "stream": true}'
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092945, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "Of course! Integrating OpenAI-compatible APIs is a powerful way to leverage the vast ecosystem of tools built for OpenAI while using a variety of different model providers. This could be for cost savings, access to specialized models, better performance, or open-"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092945, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "source flexibility.\n\nThe key advantage is that you often only need to change a few lines of code to switch providers.\n\nHere is a comprehensive guide on how to do it, broken down into key steps.\n\n---\n\n### 1. Understand the Core Concept\n\nAn \"OpenAI-compatible API\" means that"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092945, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " a third-party provider has designed their API endpoint to accept requests in the same format as OpenAI's API and return responses in the same format.\n\nThis means you can use popular clients like OpenAI's official Python and JavaScript libraries by simply pointing them to a different server.\n\nThe two most important parameters you'"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092945, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "ll need to change are:\n\n1. `api_key`: The authentication token from your new provider.\n2. `base_url`: The URL of the new provider's API endpoint.\n\n---\n\n### 2. Choose an OpenAI-Compatible Provider\n\nFirst, you need an account with a provider"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092945, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " that offers an OpenAI-compatible endpoint. Here are some popular categories and examples:\n\n| Category | Provider Examples | Key Features |\n| :--- | :--- | :--- |\n| **Cloud Platforms** | **Azure OpenAI Service** | Enterprise-grade security, private networking, integration with Azure services. |\n|"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092945, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " **Model Providers** | **Together AI**, **Anyscale**, **Perplexity**, **Fireworks AI** | Access to dozens of open-source models (Llama, Mixtral, Code Llama) with very competitive pricing. |\n| **Self-Hosted / Local** | **LM Studio**, **O"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092945, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "llama**, **vLLM**, **Text-Generation-WebUI** | Run models on your own hardware for maximum privacy, control, and no inference costs. |\n\nFor this guide, let's use **Together AI** as a primary example because it's a great showcase for using popular open-source models"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092945, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": ". The same principles apply to all other providers.\n\n---\n\n### 3. Step-by-Step Integration Guide (Using Python)\n\nLet's assume you have existing code that uses the `openai` Python library.\n\n#### Step 3.1: Get Your Credentials\n\n1. **Sign up**"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092950, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " for your chosen provider (e.g., go to [Together.ai](https://www.together.ai/) and create an account).\n2. **Find your API Key**. This is usually in your account settings or a dedicated \"API Keys\" section.\n3. **Find the `base_url`"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092950, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "**. The provider's documentation will specify this.\n * For **Together AI**, it is: `https://api.together.xyz/v1`\n * For **LM Studio** (local), it's typically: `http://localhost:1234/v1`\n"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092950, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " * For **Ollama** (local, using a proxy), it could be: `http://localhost:8000/v1`\n\n#### Step 3.2: Install the `openai` Library\n\nIf you don't already have it, install the official library.\n\n```bash\n"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092950, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "pip install openai\n```\n\n#### Step 3.3: Modify Your Code\n\nHere is the \"before\" and \"after\" to show how simple the change is.\n\n**Before: Standard OpenAI Integration**\n\nThis code calls OpenAI's `gpt-3.5-turbo`.\n\n```python\nimport os"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092950, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "\nfrom openai import OpenAI\n\n# Get API key from environment variables\n# os.environ[\"OPENAI_API_KEY\"] = \"sk-...\" \n\nclient = OpenAI() # Initializes with OPENAI_API_KEY and default base_url\n\nresponse = client.chat.completions.create("}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092950, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "\n model=\"gpt-3.5-turbo\",\n messages=[\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"What are the three largest cities in the world by population?\"},\n ]\n)"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092950, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "\n\nprint(response.choices[0].message.content)\n```\n\n**After: OpenAI-Compatible Integration (e.g., Together AI)**\n\nThis code calls `Mixtral-8x7B` via Together AI.\n\n```python\nimport os\nfrom openai import OpenAI\n\n# 1."}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092950, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " Set the API key for your new provider\n# It's best practice to use environment variables\nos.environ[\"TOGETHER_API_KEY\"] = \"your-together-api-key-here\"\n\n# 2. Initialize the client, pointing it to the new provider's API\nclient = OpenAI(\n"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092950, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " api_key=os.environ.get(\"TOGETHER_API_KEY\"),\n base_url=\"https://api.together.xyz/v1\", # 3. Set the base_url\n)\n\n# 4. Use the model name from your new provider\nresponse = client.chat."}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092950, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "completions.create(\n model=\"mistralai/Mixtral-8x7B-Instruct-v0.1\", \n messages=[\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"What are"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092956, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " the three largest cities in the world by population?\"},\n ]\n)\n\nprint(response.choices[0].message.content)\n```\n\n**Key Changes Summary:**\n\n1. **`api_key`**: You now pass the API key from Together AI.\n2. **`"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092956, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "base_url`**: You explicitly set the `base_url` to `https://api.together.xyz/v1`.\n3. **`model`**: You must use a model identifier available on that provider's platform (e.g., `\"mistralai/Mixtral-8x7B"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092956, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "-Instruct-v0.1\"` instead of `\"gpt-3.5-turbo\"`). Check your provider's documentation for a list of available models.\n\n---\n\n### 4. Handling Provider-Specific Differences\n\nWhile the API structure is compatible, be aware of some potential differences:\n\n* **Model Parameters"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092956, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "**: Some providers might not support every parameter from OpenAI's API (e.g., `logprobs`, `top_logprobs`). Check their documentation.\n* **Streaming Support**: Most providers support streaming responses, and it works identically. Just use `stream=True` in your `create` call and"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092956, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " iterate over the chunks.\n* **Error Handling**: Error codes and messages might differ slightly, though many try to emulate OpenAI's `4xx` and `5xx` status codes.\n* **Dummy API Key for Local Models**: When running models locally with tools like LM Studio or Ollama, the"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092956, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " server often doesn't require an API key. You can usually pass any non-empty string.\n\n**Example: Using a Local Model with LM Studio**\n\n1. In LM Studio, load a model and start the server (from the `</>` Local Server tab).\n2. Note the server"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092956, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " URL, which is usually `http://localhost:1234/v1`.\n\nYour Python code would look like this:\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI(\n base_url=\"http://localhost:1234/v1\", # Point to local server\n api"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092956, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "_key=\"not-needed\" # A dummy key is required, but the value doesn't matter\n)\n\nresponse = client.chat.completions.create(\n # The model name is often \"local-model\" or can be found in the LM Studio UI\n model=\"local-"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092956, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "model\", \n messages=[\n {\"role\": \"user\", \"content\": \"Explain the concept of recursion in one paragraph.\"}\n
],\n temperature=0.7,\n)\n\nprint(response.choices[0].message.content)\n```\n\n### 5. Best Practices for Production"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092956, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " Code\n\nFor more robust applications, consider these practices:\n\n* **Use Environment Variables**: Never hardcode API keys or
base URLs. Use a `.env` file or your deployment platform's secret management.\n* **Create a Centralized Client Factory**: To easily switch between providers for testing or different tasks"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092958, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": ", create a function that returns a pre-configured client.\n\n**Example Client Factory:**\n\n```python\nfrom openai import OpenAI\nimport os\n\ndef get_openai_client(provider=\"openai\"):\n \"\"\"\n Returns a configured OpenAI client for a specific provider.\n \n Args:\n provider ("}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092958, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "str): 'openai', 'together', or 'local'.\n \"\"\"\n if provider == \"together\":\n return OpenAI(\n api_key=os.environ.get(\"TOGETHER_API_KEY\"),\n base_url=\"https://api.together.xyz/v1\",\n"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092958, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " )\n elif provider == \"local\":\n return OpenAI(\n base_url=\"http://localhost:1234/v1\",\n api_key=\"not-needed\",\n )\n else: # Default to OpenAI\n return OpenAI(api_key=os"}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092958, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": ".environ.get(\"OPENAI_API_KEY\"))\n\n# --- Usage ---\n# client = get_openai_client(\"together\")\n# response = client.chat.completions.create(...)\n\n# client = get_openai_client(\"local\")\n# response = client.chat.completions."}}]}
data: {"id": "text-mnn-7ca1d813-8b64-4a7c-9e54-029b78faff4e", "object": "chat.completion", "created": 1766092958, "model": "gemini-2.5-pro", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "create(...)\n```\n\nThis approach makes your application flexible and easy to configure, allowing you to switch LLM backends without changing your core application logic."}}]}
data: {"choices": [{"index": 0, "delta": {"role": "assistant", "content": ""}, "finish_reason": "stop"}], "usage": {"prompt_tokens": 9, "completion_tokens": 1914, "total_tokens": 1923}}
data: [DONE]
Similar issue here, using openai-compatible with the OpenWebUI API; the same API works for other tools.
@rekram1-node
I can confirm that opencode is hitting my provider endpoint correctly, and it's consuming my credits (input + output tokens), but there is no output in opencode
Can u show me:
opencode run hello --print-logs --model mnnai/gemini-2.5-pro
I can't replicate your issue but I did find that in some cases the error returned was hard to read
I took it to another level: I created a local proxy to log the HTTP requests and responses, and now I know where the problem is and have fixed it.
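For anyone wanting to reproduce this kind of check, a minimal sketch of such a logging proxy (a hypothetical helper, not the poster's actual proxy; assumes Node 20+, run with e.g. npx tsx; the port and upstream origin are illustrative):

```ts
// Logging reverse proxy sketch: prints every request and response body
// passing between opencode and an OpenAI-compatible provider.
import http from "node:http";

const UPSTREAM = "https://api.mnnai.ru"; // real provider origin (illustrative)

http
  .createServer(async (req, res) => {
    // Buffer and log the incoming request body.
    const chunks: Buffer[] = [];
    for await (const chunk of req) chunks.push(chunk as Buffer);
    const body = Buffer.concat(chunks);
    console.log(">>", req.method, req.url, body.toString() || "(empty body)");

    // Forward only the headers that matter for an OpenAI-compatible call.
    const upstream = await fetch(UPSTREAM + req.url, {
      method: req.method,
      headers: {
        "content-type": String(req.headers["content-type"] ?? "application/json"),
        authorization: String(req.headers.authorization ?? ""),
      },
      body: body.length > 0 ? body : undefined,
    });
    console.log("<<", upstream.status, upstream.statusText);

    // Relay the response, logging each chunk (also works for SSE streams).
    const headers = Object.fromEntries(upstream.headers);
    delete headers["content-length"]; // body is re-streamed after decompression
    delete headers["content-encoding"];
    res.writeHead(upstream.status, headers);
    if (upstream.body) {
      for await (const chunk of upstream.body) {
        process.stdout.write(chunk);
        res.write(chunk);
      }
    }
    res.end();
  })
  .listen(8787, () => console.log("logging proxy on http://localhost:8787"));
```

Point the provider's baseURL at http://localhost:8787/v1 and every request and response body gets printed to the terminal.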
is it in our code?
In my case it's a provider issue: the provider doesn't return a valid response when using tools. I can also confirm this is not an opencode issue; opencode hits the API correctly (confirmed using my local proxy). It's a provider issue.
@ShiroEirin @J-Light can either of u share configs and the logs?