abacus: some model names are incorrect & raw output in chat
Description
Some Abacus-provided models (those with a `/` in the slug?) are not working. I confirmed this by creating a separate model entry in opencode.json with the corrected model name (config below).
The models are also printing their tool-call commands into the chat instead of executing them; I tested this with DeepSeek and Claude Opus.
For reference, here is the full list of canonical names for the models Abacus provides, taken from their API page:
route-llm
gpt-4o-2024-11-20
gpt-4o-mini
o4-mini
o3-pro
o3
o3-mini
gpt-4.1
gpt-4.1-mini
gpt-4.1-nano
gpt-5
gpt-5-mini
gpt-5-nano
gpt-5.1
gpt-5.1-chat-latest
gpt-5.2
gpt-5.2-chat-latest
openai/gpt-oss-120b
claude-3-7-sonnet-20250219
claude-sonnet-4-20250514
claude-opus-4-20250514
claude-opus-4-1-20250805
claude-sonnet-4-5-20250929
claude-haiku-4-5-20251001
claude-opus-4-5-20251101
meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo
meta-llama/Meta-Llama-3.1-70B-Instruct
meta-llama/Meta-Llama-3.1-8B-Instruct
llama-3.3-70b-versatile
gemini-2.0-flash-001
gemini-2.0-pro-exp-02-05
gemini-2.5-pro
gemini-2.5-flash
gemini-3-pro-preview
gemini-3-flash-preview
qwen-2.5-coder-32b
Qwen/Qwen2.5-72B-Instruct
Qwen/QwQ-32B
Qwen/Qwen3-235B-A22B-Instruct-2507
Qwen/Qwen3-32B
qwen/qwen3-coder-480b-a35b-instruct
qwen3-max
grok-4-0709
grok-4-fast-non-reasoning
grok-4-1-fast-non-reasoning
grok-code-fast-1
kimi-k2-turbo-preview
deepseek/deepseek-v3.1
deepseek-ai/DeepSeek-V3.1-Terminus
deepseek-ai/DeepSeek-R1
deepseek-ai/DeepSeek-V3.2
zai-org/glm-4.5
zai-org/glm-4.6
zai-org/glm-4.7
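To make the suspected-broken set concrete, here is a quick sketch (a hypothetical helper, not part of opencode) that partitions the slugs above by whether they contain a slash, the suspected trigger; the list is abbreviated here for brevity:

```python
# Partition Abacus model slugs by whether they contain a "/",
# the suspected trigger for the broken model names.
ABACUS_MODELS = [
    "route-llm",
    "gpt-4o-mini",
    "openai/gpt-oss-120b",
    "claude-opus-4-5-20251101",
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "deepseek-ai/DeepSeek-V3.2",
    "zai-org/glm-4.6",
]  # abbreviated; the full list is above

with_slash = [m for m in ABACUS_MODELS if "/" in m]
without_slash = [m for m in ABACUS_MODELS if "/" not in m]

print("suspected broken:", with_slash)
print("presumably fine:", without_slash)
```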
And here is my project-local opencode.json:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "chrome-devtools": {
      "type": "local",
      "command": ["chrome-devtools-mcp", "--browserUrl", "http://127.0.0.1:9222"],
      "enabled": true
    },
    "github": {
      "type": "remote",
      "url": "https://api.githubcopilot.com/mcp/",
      "headers": {
        "Authorization": "Bearer {env:GITHUB_PAT_TOKEN}"
      },
      "enabled": true
    },
    "filesystem": {
      "type": "local",
      "command": ["npx", "-y", "@modelcontextprotocol/server-filesystem", "."],
      "enabled": true
    },
    "git": {
      "type": "local",
      "command": ["uvx", "mcp-server-git", "--repository", "."],
      "enabled": true
    },
    "fetch": {
      "type": "local",
      "command": ["uvx", "mcp-server-fetch"],
      "enabled": true
    },
    "memory": {
      "type": "local",
      "command": ["npx", "-y", "@modelcontextprotocol/server-memory"],
      "enabled": true
    }
  },
  "provider": {
    "abacus": {
      "options": {
        "baseURL": "https://routellm.abacus.ai/v1"
      },
      "models": {
        "deepseek-ai/DeepSeek-V3.2": {
          "name": "DeepSeek V3.2",
          "family": "deepseek",
          "release_date": "2025-06-15",
          "last_updated": "2025-06-15",
          "attachment": false,
          "reasoning": true,
          "temperature": true,
          "tool_call": true,
          "open_weights": true,
          "cost": {
            "input": 0.27,
            "output": 0.4
          },
          "limit": {
            "context": 128000,
            "output": 8192
          },
          "modalities": {
            "input": ["text"],
            "output": ["text"]
          }
        }
      }
    }
  }
}
```
Plugins
None
OpenCode version
1.1.21
Steps to reproduce
- add abacus provider
- use any model with a slash in the name (for the raw output in chat, all models appear to be affected)
Screenshot and/or share link
No response
Operating System
macOS 26.0.1
Terminal
iTerm2
This issue might be a duplicate of existing issues. Please check:
- #6836: Providers: Abacus connectivity error - related to Abacus provider issues
- #6615: DeepSeek v3.2 - Azure Foundry - It works partially, tools and todos not - similar tool execution/raw output issues with DeepSeek
- #234: Tool Calling Issues with Open Source Models in OpenCode - general tool calling failures and command execution problems
These issues share similarities with the model name and raw output problems you're experiencing. Feel free to ignore if none of these address your specific case.
Updated the models list; run `opencode models --refresh` to apply.
@rekram1-node thanks, that solved the first problem, but the issue with tool executions going into the chat is still there. Is this an Abacus problem or an opencode one?
Okay, I didn't set up the Abacus provider, I'm just trying to help y'all out with it. My guess is that it doesn't have the interleaved-reasoning field set correctly in the models.dev models API.