# Tool Calling Issues with Open Source Models in OpenCode
## Summary
While OpenCode provides excellent support for frontier cloud models, I've encountered some compatibility challenges when using open source models for tool calling functionality. This appears to be related to differences in how various open source models implement tool calling compared to the standardized approaches used by cloud providers.
## Environment

- OpenCode Version: 0.1.107
- Provider: custom OpenAI-compatible endpoint (local models via Ollama)
- Models Tested:
  - qwen2.5-coder:7b-instruct
  - qwen2.5-coder:14b-instruct
  - granite3.3:latest
## Issues Encountered

### 1. Case Sensitivity in Tool Names

- Expected: model calls the `write` tool (lowercase)
- Actual: models (like qwen) generate `Write` tool calls (capital W)
- Error:

```
AI_NoSuchToolError: Model tried to call unavailable tool 'Write'. Available tools: bash, edit, webfetch, glob, grep, list, lsp_diagnostics, lsp_hover, patch, read, write, todowrite, task, todoread.
```

Affected models:

- qwen2.5-coder:7b-instruct
- qwen2.5-coder:14b-instruct
### 2. Complete Tool Calling Failure

- Expected: model generates tool calls when asked to create/edit files
- Actual: models respond with text only; no tool calls are generated

Affected models:

- granite3.3:latest (despite the `"tools": true` configuration)
## Steps to Reproduce

1. Configure OpenCode with a local open source model:

```json
{
  "model": "local/qwen2.5-coder:7b-instruct",
  "toolsEnabled": true,
  "provider": {
    "local": {
      "api": "http://localhost:11434/v1",
      "models": {
        "qwen2.5-coder:7b-instruct": {
          "tools": true,
          "reasoning": true
        }
      }
    }
  }
}
```

2. Ask the model to create a file: "Create a test.md file with 'Hello World' content"
3. Observe the tool calling behavior
## Expected Behavior

- Models generate properly formatted lowercase tool calls (`write`, `edit`, etc.)
- Tool execution works consistently across different open source models
- Configuration clearly indicates which models support tool calling
## Actual Behavior
- Inconsistent tool calling across models
- Case sensitivity issues preventing tool execution
- Some models fail to generate tool calls entirely
## Additional Context

This appears to stem from the diverse ecosystem of open source models, where different models have varying tool calling implementations and training approaches. OpenCode works well with the standardized cloud model APIs; extending that robustness to the more varied open source landscape is the challenge.
## Possible Solutions

- **Tool name normalization**: implement automatic case conversion (`Write` → `write`)
- **Model compatibility matrix**: document which open source models have been tested and verified to work
- **Per-model tool configuration**: allow custom tool calling formats per model
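The normalization idea could be as small as a case-insensitive lookup before raising `AI_NoSuchToolError`. A minimal sketch (not OpenCode's actual code; names are illustrative):

```typescript
// Sketch of the proposed tool-name normalization: resolve a model-emitted
// name like "Write" against the registered tool names case-insensitively,
// falling back to undefined (which would then raise AI_NoSuchToolError).
function resolveToolName(requested: string, available: string[]): string | undefined {
  if (available.includes(requested)) return requested; // exact match wins
  const lower = requested.toLowerCase();
  return available.find((name) => name.toLowerCase() === lower);
}
```

For example, `resolveToolName("Write", ["bash", "edit", "write"])` returns `"write"`, recovering the qwen mis-capitalization without affecting models that already emit correct names.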
## Impact
Improving compatibility with open source models would help developers who prefer or require local/private model usage for privacy, security, cost, or infrastructure reasons.
I have a similar problem: Ollama models respond saying that they need to use tools, and then don't actually call them. From what I saw it was mostly with the "/init" prompt; when I give simple prompts like "list files in current directory" or "read log.txt file" it seems to work, at least with qwen3:30b (I did not check smaller models because I was too focused on making "/init" work).
Hi, in case someone else stumbles upon this: I have documented a solution. My issue was that Ollama forces the local LLM's context window to 4096 tokens, which cut off the ~10k tokens of context that OpenCode sends. Simply increase the context window of your model.
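One way to apply that fix is to derive a new Ollama model with a larger `num_ctx` via a Modelfile. A sketch, assuming the qwen2.5-coder model from this thread and a 16k context (the derived model name is arbitrary):

```shell
# Derive a variant of the model with a 16k context window, so OpenCode's
# ~10k-token prompt is not truncated at Ollama's 4096-token default.
cat > Modelfile <<'EOF'
FROM qwen2.5-coder:7b-instruct
PARAMETER num_ctx 16384
EOF
ollama create qwen2.5-coder-16k -f Modelfile
```

Then point the OpenCode provider config at `qwen2.5-coder-16k` instead of the base model.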
https://github.com/p-lemonish/ollama-x-opencode
@p-lemonish well, this is the first time, using the "/no_think" prefix, that devstral was able to create at least one file. I had used devstral with a bigger context window before in the TUI, but nothing could be created or written at all.
Hi, I don't know if it helps, but I solved some tool errors by not forgetting to call /init first.