Ollama Model connection
Describe the Bug
Windows 11, VSCode 1.104.0. Everything is configured as described in the Ollama section of the extension page.
Hello! Do you have a suggestion about this error: "Error: AI_UnsupportedModelVersionError: Unsupported model version v1 for provider "ollama.chat" and model "qwen2.5:latest". AI SDK 5 only supports models that implement specification version "v2"." I tried with different models, including qwen2.5 (the one described in the ChatGPT connection instructions), and they all just give me the error above.
Thank you!
Additional context
No response
I have a similar issue.
Using Ollama 0.12.6 on Arch Linux, with VSCodium 1.105.16954 and version 4.10.0 of the extension.
The full error is:
Error: AI_UnsupportedModelVersionError: Unsupported model version v1 for provider "ollama.chat" and model "qwen2.5". AI SDK 5 only supports models that implement specification version "v2".
Changing the model to anything other than qwen2.5 changes that part of the message, but the v1 vs. v2 issue remains and I do not understand it.
Running ollama list works fine, and I have verified this:
```
$ curl http://127.0.0.1:11434/
Ollama is running
```
Googling around and looking into this further, it seems to be related to incompatibilities with AI SDK 5:
see https://github.com/sgomez/ollama-ai-provider/issues/47
whereas the extension currently has ollama-ai-provider as a dependency:
https://github.com/feiskyer/chatgpt-copilot/blob/e9bedf82aeb7c917116ccc8664cfe6313a89afdf/package.json#L750
rather than:
https://github.com/nordwestt/ollama-ai-provider-v2
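For context, here is a minimal sketch of what the call path looks like once a spec-v2 provider is wired in. This is an assumption-laden illustration, not the extension's actual code: it assumes ollama-ai-provider-v2 keeps the same createOllama entry point as the original package, and the model name and base URL are just the ones mentioned in this thread.

```typescript
// Minimal sketch (assumption: ollama-ai-provider-v2 exposes createOllama like the
// original package). With a spec-v2 provider, AI SDK 5's generateText can use the
// model without raising AI_UnsupportedModelVersionError.
import { createOllama } from "ollama-ai-provider-v2";
import { generateText } from "ai";

const ollama = createOllama({
  baseURL: "http://127.0.0.1:11434/api", // local Ollama server from this thread
});

const { text } = await generateText({
  model: ollama("qwen2.5"),
  prompt: "Hello",
});

console.log(text);
```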
I have submitted https://github.com/feiskyer/chatgpt-copilot/pull/628 to fix this.
Reproduced on:
VSCodium Version: 1.105.17075
Commit: 14bd1561ce547502e6ff1968090dc18c49160aab
Date: 2025-10-21T20:24:03.344Z
Electron: 37.6.0
ElectronBuildId: undefined
Chromium: 138.0.7204.251
Node.js: 22.19.0
V8: 13.8.258.32-electron.0
OS: Linux x64 6.8.0-88-generic snap
@feiskyer does this mean that a local server is currently not supported?
I can confirm that with https://github.com/feiskyer/chatgpt-copilot/pull/628 included, I have been running the local server for a little while now and have had no trouble.
#628 worked for me.
{ "$schema": "https://opencode.ai/config.json", "provider": { "ollama": { "npm": "@ai-sdk/openai-compatible", "name": "Ollama (Docker)", "options": { "baseURL": "http://localhost:11434/v1" }, "models": { "qwen2.5-coder:14b": { "name": "Qwen 2.5 Coder 14B", "tools": true } } } }, "model": "ollama/qwen2.5-coder:14b" }
Steps to Reproduce
- Configure opencode with Ollama as shown above
- Start opencode and send any message, e.g.: "Explain vectorized operations in pandas"
- Observe the model's response
Expected Behavior
Model responds to the user's question about pandas.
Actual Behavior
Model responds with one of:
- Raw JSON tool calls: {"name": "todoread", "arguments": {}}
- "I notice your message contains an empty array"
- Responses to the system prompt only, ignoring user input
Example responses:
- "Got it. I'm in read-only plan mode. I'll analyze the user's request..."
- "Hello! I notice you've provided an empty array as your input."
Verification
- Ollama works correctly and returns a proper response when called directly via curl (an equivalent SDK-side check is sketched after this list): curl http://localhost:11434/v1/chat/completions -d '{ "model": "qwen2.5-coder:14b", "messages": [{"role": "user", "content": "Hello"}] }'
- Issue persists with tools: false and without reasoning flag
- Issue affects both Plan and Build modes
- Other CLI tools (e.g., XandAI-CLI, aider) work correctly with the same Ollama setup
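Since the opencode config above routes the Ollama provider through @ai-sdk/openai-compatible, one way to check whether the empty-content behaviour comes from the SDK layer rather than from Ollama itself is to call the same endpoint through that provider directly. A rough diagnostic sketch, assuming createOpenAICompatible is the entry point and reusing the model and baseURL from the config above:

```typescript
// Rough diagnostic sketch (assumption: @ai-sdk/openai-compatible exposes
// createOpenAICompatible). If the SDK -> Ollama path formats messages correctly,
// this should answer "Hello" normally instead of complaining about empty input.
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { generateText } from "ai";

const ollama = createOpenAICompatible({
  name: "ollama",
  baseURL: "http://localhost:11434/v1",
});

const { text } = await generateText({
  model: ollama("qwen2.5-coder:14b"),
  prompt: "Hello",
});

console.log(text);
```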
Likely Cause
The AI SDK appears to be formatting the message content as a multimodal array but sending it with empty content. The model receives something like: {"role": "user", "content"
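To illustrate the suspicion (this is an assumption about the request shape, not a captured payload): the OpenAI-compatible chat endpoint accepts content either as a plain string or as an array of typed parts, and an empty parts array would explain the "empty array" responses.

```typescript
// Hypothetical illustration of the suspected request shape (an assumption,
// not a captured payload from opencode or the AI SDK).

// What the endpoint expects: content as a string or a non-empty array of typed parts.
const expectedMessage = {
  role: "user",
  content: [{ type: "text", text: "Explain vectorized operations in pandas" }],
};

// What appears to be sent: an empty parts array, so the model effectively sees
// only the system prompt plus an empty user turn.
const suspectedMessage = {
  role: "user",
  content: [],
};

console.log(JSON.stringify(expectedMessage));
console.log(JSON.stringify(suspectedMessage));
```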
Thanks for the fixes. Published a new version (v4.10.2); let me know if there are still any issues.
Tested and working on my end. Thanks for the merge 👍️