
Ollama: User message content arrives as empty array - model cannot see user input

lowcoordination opened this issue 1 month ago • 18 comments

When using opencode with Ollama via the @ai-sdk/openai-compatible provider, the model receives user messages as empty arrays instead of the actual text content. The model can see the system prompt but not the user's input.

Environment

  • opencode version: 1.0.134
  • Ollama version: 0.13.0
  • OS: Fedora Linux (kernel 6.17.9-300.fc43.x86_64)
  • Provider package: @ai-sdk/openai-compatible

Config

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (Docker)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen2.5-coder:14b": {
          "name": "Qwen 2.5 Coder 14B",
          "tools": true
        }
      }
    }
  },
  "model": "ollama/qwen2.5-coder:14b"
}

Steps to Reproduce

1. Configure opencode with Ollama as shown above
2. Start opencode and send any message, e.g.: "Explain vectorized operations in pandas"
3. Observe the model's response

Expected Behavior

Model responds to the user's question about pandas.

Actual Behavior

Model responds with one of:
- Raw JSON tool calls: {"name": "todoread", "arguments": {}}
- "I notice your message contains an empty array"
- Responses to the system prompt only, ignoring user input

Example response:
Got it. I'm in read-only plan mode. I'll analyze the user's request...

Hello! I notice you've provided an empty array as your input.

Verification

- Ollama works correctly when called directly via curl:
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
  "model": "qwen2.5-coder:14b",
  "messages": [{"role": "user", "content": "Hello"}]
}'
# Returns a proper response
- Issue persists with tools: false and without the reasoning flag
- Issue affects both Plan and Build modes
- Other CLI tools (e.g., XandAI-CLI, aider) work correctly with the same Ollama setup
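
One way to see exactly what opencode puts on the wire is to point baseURL at a small logging proxy in front of Ollama. A minimal sketch, assuming Node 18+ (the 11435 port and file name are arbitrary; responses are buffered, so streamed replies arrive all at once):

// log-proxy.ts: hypothetical debugging aid, not part of opencode.
// Run it, then set baseURL to http://localhost:11435/v1 in opencode.json.
import http from "node:http";

const OLLAMA = "http://localhost:11434";

http.createServer(async (req, res) => {
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);
  const body = Buffer.concat(chunks).toString("utf8");

  // Print the raw request so empty content arrays become visible.
  console.log(`--> ${req.method} ${req.url}\n${body}\n`);

  const upstream = await fetch(OLLAMA + req.url, {
    method: req.method,
    headers: { "content-type": req.headers["content-type"] ?? "application/json" },
    body: body.length ? body : undefined,
  });

  res.writeHead(upstream.status, {
    "content-type": upstream.headers.get("content-type") ?? "application/json",
  });
  res.end(Buffer.from(await upstream.arrayBuffer()));
}).listen(11435);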

Likely Cause

The AI SDK appears to be formatting message content in the multimodal array format but sending it with empty content. The model receives something like:
{"role": "user", "content": []}
instead of:
{"role": "user", "content": "user's actual message"}

lowcoordination avatar Dec 07 '25 02:12 lowcoordination

This issue might be a duplicate of existing issues. Please check:

  • #5028: Call Missing content in messages - similar issue with empty content blocks in messages sent to OpenAI SDK
  • #5104: Model in @ai-sdk/openai sdk fails on the 2nd request - issue with @ai-sdk/openai SDK message handling
  • #4255: OpenCode v1.0.25 Hangs Indefinitely with LM Studio + Qwen Models Due to Empty tool_calls Array - related issue with OpenAI-compatible providers handling empty tool_calls arrays, causing hangs

These issues all involve problems with how the AI SDK formats and sends message content to OpenAI-compatible endpoints. Your issue specifically describes message content arriving as empty arrays instead of actual text, which appears to be a related message formatting problem.

Feel free to ignore if none of these address your specific case.

github-actions[bot] avatar Dec 07 '25 02:12 github-actions[bot]

looks like it could be related to #5028

lowcoordination avatar Dec 07 '25 18:12 lowcoordination

@lowcoordination what's the context size you have set for ollama? it defaults to 4k or something and that makes it unusable

rekram1-node avatar Dec 07 '25 18:12 rekram1-node

I just posted about what seems like a similar issue: https://github.com/sst/opencode/issues/5210

cgilly2fast avatar Dec 07 '25 19:12 cgilly2fast

Same issue here. Just found out about this project, installed it, and cloud models work perfectly but local LLMs do not.

DevGiuDev avatar Dec 08 '25 13:12 DevGiuDev

same issue here

opencode version: 1.0.142
Ollama version: 0.13.2
OS: Ubuntu 24.04.3 (kernel 6.8.0-88-generic)
Provider package: @ai-sdk/openai-compatible

.config/opencode/opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:UD-Q4_K_XL": {
          "name": "hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:UD-Q4_K_XL"
        }
      }
    }
  },
  "model": "ollama/hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:UD-Q4_K_XL"
}
$ ollama show hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:UD-Q4_K_XL
  Model
    architecture        qwen3moe    
    parameters          30.5B       
    context length      262144      
    embedding length    2048        
    quantization        unknown     

  Capabilities
    completion    
    tools         

  Parameters
    repeat_penalty    1.05              
    top_k             20                
    top_p             0.8               
    stop              "<|im_start|>"    
    stop              "<|im_end|>"      
    temperature       0.7               
    min_p             0                 

minger0 avatar Dec 10 '25 20:12 minger0

@lowcoordination what's the context size you have set for ollama? it defaults to 4k or something and that makes it unusable

@rekram1-node see the context length for the model above; I expect ollama uses the model's value and falls back to the default only when it is left undefined

minger0 avatar Dec 10 '25 20:12 minger0

I'm not familiar enough with ollama to be of much help here, but it's not that we aren't sending certain messages; something is happening on the ollama side where it drops content before it gets to the LLM. Typically that is num_ctx not being set to a good value on the ollama side.

Last I checked you still can't set it from the OpenAI-compatible endpoint, so it's on you to set it
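
For anyone whose problem really is the context window rather than the empty array: since the /v1 endpoint offers no way to pass num_ctx, one workaround is to bake a larger context into a derived model via a Modelfile. A sketch, with the 16384 value and the qwen25-coder-16k name chosen arbitrarily:

# Modelfile
FROM qwen2.5-coder:14b
PARAMETER num_ctx 16384

$ ollama create qwen25-coder-16k -f Modelfile
# then reference ollama/qwen25-coder-16k in opencode.json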

rekram1-node avatar Dec 10 '25 23:12 rekram1-node

@rekram1-node as @lowcoordination pointed out, it may be because of the message formatting, i.e. array (observed) vs plain string (expected). Do you recognize that?

minger0 avatar Dec 11 '25 18:12 minger0

What I can confirm is that the ollama docs only have examples with plain string content. See also link.
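
A quick way to test whether the array form itself is the problem is to send one by hand. If the following curl gets a sensible reply, Ollama accepts content parts and the fault is the empty array, not the shape (a sketch, reusing the model from the original report):

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5-coder:14b",
    "messages": [{"role": "user", "content": [{"type": "text", "text": "Hello"}]}]
  }'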

minger0 avatar Dec 11 '25 18:12 minger0

Did ollama ship a breaking change or something? I feel like people had opencode working with their setups for a while; I don't know why we would need a new format...

Like I said, I'm not super familiar with ollama, so I will need to investigate further. Just trying to figure out why it would stop working all of a sudden...

rekram1-node avatar Dec 11 '25 19:12 rekram1-node

Hey, sorry it took me a minute to get back here. I believe I had the context set to 16k, as I figured agentic work would require a larger window. And I didn't have this problem all of a sudden; I never really got it to work, but I only started messing with it a couple of days before this happened.

lowcoordination avatar Dec 11 '25 20:12 lowcoordination

https://github.com/sst/opencode/blob/9d73096db0ea2eb9e11c48b287855574651f33af/packages/opencode/src/agent/agent.ts#L259 it looks to me that agent.ts has not changed in this respect in the last 5 months, going back to the first version, and I saw no changes in ollama either, which would imply that ollama has never worked in agent mode. I should add that this is the first time I have checked the code base of either project. @lowcoordination do you mean to suggest it used to work in agent mode? Looking forward to seeing Augstif's change.

minger0 avatar Dec 12 '25 08:12 minger0

No, sorry if I wasn't clear. I never got it to work. I took a couple of different passes at it, but I never got the output to work.


lowcoordination avatar Dec 12 '25 08:12 lowcoordination

@rekram1-node the preliminary conclusion is that ollama agent mode has never worked, and that applying a string conversion to item in https://github.com/sst/opencode/blob/9d73096db0ea2eb9e11c48b287855574651f33af/packages/opencode/src/agent/agent.ts#L259 could potentially resolve the issue; a sketch follows below. Do you think you can help us from here?
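
A sketch of what that conversion might look like (hypothetical: the type and helper names are illustrative, not opencode's actual code):

// Hypothetical helper: collapse AI SDK-style content parts into a plain string
type Part = { type: string; text?: string };

function contentToString(content: string | Part[]): string {
  if (typeof content === "string") return content;
  return content
    .filter((part) => part.type === "text" && typeof part.text === "string")
    .map((part) => part.text as string)
    .join("\n");
}

// contentToString([{ type: "text", text: "hello" }]) === "hello"
// contentToString([]) === "", so an empty array would at least surface as an empty string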

minger0 avatar Dec 13 '25 13:12 minger0

@rekram1-node https://github.com/feiskyer/chatgpt-copilot/issues/591#issuecomment-3443179072 could also be relevant. If I understand it correctly, the OpenAI API and the ollama v1 API became incompatible, while the ollama v2 API is compatible again.

minger0 avatar Dec 13 '25 20:12 minger0