fix(write): support array content from Venice.ai models
Some AI models (e.g., GLM 4.6, Qwen 3 Coder 480B) send tool call content as arrays instead of strings. This causes the write tool to fail with: `Invalid input: expected string, received array`
This fix uses Zod's preprocess function to handle both formats:
- String content: passed through unchanged
- Array content: joined with newlines
- Other types: converted to string
This maintains backward compatibility while enabling support for Venice.ai and similar models that use array-based content formatting.
Related: https://github.com/charmbracelet/crush/pull/1508
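A minimal sketch of the normalization step (the helper name `normalizeContent` is hypothetical; the actual fix inlines this logic into the write tool's schema):

```typescript
// Hypothetical helper mirroring the fix: strings pass through,
// arrays are joined with newlines, anything else is stringified.
// In the Zod schema it would be wired up roughly as:
//   content: z.preprocess(normalizeContent, z.string())
function normalizeContent(val: unknown): string {
  if (typeof val === "string") return val;
  if (Array.isArray(val)) return val.join("\n");
  return String(val);
}
```

Because the preprocess step runs before validation, existing string-based callers are unaffected.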
hey @georgeglarson, are you sure the models aren't calling the tool improperly? Does this only happen with the write tool, or with other tools as well?
In the meantime, you can expose the experimental batch tool by enabling `"experimental": { "batch_tool": true }` in your opencode.json config file.
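For reference, a minimal opencode.json with just that flag (merge it into your existing config; other fields omitted):

```json
{
  "experimental": {
    "batch_tool": true
  }
}
```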
hope this helps
Hey @spoons-and-mirrors! Great question.
TL;DR: The models are following the OpenAI spec correctly - they're allowed to send arrays. This happens specifically with the write tool because it's the most frequently used for large content generation.
Why arrays? According to the OpenAI function calling spec, tool call arguments can be sent as either strings or arrays. Models like GLM 4.6, Qwen 3 Coder 480B (and others from Venice.ai) choose to send large content as arrays to improve streaming performance and chunking.
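For illustration, the array-form arguments might look like this (field names assumed from the write tool; the usual form would be a single string, e.g. `"content": "line 1\nline 2"`):

```json
{
  "filePath": "src/example.ts",
  "content": ["line 1", "line 2"]
}
```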
Does it affect other tools?
Technically yes - any tool with string parameters could receive arrays. However, the write tool is hit most often because:
- It's used for every code file generation
- It handles the largest content blocks
- Models are more likely to chunk large outputs into arrays

I checked the other tools (bash, edit, etc.); they have similar string parameters that could theoretically receive arrays, but in practice it's rare because those tools typically receive shorter inputs.
The fix
Using z.preprocess() is the standard Zod pattern for handling this - it normalizes the input before validation. This is backward compatible and follows the same approach used in charmbracelet/crush#1508.
Seems like it works for me using glm 4.6 with z-ai as a provider
This setting does not make the error go away:
`"experimental": { "batch_tool": true }`
Z AI and others do work; it's not a model issue per se, it's the way the write/edit tool is being used, I believe.
Not sure I understand what you mean, but as I showed in my screenshot, I can have GLM do what this PR is supposedly trying to fix. Using the batch tool will make the error go away if the model uses the batch tool to do those parallel calls.
So, the issue isn't the model itself, it's how the inference provider has it set up. Other tools, such as Cline and all the Cline forks, already allow types other than string for the write tool.
Let me rephrase that: the issue isn't how the provider has it set up, since it's per the OpenAI spec. What I mean is that opencode should allow this, as other tools such as the Cline ones already do.
@spoons-and-mirrors any chance this will get merged? Lots of venice.ai users are waiting for this fix 🙏🙏🙏
@znake I've pinged the team; not much more I can do. Are you sure this solves venice.ai users' issues though? How about other tools? Did you try the batch tool? It might help.
@spoons-and-mirrors I've tried enabling the experimental batch tool feature and it did not have any effect.
Can't wait for this PR to get merged!