[FEATURE]: Unix-Style Piping for Custom Tools - Token-Efficient Tool Composition
Feature hasn't been suggested before.
- [x] I have verified this feature I'm about to request hasn't been suggested before.
Describe the enhancement you want to request
OpenCode's custom tools system is brilliant: drop a .ts file in .opencode/tool/ and it's immediately available. This gives us security through control that raw CLI access can't match: we can wrap dangerous operations, validate inputs, and audit every interaction. It's one of your best features, and I'm surprised the other agentic runtimes haven't copied it yet!
The Unix Way: Do One Thing Well
Unix philosophy: simple tools that each do one thing well, composed via pipes. Currently, the agent must see every intermediate result between tool calls, which wastes tokens and pollutes context.
Proposed Solution: Optional Tool Piping
// .opencode/tool/grep.ts
import { tool } from "@opencode-ai/plugin"

export default tool({
  description: "Search for patterns in text",
  args: {
    pattern: tool.schema.string().describe("Regex pattern to search for"),
    text: tool.schema.string().optional().describe("Text to search (if not piped)"),
  },
  pipeable: {
    accepts: true, // Can receive piped input
    provides: true, // Can provide piped output
  },
  // pipe is the raw string output of the previous tool, or null
  async execute(args, pipe = null) {
    const input = pipe || args.text || ""
    // Do the grep work: collect every match of the pattern
    const matches = input.match(new RegExp(args.pattern, "g")) || []
    return matches.join("\n")
  },
})
// .opencode/tool/count.ts
import { tool } from "@opencode-ai/plugin"

export default tool({
  description: "Count lines from text",
  args: {
    text: tool.schema.string().optional().describe("Text to count lines in (if not piped)"),
  },
  pipeable: {
    accepts: true, // Can receive piped input
    provides: false, // Typically terminal in a pipe chain
  },
  async execute(args, pipe = null) {
    const input = pipe || args.text || ""
    // Count non-blank lines
    return input.split("\n").filter((line) => line.trim()).length.toString()
  },
})
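To make the proposed behavior concrete, here is a minimal standalone sketch of the two execute bodies above, outside the plugin runtime. The function names (grepExecute, countExecute) and the PipeInput alias are illustrative, not part of any real API:

```typescript
type PipeInput = string | null

// Mirrors grep.ts above: match a regex pattern against piped or direct input.
function grepExecute(args: { pattern: string; text?: string }, pipe: PipeInput = null): string {
  const input = pipe || args.text || ""
  const matches = input.match(new RegExp(args.pattern, "g")) || []
  return matches.join("\n")
}

// Mirrors count.ts above: count non-blank lines of piped or direct input.
function countExecute(args: { text?: string }, pipe: PipeInput = null): string {
  const input = pipe || args.text || ""
  return input.split("\n").filter((line) => line.trim()).length.toString()
}

// Direct use (no pipe): args carry the input.
const source = "// TODO: refactor\nconst x = 1\n// TODO: add tests\n"
const hits = grepExecute({ pattern: "TODO", text: source }) // "TODO\nTODO"
// Piped use: grep's output becomes count's pipe input.
const total = countExecute({}, hits) // "2"
```

Note that each tool works identically whether the input arrives via args or via pipe, which is the whole point of the design.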
Agent Usage
Agent: "Search for TODO comments in the codebase and count them"
OpenCode uses the pipeable metadata to build the pipeline: grep(pattern="TODO") | count()
- grep can provide output, count can accept input → valid pipe
- Calls grep with pipe=null, gets output
- Calls count with pipe=<grep_output>
- Returns final result to agent
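The steps above could be implemented by a small runner inside OpenCode. This is a hypothetical sketch of that plumbing; PipeableTool and runPipeline are illustrative names, not real API:

```typescript
interface PipeableTool {
  pipeable: { accepts: boolean; provides: boolean }
  execute(args: Record<string, unknown>, pipe: string | null): Promise<string> | string
}

// Run each stage, threading the previous stage's output in as `pipe`.
// Only the final stage's result ever reaches the agent's context.
async function runPipeline(
  stages: { tool: PipeableTool; args: Record<string, unknown> }[],
): Promise<string> {
  let pipe: string | null = null
  for (const [i, stage] of stages.entries()) {
    if (i > 0 && !stage.tool.pipeable.accepts) {
      throw new Error(`stage ${i} cannot accept piped input`)
    }
    if (i < stages.length - 1 && !stage.tool.pipeable.provides) {
      throw new Error(`stage ${i} cannot provide piped output`)
    }
    pipe = await stage.tool.execute(stage.args, pipe)
  }
  return pipe ?? ""
}
```

The first stage is called with pipe=null and reads its args; every later stage receives the upstream output as pipe.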
Metadata Benefits
The pipeable metadata helps the agent:
- Discover compatible chains - only pipe accepts→provides
- Plan efficient workflows - avoid unnecessary intermediate context
- Validate before execution - catch invalid combinations early
- Understand tool capabilities - know which tools are composable
As far as I can tell, the metadata exists purely for the agent's benefit when planning compositions; it would have no impact on how composed pipes are actually executed.
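The pre-flight validation described above needs nothing beyond the metadata itself. A hypothetical check (isValidChain is an illustrative name) might look like:

```typescript
interface PipeableMeta {
  accepts: boolean
  provides: boolean
}

// A chain is valid when every non-terminal tool provides output
// and every non-initial tool accepts input.
function isValidChain(chain: PipeableMeta[]): boolean {
  return chain.every((meta, i) => {
    const okUpstream = i === chain.length - 1 || meta.provides
    const okDownstream = i === 0 || meta.accepts
    return okUpstream && okDownstream
  })
}
```

For example, grep (provides) followed by count (accepts) passes, while putting count first fails because count never provides piped output.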
Key Principles
- Tools are isolated - no knowledge of pipeline position
- pipe is just text - stdin equivalent with line breaks
- args work independently - tools can be used with or without pipes
- OpenCode handles plumbing - not tools
Token Efficiency
Intermediate data never enters the LLM's context. For large operations, the token savings are massive. This restores Unix-style composability while maintaining OpenCode's security model, with metadata that helps agents make intelligent composition decisions.
The focus of this request is conserving context: intermediary results should never reach the context window, keeping it lean. If you already use a better strategy that achieves the same thing and makes this approach overkill, please share it and I'll close this issue.