feat(core): implement Recursive Language Model (RLM) support
Add experimental RLM infrastructure based on the paper "Recursive Language Models" (Zhang et al., 2025). This allows processing arbitrarily long contexts by treating the prompt as data in an external Python REPL environment that the model can inspect and decompose programmatically.
Key additions:
- repl tool: Persistent Python REPL exposing llm_query() for recursive sub-calls (usage sketch after this list)
- rlm agent: Specialized subagent for long-context processing
- rlm-sub agent: Hidden subagent for handling recursive LLM queries
- FINAL()/FINAL_VAR() response patterns for returning answers
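
For illustration, a hedged sketch of the kind of code the rlm agent might run inside the repl tool. llm_query(), FINAL(), and FINAL_VAR() come from the additions above; the `context` variable name and the chunk size are assumptions, not part of this change.

```python
# Hypothetical REPL turns for the rlm agent. The long prompt is assumed to be
# available as a `context` string, and llm_query()/FINAL() are assumed to be
# injected by the repl tool environment.

# Inspect the input before deciding how to decompose it.
print(len(context), context[:500])

# Split the oversized input and summarize each piece with a recursive sub-call.
chunk_size = 50_000  # illustrative value, not a real default
chunks = [context[i:i + chunk_size] for i in range(0, len(context), chunk_size)]
notes = [llm_query("Summarize the key facts in this excerpt:\n\n" + c) for c in chunks]

# Answer from the condensed notes and hand the result back to the caller.
answer = llm_query("Answer the user's question using these notes:\n\n" + "\n".join(notes))
FINAL(answer)
```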
The RLM approach enables:
- Processing contexts 10-100x larger than the model's native context window
- Programmatic examination and decomposition of long inputs
- Recursive self-invocation over context snippets
- Code-based filtering before semantic analysis (see the sketch below)
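
As an example of the last point, a sketch of cheap programmatic filtering before any sub-model call. The regex and variable names are illustrative, `context` is again assumed to be provided by the REPL environment, and the FINAL_VAR() calling convention (taking the variable's name) is an assumption.

```python
import re

# Keep only the lines that could possibly matter, using plain Python first.
relevant = [
    line for line in context.splitlines()
    if re.search(r"invoice|payment", line, re.IGNORECASE)  # illustrative pattern
]

# Only the filtered snippet is sent to the sub-model for semantic analysis.
verdict = llm_query("Do these lines describe a duplicate payment?\n\n" + "\n".join(relevant))

# Return the REPL variable holding the answer (assumed FINAL_VAR semantics).
FINAL_VAR("verdict")
```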
Enable via experimental.repl_tool = true in config (example below).
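
A minimal config sketch, assuming a JSON config file as opencode already uses; only the experimental.repl_tool key comes from this change.

```json
{
  "experimental": {
    "repl_tool": true
  }
}
```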