
[FEATURE]: Optimize prompts using principle constraints and DSL compression

d0lwl0b opened this issue 3 days ago • 3 comments

Feature hasn't been suggested before.

  • [x] I have verified this feature I'm about to request hasn't been suggested before.

Describe the enhancement you want to request

Problem

OpenCode's prompts are verbose and sometimes contradictory, causing:

  • Unnecessary token consumption
  • Unpredictable LLM outputs
  • Difficulty maintaining consistency

Core Solution: Two Complementary Approaches

1. Principle-Based Constraints

Instead of describing behaviors explicitly, reference established design principles that are already well represented in LLM training data:

  • UNIX Philosophy (for modular, single-purpose components)
  • KISS Principle (for simplicity)
  • YAGNI (to avoid over-engineering)
  • SOLID (for object-oriented design)

Example:
Instead of: "Make it simple, don't add unnecessary features, focus on one thing..."
Use: "Apply KISS and YAGNI principles."

Benefits:

  • Can substantially reduce token count (plausibly 60-80% for the most verbose sections)
  • Reduces instruction conflicts by leaning on internally consistent frameworks
  • Leverages the LLM's existing knowledge
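The substitution above can be sketched in a few lines. Everything here is illustrative: the verbose/compressed prompt pair is a made-up example, and `rough_tokens` is a crude whitespace proxy, since real savings depend on the model's tokenizer.

```python
# Illustrative sketch: swap a verbose behavioral instruction for a
# principle reference and estimate the saving. Not OpenCode code.

VERBOSE = (
    "Make it simple, don't add unnecessary features, focus on doing "
    "one thing well, and avoid speculative abstractions you might "
    "never need."
)
COMPRESSED = "Apply KISS, YAGNI, and UNIX-philosophy principles."

def rough_tokens(text: str) -> int:
    """Crude proxy for token count: whitespace-separated words."""
    return len(text.split())

saved = 1 - rough_tokens(COMPRESSED) / rough_tokens(VERBOSE)
print(f"~{saved:.0%} fewer tokens")  # ~71% fewer with this proxy
```

The exact percentage is an artifact of the word-count proxy; the point is that a one-line principle reference stands in for several sentences of behavioral description.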

2. DSL Context Compression

Create structured templates to filter noise in extended conversations:

```
[CONTEXT_SUMMARY]
CORE_ISSUE:: <main problem>
KEY_POINTS:: <bullet points>
ACTION_ITEMS:: <next steps>
[END_SUMMARY]
```
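A template like this is trivial to parse back out of a model response, which is what makes it enforceable. A minimal sketch, assuming the tag names and `::` separator from the example above (the function name and error handling are illustrative):

```python
# Sketch: extract CONTEXT_SUMMARY fields from a model response.

def parse_summary(text: str) -> dict[str, str]:
    fields: dict[str, str] = {}
    inside = False
    for line in text.splitlines():
        line = line.strip()
        if line == "[CONTEXT_SUMMARY]":
            inside = True
        elif line == "[END_SUMMARY]":
            inside = False
        elif inside and "::" in line:
            key, _, value = line.partition("::")
            fields[key.strip()] = value.strip()
    return fields

example = """\
[CONTEXT_SUMMARY]
CORE_ISSUE:: prompts are verbose and contradictory
KEY_POINTS:: principle refs; DSL templates
ACTION_ITEMS:: audit prompts; draft templates
[END_SUMMARY]"""
print(parse_summary(example)["CORE_ISSUE"])
```

Because the format is line-oriented and keyed, anything outside the tagged block (hedging, apologies, chatter) is dropped automatically.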

Information Theory Rationale:

  • Acts as entropy-reducing encoder
  • Implements lossy compression preserving semantic essence
  • Filters low-information noise

Why This Works

Both methods address root causes:

  • Principles compress complex concepts into single references
  • DSL templates enforce structure, eliminating ambiguity
  • Together they create concise, predictable prompts

Suggested First Steps

  1. Audit current prompts for most redundant sections
  2. Replace verbose descriptions with principle references
  3. Design 2-3 DSL templates for common workflows
  4. Test with critical paths
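Step 1 (the audit) could start as simply as flagging prompt sentences that heavily overlap, since those are the best candidates for collapsing into a single principle reference. A sketch with made-up sample prompts and an arbitrary 0.6 Jaccard threshold:

```python
# Sketch: flag near-duplicate prompt sentences as audit candidates.
# Sample prompts and the 0.6 threshold are illustrative.

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

prompts = [
    "Keep the solution simple and avoid unnecessary features.",
    "Keep your solution simple and avoid adding unnecessary features.",
    "Run the tests before committing.",
]

redundant = [
    (i, j)
    for i in range(len(prompts))
    for j in range(i + 1, len(prompts))
    if jaccard(prompts[i], prompts[j]) > 0.6
]
print(redundant)  # the first two prompts overlap; both say "apply KISS"
```

Here the first two prompts would be flagged and could both be replaced by "Apply KISS and YAGNI principles."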

This approach shifts prompt engineering from art toward science, reducing costs while improving output quality.


Additional Context

The specific principles, paradigms, or conventions mentioned are not exhaustive or exclusive—they are examples of a broader pattern. You can search for and adopt any well-established, widely recognized design principles, methodologies, or standards relevant to your domain (e.g., "separation of concerns," "immutable architecture," "12-factor app"). The key is leveraging consensus-based constraints that exist within the LLM's training corpus. This approach reduces rule conflicts inherent in human language and minimizes token waste caused by over-explanation.

Additionally, custom DSLs can be co-designed with the LLM itself. After several rounds of discussion, you can instruct the LLM to summarize the conversation using a mutually agreed-upon DSL format. This practice effectively filters noise and compresses context, saving tokens for subsequent interactions.
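Mechanically, that summarization step is just a prompt wrapping the agreed template around the conversation so far. A sketch in which the instruction wording is illustrative and actually sending the request to a model is out of scope:

```python
# Sketch: compose a "compress this conversation into our DSL" request.
# Only string composition; no LLM API is called here.

TEMPLATE = """[CONTEXT_SUMMARY]
CORE_ISSUE:: <main problem>
KEY_POINTS:: <bullet points>
ACTION_ITEMS:: <next steps>
[END_SUMMARY]"""

def build_summary_request(conversation: str) -> str:
    return (
        "Summarize the conversation below using exactly this DSL, "
        "replacing each <placeholder>:\n"
        f"{TEMPLATE}\n\nConversation:\n{conversation}"
    )

print(build_summary_request("user: the build fails on CI ...")[:40])
```

The model's reply can then be parsed and carried forward in place of the full transcript, which is where the token savings for subsequent turns come from.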

d0lwl0b · Dec 27 '25 13:12