Code Duplication & Maintainability Problems in Prompt Generators
The current implementation of `generate_mcp_system_prompt`, `generate_no_mcp_system_prompt`, `generate_agent_specific_system_prompt`, and `generate_agent_summarize_prompt` contains extensive code duplication. Large blocks of boilerplate text are repeated across functions, making them difficult to maintain and extend. Adding a new agent type or modifying instructions requires editing multiple places in the code.
Problems Identified:
- **Repetition of long strings:**
  - Many identical or near-identical text blocks (e.g., cautionary instructions, task strategies) are repeated across functions.
  - Hard to update consistently.
- **Scalability:**
  - Adding new agent types requires modifying multiple `if/elif` branches instead of a single mapping/dictionary.
  - High risk of introducing inconsistencies.
- **Poor separation of concerns:**
  - Long instructional strings are hardcoded in Python functions rather than stored in reusable constants, templates, or even external files.
- **Readability & maintainability:**
  - Lack of type hints and docstrings reduces clarity.
  - JSON schemas for tools are dumped inline, which hurts readability when they are large.
Suggested Improvements:
- **Refactor into reusable templates:** build prompts from base templates formatted with variables instead of repeating blocks.
- **Use dictionary mappings instead of long `if/elif` chains:** map agent types (`main`, `agent-browsing`, etc.) to their specific template text.
- **Extract long instructional text into constants or Markdown template files:** keep the Python code lean and focused on logic, not large strings.
- **Pretty-print JSON schemas with `json.dumps(..., indent=2)`** for readability.
- **Add type hints & docstrings:** improve IDE support and maintainability.
- **Improve error handling:** when raising `ValueError`, list the available agent types to aid debugging.
Thanks for flagging this!
Here’s a concrete proposal for the refactor path:
Create shared base templates for repeated instruction blocks
` BASE_OBJECTIVE = """# General Objective You accomplish a given task iteratively, breaking it down into clear steps and working through them methodically. """
CAUTION_BLOCK = """Be cautious and transparent in your output:
- Always return the result of the task. If the task cannot be solved, say so clearly.
- If more context is needed, return a clarification request and do not proceed with tool use.
"""
Replace long if/elif with dictionary mappingAGENT_PROMPTS = { "main": "...", "agent-browsing": "...", "agent-coding": "...", "agent-reading": "...", "agent-reasoning": "..." }`
Use JSON pretty-printing for schemas
```python
import json

schema_str = json.dumps(tool["schema"], indent=2)
```
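For example, assuming each tool is a dict with `name` and `schema` keys (a hypothetical shape; the real structure depends on the existing codebase), a readable per-tool section of the prompt could be built like this:

```python
import json

# Hypothetical tool entry; adjust to match the actual tool registry format.
tool = {
    "name": "web_search",
    "schema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

# Pretty-print the JSON schema so long schemas stay readable in the prompt.
schema_str = json.dumps(tool["schema"], indent=2)
tool_block = f"## {tool['name']}\n{schema_str}"
```

This keeps each schema on multiple indented lines instead of one long inline dump.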
Add type hints & docstrings for clarity
Improve error handling:
```python
raise ValueError(
    f"Unknown agent type: {agent_type}. "
    f"Valid options: {list(AGENT_PROMPTS.keys())}"
)
```
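Putting the pieces together, a single generator could replace the per-agent functions. This is only a sketch: `generate_system_prompt`, the template constants, and the placeholder agent texts are illustrative names, not the actual API of this repo.

```python
import json

BASE_OBJECTIVE = """# General Objective
You accomplish a given task iteratively, breaking it down into clear steps.
"""

CAUTION_BLOCK = """Be cautious and transparent in your output:
- Always return the result of the task. If the task cannot be solved, say so clearly.
"""

# Agent-specific text lives in one mapping instead of if/elif branches.
AGENT_PROMPTS = {
    "main": "You are the main orchestrator agent.",
    "agent-browsing": "You specialize in web browsing tasks.",
    "agent-coding": "You specialize in coding tasks.",
}


def generate_system_prompt(agent_type: str, tools: list[dict]) -> str:
    """Build the system prompt for the given agent type.

    Raises ValueError listing the valid types when agent_type is unknown.
    """
    if agent_type not in AGENT_PROMPTS:
        raise ValueError(
            f"Unknown agent type: {agent_type}. "
            f"Valid options: {list(AGENT_PROMPTS.keys())}"
        )
    # Pretty-print each tool schema into its own section.
    tool_blocks = "\n\n".join(
        f"## {tool['name']}\n{json.dumps(tool['schema'], indent=2)}"
        for tool in tools
    )
    return "\n\n".join(
        [BASE_OBJECTIVE, AGENT_PROMPTS[agent_type], CAUTION_BLOCK, tool_blocks]
    )
```

With this shape, adding a new agent type is a one-line change to `AGENT_PROMPTS` rather than a new branch in several functions.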
Is this like some copilot robot?
Yeah, I feel like it is.
More like something to improve prompts and all.
@jenny-miromind & @BinWang28, can you please assign this to me?