WrenAI

Keep Prompts as files on a filesystem separately from the code for easy Prompt Engineering

Open qdrddr opened this issue 11 months ago • 7 comments

Is your feature request related to a problem? Please describe. Updating prompts is problematic because the prompts are hardcoded in the Python code, so changing them requires extra steps.

Describe the solution you'd like Place prompts in files in a dedicated folder inside the Docker container so they can be easily mounted and replaced with customized prompts.

Describe alternatives you've considered Currently this requires modifying the Python code and re-mounting the updated code into the container.

Additional context This flexibility is especially needed when you want to adjust prompts for models other than OpenAI's.

qdrddr avatar Jan 28 '25 02:01 qdrddr

Quick questions regarding this (sorry if this isn't the appropriate place!): where exactly would one change the code to edit the prompts? And is there an easier way to perform prompt engineering if we are using OpenAI's LLM? Thank you!

lenongsa avatar Feb 10 '25 17:02 lenongsa


> if this isn't the appropriate place!): where exactly would one change the code to edit the prompts? And is there an easier way to perform prompt engineering if we are using OpenAI's LLM? Thank you

Currently, prompts are baked into the container image, so any changes you make are lost after each container restart. To make changes permanent, you must mount the folder containing the prompts to your local host (copying your customized prompts into that folder), or, if running in Kubernetes, mount them from ConfigMaps.
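For the Kubernetes case, the prompt files could be packaged into a ConfigMap and mounted over the templates directory. A minimal sketch of the idea; the image reference and mount path are hypothetical and depend on the actual WrenAI image layout:

```yaml
# First, create the ConfigMap from a local folder of prompt files:
#   kubectl create configmap wrenai-prompts --from-file=./prompts/
apiVersion: v1
kind: Pod
metadata:
  name: wrenai
spec:
  containers:
    - name: wrenai
      image: wrenai/wren-ai-service:latest   # hypothetical image reference
      volumeMounts:
        - name: prompts
          mountPath: /app/templates          # assumption: depends on the image layout
  volumes:
    - name: prompts
      configMap:
        name: wrenai-prompts
```

Updating the ConfigMap and restarting the pod would then swap the prompts without rebuilding the image.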

qdrddr avatar Feb 14 '25 20:02 qdrddr

I used claude.ai to draft my idea:

import os
import logging
from typing import Dict

class TemplateManager:
    """Central manager for all prompt templates in the application."""
    
    # Default prompt name constants
    TABLE_COLUMNS_SELECTION = "table_columns_selection_user_prompt_template"
    SQL_GENERATION = "sql_generation_prompt_template"
    SCHEMA_ANALYSIS = "schema_analysis_prompt_template"
    ERROR_CORRECTION = "error_correction_prompt_template"
    # Add all other prompt constants here
    
    def __init__(self, template_dir: str = "/app/templates", default_model: str = "default"):
        """
        Initialize the template manager.
        
        Args:
            template_dir: Base directory containing all templates
            default_model: The default model's template directory to use
        """
        self.template_dir = template_dir
        self.default_model = default_model
        self.current_model = default_model
        self.logger = logging.getLogger(__name__)
        self.templates: Dict[str, str] = {}
        
        # Define required prompts
        self.required_prompts = [
            self.TABLE_COLUMNS_SELECTION,
            self.SQL_GENERATION,
            self.SCHEMA_ANALYSIS,
            self.ERROR_CORRECTION,
            # Add all other required prompts here
        ]
        
        # Validate and load templates
        self._validate_template_directory()
        self._load_templates()
        
    def _validate_template_directory(self) -> None:
        """Validate template directory exists."""
        if not os.path.exists(self.template_dir):
            error_msg = f"Template directory not found: {self.template_dir}"
            self.logger.error(error_msg)
            raise FileNotFoundError(error_msg)
            
        default_dir = os.path.join(self.template_dir, self.default_model)
        if not os.path.exists(default_dir):
            error_msg = f"Default template model directory not found: {default_dir}"
            self.logger.error(error_msg)
            raise FileNotFoundError(error_msg)
    
    def _load_templates(self) -> None:
        """Load all templates from the current model directory with fallback to default."""
        model_dir = os.path.join(self.template_dir, self.current_model)
        default_dir = os.path.join(self.template_dir, self.default_model)
        
        # Clear existing templates
        self.templates = {}
        
        # Check required prompts exist
        missing_prompts = []
        for prompt_name in self.required_prompts:
            current_model_path = os.path.join(model_dir, f"{prompt_name}.txt")
            default_model_path = os.path.join(default_dir, f"{prompt_name}.txt")
            
            if os.path.exists(current_model_path):
                with open(current_model_path, 'r', encoding='utf-8') as f:
                    self.templates[prompt_name] = f.read()
            elif os.path.exists(default_model_path):
                with open(default_model_path, 'r', encoding='utf-8') as f:
                    self.templates[prompt_name] = f.read()
                self.logger.info(f"Template '{prompt_name}' not found for model '{self.current_model}', using default")
            else:
                missing_prompts.append(prompt_name)
        
        if missing_prompts:
            error_msg = f"Required templates missing: {', '.join(missing_prompts)}"
            self.logger.error(error_msg)
            raise ValueError(error_msg)
    
    def set_model(self, model_name: str) -> None:
        """Change the current model for template loading."""
        model_dir = os.path.join(self.template_dir, model_name)
        if os.path.exists(model_dir):
            self.current_model = model_name
            self._load_templates()
            self.logger.info(f"Switched template model to: {model_name}")
        else:
            self.logger.warning(f"Template directory for model '{model_name}' not found. Using default.")
            self.current_model = self.default_model
            # Reload so stale templates from a previously selected model are not kept
            self._load_templates()
    
    def get_prompt(self, prompt_name: str) -> str:
        """
        Get a prompt by name.
        
        Args:
            prompt_name: Name of the prompt template
            
        Returns:
            Prompt string
        """
        if prompt_name not in self.templates:
            error_msg = f"Prompt template '{prompt_name}' not found"
            self.logger.error(error_msg)
            raise ValueError(error_msg)
            
        return self.templates[prompt_name]
    
    # Expose prompt templates as properties for easy access
    @property
    def table_columns_selection_user_prompt_template(self) -> str:
        return self.get_prompt(self.TABLE_COLUMNS_SELECTION)
    
    @property
    def sql_generation_prompt_template(self) -> str:
        return self.get_prompt(self.SQL_GENERATION)
    
    @property
    def schema_analysis_prompt_template(self) -> str:
        return self.get_prompt(self.SCHEMA_ANALYSIS)
    
    @property
    def error_correction_prompt_template(self) -> str:
        return self.get_prompt(self.ERROR_CORRECTION)
    
    # Add other properties for all prompts

This implementation addresses your points:

  1. Templates are treated as strings, not rendered (no Jinja2)
  2. Added default prompt name constants and property accessors so you can use template_manager.table_columns_selection_user_prompt_template
  3. The manager validates templates on startup and throws exceptions for missing required templates

The file structure would be:

/app/templates/
├── default/
│   ├── table_columns_selection_user_prompt_template.txt
│   ├── sql_generation_prompt_template.txt
│   ├── schema_analysis_prompt_template.txt
│   └── ...
├── anthropic/
│   ├── table_columns_selection_user_prompt_template.txt
│   └── ...
└── llama/
    └── ...

And usage would be:

# Initialize the template manager
template_manager = TemplateManager()

# In your SQL generation code
def generate_sql(schema, question, model="gpt-4"):
    # Set appropriate template model based on LLM being used
    if "claude" in model.lower():
        template_manager.set_model("anthropic")
    
    # Get the prompt template as a string
    prompt = template_manager.sql_generation_prompt_template
    
    # Format the prompt with your variables
    formatted_prompt = prompt.format(
        schema=schema,
        question=question
    )
    
    # Use the formatted prompt with your LLM
    # ...
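The plain-string approach above relies on Python's `str.format` with named placeholders. A self-contained sketch of the load-then-format flow, using a temporary directory in place of `/app/templates` (file name and template text are illustrative):

```python
import os
import tempfile

# Write a sample template file, then load and format it the same way
# the usage example does: read the file as a plain string, then
# substitute named placeholders with str.format.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "sql_generation_prompt_template.txt")
    with open(path, "w", encoding="utf-8") as f:
        f.write("Given the schema:\n{schema}\n\nAnswer the question: {question}\n")

    with open(path, "r", encoding="utf-8") as f:
        prompt = f.read()

    formatted = prompt.format(
        schema="CREATE TABLE t (id INT)",
        question="How many rows?",
    )
    print(formatted)
```

One caveat with `str.format`: any literal braces in a template (e.g. embedded JSON examples) must be escaped as `{{` and `}}`, or formatting will fail.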

4. Docker Configuration

Update your Docker setup to include volume mapping for templates:

# In docker-compose.yml
services:
  wrenai:
    # ... other configuration
    volumes:
      - ./templates:/app/templates

This allows users to modify templates without rebuilding the container.

5. Migration Strategy

To migrate existing hardcoded prompts to this new system:

  1. Identify all prompt strings in your codebase
  2. Convert each to a template file
  3. Replace hardcoded strings with template_manager calls
  4. Provide documentation for users on how to customize templates
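Step 2 of the migration could be partially scripted. A minimal sketch, assuming the hardcoded prompts have first been collected into a dict (the names and contents below are illustrative, not WrenAI's actual prompts):

```python
import os
import tempfile

# Hypothetical mapping of prompt names to strings currently hardcoded in the code.
HARDCODED_PROMPTS = {
    "sql_generation_prompt_template": "Generate SQL for: {question}",
    "schema_analysis_prompt_template": "Analyze this schema: {schema}",
}

def export_prompts(prompts: dict, out_dir: str) -> list:
    """Write each prompt to <out_dir>/<name>.txt and return the created paths."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for name, text in prompts.items():
        path = os.path.join(out_dir, f"{name}.txt")
        with open(path, "w", encoding="utf-8") as f:
            f.write(text)
        paths.append(path)
    return paths

# Using a temp dir here; in the real migration this would be templates/default/.
created = export_prompts(HARDCODED_PROMPTS, os.path.join(tempfile.mkdtemp(), "default"))
print(created)
```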

Would you like me to focus on any specific aspect of this solution in more detail?

6. Improvement

  1. We would need to restart the service after modifying any template file.
  2. Do we need to support any non-required (optional) templates?
  3. Do we need to validate the template format with Jinja2?
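Point 1 could be avoided by checking the file's modification time on each access and re-reading only when it changes. A minimal sketch of the idea, independent of the TemplateManager above:

```python
import os

class ReloadingTemplate:
    """Re-reads a template file whenever its mtime changes, so prompt
    edits take effect without restarting the service."""

    def __init__(self, path: str):
        self.path = path
        self._mtime = None
        self._text = None

    @property
    def text(self) -> str:
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:
            with open(self.path, "r", encoding="utf-8") as f:
                self._text = f.read()
            self._mtime = mtime
        return self._text
```

Each access costs one `stat` call, which is cheap enough for per-request prompt loading; note that mtime granularity can be coarse on some filesystems, so rapid successive edits within the same timestamp may be missed.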

xuayan-nokia avatar Feb 26 '25 16:02 xuayan-nokia

Langfuse has a prompt CMS:

[image attachment]

tedyyan avatar Mar 16 '25 15:03 tedyyan

Langfuse maintainer here, let me know in case I can help in any way

marcklingen avatar Mar 17 '25 10:03 marcklingen

@marcklingen A prompt CMS sounds like an interesting idea. Is this functionality available in the open-source version of Langfuse? Also, does this integrate in any way with Git for version control?

Thanks.

qdrddr avatar Apr 05 '25 22:04 qdrddr