Array-Based Configuration for LLM Providers with Target Flag

kardolus opened this issue 4 months ago

Problem

Currently, the ChatGPT CLI uses a single-provider configuration for its LLM settings. This works well if you only ever use one LLM, but it becomes cumbersome when switching between providers (OpenAI, Perplexity, Llama, etc.) or models.

For example, the current configuration looks like this:

name: openai
api_key: 
model: gpt-4o
max_tokens: 4096
context_window: 8192
...

To switch providers or models, users have to either edit the configuration file or rely on environment variables, which makes maintaining multiple setups complicated and inefficient.

Proposed Solution

  1. Introduce Array-Based Configuration for LLMs: Convert the current configuration from a single setup to an array-based format where multiple configurations for different LLMs and models can be stored.

    Example:

    providers:
      - name: openai
        api_key: 
        model: gpt-4o
        max_tokens: 4096
        context_window: 8192
        ...
      - name: llama
        api_key: 
        model: llama-2-13b-chat
        max_tokens: 4096
        context_window: 8192
        ...
      - name: perplexity
        api_key: 
        model: llama-3.1-sonar
        max_tokens: 4096
        context_window: 8192
        ...
    
  2. Add a --target Flag to Dynamically Select a Configuration: Add a --target flag that allows users to select which configuration (provider and model) to use for a specific command.

    Example:

    chatgpt --target openai "Who is Max Verstappen?"
    chatgpt --target llama "Tell me a joke"
    chatgpt --target perplexity "Summarize this article"
    

    This way, users can quickly switch between configurations without having to edit the config.yaml file or rely on environment variables. A rough Go sketch of how this could be implemented follows below.
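
For illustration, here is a minimal Go sketch of how the array-based providers section could be modeled and parsed. The struct names, the loadConfig helper, and the gopkg.in/yaml.v3 dependency are assumptions made for this sketch, not the CLI's actual types:

  package main

  import (
      "os"

      "gopkg.in/yaml.v3" // assumed YAML library for this sketch
  )

  // Provider mirrors one entry of the proposed providers array.
  type Provider struct {
      Name          string `yaml:"name"`
      APIKey        string `yaml:"api_key"`
      Model         string `yaml:"model"`
      MaxTokens     int    `yaml:"max_tokens"`
      ContextWindow int    `yaml:"context_window"`
  }

  // Config is the proposed top-level shape of config.yaml.
  type Config struct {
      Providers []Provider `yaml:"providers"`
  }

  // loadConfig reads and parses the array-based config file.
  func loadConfig(path string) (Config, error) {
      var cfg Config
      data, err := os.ReadFile(path)
      if err != nil {
          return cfg, err
      }
      err = yaml.Unmarshal(data, &cfg)
      return cfg, err
  }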
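
Continuing the same sketch, the --target flag could then resolve one of those entries at runtime. The selectProvider function and the fall-back-to-first-entry default are hypothetical choices here, not confirmed behavior:

  import (
      "flag"
      "fmt"
      "log"
  )

  // selectProvider returns the entry whose name matches target.
  func selectProvider(cfg Config, target string) (*Provider, error) {
      for i := range cfg.Providers {
          if cfg.Providers[i].Name == target {
              return &cfg.Providers[i], nil
          }
      }
      return nil, fmt.Errorf("no provider named %q in config.yaml", target)
  }

  func main() {
      // e.g. chatgpt --target llama "Tell me a joke"
      target := flag.String("target", "", "named provider configuration to use")
      flag.Parse()

      cfg, err := loadConfig("config.yaml")
      if err != nil {
          log.Fatal(err)
      }
      // With no --target given, fall back to the first entry (one possible default).
      if *target == "" && len(cfg.Providers) > 0 {
          *target = cfg.Providers[0].Name
      }

      p, err := selectProvider(cfg, *target)
      if err != nil {
          log.Fatal(err)
      }
      // The remaining args (the prompt) are available via flag.Args().
      fmt.Printf("routing prompt to %s (model %s)\n", p.Name, p.Model)
  }

A name-based lookup like this keeps --target values stable even if entries are reordered in config.yaml.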

Benefits

  • Ease of Use: With array-based configurations and the --target flag, users can easily switch between LLM providers and models.
  • Cleaner Configuration: Avoids the need for multiple environment variables or manual configuration file edits for each LLM.
  • Better Flexibility: Supports different LLM providers (OpenAI, Llama, Perplexity, etc.) and models without requiring reconfiguration.

kardolus · Oct 02 '24 22:10