
[Feature]: Support for Configurable Langfuse Trace and Generation Parameters in Config.yaml

Open ZzzzRyan opened this issue 1 year ago • 3 comments

The Feature

Enable setting default values for Langfuse parameters such as trace_name and generation_name, per model/provider, within config.yaml.

Motivation, pitch

I use the litellm proxy and Langfuse to track token usage and cost across multiple LLM providers. As an LLM user calling APIs through the litellm proxy from third-party clients, I have difficulty including parameters like trace_name and generation_name in each request.

It would be beneficial to allow these parameters to be configured directly in config.yaml for better logging and differentiation of provider usage and costs. These defaults could be overridden by request parameters when the user provides them.
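For context, LiteLLM's Langfuse logging already accepts fields like trace_name and generation_name per request via a metadata key in the request body. This sketch only builds the payload a client would have to send today (no network call); having to repeat this on every request is the pain point described above:

```python
# Sketch: the JSON payload a third-party client would send to the LiteLLM
# proxy today, attaching Langfuse fields per request via the `metadata` key.
def build_chat_payload(model, messages, trace_name=None, generation_name=None):
    payload = {"model": model, "messages": messages}
    metadata = {}
    if trace_name:
        metadata["trace_name"] = trace_name
    if generation_name:
        metadata["generation_name"] = generation_name
    if metadata:
        payload["metadata"] = metadata
    return payload

payload = build_chat_payload(
    "gpt-3.5-turbo",
    [{"role": "user", "content": "hi"}],
    trace_name="my-app",
    generation_name="summarize",
)
```

With config.yaml defaults, the metadata block could be omitted entirely for the common case.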

Twitter / LinkedIn details

No response

ZzzzRyan avatar Nov 15 '24 09:11 ZzzzRyan

@ZzzzRyan can you show me how you'd want to define this on the config.yaml ?

ishaan-jaff avatar Feb 07 '25 22:02 ishaan-jaff

@ishaan-jaff Maybe something like:

```yaml
# ...
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
    langfuse_includes:
      trace_name: XXX
      generation_name: XXX
# ...
```
I don't feel strongly about the naming, but hopefully this conveys the gist: include some discrete set of key/value params on a per-model basis, then pass those through to Langfuse 🤔
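One way the proxy might apply this (a hypothetical sketch, assuming the `langfuse_includes` naming above is adopted): merge the per-model defaults from config.yaml into the request's Langfuse metadata, with request-supplied values taking precedence, as the issue proposes:

```python
# Hypothetical merge logic: config.yaml defaults first, then any
# request-level metadata overrides them key by key.
def merge_langfuse_params(config_includes, request_metadata):
    merged = dict(config_includes or {})
    merged.update(request_metadata or {})
    return merged

merged = merge_langfuse_params(
    {"trace_name": "from-config", "generation_name": "from-config"},
    {"trace_name": "from-request"},
)
# trace_name comes from the request; generation_name falls back to config
```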

ra0x3 avatar Feb 11 '25 05:02 ra0x3

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

github-actions[bot] avatar May 13 '25 00:05 github-actions[bot]