[Feature]: Support for Configurable Langfuse Trace and Generation Parameters in Config.yaml
The Feature
Enable setting default values for Langfuse parameters such as `trace_name` and `generation_name` on a per-model/per-provider basis within config.yaml.
Motivation, pitch
I use the LiteLLM proxy with Langfuse to record token usage and cost across multiple LLM providers. As a user calling APIs through the LiteLLM proxy via third-party clients, it is difficult to include parameters like `trace_name` and `generation_name` in every request.
It would be helpful to allow these parameters to be configured directly in config.yaml for better logging and differentiation of provider usage and costs, with request parameters overriding the configured defaults when the user provides them.
@ZzzzRyan can you show me how you'd want to define this on the config.yaml ?
@ishaan-jaff Maybe something like:
```yaml
# ...
- model_name: gpt-3.5-turbo
  litellm_params:
    model: azure/chatgpt-v-2
    api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
    langfuse_includes:
      trace_name: XXX
      generation_name: XXX
# ...
```
I don't feel strongly about the naming, but hopefully this conveys the gist: include a discrete set of key/value params on a per-model basis and pass those through to Langfuse 🤔
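A minimal sketch of the merge semantics the proposal describes: config.yaml supplies per-model defaults under a `langfuse_includes` key, and any metadata supplied on the request overrides them. Note that `langfuse_includes` and the helper below are illustrative names from this proposal, not an existing LiteLLM API; the config is shown as an already-parsed dict to keep the example self-contained.

```python
# Parsed form of the proposed config.yaml snippet above (hypothetical schema).
config = {
    "model_list": [
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "azure/chatgpt-v-2",
                "langfuse_includes": {
                    "trace_name": "azure-default-trace",
                    "generation_name": "azure-default-gen",
                },
            },
        },
    ],
}


def langfuse_metadata(config, model_name, request_metadata=None):
    """Look up per-model Langfuse defaults, then let request values win."""
    defaults = {}
    for entry in config.get("model_list", []):
        if entry.get("model_name") == model_name:
            defaults = entry.get("litellm_params", {}).get("langfuse_includes", {})
            break
    # Request-supplied metadata overrides the configured defaults.
    return {**defaults, **(request_metadata or {})}


# No request metadata: config defaults apply.
print(langfuse_metadata(config, "gpt-3.5-turbo"))
# Request overrides trace_name; generation_name keeps its configured default.
print(langfuse_metadata(config, "gpt-3.5-turbo", {"trace_name": "user-trace"}))
```

The proxy would compute this merged dict per request and pass it to the Langfuse callback the same way request-level metadata is passed today.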