pr-agent
Add and document ability to use LiteLLM Logging Observability tools
User description
Adds out-of-the-box support for setting up a logging and observability tool using LiteLLM by configuring callbacks. Should work with any of the documented callbacks.
The code uses the Starlette context to track PR information, avoiding the complexity of passing it around everywhere. It falls back to a basic context when running with the CLI. Doing so allows for code simplification.
Sample LangSmith run:
#1105
See also https://github.com/BerriAI/litellm/issues/5179
PR Type
Enhancement, Documentation
Description
- Implemented LiteLLM logging and observability support:
- Added success, failure, and service callbacks configuration
- Included metadata in chat completions for various observability platforms (e.g., LangFuse, LangSmith)
- Enhanced context management throughout the application:
- Added Starlette context for tracking PR information
- Simplified git provider and repo settings context handling
- Improved CLI functionality with request cycle context
- Added documentation for integrating with logging observability platforms, including example configuration for LangSmith
- Updated configuration.toml with new LiteLLM callback options (commented out by default)
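A sketch of what the commented-out options in configuration.toml might look like; the exact key names here are assumptions based on the description above, not confirmed from the diff:

```toml
[litellm]
# observability wiring is off by default
enable_callbacks = false
# success_callback = ["langsmith"]
# failure_callback = ["langsmith"]
# service_callback = []
```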
Changes walkthrough 📝
Relevant files:

- Enhancement
- Documentation
- Configuration changes
💡 PR-Agent usage: Comment `/help` on the PR to get a list of all available PR-Agent tools and their descriptions.
PR-Agent was enabled for this repository. To continue using it, please link your git user with your CodiumAI identity here.
PR Reviewer Guide 🔍
- ⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
- 🏅 Score: 85
- 🧪 No relevant tests
- 🔒 No security concerns identified
- 🔀 Multiple PR themes:
  - Sub-PR theme: Implement LiteLLM logging and observability support
  - Sub-PR theme: Refactor git provider context handling
  - Sub-PR theme: Add documentation and configuration for LiteLLM callbacks
- ⚡ Key issues to review: Performance Concern, Code Smell
PR Code Suggestions ✨
**Possible bug**

- Correct the condition for checking the existence of a git provider in the context (`pr_agent/git_providers/__init__.py`, lines 41-45)
  - Importance [1-10]: 9 — this fixes a potential bug in the logic for checking the existence of a git provider, which could lead to incorrect behavior.

**Enhancement**

- Refactor callback setting logic to improve maintainability and extensibility: consider using a dictionary to map the callback types (`pr_agent/algo/ai_handlers/litellm_ai_handler.py`, lines 47-52)
  - Importance [1-10]: 7 — using a dictionary to map callback types makes it easier to add or modify callbacks in the future.
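A minimal sketch of that refactor, assuming the settings keys mirror litellm's attribute names; a `SimpleNamespace` stands in for the litellm module so the sketch runs standalone:

```python
from types import SimpleNamespace

# Stand-in for the litellm module; in the real handler these attributes
# exist on litellm itself (litellm.success_callback, etc.).
litellm = SimpleNamespace(success_callback=[], failure_callback=[], service_callback=[])

# One place to list the supported callback kinds; adding a new kind is one line.
CALLBACK_SETTINGS = ("success_callback", "failure_callback", "service_callback")

def set_callbacks(settings: dict) -> None:
    for name in CALLBACK_SETTINGS:
        value = settings.get(name)
        if value:  # only override when the setting is non-empty
            setattr(litellm, name, value)

set_callbacks({"success_callback": ["langsmith"]})
print(litellm.success_callback)  # ['langsmith']
```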
- Simplify the metadata dictionary structure for improved readability and maintainability: consider using a dictionary comprehension to create the metadata (`pr_agent/algo/ai_handlers/litellm_ai_handler.py`, lines 143-161)
  - Importance [1-10]: 5 — while the suggestion does improve readability slightly, the existing code is already clear and the change is mostly cosmetic.
**Best practice**

- Move the CLI mode setting inside the request cycle context for consistency
  - Importance [1-10]: 6 — this improves consistency by moving the CLI mode setting inside the context block, but it's a minor change that doesn't significantly impact functionality.
@CodiumAI-Agent /review
PR Reviewer Guide 🔍
- ⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
- 🧪 No relevant tests
- 🔒 No security concerns identified
- ⚡ Key issues to review: Error Handling, Potential Bug, Exception Handling Removal
@CodiumAI-Agent /analyze
@MarkRx This PR cannot be merged as is. These are the needed changes:
(1) Remove any modification whatsoever to the context, or anything related to it. It should not be part of this PR. This is a complicated variable that is used by async applications. Don't change it.
(2) The instructions should move to: https://pr-agent-docs.codium.ai/usage-guide/additional_configurations/
(3) By default, everything should be turned off.
(4) As a general guideline, aim for the smallest needed footprint. This, for example, is a no-go. It's the default
I need a place to get contextual traceability information about the request without having to pass it around everywhere, one that also works for the CLI. If I don't put it in the middleware request context, where could I put it? It could be pulled off of the loguru context (the "extra" field), but that seems awkward. I could maintain some kind of command context with ContextVar.
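A ContextVar-based command context, as floated above, could look like this minimal sketch (names are hypothetical):

```python
from contextvars import ContextVar

# Hypothetical command context: set once at the start of a command
# (web request or CLI invocation), readable anywhere without plumbing
# arguments through every call site.
_pr_info: ContextVar[dict] = ContextVar("pr_info", default={})

def set_pr_info(info: dict) -> None:
    _pr_info.set(info)

def get_pr_info() -> dict:
    return _pr_info.get()

set_pr_info({"pr_url": "https://example.com/pr/1", "command": "review"})
print(get_pr_info()["command"])  # review
```

Because `ContextVar` values are isolated per asyncio task, this works for concurrent app requests as well as the single-shot CLI path.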
I don't have a clean way of handling the different observability platforms. For example, LangFuse, Helicone, and LangSmith use different fields, and others are either not documented or not mature in LiteLLM. I figured I could just send them all. It's important to set these fields, though, as without them observability records don't provide enough information to be useful. I suppose I could build the metadata dictionary based on which callbacks are being used?
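Building the metadata from the active callbacks, as suggested, might be sketched like this; the per-platform field names are illustrative assumptions, not LiteLLM's documented keys:

```python
# Map each callback to the metadata fields it is assumed to consume.
PLATFORM_FIELDS = {
    "langfuse": ("trace_name", "tags"),
    "langsmith": ("run_name", "tags"),
    "helicone": ("Helicone-Property-Tags",),
}

def build_metadata(active_callbacks: list, info: dict) -> dict:
    # Only emit fields needed by callbacks that are actually enabled,
    # instead of sending every platform's fields on every request.
    metadata = {}
    for cb in active_callbacks:
        for field in PLATFORM_FIELDS.get(cb, ()):
            if field in info:
                metadata[field] = info[field]
    return metadata

info = {"trace_name": "review", "run_name": "review", "tags": ["pr-agent"]}
print(build_metadata(["langsmith"], info))  # {'run_name': 'review', 'tags': ['pr-agent']}
```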
Also, do there need to be 3 separate AI handlers? The additional abstraction is making this tricky.
@MarkRx
I will not approve doing these changes in the default mode, without an if that turns them on only when a flag is on. They are also too verbose, and should be extracted to a function (invoked when a flag, which is off by default, is on).
I also don't like the context changes you made. They are fragile, and require hacks for the CLI.
You could have passed parameters using get_settings(), or just used, in pr_agent.py, the existing methodology for app runs.
But all my concerns above need to be addressed. If you prefer your way, then its suitable for a fork
Updated with a different approach. Metadata is a dictionary to allow for different metadata per command.
Metadata won't be passed to LiteLLM unless a callback is defined.
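That guard can be sketched as follows (hypothetical helper; in the real handler the kwargs would go to the litellm completion call):

```python
def completion_kwargs(callbacks: list, metadata: dict) -> dict:
    # Base arguments for the completion call (illustrative values).
    kwargs = {"model": "some-model", "messages": []}
    if callbacks:  # metadata is only attached when a callback is defined
        kwargs["metadata"] = metadata
    return kwargs

print("metadata" in completion_kwargs([], {"pr": 1}))            # False
print("metadata" in completion_kwargs(["langsmith"], {"pr": 1}))  # True
```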
@MarkRx
- This PR is still too intrusive - you are editing every tool, just for a specific option that won't be used by 99.9% of pr-agent users. It's not scalable or manageable that way. To accommodate such options, the code impact needs to be small and currently it isn't
- You are also ignoring the existing logger mechanism, which already conveys all the PR and flow data
- In addition, it does not contain a single control parameter, turned off by default: `litellm.enable_callbacks`
To address all the issues, you can start from this template with your branch, and edit the add_litellm_callbacks function to accommodate the wanted logic:
https://github.com/Codium-ai/pr-agent/pull/1152/files#diff-ea1acaa0907f3410665530fbc4cda2ab524de2772e0bbe10bad4648b8be35dfeR113
With the template above, the entire code impact on the default flow is:
That is how such changes should be made. Inside the add_litellm_callbacks function you can now freely implement any logic you want
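The shape of that template, reduced to a runnable sketch (the flag name follows the comment above; the settings access and the function body are illustrative):

```python
calls = []

def add_litellm_callbacks(settings: dict) -> None:
    # All observability wiring lives here and can evolve freely
    # without touching the default flow.
    calls.append(settings.get("litellm.success_callback", []))

def handle_request(settings: dict) -> None:
    # The entire impact on the default flow: one gated call.
    if settings.get("litellm.enable_callbacks", False):
        add_litellm_callbacks(settings)

handle_request({})  # default: disabled, nothing happens
handle_request({"litellm.enable_callbacks": True,
                "litellm.success_callback": ["langsmith"]})
print(calls)  # [['langsmith']]
```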
looks good
And from now on you can easily edit the add_litellm_callbacks function and update its content whenever you think it necessary.