holmesgpt
Use Robusta AI by default (DO NOT MERGE YET)
Summary by CodeRabbit
- New Features
  - Added new optional fields for model name and last-updated timestamp to AI-related responses.
- Improvements
  - Enhanced flexibility and fallback options for AI model selection and API key management.
  - Default AI model name and activation settings are now configurable via environment variables.
- Refactor
  - Consolidated and streamlined how Robusta API keys are loaded, moving from a standalone function to a configuration method.
  - Introduced a centralized language model selector to unify model instantiation logic.
- Tests
  - Added comprehensive tests for the new language model selector logic.
  - Updated tests to reflect changes in how API keys are loaded and mocked.
Walkthrough
The changes refactor how the Robusta AI API key is loaded, moving from a standalone utility function to a method on the Config class. Additional environment variables and model selection logic are introduced, with new and updated fields in configuration and data models to support fallback mechanisms and timestamping. Related imports and test mocks are updated accordingly.
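The refactor described above can be sketched roughly as follows. This is an illustrative outline only: the method name `load_robusta_api_key` and the `SupabaseDal.get_ai_credentials()` call come from this summary, but the class bodies, field names, and returned credential shape are assumptions, not the actual HolmesGPT code.

```python
import os
from dataclasses import dataclass
from typing import Optional

# Per the summary, ROBUSTA_AI now defaults to enabled.
ROBUSTA_AI = os.environ.get("ROBUSTA_AI", "true").lower() == "true"


@dataclass
class SupabaseDal:
    """Stand-in for the real DAL; returns placeholder credentials."""

    def get_ai_credentials(self) -> dict:
        return {"api_key": "robusta-demo-key"}


@dataclass
class Config:
    api_key: Optional[str] = None
    model: Optional[str] = None

    def load_robusta_api_key(self, dal: SupabaseDal) -> None:
        # Key loading now lives on Config instead of a standalone
        # utility function. Only fetch credentials when Robusta AI is
        # enabled and the user has not configured their own key/model.
        if ROBUSTA_AI and self.api_key is None and self.model is None:
            self.api_key = dal.get_ai_credentials()["api_key"]


config = Config()
config.load_robusta_api_key(SupabaseDal())
print(config.api_key)  # robusta-demo-key
```

Moving the function onto `Config` lets callers (and tests) patch a single method on the class rather than a free function imported in several modules.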
Changes
| File(s) | Change Summary |
|---|---|
| holmes/clients/robusta_client.py | Added optional fields robusta_ai_model_name and last_updated_at to HolmesInfo; updated fetch logic to set timestamp and include model name fallback. |
| holmes/common/env_vars.py | Changed ROBUSTA_AI default to True; added ROBUSTA_AI_MODEL_NAME_FALLBACK with default "gpt-4o". |
| holmes/config.py | Removed default model value; refactored _get_llm to use new LLMSelector class; added load_robusta_api_key method; updated imports. |
| holmes/core/investigation.py | Replaced calls to removed standalone load_robusta_api_key function with config.load_robusta_api_key method. |
| holmes/utils/robusta.py | Deleted file containing standalone load_robusta_api_key function. |
| server.py | Updated API endpoint handlers to call config.load_robusta_api_key method instead of removed utility function. |
| tests/test_server_endpoints.py | Updated patch decorators to mock Config.load_robusta_api_key method instead of removed utility function. |
| holmes/llm_selector.py | Added new LLMSelector class to centralize LLM selection logic with fallback to Robusta AI model if enabled. |
| tests/test_llm_selector.py | Added comprehensive unit tests for LLMSelector covering multiple selection scenarios and fallback logic. |
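The fallback behavior the table attributes to `holmes/llm_selector.py` and `holmes/common/env_vars.py` might look something like this. The class name `LLMSelector` and the `ROBUSTA_AI_MODEL_NAME_FALLBACK` default of `"gpt-4o"` are taken from the table; the constructor signature and method name are hypothetical.

```python
import os
from typing import Optional

# Default taken from the env_vars.py row of the changes table.
ROBUSTA_AI_MODEL_NAME_FALLBACK = os.environ.get(
    "ROBUSTA_AI_MODEL_NAME_FALLBACK", "gpt-4o"
)


class LLMSelector:
    """Centralizes model selection with a Robusta AI fallback."""

    def __init__(self, configured_model: Optional[str], robusta_ai_enabled: bool):
        self.configured_model = configured_model
        self.robusta_ai_enabled = robusta_ai_enabled

    def select_model_name(self) -> str:
        # Prefer an explicitly configured model; otherwise fall back to
        # the Robusta AI default model when Robusta AI is enabled.
        if self.configured_model:
            return self.configured_model
        if self.robusta_ai_enabled:
            return ROBUSTA_AI_MODEL_NAME_FALLBACK
        raise ValueError("No model configured and Robusta AI is disabled")


print(LLMSelector(None, True).select_model_name())      # gpt-4o
print(LLMSelector("my-model", True).select_model_name())  # my-model
```

Centralizing this logic in one class is what allows `Config._get_llm` to drop its hard-coded default model, per the `holmes/config.py` row.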
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant Client
    participant Server
    participant Config
    participant LLMSelector
    participant DAL as SupabaseDal
    Client->>Server: API request (e.g., /api/chat)
    Server->>Config: load_robusta_api_key(dal)
    alt ROBUSTA_AI enabled and no API key/model configured
        Config->>DAL: get_ai_credentials()
        DAL-->>Config: Returns credentials
        Config-->>Server: Sets api_key
    else Not needed
        Config-->>Server: No action
    end
    Server->>Config: _get_llm()
    Config->>LLMSelector: select_llm(model_key)
    LLMSelector-->>Config: Returns LLM instance
    Config-->>Server: Returns LLM instance
    Server-->>Client: Response
```
Suggested reviewers
- moshemorad
- nherment
Results of HolmesGPT evals
- ask_holmes: 39/46 test cases were successful
- investigate: 15/15 test cases were successful
Legend
- :white_check_mark: the test was successful
- :warning: the test failed but is known to be flaky or known to fail
- :x: the test failed and should be fixed before merging the PR
Closing in favor of a simpler implementation that we will do soon.