Investigate evals
Summary by CodeRabbit
- New Features
  - Added numerous new test fixture and configuration files to enhance test coverage for Kubernetes, tracing, and monitoring scenarios.
  - Introduced options to enable or disable specific toolsets in test configurations.
- Bug Fixes
  - Updated test fixtures and expected outputs to reflect changes in Kubernetes cluster states, error messages, and evaluation success rates.
- Documentation
  - Enhanced test case files with comments reporting evaluation success rates and reliability notes.
- Chores
  - Added support for repeating test cases based on an environment variable to improve test iteration flexibility.
  - Improved test logging by including detailed output and error information for better traceability.
Walkthrough
This update introduces new and revised test fixtures, configuration files, and test logic enhancements across several test scenarios related to Kubernetes investigations and distributed tracing. It adds new fixture files, updates existing ones for consistency and accuracy, removes obsolete files, and modifies test runners to support repeated test iterations via an environment variable. Logging and output details are also improved within the test functions.
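The repetition mechanism mentioned above could look roughly like the following minimal sketch. It assumes a pytest parametrize-based runner; the fixture IDs and loader are illustrative placeholders, and only the `ITERATIONS`-driven duplication reflects the behavior described in this PR.

```python
# Minimal sketch of repeating parametrized test cases via an ITERATIONS
# environment variable. The fixture IDs and loader are illustrative; only the
# env-var-driven duplication mirrors the behavior described in this PR.
import os

import pytest

FIXTURE_IDS = ["01_oom_kill", "02_crashloop_backoff"]  # stand-in for the YAML fixtures


def get_test_cases() -> list:
    iterations = int(os.environ.get("ITERATIONS", "1"))
    # Duplicate every case N times so the pass rate of flaky evals can be measured.
    return [f"{case}#{i}" for case in FIXTURE_IDS for i in range(iterations)]


@pytest.mark.parametrize("test_case_id", get_test_cases())
def test_investigate(test_case_id: str) -> None:
    ...  # load the fixture, run the investigation, and score the answer
```

Running, for example, `ITERATIONS=100 pytest tests/llm` would then execute each fixture 100 times, which is presumably how the "100 evaluations" figures in the comments below were collected.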
Changes
| Files/Groups | Change Summary |
|---|---|
| tests/llm/fixtures/test_investigate/01_oom_kill/test_case.yaml, .../02_crashloop_backoff/ | Added comment lines indicating 100% success rate for 100 evaluations; no functional changes. |
| tests/llm/fixtures/test_investigate/03_cpu_throttling/ (multiple new/updated files) | Added/updated numerous Kubernetes tool output fixtures (describe, get, logs, events, memory requests, top nodes/pods, etc.), changed output formats to structured JSON, updated runtime environment data, added error scenarios, and introduced new config files such as toolsets.yaml. |
| tests/llm/fixtures/test_investigate/04_image_pull_backoff/test_case.yaml, .../08_memory_pressure/, .../09_high_latency/ | Added comment lines indicating 100% success rate for 100 evaluations; no functional or structural changes. |
| tests/llm/fixtures/test_investigate/05_crashpod/test_case.yaml | Added generate_mocks: True and comments noting unreliability unless Prometheus metrics are disabled; reported 100% success rate. |
| tests/llm/fixtures/test_investigate/05_crashpod_LOKI/ (new files) | Added new fixtures for configuration changes and toolset disabling Prometheus metrics; appended success rate comment to test case. |
| tests/llm/fixtures/test_investigate/06_job_failure/test_case.yaml | Added comments about unreliability due to unmocked tool calls and a 70% success rate. |
| tests/llm/fixtures/test_investigate/07_job_syntax_error/test_case.yaml | Added comments about unreliability, tool call failures, and a 99% success rate. |
| tests/llm/fixtures/test_investigate/10_kube_controller_manager_down/test_case.yaml | Added comments about unreliability and 0% success rate. |
| tests/llm/fixtures/test_investigate/11_KubeDeploymentReplicasMismatch/ (many files) | Major overhaul: added new node, deployment, pod, lineage, and event fixtures; removed outdated pod/deployment logs and describe outputs; updated events to reflect different pods and scheduling failures; added toolset config disabling Prometheus metrics; simplified expected output in test case; appended comments about success rate and mock generation. |
| tests/llm/fixtures/test_investigate/12_KubePodCrashLooping/test_case.yaml | Added generate_mocks: False and a success rate comment. |
| tests/llm/fixtures/test_investigate/13_KubePodNotReady/ (new files) | Added fixtures for configuration changes and error scenarios for previous logs; updated test case with mock generation and success rate comment. |
| tests/llm/fixtures/test_investigate/14_Watchdog/test_case.yaml | Added generate_mocks: False and a 97% success rate comment. |
| tests/llm/fixtures/test_investigate/15_tempo/ (many files) | Added/updated fixtures for configuration changes, Tempo traces, deployments, pods, logs, lineage, memory requests, and metrics; removed unused/empty files; added toolset config disabling Prometheus metrics; appended comments about dependencies and success rates; updated test case with mock generation and reliability notes. |
| tests/llm/test_ask_holmes.py | Enhanced test runner: supports repeated test iterations via ITERATIONS env var, improved logging for tool call errors, updated function signature with type annotations, removed conditional test skip, and improved debug output and assertion clarity. |
| tests/llm/test_investigate.py | Modified test runner to support repeated test iterations via ITERATIONS env var and enhanced span logging with rationale output for evaluation. |
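The logging improvements called out for tests/llm/test_ask_holmes.py and tests/llm/test_investigate.py amount to surfacing tool-call errors and the evaluator's rationale next to each result. A rough sketch of that kind of debug output follows; the result and evaluation field names are illustrative assumptions, not the project's actual API.

```python
# Rough sketch of the extra debug output described above; the result and
# evaluation field names here are illustrative assumptions, not the project's API.
def log_evaluation_details(result: dict, evaluation: dict) -> None:
    for tool_call in result.get("tool_calls", []):
        if tool_call.get("error"):
            print(f"Tool call {tool_call.get('name')} failed: {tool_call['error']}")
    print("LLM output:\n" + result.get("output", ""))
    print("Evaluator rationale:\n" + evaluation.get("rationale", ""))
```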
Sequence Diagram(s)
sequenceDiagram
participant TestRunner as Test Runner
participant Env as Environment
participant Fixtures as Test Fixtures
participant LLM as LLM/Investigation Logic
Note over TestRunner: Test iteration logic (new/updated)
Env->>TestRunner: Provide ITERATIONS env var
TestRunner->>TestRunner: Repeat test cases N times (if ITERATIONS set)
loop For each test case
TestRunner->>Fixtures: Load/prepare test case data and mocks
TestRunner->>LLM: Run investigation or ask_holmes
LLM->>TestRunner: Return result, rationale, tool call info
TestRunner->>TestRunner: Log tool call errors and rationale
TestRunner->>TestRunner: Evaluate correctness and assert thresholds
end
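The final step in the diagram above ("Evaluate correctness and assert thresholds") amounts to comparing the evaluator's score with the expected minimum for that test case. A minimal sketch, assuming scores normalized to the 0..1 range:

```python
# Minimal sketch of the "evaluate correctness and assert thresholds" step,
# assuming a correctness score in the 0..1 range and a per-case expected minimum.
def assert_correctness(score: float, expected: float, rationale: str) -> None:
    assert score >= expected, (
        f"Correctness {score:.2f} is below the expected {expected:.2f}: {rationale}"
    )


assert_correctness(score=1.0, expected=1.0, rationale="answer identified the OOMKill")
```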
sequenceDiagram
participant TestCase as Test Case YAML
participant Tester as Test Runner
participant Output as Output Logger
Note over TestCase: New/updated "generate_mocks" and comments
TestCase->>Tester: Provide evaluation config (generate_mocks, comments)
Tester->>Output: Log evaluation rationale and success rate
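One plausible reading of the diagram above is a small config load that hands the generate_mocks flag to the runner; only the field name comes from this PR, the loader itself is an assumption.

```python
# Sketch of reading the generate_mocks flag from a test_case.yaml fixture.
# Only the field name comes from this PR; the loader itself is an assumption.
from pathlib import Path

import yaml


def should_generate_mocks(test_case_dir: Path) -> bool:
    config = yaml.safe_load((test_case_dir / "test_case.yaml").read_text())
    return bool(config.get("generate_mocks", False))
```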
sequenceDiagram
participant Tool as Kubernetes Tool/Fixture
participant Test as Test Runner
Note over Tool: Structured JSON output and error scenarios (new/updated)
Test->>Tool: Invoke tool (e.g., kubectl, fetch_traces)
Tool->>Test: Return structured JSON output or error
Test->>Test: Parse and handle output or error for evaluation
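Several reworked fixtures now return structured JSON instead of plain text, and new fixtures cover error scenarios. Handling both in the harness could look like this sketch; the {"error": ...} envelope is an assumed shape, not a documented format.

```python
# Sketch of handling a tool fixture that returns either structured JSON or an
# error payload; the {"error": ...} envelope is an assumption for illustration.
import json


def parse_tool_output(raw: str):
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"text": raw}  # older fixtures may still be plain text
    if isinstance(data, dict) and "error" in data:
        raise RuntimeError(f"Tool returned an error: {data['error']}")
    return data


print(parse_tool_output('{"pods": [{"name": "payment-api", "status": "Running"}]}'))
```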
Results of HolmesGPT evals
- ask_holmes: 40/47 test cases were successful
- investigate: 15/16 test cases were successful
Legend
- :white_check_mark: the test was successful
- :warning: the test failed but is known to be flaky or known to fail
- :x: the test failed and should be fixed before merging the PR