
Investigate evals

Open nherment opened this issue 7 months ago • 2 comments

Summary by CodeRabbit

  • New Features

    • Added numerous new test fixture and configuration files to enhance test coverage for Kubernetes, tracing, and monitoring scenarios.
    • Introduced options to enable or disable specific toolsets in test configurations.
  • Bug Fixes

    • Updated test fixtures and expected outputs to reflect changes in Kubernetes cluster states, error messages, and evaluation success rates.
  • Documentation

    • Enhanced test case files with comments reporting evaluation success rates and reliability notes.
  • Chores

    • Added support for repeating test cases based on an environment variable to improve test iteration flexibility.
    • Improved test logging by including detailed output and error information for better traceability.

nherment · May 29 '25 10:05

Walkthrough

This update introduces new and revised test fixtures, configuration files, and test logic enhancements across several test scenarios related to Kubernetes investigations and distributed tracing. It adds new fixture files, updates existing ones for consistency and accuracy, removes obsolete files, and modifies test runners to support repeated test iterations via an environment variable. Logging and output details are also improved within the test functions.

Changes

| Files/Groups | Change Summary |
|---|---|
| tests/llm/fixtures/test_investigate/01_oom_kill/test_case.yaml, .../02_crashloop_backoff/ | Added comment lines indicating a 100% success rate for 100 evaluations; no functional changes. |
| tests/llm/fixtures/test_investigate/03_cpu_throttling/ (multiple new/updated files) | Added/updated numerous Kubernetes tool output fixtures (describe, get, logs, events, memory requests, top nodes/pods, etc.), changed output formats to structured JSON, updated runtime environment data, added error scenarios, and introduced new config files such as toolsets.yaml. |
| tests/llm/fixtures/test_investigate/04_image_pull_backoff/test_case.yaml, .../08_memory_pressure/, .../09_high_latency/ | Added comment lines indicating a 100% success rate for 100 evaluations; no functional or structural changes. |
| tests/llm/fixtures/test_investigate/05_crashpod/test_case.yaml | Added generate_mocks: True and comments noting unreliability unless Prometheus metrics are disabled; reported a 100% success rate. |
| tests/llm/fixtures/test_investigate/05_crashpod_LOKI/ (new files) | Added new fixtures for configuration changes and a toolset config disabling Prometheus metrics; appended a success rate comment to the test case. |
| tests/llm/fixtures/test_investigate/06_job_failure/test_case.yaml | Added comments about unreliability due to unmocked tool calls and a 70% success rate. |
| tests/llm/fixtures/test_investigate/07_job_syntax_error/test_case.yaml | Added comments about unreliability, tool call failures, and a 99% success rate. |
| tests/llm/fixtures/test_investigate/10_kube_controller_manager_down/test_case.yaml | Added comments about unreliability and a 0% success rate. |
| tests/llm/fixtures/test_investigate/11_KubeDeploymentReplicasMismatch/ (many files) | Major overhaul: added new node, deployment, pod, lineage, and event fixtures; removed outdated pod/deployment logs and describe outputs; updated events to reflect different pods and scheduling failures; added a toolset config disabling Prometheus metrics; simplified the expected output in the test case; appended comments about success rate and mock generation. |
| tests/llm/fixtures/test_investigate/12_KubePodCrashLooping/test_case.yaml | Added generate_mocks: False and a success rate comment. |
| tests/llm/fixtures/test_investigate/13_KubePodNotReady/ (new files) | Added fixtures for configuration changes and error scenarios for previous logs; updated the test case with mock generation and a success rate comment. |
| tests/llm/fixtures/test_investigate/14_Watchdog/test_case.yaml | Added generate_mocks: False and a 97% success rate comment. |
| tests/llm/fixtures/test_investigate/15_tempo/ (many files) | Added/updated fixtures for configuration changes, Tempo traces, deployments, pods, logs, lineage, memory requests, and metrics; removed unused/empty files; added a toolset config disabling Prometheus metrics; appended comments about dependencies and success rates; updated the test case with mock generation and reliability notes. |
| tests/llm/test_ask_holmes.py | Enhanced test runner: supports repeated test iterations via the ITERATIONS env var (sketched just below this table), improved logging for tool call errors, updated the function signature with type annotations, removed a conditional test skip, and improved debug output and assertion clarity. |
| tests/llm/test_investigate.py | Modified the test runner to support repeated test iterations via the ITERATIONS env var and enhanced span logging with rationale output for evaluation. |
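
The ITERATIONS-based repetition added to test_ask_holmes.py and test_investigate.py could look roughly like the sketch below. This is a minimal illustration, not the repository's actual code: the function name repeat_test_cases and the example case names are hypothetical, and the real runners presumably feed the repeated list into pytest parametrization so each iteration is reported as its own test.

```python
import os
from typing import List, TypeVar

T = TypeVar("T")


def repeat_test_cases(test_cases: List[T]) -> List[T]:
    """Repeat every test case N times when the ITERATIONS env var is set.

    With ITERATIONS unset (or set to "1"), the list is returned unchanged,
    so the default behaviour of the suite stays the same.
    """
    iterations = int(os.environ.get("ITERATIONS", "1"))
    return [case for case in test_cases for _ in range(iterations)]


if __name__ == "__main__":
    # Example: ITERATIONS=3 python repeat_demo.py  ->  each case listed 3 times
    print(repeat_test_cases(["01_oom_kill", "02_crashloop_backoff"]))
```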

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant TestRunner as Test Runner
    participant Env as Environment
    participant Fixtures as Test Fixtures
    participant LLM as LLM/Investigation Logic

    Note over TestRunner: Test iteration logic (new/updated)
    Env->>TestRunner: Provide ITERATIONS env var
    TestRunner->>TestRunner: Repeat test cases N times (if ITERATIONS set)
    loop For each test case
        TestRunner->>Fixtures: Load/prepare test case data and mocks
        TestRunner->>LLM: Run investigation or ask_holmes
        LLM->>TestRunner: Return result, rationale, tool call info
        TestRunner->>TestRunner: Log tool call errors and rationale
        TestRunner->>TestRunner: Evaluate correctness and assert thresholds
    end
```

```mermaid
sequenceDiagram
    participant TestCase as Test Case YAML
    participant Tester as Test Runner
    participant Output as Output Logger

    Note over TestCase: New/updated "generate_mocks" and comments
    TestCase->>Tester: Provide evaluation config (generate_mocks, comments)
    Tester->>Output: Log evaluation rationale and success rate
```

```mermaid
sequenceDiagram
    participant Tool as Kubernetes Tool/Fixture
    participant Test as Test Runner

    Note over Tool: Structured JSON output and error scenarios (new/updated)
    Test->>Tool: Invoke tool (e.g., kubectl, fetch_traces)
    Tool->>Test: Return structured JSON output or error
    Test->>Test: Parse and handle output or error for evaluation
```
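
For readers unfamiliar with the fixture layout, the sketches below show what a test_case.yaml carrying the new generate_mocks flag and success-rate comment, and a toolsets.yaml that disables Prometheus metrics, might look like. These are hypothetical examples: the comment wording, the expected_output field, and the exact toolset identifier are illustrative and may differ from the files in this PR.

```yaml
# test_case.yaml (hypothetical sketch)
# Success rate: 100% over 100 evaluations; unreliable unless Prometheus metrics are disabled.
generate_mocks: True
expected_output:
  - The pod was OOMKilled because its memory limit was exceeded.
```

```yaml
# toolsets.yaml (hypothetical sketch; the real toolset identifier may differ)
toolsets:
  prometheus/metrics:
    enabled: false
```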

coderabbitai[bot] · May 29 '25 10:05

Results of HolmesGPT evals

| Test suite | Test case | Status |
|---|---|---|
| ask_holmes | 01_how_many_pods | :warning: |
| ask_holmes | 02_what_is_wrong_with_pod | :white_check_mark: |
| ask_holmes | 02_what_is_wrong_with_pod_LOKI | :white_check_mark: |
| ask_holmes | 03_what_is_the_command_to_port_forward | :white_check_mark: |
| ask_holmes | 04_related_k8s_events | :white_check_mark: |
| ask_holmes | 05_image_version | :white_check_mark: |
| ask_holmes | 06_explain_issue | :white_check_mark: |
| ask_holmes | 07_high_latency | :white_check_mark: |
| ask_holmes | 07_high_latency_LOKI | :white_check_mark: |
| ask_holmes | 08_sock_shop_frontend | :white_check_mark: |
| ask_holmes | 09_crashpod | :white_check_mark: |
| ask_holmes | 10_image_pull_backoff | :white_check_mark: |
| ask_holmes | 11_init_containers | :white_check_mark: |
| ask_holmes | 12_job_crashing | :white_check_mark: |
| ask_holmes | 12_job_crashing_CORALOGIX | :white_check_mark: |
| ask_holmes | 12_job_crashing_LOKI | :white_check_mark: |
| ask_holmes | 13_pending_node_selector | :white_check_mark: |
| ask_holmes | 14_pending_resources | :white_check_mark: |
| ask_holmes | 15_failed_readiness_probe | :white_check_mark: |
| ask_holmes | 16_failed_no_toolset_found | :white_check_mark: |
| ask_holmes | 17_oom_kill | :white_check_mark: |
| ask_holmes | 18_crash_looping_v2 | :white_check_mark: |
| ask_holmes | 19_detect_missing_app_details | :white_check_mark: |
| ask_holmes | 20_long_log_file_search | :white_check_mark: |
| ask_holmes | 20_long_log_file_search_LOKI | :white_check_mark: |
| ask_holmes | 21_job_fail_curl_no_svc_account | :warning: |
| ask_holmes | 22_high_latency_dbi_down | :warning: |
| ask_holmes | 23_app_error_in_current_logs | :white_check_mark: |
| ask_holmes | 23_app_error_in_current_logs_LOKI | :white_check_mark: |
| ask_holmes | 24_misconfigured_pvc | :white_check_mark: |
| ask_holmes | 25_misconfigured_ingress_class | :warning: |
| ask_holmes | 26_multi_container_logs | :warning: |
| ask_holmes | 27_permissions_error_no_helm_tools | :warning: |
| ask_holmes | 28_permissions_error_helm_tools_enabled | :white_check_mark: |
| ask_holmes | 29_events_from_alert_manager | :white_check_mark: |
| ask_holmes | 30_basic_promql_graph_cluster_memory | :white_check_mark: |
| ask_holmes | 31_basic_promql_graph_pod_memory | :white_check_mark: |
| ask_holmes | 32_basic_promql_graph_pod_cpu | :white_check_mark: |
| ask_holmes | 33_http_latency_graph | :white_check_mark: |
| ask_holmes | 34_memory_graph | :white_check_mark: |
| ask_holmes | 35_tempo | :white_check_mark: |
| ask_holmes | 36_argocd_find_resource | :white_check_mark: |
| ask_holmes | 37_argocd_wrong_namespace | :warning: |
| ask_holmes | 38_rabbitmq_split_head | :white_check_mark: |
| ask_holmes | 39_failed_toolset | :white_check_mark: |
| ask_holmes | 40_disabled_toolset | :white_check_mark: |
| ask_holmes | 41_setup_argo | :white_check_mark: |
| investigate | 01_oom_kill | :white_check_mark: |
| investigate | 02_crashloop_backoff | :white_check_mark: |
| investigate | 03_cpu_throttling | :white_check_mark: |
| investigate | 04_image_pull_backoff | :white_check_mark: |
| investigate | 05_crashpod | :white_check_mark: |
| investigate | 05_crashpod_LOKI | :white_check_mark: |
| investigate | 06_job_failure | :white_check_mark: |
| investigate | 07_job_syntax_error | :white_check_mark: |
| investigate | 08_memory_pressure | :white_check_mark: |
| investigate | 09_high_latency | :white_check_mark: |
| investigate | 10_kube_controller_manager_down | :warning: |
| investigate | 11_KubeDeploymentReplicasMismatch | :white_check_mark: |
| investigate | 12_KubePodCrashLooping | :white_check_mark: |
| investigate | 13_KubePodNotReady | :white_check_mark: |
| investigate | 14_Watchdog | :white_check_mark: |
| investigate | 15_tempo | :white_check_mark: |

Legend

  • :white_check_mark: the test was successful
  • :warning: the test failed but is known to be flaky or known to fail
  • :x: the test failed and should be fixed before merging the PR

github-actions[bot] · Jun 02 '25 05:06