Implement semantic caching for code parsing
Summary
- Implemented content-based hashing to optimize LLM usage by reusing docstrings for identical code
- Added semantic caching system that works across branches and repositories
- Improved docstring generation efficiency with cache hit rate metrics
Implementation Details
- Computes SHA-256 hashes based on node name + code content during parsing
- Stores hashes in Neo4j graph with proper indices for fast retrieval
- Creates content hash-based lookups before LLM inference
- Reuses existing docstrings, tags, and embeddings for identical code
- Preserves hashes when duplicating graphs between repositories
- Adds detailed logging and metrics for cache hit rate performance
Performance Improvement
- Achieves 65-70% cache hit rate when parsing similar branches
- Significantly reduces LLM API calls and processing time
- Uses existing Neo4j infrastructure without additional dependencies
Testing
Tested on multiple branches of the same repository with minor code changes. Cache hit rate increased with each additional branch parsed as expected.
🤖 Generated with Claude Code
Summary by CodeRabbit
-
Documentation
- Introduced a new set of development guidelines to standardize best practices across the project.
-
New Features
- Enhanced graph processing with semantic caching for improved performance and data consistency.
- Updated node duplication and inference logic to leverage unique content identifiers, reducing redundant computations and streamlining data handling.
Walkthrough
This pull request introduces a new guidelines document, CLAUDE.md, that details development practices and conventions for the Momentum Server project. In addition, several modules now incorporate a content hashing mechanism for nodes. The CodeGraphService generates a SHA-256 hash for node data, and the ParsingService propagates this hash during graph duplication. Updates to the knowledge graph include new attributes in the schema and enhancements in the caching flow of the inference service, which now leverages the content hash to reduce redundant language model calls.
Changes
| File(s) | Change Summary |
|---|---|
CLAUDE.md |
New guidelines document outlining setup, build commands, coding style, error handling, and dependency requirements for the project. |
app/modules/parsing/graph_construction/code_graph_service.py app/modules/parsing/graph_construction/parsing_service.py |
Introduced content hashing: added generate_content_hash static method and updated graph creation/storage and duplication queries to include a content_hash field. |
app/modules/parsing/knowledge_graph/inference_schema.py app/modules/parsing/knowledge_graph/inference_service.py |
Expanded the DocstringRequest schema with content_hash and name attributes and enhanced caching in inference functions by adding a new find_cached_nodes method and updating generate_docstrings and batch_nodes to manage semantic caching. |
Sequence Diagram(s)
sequenceDiagram
participant Client as API Client
participant IS as InferenceService
participant DB as Database/Cache
participant LLM as Language Model
Client->>IS: Request generate_docstrings(repo_id)
IS->>DB: Call find_cached_nodes(content_hashes)
alt Cache Hit
DB-->>IS: Return cached node data
IS->>Client: Return cached docstrings
else Cache Miss
IS->>DB: Query nodes missing content_hash
Note right of IS: Compute SHA-256 for each node
IS->>LLM: Request docstring generation
LLM-->>IS: Return generated docstrings
IS->>DB: Update cache with new content_hashes
IS->>Client: Return new docstrings
end
sequenceDiagram
participant Node as Node Data
participant CS as CodeGraphService
participant DB as Graph Database
Node->>CS: Provide node name and text
CS->>CS: Normalize inputs & compute SHA-256 hash
alt Text is empty
CS-->>Node: Return None as content_hash
else
CS-->>Node: Return computed content_hash
end
CS->>DB: Store node with content_hash
Possibly related PRs
- potpie-ai/potpie#210: Introduces similar modifications in the graph creation process, especially around using a content hash for improved node handling in the
CodeGraphService.
Poem
I'm a bunny hopping through fields of code,
Where hashes hide like treasures in a burrow mode.
New guidelines light our path so clear,
Caching magic now brings our purpose near.
With each bound and byte, I celebrate our code's delight! 🐇🥕
✨ Finishing Touches
- [ ] 📝 Generate Docstrings
Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?
🪧 Tips
Chat
There are 3 ways to chat with CodeRabbit:
- Review comments: Directly reply to a review comment made by CodeRabbit. Example:
I pushed a fix in commit <commit_id>, please review it.Generate unit testing code for this file.Open a follow-up GitHub issue for this discussion.
- Files and specific lines of code (under the "Files changed" tab): Tag
@coderabbitaiin a new review comment at the desired location with your query. Examples:@coderabbitai generate unit testing code for this file.@coderabbitai modularize this function.
- PR comments: Tag
@coderabbitaiin a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:@coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.@coderabbitai read src/utils.ts and generate unit testing code.@coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.@coderabbitai help me debug CodeRabbit configuration file.
Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.
CodeRabbit Commands (Invoked using PR comments)
@coderabbitai pauseto pause the reviews on a PR.@coderabbitai resumeto resume the paused reviews.@coderabbitai reviewto trigger an incremental review. This is useful when automatic reviews are disabled for the repository.@coderabbitai full reviewto do a full review from scratch and review all the files again.@coderabbitai summaryto regenerate the summary of the PR.@coderabbitai generate docstringsto generate docstrings for this PR.@coderabbitai resolveresolve all the CodeRabbit review comments.@coderabbitai configurationto show the current CodeRabbit configuration for the repository.@coderabbitai helpto get help.
Other keywords and placeholders
- Add
@coderabbitai ignoreanywhere in the PR description to prevent this PR from being reviewed. - Add
@coderabbitai summaryto generate the high-level summary at a specific location in the PR description. - Add
@coderabbitaianywhere in the PR title to generate the title automatically.
CodeRabbit Configuration File (.coderabbit.yaml)
- You can programmatically configure CodeRabbit by adding a
.coderabbit.yamlfile to the root of your repository. - Please see the configuration documentation for more information.
- If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation:
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
Documentation and Community
- Visit our Documentation for detailed information on how to use CodeRabbit.
- Join our Discord Community to get help, request features, and share feedback.
- Follow us on X/Twitter for updates and announcements.
Quality Gate passed
Issues
4 New issues
0 Accepted issues
Measures
0 Security Hotspots
0.0% Coverage on New Code
0.0% Duplication on New Code