Add RAG modes and strengthen strict mode
This PR adds RAG mode control via the RAG_MODE environment variable, giving users clear control over how the RAG proxy balances document retrieval with general AI knowledge.
RAG Modes
Two operational modes are provided:
- `strict`: Document-only responses, refuses general knowledge queries
  - Use case: Compliance, legal, private/sensitive data
  - Behavior: Answers ONLY from retrieved documents, says "I don't know" for anything else
- `augment` (default): Freely combines documents with general AI knowledge
  - Use case: General assistant with access to local documents
  - Behavior: Uses documents when relevant, supplements with general knowledge when helpful
Usage
```bash
# Strict mode (documents only)
ramalama serve --env RAG_MODE=strict --rag /path/to/db model

# Augment mode (documents + general knowledge, default)
ramalama serve --env RAG_MODE=augment --rag /path/to/db model
```
Implementation
- Simple if/else logic for mode-specific system prompts (see the sketch after this list)
- Each mode has distinct instructions controlling RAG behavior
- Default mode is `augment` if `RAG_MODE` is not set
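For orientation, here is a minimal sketch of the if/else prompt selection described above, assuming the rag_framework script is Python and reads `RAG_MODE` from the environment. The function name and prompt wording are illustrative only, not the exact strings added in this PR:

```python
import os

def build_system_prompt() -> str:
    """Choose a RAG system prompt based on RAG_MODE (illustrative sketch, not the shipped code)."""
    mode = os.getenv("RAG_MODE", "augment").strip().lower()
    if mode == "strict":
        # Document-only: forbid general knowledge, require "I don't know" otherwise.
        return (
            "Answer ONLY from the retrieved document context below. "
            "If the answer is not explicitly in the documents, reply \"I don't know\". "
            "Do not use outside or general knowledge."
        )
    # Anything else falls through to the default augment behavior.
    return (
        "Use the retrieved document context when it is relevant, and supplement it "
        "with your general knowledge when that helps answer the question."
    )
```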
Testing
E2E tests included for both modes with positive/negative test cases:
- Strict mode: Correctly refuses general knowledge, answers from documents
- Augment mode: Answers both document and general knowledge queries
Tests are designed for models with ≥7B parameters (e.g., deepseek-r1:14b, mistral:7b), which provide reliable retrieval and extraction.
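As a rough illustration of the strict-mode negative case (not the actual E2E test code), the check could look something like the following, assuming the proxy exposes an OpenAI-compatible chat endpoint on localhost:8080; the URL, port, and asserted phrase are assumptions:

```python
import json
import urllib.request

# Assumed endpoint; adjust to wherever `ramalama serve` is listening.
URL = "http://localhost:8080/v1/chat/completions"

def ask(question: str) -> str:
    payload = json.dumps({"messages": [{"role": "user", "content": question}]}).encode()
    req = urllib.request.Request(URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Negative case: in strict mode a general-knowledge question should be refused.
answer = ask("What is the capital of France?")
assert "don't know" in answer.lower(), f"strict mode leaked general knowledge: {answer!r}"
```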
Container Changes Required
This PR requires the RAG container to include the updated rag_framework script. The container image needs to be rebuilt with the changes from this branch.
Reviewer's Guide
Implements configurable RAG operation modes (strict, hybrid, augment) in the rag_framework script and strengthens the strict mode prompt to rely only on retrieved document content while preserving existing augment behavior as the default.
File-Level Changes
| Change | Details | Files |
|---|---|---|
| Add configurable RAG modes (strict, hybrid, augment) to control how document retrieval is balanced with general model knowledge. | | `container-images/scripts/rag_framework` |
| Strengthen strict mode behavior and prompt to enforce document-only answers and reduce hallucinations. | | `container-images/scripts/rag_framework` |
| Add hybrid mode behavior that prefers document answers but can fall back to general knowledge with attribution of the knowledge source. | | `container-images/scripts/rag_framework` |
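The hybrid mode listed in the guide is not spelled out in the PR description above; as a hedged sketch only, its prompt could express something like the following (the wording is an assumption, not text taken from the PR):

```python
# Illustrative hybrid-mode prompt; the wording is assumed, not the PR's actual text.
HYBRID_PROMPT = (
    "Prefer answering from the retrieved document context. If the documents do not "
    "cover the question, you may fall back to general knowledge, but state explicitly "
    "which parts of the answer come from the documents and which from general knowledge."
)
```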
Summary of Changes
Hello @csoriano2718, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances the RAG (Retrieval Augmented Generation) framework by introducing configurable operational modes. These modes allow users to precisely control the balance between relying solely on retrieved documents and leveraging the AI's general knowledge, addressing previous limitations where users had to choose between document-only or general knowledge responses. The update also fortifies the "strict" mode to prevent AI hallucinations and ensure responses are strictly grounded in provided data.
Highlights
- Introduction of RAG Modes: Three new operational modes (`strict`, `hybrid`, `augment`) have been added to provide granular control over how the system balances document retrieval with general AI knowledge.
- Strengthened Strict Mode: The `strict` RAG mode has been significantly enhanced with a more robust prompt, explicitly forbidding the use of general knowledge and requiring an "I don't know" response if information is not explicitly in the provided documents.
- Dynamic System Prompt Generation: The system now dynamically generates the system prompt based on the chosen `RAG_MODE` environment variable, tailoring the AI's behavior to the desired operational style.
I really like the idea!
I have one comment: in the near future, this RAG pipeline could be added as an MCP server tool, where the model decides whether it needs to use it or not! Then we wouldn't need the strict vs. augment functionality, since we'd be following an agentic workflow at that point.
But until then, this should work!
/gemini review
@bmahabirbu ah that's a great idea, doing RAG as an MCP server.
I wonder: should we hold off on this PR to avoid exposing new features/APIs in Ramalama that it later plans to recommend MCP servers for instead? Or what's the approach to breaking past functionality in order to do better in the future? I didn't intend to implement something that will slow down Ramalama development.
/gemini review
/gemini review
/gemini review
ok, I think Gemini and Cursor have reached an agreement now :-)
If we are going to add this feature, it needs to be documented in a man page.
Might also want to have a setting in ramalama.conf.
Lint is failing, and you should squash and sign your commits.