NeMo-Guardrails
feat: support langchain v1
PR Description
Summary
Adds full LangChain 1.x support while maintaining backward compatibility with 0.x versions. This PR migrates all imports to use canonical langchain-core paths, removes deprecated features (Chain support, LLMParams, SummarizeDocument), and provides documentation updates.
BREAKING CHANGES:
- Chain support removed from action dispatcher (use Runnable instead)
- SummarizeDocument built-in action removed (implement custom version)
- langchain-nvidia-ai-endpoints removed from optional dependencies
Key Changes
Dependency Updates
- Extended `langchain` constraint from `<0.4.0` to `<2.0.0`
- Extended `langchain-core` constraint from `<0.4.0` to `<2.0.0`
- Extended `langchain-community` constraint from `<0.4.0` to `<2.0.0`
Import Standardization (28 files)
Migrated all imports to canonical paths:
```python
# before (deprecated)
from langchain.chat_models.base import BaseChatModel
from langchain_core.language_models.llms import BaseLLM

# after (correct)
from langchain_core.language_models import BaseChatModel, BaseLLM
```
Removed Deprecated Features
- Chain support : Removed from action_dispatcher.py (use Runnable instead)
- LLMParams : Removed context manager (use native LLM params)
- SummarizeDocument : Removed built-in action (implement custom)
- langchain-nvidia-ai-endpoints : Removed from optional deps (install manually until it supports v1)
Documentation Updates
- Completely rewrote custom LLM provider guide
- Added BaseChatModel documentation (was missing!)
- Added migration guide for Chain → Runnable
- [ ] Update CHANGELOG.md with breaking changes
Backward Compatibility
Examples include fallback pattern for LangChain 1.x users:
```python
try:
    from langchain.chains import RetrievalQA
except ImportError:
    # LangChain 1.x moved legacy chains to the langchain-classic package
    from langchain_classic.chains import RetrievalQA
```
Codecov Report
:x: Patch coverage is 64.00000% with 18 lines in your changes missing coverage. Please review.
@greptileai
Summary of ALL LangChain Imports in Codebase:

- `langchain_core` imports (all correct):

  ```python
  from langchain_core.language_models import BaseLLM, BaseChatModel, BaseLanguageModel, LLM
  from langchain_core.callbacks.manager import ...
  from langchain_core.messages import ...
  from langchain_core.outputs import ...
  from langchain_core.prompts import ...
  from langchain_core.runnables import ...
  from langchain_core.tools import ...
  ```

  All are canonical paths from langchain-core.

- `langchain_community` imports (all correct):

  ```python
  from langchain_community import llms
  from langchain_community.cache import SQLiteCache
  from langchain_community.chat_models import _module_lookup
  from langchain_community.llms import HuggingFacePipeline
  ```

  All are correct community package imports.

- `langchain` (main package) — correct:

  ```python
  from langchain.chat_models import init_chat_model
  ```

  Confirmed: `init_chat_model` STAYS in `langchain.chat_models` in v1.0.
  - It's part of the "streamlined core namespace"
  - Provides essential model initialization

- `langchain_nvidia_ai_endpoints` — correct:

  ```python
  from langchain_nvidia_ai_endpoints import ChatNVIDIA
  ```

  Separate integration package, correct.
All import paths are correct and v1.0 compatible!
No additional changes needed. The codebase is:
- Using canonical langchain-core paths
- Using correct langchain_community paths
- Using correct init_chat_model from main package (stays in v1.0)
- No deprecated proxy imports remaining
- Fully compatible with langchain 0.2.x, 0.3.x, and 1.x
The only import from the main langchain package (init_chat_model) is intentionally kept there in v1.0 as part of the core functionality. Everything else correctly imports from langchain-core or langchain-community.
Greptile Overview
Greptile Summary
This PR successfully migrates NeMo Guardrails to support LangChain v1.x while maintaining backward compatibility with v0.x. The migration standardizes imports to canonical langchain_core paths, removes deprecated Chain-based patterns in favor of Runnables and direct LLM invocation, and updates 28 files across the codebase.
Key Changes:
- Removed Chain support from action dispatcher (replaced with Runnable pattern)
- Migrated from `LLMChain` to `llm.bind()` for runtime parameter configuration
- Standardized imports: `from langchain_core.language_models import BaseLLM, BaseChatModel`
- Updated model initialization to use LangChain v1's `init_chat_model()` canonical API
- Added langchain-classic fallback pattern in examples for v1.x users
- Removed `langchain-nvidia-ai-endpoints` from optional dependencies (users must install manually until v1 support)
Issues Found:
- Version constraints allow untested LangChain 2.x (`<2.0.0` should be `<1.0.0`)
- Test file `custom_chat_model.py` incorrectly implements `_call()`/`_acall()` instead of the required `_generate()` methods for BaseChatModel
Confidence Score: 3/5
- This PR requires attention before merging due to overly permissive version constraints and a broken test implementation
- Score reflects well-executed migration patterns (Chain to Runnable, canonical imports) offset by two critical issues: (1) version constraints allowing untested LangChain 2.x releases that will introduce breaking changes, and (2) a test file implementing the wrong interface for BaseChatModel that will fail at runtime
- Pay close attention to `pyproject.toml` (version constraints) and `tests/test_configs/with_custom_chat_model/custom_chat_model.py` (incorrect interface implementation)
Important Files Changed
File Analysis
| Filename | Score | Overview |
|---|---|---|
| pyproject.toml | 3/5 | Updated LangChain dependencies from <0.4.0 to <2.0.0, which is overly permissive and may introduce untested breaking changes from v2.x |
| nemoguardrails/actions/action_dispatcher.py | 5/5 | Cleanly removed deprecated Chain support, retained Runnable support for LangChain v1 compatibility |
| nemoguardrails/llm/models/langchain_initializer.py | 5/5 | Well-structured model initialization with proper fallbacks and error handling using canonical LangChain v1 imports |
| nemoguardrails/library/hallucination/actions.py | 4/5 | Migrated from LLMChain to direct llm.bind() pattern, correctly uses model_fields for Pydantic v2 and passes callbacks.handlers |
| tests/test_configs/with_custom_chat_model/custom_chat_model.py | 2/5 | Updated imports to canonical paths but implements _call/_acall methods instead of required _generate/_agenerate for BaseChatModel |
Sequence Diagram
```mermaid
sequenceDiagram
    participant User
    participant LLMRails
    participant ActionDispatcher
    participant LangChainInit
    participant LLM as LLM/ChatModel
    participant Action as Custom Actions
    Note over User,Action: LangChain v1 Migration Flow
    User->>LLMRails: Initialize with config
    LLMRails->>LangChainInit: init_langchain_model(provider, model, mode)
    alt Chat Model Initialization
        LangChainInit->>LangChainInit: Try init_chat_model() (v1 canonical)
        LangChainInit->>LLM: Returns BaseChatModel instance
    else Text Completion Model
        LangChainInit->>LangChainInit: Try text completion providers
        LangChainInit->>LLM: Returns BaseLLM instance
    end
    LangChainInit-->>LLMRails: Initialized LLM instance
    Note over LLMRails,ActionDispatcher: Removed: Chain support
    User->>LLMRails: Execute action
    LLMRails->>ActionDispatcher: execute_action(name, params)
    alt Runnable Action (v1 pattern)
        ActionDispatcher->>Action: Check isinstance(fn, Runnable)
        ActionDispatcher->>Action: await runnable.ainvoke(params)
        Action-->>ActionDispatcher: Result
    else Function/Method Action
        ActionDispatcher->>Action: fn(**params)
        Action->>LLM: llm.bind(temperature=X, n=Y)
        Note over Action,LLM: Use bind() instead of Chain.with_config()
        Action->>LLM: await llm.agenerate(prompts, callbacks=handlers)
        LLM-->>Action: LLMResult with generations
        Action-->>ActionDispatcher: Result
    end
    ActionDispatcher-->>LLMRails: Result
    LLMRails-->>User: Response
```