[Major BUG] - RemoteGraph in LangGraph 1.0: Cannot use `context` and `config` together - Forces choice between checkpointing OR middleware
Checked other resources
- [x] This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
Example Code
# Example 1: passing both context and config together (raises a 400 error)
import uuid

from langgraph.pregel.remote import RemoteGraph

url = "http://localhost:8123"
graph_name = "agent"
remote_graph = RemoteGraph(graph_name, url=url)

context = {"user_id": "123", "thread_id": str(uuid.uuid4())}
config = {"configurable": context}

# This raises httpx.HTTPStatusError: 400 Bad Request
for chunk in remote_graph.stream(
    {"messages": [{"role": "user", "content": "hello"}]},
    context=context,
    config=config,
    stream_mode="values",
):
    print(chunk)
# Example 2: passing context alone (no error, but no thread memory)
import uuid

from langgraph.pregel.remote import RemoteGraph

url = "http://localhost:8123"
graph_name = "agent"
thread_id = str(uuid.uuid4())
remote_graph = RemoteGraph(graph_name, url=url)

context = {"user_id": "123", "thread_id": thread_id}

# First message
for chunk in remote_graph.stream(
    {"messages": [{"role": "user", "content": "My name is John"}]},
    context=context,
    stream_mode="values",
):
    pass

# Second message - should remember the name
for chunk in remote_graph.stream(
    {"messages": [{"role": "user", "content": "What is my name?"}]},
    context=context,
    stream_mode="values",
):
    last_message = chunk.get("messages", [])[-1]
    print(last_message)  # Agent doesn't remember "John"
Error Message and Stack Trace (if applicable)
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'http://localhost:8123/threads/.../runs/stream'
Cannot specify both configurable and context. Prefer setting context alone.
Context was introduced in LangGraph 0.6.0 and is the long term planned replacement for configurable.
Description
Environment
langgraph==1.0.1
langgraph-api==0.4.7
langgraph-checkpoint==3.0.0
langgraph-checkpoint-sqlite==3.0.0
langgraph-cli==0.4.0
langgraph-prebuilt==1.0.1
langgraph-runtime-inmem==0.9.0
langgraph-sdk==0.2.4
- Python Version: 3.12
- Deployment: Local Docker containers (LangGraph API + PostgreSQL + Redis)
- API URL: http://localhost:8123
Problem Statement
When using RemoteGraph in LangGraph 1.0, there is a critical architectural inconsistency that forces developers to choose between:
- Using `context` (recommended in LangGraph 1.0 for middleware/Runtime) → checkpointer/thread memory doesn't work
- Using `config` with `configurable` (required for checkpointing) → incompatible with the LangGraph 1.0 middleware architecture
The API explicitly prevents using both parameters together with the error:
Cannot specify both configurable and context. Prefer setting context alone.
Context was introduced in LangGraph 0.6.0 and is the long term planned replacement for configurable.
However, when using only context, the thread_id is not recognized by the checkpointer, resulting in no conversation memory persistence across requests.
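This is straightforward to check from the client side. Below is a minimal verification sketch (an assumption-laden illustration, not output from the report: it reuses `remote_graph` and `thread_id` from Example 2 above, and relies on `get_state` accepting the `configurable` form):

# Verification sketch: after streaming with context only, ask the
# server for the thread's checkpointed state.
state = remote_graph.get_state(
    config={"configurable": {"thread_id": thread_id}}
)
# If the checkpointer never saw thread_id from context, this comes
# back without the conversation history.
print(state.values)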
Problem: Falling back to `config` alone as a workaround makes it impossible to use LangGraph 1.0 middleware patterns that expect `context` from `Runtime`.
Expected vs Actual Behavior
Expected Behavior
When using `context` with `thread_id`, the RemoteGraph should:
- Accept the `context` parameter
- Automatically map `context["thread_id"]` to the checkpointer
- Make context available to middleware via `Runtime.context`
- Support the unified context-based architecture promoted in LangGraph 1.0 (see the sketch after this list)
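For concreteness, a minimal sketch of that desired behavior (hypothetical: this is what a fix should enable, not how the current release behaves; it reuses `remote_graph` and `thread_id` from the examples above):

# Hypothetical: one call that serves both halves of the architecture.
context = {"user_id": "123", "thread_id": thread_id}

for chunk in remote_graph.stream(
    {"messages": [{"role": "user", "content": "My name is John"}]},
    context=context,  # middleware would read this via Runtime.context
    stream_mode="values",
):
    pass

# Desired: the turn above is checkpointed under context["thread_id"],
# so a follow-up on the same thread remembers "John".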
Actual Behavior
- Using `context` alone: thread memory doesn't persist (checkpointer doesn't see `thread_id`)
- Using `config` alone: memory works but breaks middleware patterns
- Using both: API rejects the request with a 400 error
Broader Architecture Impact
This issue creates a fundamental inconsistency across the LangGraph 1.0 ecosystem:
1. Middleware Expects Context
from typing import Awaitable, Callable

from langchain.agents.middleware import ModelRequest, ModelResponse, wrap_model_call

@wrap_model_call
async def dynamic_model_selection(
    request: ModelRequest,
    handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
) -> ModelResponse:
    # Middleware accesses context from Runtime
    context = getattr(request.runtime, "context", {}) or {}
    model_context = context.get("model_config") or {}
    # Configure model based on context; create_chat_model is the
    # reporter's own helper that builds a chat model from kwargs
    model = create_chat_model(**model_context)
    request.model = model
    return await handler(request)
2. Checkpointing Requires Configurable
# get_state() requires config with configurable
state = remote_graph.get_state(config={"configurable": {"thread_id": thread_id}})
3. Documentation Promotes Context-Based Patterns
LangGraph 1.0 documentation and examples extensively use `context` for the following (a caller-side sketch appears after this list):
- Model configuration
- User preferences
- Runtime state
- Middleware communication
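For example, here is a caller-side sketch of the context-based pattern, feeding the `dynamic_model_selection` middleware from section 1 (the `model_config` values are placeholders, and the middleware is assumed to be attached to the deployed graph; `remote_graph` is reused from the examples above):

# Context carries model configuration that the middleware reads via
# Runtime.context; no "configurable" block is involved.
context = {
    "user_id": "123",
    "model_config": {"model": "gpt-4o-mini", "temperature": 0},  # placeholders
}

for chunk in remote_graph.stream(
    {"messages": [{"role": "user", "content": "hello"}]},
    context=context,
    stream_mode="values",
):
    print(chunk)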
4. RemoteGraph Blocks Both Simultaneously
The API prevents using both parameters, forcing an impossible choice:
- Context only → Middleware works, checkpointing breaks
- Config only → Checkpointing works, middleware breaks
System Info
System Information
OS: Darwin
OS Version: Darwin Kernel Version 23.6.0: Wed May 14 13:52:22 PDT 2025; root:xnu-10063.141.1.705.2~2/RELEASE_ARM64_T6000
Python Version: 3.12.7 (main, Oct 16 2024, 07:12:08) [Clang 18.1.8 ]
Package Information
langchain_core: 1.0.0
langchain: 1.0.2
langchain_community: 0.4
langsmith: 0.4.21
langchain_classic: 1.0.0
langchain_mcp_adapters: 0.1.11
langchain_openai: 1.0.1
langchain_text_splitters: 1.0.0
langgraph_api: 0.4.7
langgraph_cli: 0.4.0
langgraph_runtime_inmem: 0.9.0
langgraph_sdk: 0.2.4
Optional packages not installed
langserve
Other Dependencies
aiohttp: 3.12.15
async-timeout: Installed. No version info available.
blockbuster: 1.5.25
click: 8.2.1
cloudpickle: 3.1.1
cryptography: 44.0.3
dataclasses-json: 0.6.7
httpx: 0.28.1
httpx-sse: 0.4.1
jsonpatch: 1.33
jsonschema-rs: 0.29.1
langchain-anthropic: Installed. No version info available.
langchain-aws: Installed. No version info available.
langchain-deepseek: Installed. No version info available.
langchain-fireworks: Installed. No version info available.
langchain-google-genai: Installed. No version info available.
langchain-google-vertexai: Installed. No version info available.
langchain-groq: Installed. No version info available.
langchain-huggingface: Installed. No version info available.
langchain-mistralai: Installed. No version info available.
langchain-ollama: Installed. No version info available.
langchain-perplexity: Installed. No version info available.
langchain-together: Installed. No version info available.
langchain-xai: Installed. No version info available.
langgraph: 1.0.1
langgraph-checkpoint: 3.0.0
langsmith-pyo3: Installed. No version info available.
mcp: 1.13.1
numpy: 2.3.2
openai: 1.109.1
openai-agents: Installed. No version info available.
opentelemetry-api: 1.38.0
opentelemetry-exporter-otlp-proto-http: Installed. No version info available.
opentelemetry-sdk: 1.38.0
orjson: 3.11.3
packaging: 25.0
pydantic: 2.11.7
pydantic-settings: 2.10.1
pyjwt: 2.10.1
pytest: 8.4.2
python-dotenv: 1.1.1
PyYAML: 6.0.2
pyyaml: 6.0.2
requests: 2.32.5
requests-toolbelt: 1.0.0
rich: 14.2.0
SQLAlchemy: 2.0.43
sqlalchemy: 2.0.43
sse-starlette: 2.1.3
starlette: 0.47.3
structlog: 25.4.0
tenacity: 9.1.2
tiktoken: 0.12.0
truststore: 0.10.4
typing-extensions: 4.15.0
uvicorn: 0.35.0
vcrpy: Installed. No version info available.
watchfiles: 1.1.0
zstandard: 0.24.0
I agree this is an issue. Our use-case is to set max_concurrency to avoid rate limits, and at the same time we need some contextual values. Currently it is not possible to do both when using LangGraph Server.
Hey folks - thanks for raising this and sorry for the pain here. Working on a fix where you would end up passing thread_id in via config as a top-level parameter, something like this:
config = {
    ...
    "thread_id": "<uuid>",
    ...
}
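For concreteness, a sketch of how a full call could look under that proposal (hypothetical until the fix lands; it reuses `remote_graph` and the `uuid` import from the examples above):

# Hypothetical: under the proposed fix, thread_id moves to the top
# level of config (no "configurable" key), alongside context.
config = {"thread_id": str(uuid.uuid4())}
context = {"user_id": "123"}

for chunk in remote_graph.stream(
    {"messages": [{"role": "user", "content": "hello"}]},
    context=context,
    config=config,
    stream_mode="values",
):
    print(chunk)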
Would appreciate any input on that! max_concurrency should already work with this pattern today:
from langgraph.pregel.remote import RemoteGraph

url = "http://localhost:8123"
graph_name = "agent"
remote_graph = RemoteGraph(graph_name, url=url)

context = {"user_id": "123"}
config = {"max_concurrency": 10}

# No "configurable" key in config, so this does not hit the 400 error
for chunk in remote_graph.stream(
    {"messages": [{"role": "user", "content": "hello"}]},
    context=context,
    config=config,
    stream_mode="values",
):
    print(chunk)
@jdrogers940 this doesn't exactly solve the problem (passing thread_id as a top-level parameter). When you do this, there's no checkpointing; it just prevents the 400 error from being shown. I have a fix already, just cleaning things up @sydney-runkle
The 400 error here isn't about using both config and context; it's specifically about using both configurable and context together.
In other words, this line in the error:
Cannot specify both configurable and context.
refers to the configurable field inside config, not the entire config argument.
✅ You can safely pass both config and context as long as config doesn’t contain a configurable key.
So, instead of doing this:
config = {"configurable": {"thread_id": thread_id}}
context = {"user_id": "123"}
You can move those fields up into your context:
context = {"user_id": "123", "thread_id": thread_id}
and omit the configurable block entirely.
However, with this approach, checkpointing still doesn’t work, since the checkpointer only looks for thread_id inside config["configurable"], not in context.
I’m currently working on a fix for that so checkpointing continues to work seamlessly when using only context.
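Putting that together, a minimal sketch of the interim workaround (it reuses `remote_graph` and `thread_id` from the examples above; note the caveat that thread memory still won't persist):

# Workaround: a config without a "configurable" key passes validation,
# so context and config can coexist.
context = {"user_id": "123", "thread_id": thread_id}
config = {"max_concurrency": 10}  # any non-"configurable" keys are fine

for chunk in remote_graph.stream(
    {"messages": [{"role": "user", "content": "hello"}]},
    context=context,
    config=config,
    stream_mode="values",
):
    print(chunk)

# Caveat from above: the checkpointer only reads
# config["configurable"]["thread_id"], so this thread still has no memory.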
@bolexyro Thanks. But what's the team's policy here? It's still very tricky. How do we send configurable items? (like a self-generated run/trace id, so that we can do the feedback logic)
@langchain team?
So I have fixed the bug and opened a PR, but I don't think it has been looked at yet (@sydney-runkle, @jdrogers940). For now you could make a fork and copy what I did here: https://github.com/langchain-ai/langgraph/pull/6438