Interrupt shows the same old question when invoked a second time.
Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
Example Code
```python
from typing import Literal
from langchain_core.messages import HumanMessage
from langgraph.types import interrupt, Command

def ask_user_node(state: LookupState) -> Command[Literal['lookup_node']]:
    # Instead of taking the new message content, this takes the old value
    # that it has already shown to the user.
    user_response = interrupt(state['messages'][-1].content)
    if user_response:
        return Command(goto='lookup_node',
                       update={'messages': [HumanMessage(content=user_response, name="User_Response")]})
```
Error Message and Stack Trace (if applicable)
Description
I have a node that uses interrupt, and this node might be called multiple times to collect information from the user. When interrupt is invoked for the second time, it shows the previous question instead of the new question to the user.
System Info
python -m langchain_core.sys_info
This has already been answered here https://github.com/langchain-ai/langgraph/issues/3072
if you're still having issues on the latest version of the library, please provide a https://stackoverflow.com/help/minimal-reproducible-example
@vbarda That's a different issue, which I also raised, where interrupt was not actually interrupting the second time.
That is now working fine: it does get interrupted the second time, which is good, and that was resolved in #3072 as you said. The new issue here is that, instead of showing the new question/value to the user when interrupted for the second time, it shows the same old value.
Ideally, when the interrupt is invoked for the second time, it should show the new/updated question to the user instead of the old one. What do you think?
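For clarity, here is a stdlib-only sketch of the semantics I'm expecting (the `Pause` and `run_node` names are hypothetical stand-ins, not the real langgraph API): because the interrupted node re-runs from the top on resume, the payload passed to `interrupt()` should be recomputed from the current state each time.

```python
class Pause(Exception):
    """Hypothetical stand-in for langgraph's real interrupt mechanism."""
    def __init__(self, payload):
        self.payload = payload

def ask_node(state, resume_value=None):
    # The question is recomputed from the *current* state on every run.
    question = f"Provide a new value. Previous value is: {state['foo']}"
    if resume_value is None:
        raise Pause(question)          # first call: pause and surface the question
    state["foo"] = resume_value        # replayed call: consume the resume value
    return state

def run_node(state, resume_value=None):
    # Driver loop: returns (pending_question, state).
    try:
        return None, ask_node(state, resume_value)
    except Pause as p:
        return p.payload, state

state = {"foo": ""}
q1, state = run_node(state)            # pauses with the initial question
_, state = run_node(state, "sai")      # resume: state["foo"] becomes "sai"
q2, state = run_node(state)            # pauses again; question now mentions "sai"
```

In this sketch the second pause necessarily carries the updated question, because nothing caches the first payload.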
hm, i double-checked using a simple example and it's working correctly for me (see below). i think it's likely an application error. feel free to adapt my example to reproduce the issue
```python
from langgraph.graph import StateGraph, START
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import MemorySaver
from typing import TypedDict

class State(TypedDict):
    foo: str

def my_node(state: State):
    value = interrupt(f"Please provide a new value. Previous value is: {state['foo']}")
    return {"foo": value}

checkpointer = MemorySaver()
graph = (
    StateGraph(State)
    .add_node(my_node)
    .add_edge(START, "my_node")
    .add_edge("my_node", "my_node")
    .compile(checkpointer=checkpointer)
)

config = {"configurable": {"thread_id": "1"}}

for chunk in graph.stream({"foo": ""}, config):
    print(chunk)

for chunk in graph.stream(Command(resume="foo"), config):
    print(chunk)

for chunk in graph.stream(Command(resume="bar"), config):
    print(chunk)
```
@vbarda Could you let me know which version you are using, please?
Also, I tried your example, and it works as expected. I think there is some issue with LangGraph Studio, but I'm not sure.
=> I see the interrupt value in LangSmith showing the updated value correctly.
=> But at the same time, in LangGraph Studio, I see the same old message.
I'm not sure what's happening or why LangSmith and LangGraph Studio are showing different values.
I am using the latest version of LangGraph (0.2.69). I checked the studio, and it behaves as expected for me using the example i provided above. I would suggest modifying the code example to try and reproduce the issue
@vbarda I think I figured out the issue. I'm not sure how to explain it, but I'll try my best.
Flow 1: The interrupt works as expected (it interrupts properly and shows the correct, updated value to the user) when I have a single graph/agent with a node that uses an interrupt. For clarity, let's call this agent/graph the service graph.
Flow 2: I connect this service graph as a subgraph to a router graph (a supervisor agent/graph). Once the router transfers the request to the service graph, the service graph takes control and uses the interrupt as in Flow 1. At this stage, when the interrupt is called for the second time, it shows me the old value in the studio. (THIS IS THE PROBLEM THAT I'M FACING)
In conclusion: with a single graph containing the node, it works as expected (that's why it works for you), but with a multi-graph setup where the subgraph has the interrupt, it does not. Again, all of this is happening in LangGraph Studio (I can see the updated values in LangSmith, but for some reason Studio shows the old values).
Given this situation, we would be very grateful if you could provide a solution, because a large team is working on a project that relies entirely on LangGraph. Thank you.
@Saisiva123 we'd be more than happy to help, but we do need a https://stackoverflow.com/help/minimal-reproducible-example. unfortunately, we don't have enough resources to try to reproduce everyone's issues based only on a text description of the problem -- we need a snippet of code that can be copied, contains minimal logic and all of the imports, and can be executed to reproduce the issue
i tried wrapping the example i provided above in a subgraph, and it works fine for me
```python
from langgraph.graph import StateGraph, START
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import MemorySaver
from typing import TypedDict

class State(TypedDict):
    foo: str

def my_node(state: State):
    value = interrupt(f"Please provide a new value. Previous value is: {state['foo']}")
    return {"foo": value}

checkpointer = MemorySaver()
subgraph = (
    StateGraph(State)
    .add_node(my_node)
    .add_edge(START, "my_node")
    .compile()
)
graph = (
    StateGraph(State)
    .add_node("subgraph", subgraph)
    .add_edge(START, "subgraph")
    .add_edge("subgraph", "subgraph")
    .compile(checkpointer=checkpointer)
)

config = {"configurable": {"thread_id": "1"}}

for chunk in graph.stream({"foo": ""}, config):
    print(chunk)

for chunk in graph.stream(Command(resume="foo"), config):
    print(chunk)

for chunk in graph.stream(Command(resume="bar"), config):
    print(chunk)
```
@vbarda I created an example to reproduce the issue I'm having. Please try to execute the graph below in the studio; then you will experience the issue:
```python
from langgraph.graph import StateGraph, START
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import MemorySaver
from typing import TypedDict, Literal
from langchain.tools import tool

class MainState(TypedDict):
    name: str

class SubGraphState(TypedDict):
    name: str
    userQuestion: str

@tool
def collect_details(name: str) -> Command[Literal['ask_user_node']]:
    '''This tool is used to collect details about a specific node'''
    return Command(goto='ask_user_node',
                   update={'userQuestion': f"Please provide a new value. You've already provided: {name}"})

def sub_graph_node(state: SubGraphState) -> Command[Literal['ask_user_node', '__end__']]:
    if state['name'] != 'sravan':
        tool_response = collect_details.invoke(state['name'])
        return tool_response
    else:
        return {'userQuestion': ''}

def ask_user_node(state: SubGraphState) -> Command[Literal['sub_graph_node']]:
    value = interrupt(state['userQuestion'])
    return Command(goto='sub_graph_node', update={'name': value, 'userQuestion': ''})

subgraph = (StateGraph(SubGraphState)
            .add_node("sub_graph_node", sub_graph_node)
            .add_node("ask_user_node", ask_user_node)
            .add_edge(START, "sub_graph_node")
            .compile())

def main_node(state: MainState) -> Command[Literal['subgraph', '__end__']]:
    return Command(goto='subgraph', update={'name': state['name']})

checkpointer = MemorySaver()
graph = (StateGraph(MainState)
         .add_node("main_node", main_node)
         .add_node("subgraph", subgraph)
         .add_edge(START, "main_node")
         .compile(checkpointer=checkpointer))
```
Please try executing the above graph in the studio. First I provided the value 'sai', and the second time I provided 'siva', but the interrupt still shows the old name ('sai') that I entered.
file: `langgraph.json`

```json
{
  "graphs": { "practice": "./lookup_agent/new.py:graph" },
  "python_version": "3.12",
  "env": ".env",
  "dependencies": ["./lookup_agent"]
}
```
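For illustration only (Studio's internals aren't public, so this is a guess): the symptom looks as if the UI re-reads the payload stored at the first interrupt's checkpoint instead of the payload produced by the re-run node. A stdlib-only sketch of that contrast, with all names hypothetical:

```python
def node_payload(state):
    # What the re-run node would compute from the current state
    return f"Provide a new value. You've already provided: {state['name']}"

saved_checkpoint = {}

def interrupt_and_save(state):
    payload = node_payload(state)
    saved_checkpoint.setdefault("payload", payload)  # first write is never overwritten
    return payload

def show_recomputed(state):
    return interrupt_and_save(state)      # expected: payload recomputed each time

def show_cached(state):
    interrupt_and_save(state)
    return saved_checkpoint["payload"]    # symptom: stale payload from first interrupt

first = show_cached({"name": "sai"})
stale = show_cached({"name": "siva"})     # still mentions "sai"
fresh = show_recomputed({"name": "siva"}) # mentions "siva"
```

The `show_cached` path reproduces exactly what I see in Studio, while `show_recomputed` matches what LangSmith displays.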
thank you for the example -- I could reproduce your issue in the studio desktop, but it works fine for me in the web version.
could you please confirm whether it also works for you with the web version by running langgraph dev? https://langchain-ai.github.io/langgraph/tutorials/langgraph-platform/local-server/
@vbarda It's not working for me in the Studio web version either: (https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:58220)
I wonder how it's working for you in the web version.
https://www.loom.com/share/8a537049364644fcab1f103565e3c22a?sid=69946f5b-56bc-4464-bc91-bd17a117789d
@vbarda I'm still confused about what difference in my code makes it not work properly.
https://www.loom.com/share/86761d832e964685af205a0c28363bc6?sid=05404b9a-0c48-4353-a9e9-865c4c8fd366
@vbarda I know I'm bothering you about this, but it would be great if you could connect whenever you are free. You can send a Zoom link or any meeting link at a time that works for you. Thanks a lot.
just to confirm -- are you using the latest library versions of langgraph, langgraph-cli and langgraph-api? could you post your versions here
Sure. Here you go:

```
langgraph             0.2.69
langgraph-api         0.0.22
langgraph-checkpoint  2.0.10
langgraph-cli         0.1.70
langgraph-sdk         0.1.51
```
@Saisiva123 can you confirm that you are running the local server with `langgraph dev` (it should be running on port 2024)? i took a look at the URL you pasted earlier, and it looks like it points to a custom port generated by LangGraph Studio desktop. could you double-check by closing Studio desktop and then running `langgraph dev` from your project? the default URL should be https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
basically, i suspect that the issue is with the underlying postgres-based API server that LangGraph Studio desktop uses. `langgraph dev` uses in-memory persistence and shouldn't have the same issue
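To picture this hypothesis (the store classes below are hypothetical stand-ins, not the actual persistence layers): a backend that overwrites the pending interrupt on each write would surface the fresh question, while one that keeps serving the earliest stored record would reproduce the stale-question symptom.

```python
class OverwriteStore:
    """Stand-in for in-memory persistence: each write replaces the pending interrupt."""
    def __init__(self):
        self.pending = None
    def write(self, payload):
        self.pending = payload
    def read(self):
        return self.pending

class FirstRecordStore:
    """Stand-in for the suspected buggy behavior: reads return the earliest record."""
    def __init__(self):
        self.records = []
    def write(self, payload):
        self.records.append(payload)
    def read(self):
        return self.records[0]

def second_question(store):
    store.write("question for 'sai'")    # first interrupt
    store.write("question for 'siva'")   # node re-ran with updated state
    return store.read()                  # what the UI would display
```

Under this sketch, `second_question(OverwriteStore())` shows the updated question, while `second_question(FirstRecordStore())` keeps showing the first one.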
I think there is some issue with LangGraph Studio, and yes, `langgraph dev` works. Thanks a lot.
But this is something the LangGraph team has to look into. I didn't expect this in the studio.
My remaining question is: in the future I need to deploy to LangGraph Cloud, and of course the LangGraph server will use PostgreSQL for persistence in production. Wouldn't this issue occur there as well?
But once again, thanks a lot for your time and help. APPRECIATED 👏.
You're right, and we'll definitely look into this and fix it. i just wanted to make sure that we're seeing the same behavior