
Update blogs/graphreader/graphreader_langgraph.ipynb (possibly to NOT use ollama_functions.with_structured_output?)

windowshopr opened this issue 1 year ago • 4 comments

Trying to run the graph reader agent notebook on Windows 10 with Python 3.11 and the latest versions of Ollama, LangChain, and LangGraph, I get the traceback below (I added some extra printouts to show the values of variables and which functions are being called, to help with debugging):

langgraph.invoke({"question":"Did Joan of Arc lose any battles?"})
--------------------
Step: rational_plan
Rational plan: To answer this question, we first need to find information about Joan of Arc's military campaigns and her battles against the English. We will look for details about specific battles where she was defeated or lost control, such as the Siege of Orléans, and compare them to any victories she had during that time.
message: content='' additional_kwargs={} response_metadata={} id='run-71658f84-2a46-456f-9ebe-51c4838994ca-0' tool_calls=[{'name': 'InitialNodes', 'args': {'initial_nodes': [{'key_element': 'Siege of Orléans', 'score': 80}, {'key_element': 'Joan in her only major film role', 'score': 60}, {'key_element': 'capture by Burgundians', 'score': 70}, {'key_element': 'Loire Campaign', 'score': 50}, {'key_element': 'Burgundian troops', 'score': 40}, {'key_element': 'visions from Michael, Margaret, Catherine', 'score': 30}, {'key_element': 'lead the French in battle', 'score': 90}, {'key_element': 'success in routing the English', 'score': 80}, {'key_element': "attempted to restore Dreyer's final cut", 'score': 60}, {'key_element': "Joan's trial", 'score': 70}]}, 'id': 'call_60a160c65f8b46df907ef071ee7acd6d', 'type': 'tool_call'}]
kwargs: {}
tool_calls: [{'name': 'InitialNodes', 'args': {'initial_nodes': [{'key_element': 'Siege of Orléans', 'score': 80}, {'key_element': 'Joan in her only major film role', 'score': 60}, {'key_element': 'capture by Burgundians', 'score': 70}, {'key_element': 'Loire Campaign', 'score': 50}, {'key_element': 'Burgundian troops', 'score': 40}, {'key_element': 'visions from Michael, Margaret, Catherine', 'score': 30}, {'key_element': 'lead the French in battle', 'score': 90}, {'key_element': 'success in routing the English', 'score': 80}, {'key_element': "attempted to restore Dreyer's final cut", 'score': 60}, {'key_element': "Joan's trial", 'score': 70}]}, 'id': 'call_60a160c65f8b46df907ef071ee7acd6d', 'type': 'tool_call'}]
--------------------
Step: atomic_fact_check
Reading atomic facts about: ['lead the French in battle', 'Siege of Orléans', 'success in routing the English', 'capture by Burgundians', "Joan's trial"]
message: content='' additional_kwargs={} response_metadata={} id='run-32a90a8a-772a-4336-a6b9-61a1c6357622-0' tool_calls=[{'name': 'AtomicFactOutput', 'args': {'updated_notebook': '', 'rational_next_action': "read_chunk(List['82f69cf57d252cb552a2076a7217b3a5'])", 'chosen_action': ''}, 'id': 'call_b37fa27ffbac4799980188360fe3c722', 'type': 'tool_call'}]
kwargs: {}
tool_calls: [{'name': 'AtomicFactOutput', 'args': {'updated_notebook': '', 'rational_next_action': "read_chunk(List['82f69cf57d252cb552a2076a7217b3a5'])", 'chosen_action': ''}, 'id': 'call_b37fa27ffbac4799980188360fe3c722', 'type': 'tool_call'}]
Rational for next action after atomic check: read_chunk(List['82f69cf57d252cb552a2076a7217b3a5'])

parse_function method started...

input_str: 

pattern: (\w+)(?:\((.*)\))?

match: None

No match found...

Ending parse_function...

to_return: None
Chosen action: None
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[32], line 1
----> 1 langgraph.invoke({"question":"Did Joan of Arc lose any battles?"})

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\pregel\__init__.py:1545, in Pregel.invoke(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, **kwargs)
   1543 else:
   1544     chunks = []
-> 1545 for chunk in self.stream(
   1546     input,
   1547     config,
   1548     stream_mode=stream_mode,
   1549     output_keys=output_keys,
   1550     interrupt_before=interrupt_before,
   1551     interrupt_after=interrupt_after,
   1552     debug=debug,
   1553     **kwargs,
   1554 ):
   1555     if stream_mode == "values":
   1556         latest = chunk

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\pregel\__init__.py:1278, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
   1267     # Similarly to Bulk Synchronous Parallel / Pregel model
   1268     # computation proceeds in steps, while there are channel updates
   1269     # channel updates from step N are only visible in step N+1
   1270     # channels are guaranteed to be immutable for the duration of the step,
   1271     # with channel updates applied only at the transition between steps
   1272     while loop.tick(
   1273         input_keys=self.input_channels,
   1274         interrupt_before=interrupt_before_,
   1275         interrupt_after=interrupt_after_,
   1276         manager=run_manager,
   1277     ):
-> 1278         for _ in runner.tick(
   1279             loop.tasks.values(),
   1280             timeout=self.step_timeout,
   1281             retry_policy=self.retry_policy,
   1282             get_waiter=get_waiter,
   1283         ):
   1284             # emit output
   1285             yield from output()
   1286 # emit output

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\pregel\runner.py:52, in PregelRunner.tick(self, tasks, reraise, timeout, retry_policy, get_waiter)
     50 t = tasks[0]
     51 try:
---> 52     run_with_retry(t, retry_policy)
     53     self.commit(t, None)
     54 except Exception as exc:

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\pregel\retry.py:29, in run_with_retry(task, retry_policy)
     27 task.writes.clear()
     28 # run the task
---> 29 task.proc.invoke(task.input, config)
     30 # if successful, end
     31 break

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\utils\runnable.py:385, in RunnableSeq.invoke(self, input, config, **kwargs)
    383 context.run(_set_config_context, config)
    384 if i == 0:
--> 385     input = context.run(step.invoke, input, config, **kwargs)
    386 else:
    387     input = context.run(step.invoke, input, config)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\utils\runnable.py:167, in RunnableCallable.invoke(self, input, config, **kwargs)
    165 else:
    166     context.run(_set_config_context, config)
--> 167     ret = context.run(self.func, input, **kwargs)
    168 if isinstance(ret, Runnable) and self.recurse:
    169     return ret.invoke(input, config)

Cell In[20], line 45, in atomic_fact_check(state)
     41 chosen_action = parse_function(atomic_facts_results.chosen_action)
     42 print(f"Chosen action: {chosen_action}")
     43 response = {
     44     "notebook": notebook,
---> 45     "chosen_action": chosen_action.get("function_name"),
     46     "check_atomic_facts_queue": [],
     47     "previous_actions": [
     48         f"atomic_fact_check({state.get('check_atomic_facts_queue')})"
     49     ],
     50 }
     51 if chosen_action.get("function_name") == "stop_and_read_neighbor":
     52     neighbors = get_neighbors_by_key_element(
     53         state.get("check_atomic_facts_queue")
     54     )

AttributeError: 'NoneType' object has no attribute 'get'

It appears that the "next action" is not being set properly. I noticed that the notebook makes use of the with_structured_output method, which has been deprecated in LangChain v0.2; however, neither of the imports

from langchain_community.chat_models import ChatOllama
# OR
from langchain_ollama import ChatOllama

in the latest versions offers with_structured_output anymore. So the code would need to be updated to avoid using it altogether; it's just something I'm running into while trying to run the code right now.
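
The immediate crash is that chosen_action comes back from the structured output as an empty string, so the regex in parse_function finds no match, parse_function returns None, and chosen_action.get("function_name") then raises the AttributeError. Here is a small reproduction using the values from the run above; this parse_function is reconstructed from the pattern printed in the debug output, not copied from the notebook:

import re

def parse_function(input_str: str):
    # Reconstructed from the pattern printed in the debug output above;
    # an empty string yields no match, which is what triggers the crash.
    match = re.search(r"(\w+)(?:\((.*)\))?", input_str or "")
    if not match:
        return None
    return {"function_name": match.group(1), "arguments": match.group(2)}

print(parse_function(""))  # None -- chosen_action was "" in the run above
print(parse_function("read_chunk(List['82f69cf57d252cb552a2076a7217b3a5'])"))
# {'function_name': 'read_chunk', 'arguments': "List['82f69cf57d252cb552a2076a7217b3a5']"}

A guard at the call site (for example, falling back to parsing rational_next_action, or defaulting to a termination action, whenever parse_function returns None) would avoid the AttributeError, but the root cause is still the model leaving chosen_action empty.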

windowshopr · Sep 24 '24 17:09

Which model?

tomasonjo · Sep 24 '24 21:09

Hey! Sorry, yes, I am using Ollama:

from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain_ollama import OllamaEmbeddings

model = OllamaFunctions(model="llama3.1:8b")
embeddings = OllamaEmbeddings(model="llama3.1:8b")
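
For anyone hitting the same thing: one possible stand-in for with_structured_output with a local Ollama model is to ask ChatOllama for raw JSON and parse it with a PydanticOutputParser. This is only a rough sketch, not the notebook's code; the AtomicFactOutput field names are taken from the tool call in the traceback above, and the prompt and field descriptions are illustrative:

from langchain_ollama import ChatOllama
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field

class AtomicFactOutput(BaseModel):
    # Field names match the tool call in the traceback; descriptions are illustrative.
    updated_notebook: str = Field(description="Notes gathered so far")
    rational_next_action: str = Field(description="Reasoning about what to do next")
    chosen_action: str = Field(description="Next action, e.g. read_chunk([...])")

parser = PydanticOutputParser(pydantic_object=AtomicFactOutput)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Respond only with JSON matching this schema:\n{format_instructions}"),
    ("human", "{input}"),
]).partial(format_instructions=parser.get_format_instructions())

# format="json" makes Ollama emit valid JSON, which the parser then validates.
llm = ChatOllama(model="llama3.1:8b", format="json", temperature=0)
chain = prompt | llm | parser  # rough stand-in for llm.with_structured_output(AtomicFactOutput)

result = chain.invoke({"input": "Which chunks should be read next, and why?"})
print(result.chosen_action)

Whether llama3.1:8b reliably fills in chosen_action this way is a separate question, but a failed parse at least raises a clear validation error instead of silently handing parse_function an empty string.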

windowshopr · Sep 25 '24 00:09

Hi,

Has anything more been done to allow GraphReader to work with Ollama local models? I have similar issues with with_structured_output. Thank you.

csaiedu · Oct 14 '24 12:10

Ollama models are bad with function outputs.

tomasonjo · Oct 14 '24 12:10