
bind_tools NotImplementedError when using ChatOllama

Open hyhzl opened this issue 1 year ago • 44 comments

Checked other resources

  • [X] I added a very descriptive title to this issue.
  • [X] I searched the LangChain documentation with the integrated search.
  • [X] I used the GitHub search to find a similar question and didn't find it.
  • [X] I am sure that this is a bug in LangChain rather than my code.
  • [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

def init_ollama(model_name: str = global_model):
    # llm = Ollama(model=model_name)
    llm = ChatOllama(model=model_name)
    return llm

llm = init_ollama()
llama2 = init_ollama(model_name=fallbacks)
llm_with_fallbacks = llm.with_fallbacks([llama2])

def agent_search():
    search = get_Tavily_Search()
    retriver = get_milvus_vector_retriver(get_webLoader_docs("https://docs.smith.langchain.com/overview"), global_model)
    retriver_tool = create_retriever_tool(
        retriver,
        "langsmith_search",
        "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
    )
    tools = [search, retriver_tool]
    # llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)  # money required
    prompt = hub.pull("hwchase17/openai-functions-agent")
    agent = create_tool_calling_agent(llm, tools, prompt)  # no work.
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    agent_executor.invoke({"input": "hi!"})

Error Message and Stack Trace (if applicable)

Traceback (most recent call last):
  File "agent.py", line 72, in <module>
    agent = create_tool_calling_agent(llm,tools,prompt)
  File "/home/anaconda3/envs/languagechain/lib/python3.8/site-packages/langchain/agents/tool_calling_agent/base.py", line 88, in create_tool_calling_agent
    llm_with_tools = llm.bind_tools(tools)
  File "/home/anaconda3/envs/languagechain/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 912, in bind_tools
    raise NotImplementedError()
NotImplementedError

Description

Because Ollama provides great convenience for developers to build and experiment with LLM apps, I hope this issue can be handled as soon as possible. Sincerely appreciated!

System Info

langchain==0.1.19
platform: CentOS
Python version: 3.8.19

hyhzl avatar May 09 '24 11:05 hyhzl

@hyhzl, no random mention, please.

sbusso avatar May 09 '24 12:05 sbusso

Anyone got a solution for this?

subhash137 avatar May 10 '24 11:05 subhash137

[screenshot]

Even structured output is not working.

Error:

[screenshot]

subhash137 avatar May 10 '24 11:05 subhash137

You can use Ollama's OpenAI-compatible API like this:

from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
    api_key="ollama",
    model="llama3",
    base_url="http://localhost:11434/v1",
)
llm = llm.bind_tools(tools)

Treat the Ollama model as OpenAI and have fun developing with the LLM!
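For context, a minimal, self-contained sketch of that approach; the get_weather tool here is a hypothetical placeholder, not something defined in this issue:

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city (placeholder tool)."""
    return f"It is always sunny in {city}."


llm = ChatOpenAI(
    api_key="ollama",                      # Ollama ignores the key, but the client requires one
    model="llama3",
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
)
llm_with_tools = llm.bind_tools([get_weather])

response = llm_with_tools.invoke("What's the weather in Paris?")
print(response.tool_calls)  # may be empty; later comments report this endpoint not returning tool calls

Note that, as later comments in this thread point out, Ollama's OpenAI-compatible endpoint did not actually return tool calls at the time, so the tools may never be invoked this way.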

tcztzy avatar May 11 '24 07:05 tcztzy

Thank you for your reply


subhash137 avatar May 11 '24 08:05 subhash137

@subhash137, according to the Ollama docs, their Chat Completions API does not support function calling yet. Did you have any success?

alexanderp99 avatar May 13 '24 13:05 alexanderp99

Yes, I did.


subhash137 avatar May 13 '24 16:05 subhash137

@subhash137 would you please show how you achieved function calling that way?

alexanderp99 avatar May 14 '24 06:05 alexanderp99

@subhash137 would you please show how you achieved function calling that way?

tcztzy's comment should work

kaminwong avatar May 14 '24 07:05 kaminwong

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key="ollama",
    model="llama3",
    base_url="http://localhost:11434/v1",
)
llm = llm.bind_tools(tools)

If you want to run locally, use LM Studio: download the models, run the server, and pass its API endpoint to base_url. But I prefer to use Groq for faster and more efficient output.
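For the Groq route mentioned above, a minimal hedged sketch (this assumes the langchain-groq package is installed and a GROQ_API_KEY environment variable is set; the model name is just an example):

from langchain_groq import ChatGroq

# Groq-hosted Llama 3 also supports bind_tools, which is why it is suggested
# here as a faster alternative to running the model locally.
llm = ChatGroq(model="llama3-70b-8192", temperature=0)
llm_with_tools = llm.bind_tools(tools)  # `tools` as defined earlier in the thread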


subhash137 avatar May 14 '24 07:05 subhash137

@subhash137 would you please show how you achieved function calling that way?

Oh sorry, I just tried; it seems the tools are not invoked this way. Did anyone successfully make the model use the provided tools?

kaminwong avatar May 14 '24 10:05 kaminwong

Oh 😳, I am sorry, I didn't implement it correctly. I just ran the code; it runs successfully, but the tools are not invoked. I don't have any ideas now.


subhash137 avatar May 14 '24 10:05 subhash137

I faced this error too. Is there any quick fix for this problem? Can using OllamaFunctions fix it?

AmirMohamadBabaee avatar May 16 '24 15:05 AmirMohamadBabaee

#20881 (merged) already added the bind_tools feature to OllamaFunctions. #21625 (pending merge) adds support for tool_calls.

lalanikarim avatar May 27 '24 14:05 lalanikarim

@lalanikarim can I use a chat model along with function calling? As far as I can see, ChatOllama does not support bind_tools, but the documentation shows how to use bind_tools with ChatOllama.

Harsh-Kesharwani avatar May 28 '24 07:05 Harsh-Kesharwani

We should use OllamaFunctions and pass the LLaMA model name as a parameter, as it includes a suitable bind_tools method for adding tools to the chain. The ChatOllama class does not have any methods for this purpose; for more details see https://python.langchain.com/v0.1/docs/integrations/chat/ollama_functions. Alternatively, we can manage this manually by defining new classes that inherit from ChatOllama, incorporating tools as parameters, and creating an appropriate invoke function to utilize those tools. @Harsh-Kesharwani
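A minimal sketch of that first approach, loosely following the linked OllamaFunctions docs (the get_current_weather schema is the docs' example; the llama3 model is assumed to be pulled locally):

from langchain_experimental.llms.ollama_functions import OllamaFunctions

llm = OllamaFunctions(model="llama3", format="json", temperature=0)

# Tool schemas are passed as plain dicts; at the time of this thread, passing
# StructuredTool objects directly could hit serialization errors (see below).
llm_with_tools = llm.bind_tools(
    tools=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city, e.g. San Francisco",
                    },
                },
                "required": ["location"],
            },
        }
    ]
)

response = llm_with_tools.invoke("What is the weather in Singapore?")
print(response)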

ErfanMomeniii avatar May 28 '24 10:05 ErfanMomeniii

@lalanikarim can I use a chat model along with function calling? As far as I can see, ChatOllama does not support bind_tools, but the documentation shows how to use bind_tools with ChatOllama.

@Harsh-Kesharwani Like @ErfanMomeniii suggested, you can use OllamaFunctions if you need function calling capabilities with Ollama. OllamaFunctions inherits from ChatOllama and adds the newer bind_tools and with_structured_output functions, as well as a tool_calls property on AIMessage. While you can already use OllamaFunctions for function calling, there is an unmerged PR #21625 that fixes the issue where you want a chat response from OllamaFunctions in case none of the provided functions are appropriate for the request. I am hoping it will be merged sometime this week.
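As a small illustration of the with_structured_output part, a hedged sketch (the Person schema is a made-up example, not from this thread):

from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_experimental.llms.ollama_functions import OllamaFunctions


class Person(BaseModel):
    """Information about a person."""

    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age")


llm = OllamaFunctions(model="llama3", format="json", temperature=0)
structured_llm = llm.with_structured_output(Person)

person = structured_llm.invoke("Anna is 23 years old and lives in Berlin.")
print(person)  # ideally Person(name='Anna', age=23), model permitting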

lalanikarim avatar May 28 '24 14:05 lalanikarim

Can someone please post a mini example of tool calling with these pr merges?

ntelo007 avatar Jun 04 '24 20:06 ntelo007

I am still not able to get it to work:

from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool


@tool
def magic_function(input: int):
    """applies magic function to an input"""
    return input * -2

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad")
    ]
)

tools = [magic_function]

model = OllamaFunctions(
    model="llama3",
    # formal="json",    # commented or not, does not change the error
    keep_alive=-1,
    temperature=0,
    max_new_tokes=512,
)

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input":"What is the value of magic_function(3)"})

TypeError: Object of type StructuredTool is not JSON serializable

KIC avatar Jun 09 '24 13:06 KIC


This PR fixed the JSON serialization error reported above and a couple of other things.

#22339

lalanikarim avatar Jun 09 '24 14:06 lalanikarim

Example notebook with tool calling from within a LangGraph agent: https://github.com/lalanikarim/notebooks/blob/main/LangGraph-MessageGraph-OllamaFunctions.ipynb

Since #22339 is not yet merged, the notebook installs langchain-experimental from my repo (the source for #22339).

lalanikarim avatar Jun 10 '24 06:06 lalanikarim

@lalanikarim does the agent carry the context returned by a tool into every iteration? Suppose I have 3 tools; below is the execution flow:

agent... uses tool 1 → tool 1 response: resp1

agent... (does the agent carry resp1, or a summarization or knowledge graph of it?) uses tool 2 → tool 2 response: resp2

agent... (does the agent carry resp1 and resp2, or a summarization or knowledge graph of them?) uses tool 3 → tool 3 response: resp3

The question is: does the agent carry the tool responses as context for the next iteration?

Harsh-Kesharwani avatar Jun 10 '24 09:06 Harsh-Kesharwani


@Harsh-Kesharwani

You provide an initial state on every iteration. Unless you pass the previous context into the next iteration, the agent starts with a fresh state every time. I hope this answers your question.

initial_state = ...
updated_state = agent.invoke(initial_state)

next_initial_state = <combine updated_state and a new initial state>
updated_state = agent.invoke(next_initial_state)
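A hedged, concrete version of that pattern, assuming the agent state is simply a list of messages (as in the MessageGraph notebook linked above):

from langchain_core.messages import HumanMessage

# first iteration: the state is just the initial question
state = agent.invoke([HumanMessage(content="What is the weather in Singapore?")])

# to carry context forward, reuse the returned messages and append the next question
state = agent.invoke(state + [HumanMessage(content="And what about Kuala Lumpur?")])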

lalanikarim avatar Jun 10 '24 12:06 lalanikarim

@lalanikarim can i log the prompt which is passed to the agent.

Harsh-Kesharwani avatar Jun 10 '24 13:06 Harsh-Kesharwani

@lalanikarim can i log the prompt which is passed to the agent.

@Harsh-Kesharwani I have included langtrace links for multiple runs in the notebook. Take a look and let me know if that answers your questions.

lalanikarim avatar Jun 10 '24 14:06 lalanikarim

Take a look at #22339 which should have addressed this issue. The PR was approved and merged yesterday but a release is yet to be cut from it and should happen in the next few days.

In the meantime, you may try and install langchain-experimental directly from langchain's source like this:

pip install git+https://github.com/langchain-ai/langchain.git\#egg=langchain-experimental\&subdirectory=libs/experimental

I hope this helps.

lalanikarim avatar Jun 13 '24 09:06 lalanikarim

@lalanikarim

I have been following your work to make tool calling happen for Ollama.

https://github.com/langchain-ai/langchain/blob/master/libs/experimental/langchain_experimental/llms/ollama_functions.py

I have used the source code mentioned in the link for OllamaFunctions. Even though I am able to convert the tools, I am still getting the error "Error executing agent: Object of type QuerySQLDataBaseTool is not JSON serializable". The mistake may be on my side, but I am unable to figure it out. Below is the output from my code:

Successfully converted tool: {'name': 'sql_db_query', 'parameters': {'title': '_QuerySQLDataBaseToolInput', 'type': 'object', 'properties': {'query': {'title': 'Query', 'description': 'A detailed and correct SQL query.', 'type': 'string'}}, 'required': ['query']}, 'description': "Input to this tool is a detailed and correct SQL query, output is a result from the database. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. If you encounter an issue with Unknown column 'xxxx' in 'field list', use sql_db_schema to query the correct table fields."}

Successfully bound tools to LLM
Successfully created agent
Successfully created agent executor

Entering new AgentExecutor chain...
Error executing agent: Object of type QuerySQLDataBaseTool is not JSON serializable

My code:

from langchain_experimental.llms.ollama_functions import OllamaFunctions, convert_to_ollama_tool
from langchain.agents import Tool, create_tool_calling_agent, AgentExecutor

llm = OllamaFunctions(model="llama3", format="json", temperature=0, keep_alive=-1)
toolkit = SQLDatabaseToolkit(db=db, llm=llm, use_query_checker=True)
tools = toolkit.get_tools()

converted_tools = []
for tool in tools:
    try:
        converted_tool = convert_to_ollama_tool(tool)
        print(f"Successfully converted tool: {converted_tool}")
        converted_tools.append(converted_tool)
    except Exception as e:
        print(f"Error converting tool: {tool}, Error: {e}")

try:
    llm_with_tools = llm.bind_tools(converted_tools)
    print("Successfully bound tools to LLM")
except Exception as e:
    print(f"Error binding tools to LLM: {e}")

SQL_PREFIX = """You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct SQLite query to run, then look at the results of the query and return the answer.
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 5 results.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for the relevant columns given the question.
You have access to tools for interacting with the database.
Only use the below tools. Only use the information returned by the below tools to construct your final answer.
You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.

DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.

To start you should ALWAYS look at the tables in the database to see what you can query. Do NOT skip this step.
Then you should query the schema of the most relevant tables.
You have access to the following tools:"""

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", SQL_PREFIX),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

try:
    agent = create_tool_calling_agent(llm=llm_with_tools, tools=tools, prompt=prompt)
    print("Successfully created agent")
except Exception as e:
    print(f"Error creating agent: {e}")

print("Tools passed to AgentExecutor:", converted_tools)

try:
    agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
    print("Successfully created agent executor")
except Exception as e:
    print(f"Error creating agent executor: {e}")

question = "give me only the attachment attempts for the left teats where the cow id with highest value "

try:
    response = agent_executor.invoke({"input": question})
    print("Successfully executed agent")
    print(response)
except Exception as e:
    print(f"Error executing agent: {e}")

Screenshot from 2024-06-13 13-14-51

Vishnullm avatar Jun 13 '24 11:06 Vishnullm


@Vishnullm I'll try this later today and provide an update.

lalanikarim avatar Jun 13 '24 19:06 lalanikarim

@Vishnullm The issue is that create_tool_calling_agent rebinds the tools to the LLM and doesn't give you the opportunity to run convert_to_ollama_tool.

I am investigating a more permanent fix, but for now, make the following adjustments to your code:

Original

agent = create_tool_calling_agent(llm=llm_with_tools, tools=tools, prompt=prompt)

New

from langchain.agents.format_scratchpad.openai_tools import (
    format_to_openai_tool_messages,
)
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_tool_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm_with_tools
    | OpenAIToolsAgentOutputParser()
)

This code is from the custom codes notebook
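If it helps, a hedged usage sketch continuing from the snippet above, wiring the custom agent into the same AgentExecutor pattern already used elsewhere in this thread (the input string is a placeholder):

from langchain.agents import AgentExecutor

# `agent` and `tools` are the objects built above; verbose=True prints the
# intermediate steps so you can confirm the tools are actually being invoked.
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = agent_executor.invoke({"input": "your question here"})
print(result["output"])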

lalanikarim avatar Jun 14 '24 04:06 lalanikarim

@lalanikarim, thanks for the suggestion. My requirement is to mainly use Llama LLMs from Ollama, and with the suggestion you provided I can only use GPT models. Is there another way around this?

I tried create_sql_agent with the zero-shot ReAct description prompt. When I give it any complex question that requires joining multiple SQL tables or looking at multiple columns, the agent just fails and loops, redoing the same action execution; even though it sometimes generates the answer, it does not know that it is the correct answer.

[screenshots]

As a result, I thought create_tool_calling_agent would be more effective for this use case, but I am facing an issue with function calling when I use OllamaFunctions.

llm = OllamaFunctions(model = "llama3", format = "json", temperature = 0, keep_alive=-1)
toolkit = SQLDatabaseToolkit(db=db, llm=llm, use_query_checker=True)
tools = toolkit.get_tools()

converted_tools = [convert_to_ollama_tool(tool) for tool in tools]
llm_with_tools = llm.bind_tools(tools=converted_tools)
agent = create_tool_calling_agent(
    llm=llm_with_tools,
    tools=converted_tools,  
    prompt=prompt
)

agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)

When I use this code, based on my understanding of your work on the OllamaFunctions wrapper, I get the error: Ollama call failed with status code 400. Details: {"error":"invalid options: functions"}

I believe with this structure at least .bind_tools() is working, but not the tool calling.

Please guide me on how to make a model suitable for SQL Q&A using Ollama or local LLMs.

Vishnullm avatar Jun 14 '24 06:06 Vishnullm