
SQLToolKit not passing correct llm to llm_chain with AzureChatOpenAI

Open · mckunkel opened this issue 1 year ago · 1 comment

Greetings,

When I run the following code:

from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.chat_models import AzureChatOpenAI

db = SQLDatabase.from_uri(connection_string2)
toolkit = SQLDatabaseToolkit(db=db)

agent_executor = create_sql_agent(
    llm=AzureChatOpenAI(deployment_name="gpt-4-32k", model_name="gpt-4-32k", temperature=0.0),
    toolkit=toolkit,
    verbose=True
)
agent_executor.run("Tell me about this database")
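
A workaround that is often suggested for this class of bug is to pass the same llm into the toolkit itself, so the query-checker tool does not fall back to its default OpenAI completion model. The sketch below assumes the installed SQLDatabaseToolkit version accepts an llm= argument (later releases require it); connection_string2 and the deployment name come from the snippet above and are not runnable as-is without Azure credentials and a reachable database:

```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.chat_models import AzureChatOpenAI

# Build the Azure model once and reuse it for both the agent and the toolkit,
# so the toolkit's internal chains inherit it instead of a default OpenAI() LLM.
llm = AzureChatOpenAI(deployment_name="gpt-4-32k", model_name="gpt-4-32k", temperature=0.0)

db = SQLDatabase.from_uri(connection_string2)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)  # assumption: this version accepts llm=

agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent_executor.run("Tell me about this database")
```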

I get the following error from the query_checker_sql_db tool:

Thought:The TITLE column seems to be related to the topics in the CONTENT table. I should query this column to get the topics.
Action: query_checker_sql_db
Action Input: SELECT TOP 10 TITLE FROM CONTENT
Traceback (most recent call last):
  File "/code/confluence_test.py", line 57, in <module>
    
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 213, in run
    return self(args[0])[self.output_keys[0]]
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/usr/local/lib/python3.9/site-packages/langchain/agents/agent.py", line 792, in _call
    next_step_output = self._take_next_step(
  File "/usr/local/lib/python3.9/site-packages/langchain/agents/agent.py", line 695, in _take_next_step
    observation = tool.run(
  File "/usr/local/lib/python3.9/site-packages/langchain/tools/base.py", line 73, in run
    raise e
  File "/usr/local/lib/python3.9/site-packages/langchain/tools/base.py", line 70, in run
    observation = self._run(tool_input)
  File "/code/sql_database/tool.py", line 125, in _run
    return self.llm_chain.predict(query=query, dialect=self.db.dialect)
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 151, in predict
    return self(kwargs)[self.output_key]
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 57, in _call
    return self.apply([inputs])[0]
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 118, in apply
    response = self.generate(input_list)
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 62, in generate
    return self.llm.generate_prompt(prompts, stop)
  File "/usr/local/lib/python3.9/site-packages/langchain/llms/base.py", line 107, in generate_prompt
    return self.generate(prompt_strings, stop=stop)
  File "/usr/local/lib/python3.9/site-packages/langchain/llms/base.py", line 140, in generate
    raise e
  File "/usr/local/lib/python3.9/site-packages/langchain/llms/base.py", line 137, in generate
    output = self._generate(prompts, stop=stop)
  File "/usr/local/lib/python3.9/site-packages/langchain/llms/openai.py", line 290, in _generate
    response = completion_with_retry(self, prompt=_prompts, **params)
  File "/usr/local/lib/python3.9/site-packages/langchain/llms/openai.py", line 99, in completion_with_retry
    return _completion_with_retry(**kwargs)
  File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 326, in wrapped_f
    return self(f, *args, **kw)
  File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 406, in __call__
    do = self.iter(retry_state=retry_state)
  File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 351, in iter
    return fut.result()
  File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 439, in result
    return self.__get_result()
  File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 409, in __call__
    result = fn(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/langchain/llms/openai.py", line 97, in _completion_with_retry
    return llm.client.create(**kwargs)
  File "/usr/local/lib/python3.9/site-packages/openai-0.27.2-py3.9.egg/openai/api_resources/completion.py", line 25, in create
  File "/usr/local/lib/python3.9/site-packages/openai-0.27.2-py3.9.egg/openai/api_resources/abstract/engine_api_resource.py", line 149, in create
  File "/usr/local/lib/python3.9/site-packages/openai-0.27.2-py3.9.egg/openai/api_resources/abstract/engine_api_resource.py", line 83, in __prepare_create_request
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
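
For context, this error comes from the openai 0.27 client: when api_type is "azure", every completion call must carry an engine (deployment) name, because Azure routes requests per deployment. The default text-davinci-003 OpenAI wrapper never sets one, hence the InvalidRequestError. A hedged sketch of what the Azure-flavoured call shape looks like (placeholder resource name, key, and API version; not runnable without real credentials):

```python
import openai  # openai==0.27.x style module-level API, as in the traceback

openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"  # placeholder
openai.api_version = "2023-03-15-preview"  # example API version
openai.api_key = "<your-key>"

# Azure requires engine=/deployment_id= on every call; omitting it raises
# exactly the InvalidRequestError shown above.
openai.ChatCompletion.create(
    engine="gpt-4-32k",  # the Azure deployment name
    messages=[{"role": "user", "content": "ping"}],
)
```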

This appears to happen because the query checker's llm still defaults to llm=OpenAI(cache=None, verbose=False, ...) rather than the AzureChatOpenAI instance I passed in, as these dumps of the tool's LLMChain show:

LLMChain dump from SQLDatabaseToolkit:

--------------------------
--------------------------
memory=None callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fff4f982250> verbose=False prompt=PromptTemplate(input_variables=['query', 'dialect'], output_parser=None, partial_variables={}, template='\n{query}\nDouble check the {dialect} query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', template_format='f-string', validate_template=True) llm=OpenAI(cache=None, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fff4f982250>, client=<class 'openai.api_resources.completion.Completion'>, model_name='text-davinci-003', temperature=0.0, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs={}, openai_api_key=None, openai_api_base=None, openai_organization=None, batch_size=20, request_timeout=None, logit_bias={}, max_retries=6, streaming=False) output_key='text'
--------------------------
--------------------------

LLMChain dump from create_sql_agent:

--------------------------
--------------------------
memory=None callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fff4f97f7f0> verbose=False prompt=PromptTemplate(input_variables=['query', 'dialect'], output_parser=None, partial_variables={}, template='\n{query}\nDouble check the {dialect} query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', template_format='f-string', validate_template=True) llm=OpenAI(cache=None, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fff4f97f7f0>, client=<class 'openai.api_resources.completion.Completion'>, model_name='text-davinci-003', temperature=0.0, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs={}, openai_api_key=None, openai_api_base=None, openai_organization=None, batch_size=20, request_timeout=None, logit_bias={}, max_retries=6, streaming=False) output_key='text'
--------------------------
--------------------------
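
To illustrate the failure mode in isolation, here is a toy model using hypothetical stand-in classes (these are not LangChain's actual classes): a toolkit whose tool default-constructs its LLM silently ignores whatever model the agent was given, unless the toolkit explicitly forwards an injected llm:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DefaultLLM:
    model_name: str = "text-davinci-003"  # the completion-model default seen in the dumps

@dataclass
class AzureChatLLM:
    model_name: str = "gpt-4-32k"

@dataclass
class QueryCheckerTool:
    # Bug shape: the tool default-constructs an LLM if none is supplied.
    llm: object = field(default_factory=DefaultLLM)

@dataclass
class Toolkit:
    db: str
    llm: Optional[object] = None

    def get_tools(self):
        # Fix shape: forward the toolkit's llm so the tool never falls back.
        return [QueryCheckerTool(llm=self.llm) if self.llm else QueryCheckerTool()]

buggy = Toolkit(db="mssql").get_tools()[0]
fixed = Toolkit(db="mssql", llm=AzureChatLLM()).get_tools()[0]
print(buggy.llm.model_name)  # text-davinci-003 -- the default leaks through
print(fixed.llm.model_name)  # gpt-4-32k
```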

mckunkel avatar Apr 17 '23 20:04 mckunkel

This requires a fix similar to the one in #3808.

domchan avatar May 02 '23 13:05 domchan

Hi, @mckunkel. I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, the issue you reported is that SQLToolKit does not pass the correct language model (llm) to llm_chain when using AzureChatOpenAI: the llm defaults to OpenAI instead of the AzureChatOpenAI instance, which causes an error. domchan commented that it needs a fix similar to the one in issue #3808, and it seems the issue has been resolved by that fix.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your contribution, and we appreciate your understanding as we work to manage our backlog effectively. Let us know if you have any further questions or concerns.

dosubot[bot] avatar Sep 01 '23 16:09 dosubot[bot]