
SQLDatabaseToolkit not working

Open pablomarin opened this issue 1 year ago • 7 comments

SQLDatabaseToolkit is not currently working. See the errors attached.

This is the code that creates the errors:

llm = AzureChatOpenAI(deployment_name="gpt-4", temperature=0, max_tokens=500)
db = SQLDatabase.from_uri(db_url)
toolkit = SQLDatabaseToolkit(db=db)
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
Screenshot 2023-04-14 154708

And if I add the llm parameter to the toolkit:

toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)

this is the error:

Screenshot 2023-04-14 154906

pablomarin avatar Apr 14 '23 20:04 pablomarin

langchain 0.0.142:

toolkit = SQLDatabaseToolkit(db=db)

llm = AzureOpenAI(deployment_name="text-davinci-003-gpt", model_name="text-davinci-003", temperature=0.0)

agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True
)

Works for me

mckunkel avatar Apr 17 '23 18:04 mckunkel

Checking again:

It gets past the original error if both "deployment_name" and "model_name" are given in the llm declaration. Now another error surfaces: "InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>". This happens when I use gpt-4 as the model/deployment.

llm = AzureChatOpenAI(deployment_name="gpt-4", model_name="gpt-4", temperature=0, max_tokens=500)
toolkit = SQLDatabaseToolkit(db=db)
agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
response = agent.run(query_str)

See the full error dump below:

Thought:The covidtracking table has the information I need. I will query the total number of hospitalized patients in Texas and nationwide during July 2020.
Action: query_checker_sql_db
Action Input: SELECT state, SUM(hospitalizedIncrease) as total_hospitalized FROM covidtracking WHERE date >= '2020-07-01' AND date <= '2020-07-31' AND (state = 'TX' OR state = 'US') GROUP BY state
---------------------------------------------------------------------------
InvalidRequestError                       Traceback (most recent call last)
Input In [42], in <cell line: 1>()
----> 1 response = agent.run(query_str + suffix)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chains/base.py:213, in Chain.run(self, *args, **kwargs)
    211     if len(args) != 1:
    212         raise ValueError("`run` supports only one positional argument.")
--> 213     return self(args[0])[self.output_keys[0]]
    215 if kwargs and not args:
    216     return self(kwargs)[self.output_keys[0]]

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chains/base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116     raise e
    117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
    118 return self.prep_outputs(inputs, outputs, return_only_outputs)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chains/base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
    107 self.callback_manager.on_chain_start(
    108     {"name": self.__class__.__name__},
    109     inputs,
    110     verbose=self.verbose,
    111 )
    112 try:
--> 113     outputs = self._call(inputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/agents/agent.py:792, in AgentExecutor._call(self, inputs)
    790 # We now enter the agent loop (until it returns something).
    791 while self._should_continue(iterations, time_elapsed):
--> 792     next_step_output = self._take_next_step(
    793         name_to_tool_map, color_mapping, inputs, intermediate_steps
    794     )
    795     if isinstance(next_step_output, AgentFinish):
    796         return self._return(next_step_output, intermediate_steps)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/agents/agent.py:695, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps)
    693         tool_run_kwargs["llm_prefix"] = ""
    694     # We then call the tool on the tool input to get an observation
--> 695     observation = tool.run(
    696         agent_action.tool_input,
    697         verbose=self.verbose,
    698         color=color,
    699         **tool_run_kwargs,
    700     )
    701 else:
    702     tool_run_kwargs = self.agent.tool_run_logging_kwargs()

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/tools/base.py:73, in BaseTool.run(self, tool_input, verbose, start_color, color, **kwargs)
     71 except (Exception, KeyboardInterrupt) as e:
     72     self.callback_manager.on_tool_error(e, verbose=verbose_)
---> 73     raise e
     74 self.callback_manager.on_tool_end(
     75     observation, verbose=verbose_, color=color, name=self.name, **kwargs
     76 )
     77 return observation

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/tools/base.py:70, in BaseTool.run(self, tool_input, verbose, start_color, color, **kwargs)
     62 self.callback_manager.on_tool_start(
     63     {"name": self.name, "description": self.description},
     64     tool_input,
   (...)
     67     **kwargs,
     68 )
     69 try:
---> 70     observation = self._run(tool_input)
     71 except (Exception, KeyboardInterrupt) as e:
     72     self.callback_manager.on_tool_error(e, verbose=verbose_)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/tools/sql_database/tool.py:111, in QueryCheckerTool._run(self, query)
    109 def _run(self, query: str) -> str:
    110     """Use the LLM to check the query."""
--> 111     return self.llm_chain.predict(query=query, dialect=self.db.dialect)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chains/llm.py:151, in LLMChain.predict(self, **kwargs)
    137 def predict(self, **kwargs: Any) -> str:
    138     """Format prompt with kwargs and pass to LLM.
    139 
    140     Args:
   (...)
    149             completion = llm.predict(adjective="funny")
    150     """
--> 151     return self(kwargs)[self.output_key]

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chains/base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116     raise e
    117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
    118 return self.prep_outputs(inputs, outputs, return_only_outputs)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chains/base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
    107 self.callback_manager.on_chain_start(
    108     {"name": self.__class__.__name__},
    109     inputs,
    110     verbose=self.verbose,
    111 )
    112 try:
--> 113     outputs = self._call(inputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chains/llm.py:57, in LLMChain._call(self, inputs)
     56 def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]:
---> 57     return self.apply([inputs])[0]

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chains/llm.py:118, in LLMChain.apply(self, input_list)
    116 def apply(self, input_list: List[Dict[str, Any]]) -> List[Dict[str, str]]:
    117     """Utilize the LLM generate method for speed gains."""
--> 118     response = self.generate(input_list)
    119     return self.create_outputs(response)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chains/llm.py:62, in LLMChain.generate(self, input_list)
     60 """Generate LLM result from inputs."""
     61 prompts, stop = self.prep_prompts(input_list)
---> 62 return self.llm.generate_prompt(prompts, stop)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/llms/base.py:107, in BaseLLM.generate_prompt(self, prompts, stop)
    103 def generate_prompt(
    104     self, prompts: List[PromptValue], stop: Optional[List[str]] = None
    105 ) -> LLMResult:
    106     prompt_strings = [p.to_string() for p in prompts]
--> 107     return self.generate(prompt_strings, stop=stop)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/llms/base.py:140, in BaseLLM.generate(self, prompts, stop)
    138 except (KeyboardInterrupt, Exception) as e:
    139     self.callback_manager.on_llm_error(e, verbose=self.verbose)
--> 140     raise e
    141 self.callback_manager.on_llm_end(output, verbose=self.verbose)
    142 return output

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/llms/base.py:137, in BaseLLM.generate(self, prompts, stop)
    133 self.callback_manager.on_llm_start(
    134     {"name": self.__class__.__name__}, prompts, verbose=self.verbose
    135 )
    136 try:
--> 137     output = self._generate(prompts, stop=stop)
    138 except (KeyboardInterrupt, Exception) as e:
    139     self.callback_manager.on_llm_error(e, verbose=self.verbose)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/llms/openai.py:290, in BaseOpenAI._generate(self, prompts, stop)
    288     choices.extend(response["choices"])
    289 else:
--> 290     response = completion_with_retry(self, prompt=_prompts, **params)
    291     choices.extend(response["choices"])
    292 if not self.streaming:
    293     # Can't update token usage if streaming

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/llms/openai.py:99, in completion_with_retry(llm, **kwargs)
     95 @retry_decorator
     96 def _completion_with_retry(**kwargs: Any) -> Any:
     97     return llm.client.create(**kwargs)
---> 99 return _completion_with_retry(**kwargs)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
    287 @functools.wraps(f)
    288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289     return self(f, *args, **kw)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
    377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
    378 while True:
--> 379     do = self.iter(retry_state=retry_state)
    380     if isinstance(do, DoAttempt):
    381         try:

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state)
    312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
    313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314     return fut.result()
    316 if self.after is not None:
    317     self.after(retry_state)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/concurrent/futures/_base.py:451, in Future.result(self, timeout)
    449     raise CancelledError()
    450 elif self._state == FINISHED:
--> 451     return self.__get_result()
    453 self._condition.wait(timeout)
    455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
    401 if self._exception:
    402     try:
--> 403         raise self._exception
    404     finally:
    405         # Break a reference cycle with the exception in self._exception
    406         self = None

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
    380 if isinstance(do, DoAttempt):
    381     try:
--> 382         result = fn(*args, **kwargs)
    383     except BaseException:  # noqa: B902
    384         retry_state.set_exception(sys.exc_info())  # type: ignore[arg-type]

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/llms/openai.py:97, in completion_with_retry.<locals>._completion_with_retry(**kwargs)
     95 @retry_decorator
     96 def _completion_with_retry(**kwargs: Any) -> Any:
---> 97     return llm.client.create(**kwargs)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/api_resources/completion.py:25, in Completion.create(cls, *args, **kwargs)
     23 while True:
     24     try:
---> 25         return super().create(*args, **kwargs)
     26     except TryAgain as e:
     27         if timeout is not None and time.time() > start + timeout:

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py:149, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
    127 @classmethod
    128 def create(
    129     cls,
   (...)
    136     **params,
    137 ):
    138     (
    139         deployment_id,
    140         engine,
    141         timeout,
    142         stream,
    143         headers,
    144         request_timeout,
    145         typed_api_type,
    146         requestor,
    147         url,
    148         params,
--> 149     ) = cls.__prepare_create_request(
    150         api_key, api_base, api_type, api_version, organization, **params
    151     )
    153     response, _, api_key = requestor.request(
    154         "post",
    155         url,
   (...)
    160         request_timeout=request_timeout,
    161     )
    163     if stream:
    164         # must be an iterator

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py:83, in EngineAPIResource.__prepare_create_request(cls, api_key, api_base, api_type, api_version, organization, **params)
     81 if typed_api_type in (util.ApiType.AZURE, util.ApiType.AZURE_AD):
     82     if deployment_id is None and engine is None:
---> 83         raise error.InvalidRequestError(
     84             "Must provide an 'engine' or 'deployment_id' parameter to create a %s"
     85             % cls,
     86             "engine",
     87         )
     88 else:
     89     if model is None and engine is None:

InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
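For context, the check that raises this lives at the bottom of the traceback, in openai-python 0.x's EngineAPIResource. Below is a simplified, self-contained sketch of that validation (the real __prepare_create_request does much more); the likely trigger is that the toolkit's query-checker tool made a completion call that never received the Azure deployment name:

```python
# Simplified sketch of the Azure parameter validation seen at the bottom of
# the traceback (openai/api_resources/abstract/engine_api_resource.py);
# the real method does much more than this.
def check_create_params(api_type, deployment_id=None, engine=None, model=None):
    if api_type in ("azure", "azure_ad"):
        # Azure routes requests by deployment, so one of these is mandatory.
        if deployment_id is None and engine is None:
            raise ValueError(
                "Must provide an 'engine' or 'deployment_id' parameter"
            )
    else:
        if model is None and engine is None:
            raise ValueError("Must provide an 'engine' or 'model' parameter")
    return True

# The query-checker's completion call arrived without a deployment:
try:
    check_create_params("azure")  # raises, as in the traceback
except ValueError as exc:
    print(exc)

# Passing the Azure deployment satisfies the check:
print(check_create_params("azure", deployment_id="gpt-4"))  # True
```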

pablomarin avatar Apr 17 '23 20:04 pablomarin

Yeah, now I get this error also. I posted it in https://github.com/hwchase17/langchain/issues/3031

mckunkel avatar Apr 17 '23 20:04 mckunkel

Also, if I use gpt-35-turbo, I get another error:

llm = AzureChatOpenAI(deployment_name="gpt-35-turbo", model_name="gpt-35-turbo", temperature=0, max_tokens=500)
toolkit = SQLDatabaseToolkit(db=db)
agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
response = agent.run(query_str)

Error:

> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [88], in <cell line: 1>()
----> 1 response = agent.run(query_str)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chains/base.py:213, in Chain.run(self, *args, **kwargs)
    211     if len(args) != 1:
    212         raise ValueError("`run` supports only one positional argument.")
--> 213     return self(args[0])[self.output_keys[0]]
    215 if kwargs and not args:
    216     return self(kwargs)[self.output_keys[0]]

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chains/base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116     raise e
    117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
    118 return self.prep_outputs(inputs, outputs, return_only_outputs)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chains/base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
    107 self.callback_manager.on_chain_start(
    108     {"name": self.__class__.__name__},
    109     inputs,
    110     verbose=self.verbose,
    111 )
    112 try:
--> 113     outputs = self._call(inputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/agents/agent.py:792, in AgentExecutor._call(self, inputs)
    790 # We now enter the agent loop (until it returns something).
    791 while self._should_continue(iterations, time_elapsed):
--> 792     next_step_output = self._take_next_step(
    793         name_to_tool_map, color_mapping, inputs, intermediate_steps
    794     )
    795     if isinstance(next_step_output, AgentFinish):
    796         return self._return(next_step_output, intermediate_steps)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/agents/agent.py:672, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps)
    667 """Take a single step in the thought-action-observation loop.
    668 
    669 Override this to take control of how the agent makes and acts on choices.
    670 """
    671 # Call the LLM to see what to do.
--> 672 output = self.agent.plan(intermediate_steps, **inputs)
    673 # If the tool chosen is the finishing tool, then we end and return.
    674 if isinstance(output, AgentFinish):

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/agents/agent.py:385, in Agent.plan(self, intermediate_steps, **kwargs)
    383 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
    384 full_output = self.llm_chain.predict(**full_inputs)
--> 385 return self.output_parser.parse(full_output)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/agents/mrkl/output_parser.py:20, in MRKLOutputParser.parse(self, text)
     18 match = re.search(regex, text, re.DOTALL)
     19 if not match:
---> 20     raise ValueError(f"Could not parse LLM output: `{text}`")
     21 action = match.group(1).strip()
     22 action_input = match.group(2)

ValueError: Could not parse LLM output: `Action: list_tables_sql_db, ""`
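This parse failure is mechanical: the MRKL output parser matches the LLM's reply against a fixed "Action:/Action Input:" pattern, and gpt-35-turbo put everything on one line. A small demo — the regex is approximated from the output_parser.py shown in the traceback, so treat its exact form as an assumption:

```python
import re

# Approximation of the pattern langchain's MRKLOutputParser used around
# v0.0.142 (inferred from the traceback above); the exact regex is an assumption.
MRKL_RE = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"

def parse_action(text: str):
    """Return (action, action_input), or None where the agent would raise ValueError."""
    match = re.search(MRKL_RE, text, re.DOTALL)
    if not match:
        return None  # agent raises ValueError("Could not parse LLM output: ...")
    return match.group(1).strip(), match.group(2).strip()

# The gpt-35-turbo reply from the traceback: no "Action Input:" line, so no match.
print(parse_action('Action: list_tables_sql_db, ""'))           # None
# A well-formed MRKL step parses cleanly.
print(parse_action("Action: list_tables_sql_db\nAction Input: "))  # ('list_tables_sql_db', '')
```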

pablomarin avatar Apr 17 '23 20:04 pablomarin

I have had similar errors with "could not parse LLM output".

engma-linguistics avatar Apr 18 '23 17:04 engma-linguistics

I am also facing the same error. Please let me know in case you find a workaround.

deesaw avatar May 10 '23 04:05 deesaw

mckunkel's fix worked for me as well. Try using AzureOpenAI instead of AzureChatOpenAI.
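Whichever wrapper you use, the Azure endpoint details also have to be configured; the openai 0.x client of this era picked them up from the environment. A sketch, assuming that setup — the resource name, API version, and key below are all placeholders:

```shell
# Assumed Azure OpenAI environment setup for openai-python 0.x;
# replace every placeholder with your own resource's values.
export OPENAI_API_TYPE="azure"
export OPENAI_API_BASE="https://<your-resource>.openai.azure.com/"
export OPENAI_API_VERSION="2023-03-15-preview"
export OPENAI_API_KEY="<your-key>"
```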

naivelogic avatar May 26 '23 15:05 naivelogic

Hi, @pablomarin. I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

Based on my understanding of the current state of the issue, it seems that you encountered an error related to the usage of the llm parameter in the SQLDatabaseToolkit. The original error was resolved by using the "deployment_name" and "model_name" parameters in the llm declaration. However, a new error has surfaced when using the gpt-4 model, and another user has reported a similar error with the gpt-35-turbo model. Unfortunately, no solution or workaround has been provided yet.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain community. Let us know if you have any further questions or concerns.

dosubot[bot] avatar Sep 21 '23 16:09 dosubot[bot]

The latest versions solved the problem. Feel free to close it.

Pablo


pablomarin avatar Sep 21 '23 17:09 pablomarin

Thank you, @pablomarin, for confirming that the latest versions have resolved the problem. We appreciate your contribution to the LangChain community. We will proceed to close the issue.

dosubot[bot] avatar Sep 21 '23 17:09 dosubot[bot]