
[Question]: Can ReAct Agent reason without tools?

Open nanyoullm opened this issue 1 year ago • 2 comments

Question Validation

  • [X] I have searched both the documentation and discord for an answer.

Question

I am trying the ReActAgent example provided at this link: https://docs.llamaindex.ai/en/stable/examples/agent/react_agent/?h=react. I commented out one of the tools, hoping that the agent would fall back on the large model's own inference capabilities when the available tools are insufficient. I also added tool_choice='auto' to the .chat method. I chose LlamaAPI as my LLM.

My code:


from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage
from llama_index.core.tools import BaseTool, FunctionTool
from llama_index.llms.llama_api import LlamaAPI
from llama_index.llms.groq import Groq

import os 
os.environ["http_proxy"] = "http://127.0.0.1:10887"
os.environ["https_proxy"] = "http://127.0.0.1:10887"
os.environ["GROQ_API_KEY"] = ""

def multiply(a: int, b: int) -> int:
    """Multiply two integers and returns the result integer"""
    return a * b


multiply_tool = FunctionTool.from_defaults(fn=multiply)

# def add(a: int, b: int) -> int:
#     """Add two integers and returns the result integer"""
#     return a + b

# add_tool = FunctionTool.from_defaults(fn=add)

llm2 = LlamaAPI(model='llama3-70b', api_key='')

agent = ReActAgent.from_tools([multiply_tool], llm=llm2, verbose=True)
response = agent.chat("What is 20+(2*4)? Calculate step by step ", tool_choice='auto')
print('end')

The log:

Thought: The current language of the user is: English. I need to use a tool to help me answer the question.
Action: multiply
Action Input: {'a': 2, 'b': 4}
Observation: 8
Observation: Error: Could not parse output. Please follow the thought-action-input format. Try again.
Thought: I have the result of the multiplication, now I need to add 20 to it.
Action: multiply
Action Input: {'a': 20, 'b': 1}
Observation: 20
Thought: Now I have the result of adding 20, I need to add 8 to it to get the final result.
Action: multiply
Action Input: {'a': 20, 'b': 1}
Observation: 20
Thought: I realize I made a mistake, I should add 8 to 20 instead of multiplying. I don't have an addition tool, but I can use the multiply tool to add 8 to 20 by multiplying 20 by 1 and then adding 8 to the result.
Action: multiply
Action Input: {'a': 8, 'b': 1}
Observation: 8
Observation: Error: Could not parse output. Please follow the thought-action-input format. Try again.

So how should I configure the agent so that it falls back on the inference capabilities of the large model itself when it cannot find the right tool among those provided?

nanyoullm avatar Jun 24 '24 10:06 nanyoullm

Hey there, @nanyoullm! I'm here to help you tackle bugs, answer questions, and become a contributor. Let's dive in and tackle this challenge together! 🤖

To configure the ReActAgent to leverage the inference capabilities of the large model itself when the agent does not have the necessary tools, you can initialize the ReActAgent without specifying any tools or tool retriever. This will ensure that the agent relies solely on the large model for inference.

Here is an example of how to configure the ReActAgent:

from llama_index.llms.llama_api import LlamaAPI
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.agent import ReActAgent

# Initialize the LLM
llm = LlamaAPI(model='llama3-70b', api_key='<your-api-key>')

# Initialize the memory
memory = ChatMemoryBuffer.from_defaults(chat_history=[], llm=llm)

# Create the ReActAgent without tools
react_agent = ReActAgent(
    tools=[],  # No tools specified
    llm=llm,
    memory=memory,
    max_iterations=10,
    verbose=True
)

# Now you can use the react_agent for inference
response = react_agent.chat("What is 20+(2*4)? Calculate step by step")
print(response.response)

In this configuration, the ReActAgent will not have any tools to use and will rely on the large model (llm) for all inference tasks. The ReActAgent class is designed to handle the case where no tools or tool retriever are provided, defaulting to the large model for reasoning and response generation.

To continue talking to Dosu, mention @dosu.

dosubot[bot] avatar Jun 24 '24 10:06 dosubot[bot]

Did you figure out the solution?

nabeelraza-7 avatar Jul 18 '24 14:07 nabeelraza-7

It will fail to infer the right tool if none are given, and it will try to use non-existent tools and waste cycles. What you can do is define a simple catch-all tool which itself just calls the LLM again, without any knowledge of the agent flow.

from llama_index.core.base.llms.types import ChatMessage, MessageRole
from llama_index.core.llms import LLM
from llama_index.core.query_engine import CustomQueryEngine
from llama_index.core.query_engine.custom import STR_OR_RESPONSE_TYPE

class LLMQueryEngine(CustomQueryEngine):
    """
    This is a generic language model which can answer any question not handled \
by any other tool provided. ALWAYS ask complete questions.
    """

    system_prompt = """\
You are an advanced language model tasked with providing precise and comprehensive answers \
to user questions. Your responses should be as brief as possible, while including all necessary \
details to fully answer the question. Aim for clarity and completeness in every answer. \
Use clear and direct language, avoiding unnecessary words. Here are the key instructions to follow:

1. Understand the user's question thoroughly.
2. Answer with the minimum number of words needed to fully and accurately address the question.
3. Include all relevant details that are essential for a complete response.
4. Avoid extra information that doesn't directly answer the question.
5. Maintain a polite and professional tone.
6. Always answer in the language of the given question.

Always answer in normal text mode and only use structured formats if they are part of your answer.
DO NOT prepend your answer with any label like 'assistant:' or 'answer:'.
The question will be in JSON format given by the USER below:
"""
    llm: LLM | None

    def __init__(self, llm: LLM):
        super().__init__()
        self.llm = llm

    def custom_query(self, query_str: str) -> STR_OR_RESPONSE_TYPE:
        """Run a custom query."""

        if isinstance(self.llm, LLM):
            chat_response = self.llm.chat(
                [
                    ChatMessage(role=MessageRole.SYSTEM, content=self.system_prompt),
                    ChatMessage(role=MessageRole.USER, content=query_str),
                ]
            )

            return str(chat_response.message.content)

        return ""

Which can then be used like this:

from llama_index.core import Settings
from llama_index.core.tools import QueryEngineTool, ToolMetadata

tools = [QueryEngineTool(
    query_engine=LLMQueryEngine(llm=Settings.llm),
    metadata=ToolMetadata(
        name="generic_llm",
        description=str(LLMQueryEngine.__doc__),
    ),
)]

Now it will always just use a normal LLM and pass the user question through. But beware: it WILL still do agent things, such as rephrasing the question. At least it seems to be a sane fallback when no tool matches. I don't know whether such a tool already exists somewhere in the depths of llamaindex, or whether it could be generated by some magic function; I needed it especially when not using openai.
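
A shorter variant of the same idea is to wrap the fallback in a FunctionTool. This is a minimal, untested sketch rather than code from this thread, and the ask_llm helper name is hypothetical:

from llama_index.core import Settings
from llama_index.core.tools import FunctionTool

def ask_llm(question: str) -> str:
    """Answer any question not handled by any other tool. ALWAYS ask complete questions."""
    # Plain completion call that bypasses the agent's reasoning loop.
    return str(Settings.llm.complete(question))

# The docstring above becomes the tool description the agent sees.
fallback_tool = FunctionTool.from_defaults(fn=ask_llm, name="generic_llm")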

Blackskyliner avatar Aug 09 '24 15:08 Blackskyliner

ReAct Agent supports reasoning without tools!
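
For example, a minimal, untested sketch (the model name is illustrative and an OpenAI API key is assumed to be configured):

from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI

# No tools given; the agent falls back on the LLM's own reasoning.
agent = ReActAgent.from_tools([], llm=OpenAI(model="gpt-4o-mini"), verbose=True)
print(agent.chat("What is 20 + (2 * 4)? Calculate step by step."))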

leos-code avatar Dec 05 '24 08:12 leos-code

ReAct Agent supports reasoning without tools!

It does, but with non-commercial models you may end up with hallucinated tools. This seems to happen because the tool-oriented system prompt is still used even when no tools are provided, and that prompt tells the model to use a tool.
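
One mitigation, sketched here as an untested assumption on top of the documented prompt-customization hooks (agent reuses the variable from the snippets above; the appended wording is illustrative), is to add an explicit escape hatch to the ReAct system prompt so the model knows it may answer without a tool:

from llama_index.core import PromptTemplate

# Inspect the ReAct system prompt currently in use (key name from the docs).
original = agent.get_prompts()["agent_worker:system_prompt"].get_template()

# Append an instruction so the model answers directly instead of inventing a tool.
patched = PromptTemplate(
    original
    + "\nIf none of the tools fit the question, do NOT invent a tool. "
    "Respond with 'Thought: I can answer without using any more tools.' "
    "followed by 'Answer:' and your answer."
)
agent.update_prompts({"agent_worker:system_prompt": patched})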

Blackskyliner avatar Dec 26 '24 01:12 Blackskyliner