
result_as_answer

Open acamenhas opened this issue 1 year ago • 2 comments

Hi,

I tested the new "result_as_answer" functionality and it worked fine! https://github.com/crewAIInc/crewAI/blob/7b53457ef36e14f42af20dcd3abb1ba76883b502/tests/agent_test.py#L728

In the docs it's written: "...This setup is appropriate if the intention is for the tool to provide a direct answer that does not require further processing by the agent."

But when I print crew.usage_metrics, the output seems to suggest that the agent still interacted with the LLM (successful_requests > 0): {'total_tokens': 942, 'prompt_tokens': 581, 'completion_tokens': 361, 'successful_requests': 2}

Take the multiply tool as an example: if I set result_as_answer=True, I don't want to lose speed and money on an extra LLM interaction; I just want the tool output returned.
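
For reference, here is roughly what I mean by a multiply tool (a sketch based on my reading of the docs; the exact BaseTool import path and constructor arguments may differ across versions):

```python
from crewai import Agent
from crewai_tools import BaseTool

class MultiplyTool(BaseTool):
    name: str = "Multiply"
    description: str = "Multiply two integers and return the product."

    def _run(self, a: int, b: int) -> str:
        # Plain arithmetic; nothing here needs an LLM.
        return str(a * b)

# result_as_answer=True is set on the tool instance, as shown in the docs.
# I expected this to make the raw tool output the final answer, with no extra LLM call.
calculator = Agent(
    role="Calculator",
    goal="Multiply numbers",
    backstory="Does arithmetic with the multiply tool.",
    tools=[MultiplyTool(result_as_answer=True)],
)
```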

Is this a bug, or is there other logic behind it?

Thank you and keep up the good work!!!

acamenhas avatar Jul 14 '24 09:07 acamenhas

In the source code, the conditional check for the "result_as_answer" property appears after the agent has already executed the task... Shouldn't it be the other way around? If this property is active, the agent should immediately return the tool result without doing any AI processing:

```python
for tool_result in self.tools_results:  # type: ignore # Item "None" of "list[Any] | None" has no attribute "__iter__" (not iterable)
    if tool_result.get("result_as_answer", False):
        result = tool_result["result"]
```
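
Roughly what I would expect instead (a hypothetical sketch, not the actual crewAI source; the helper name is made up): check the flag right after the tool runs and short-circuit before any further LLM call.

```python
def short_circuit_if_forced(tools_results):
    # If any executed tool was marked result_as_answer, return its raw output
    # immediately and skip the remaining agent/LLM loop.
    for tool_result in tools_results or []:
        if tool_result.get("result_as_answer", False):
            return tool_result["result"]
    return None
```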

acamenhas avatar Jul 22 '24 08:07 acamenhas

Hey @acamenhas, I believe you are correct, as I am facing a similar issue. Despite using result_as_answer=True in my custom tool, the agent still interacts with the LLM according to crew.usage_metrics. I also discovered that the agent does not save the result from my custom tool (with result_as_answer=True) even though I set output_file on the task.
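
My setup looks roughly like this (names simplified for the example; my real tool does more than return a fixed string):

```python
from crewai import Agent, Task
from crewai_tools import BaseTool

class ReportTool(BaseTool):
    name: str = "Report"
    description: str = "Return a fixed report string."

    def _run(self) -> str:
        return "report contents"

writer = Agent(
    role="Reporter",
    goal="Fetch the report with the custom tool",
    backstory="Relies entirely on the custom tool for its output.",
    tools=[ReportTool(result_as_answer=True)],
)

report_task = Task(
    description="Run the custom tool and return its raw output.",
    expected_output="The unmodified tool output.",
    agent=writer,
    output_file="report.txt",  # I expected the tool result to be written here
)
```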

Could you let me know if you found a workaround for the unnecessary token usage? 😄

mattheus-jellyfish avatar Aug 12 '24 18:08 mattheus-jellyfish

I am facing the same issue. It's even worse: the agent is now also changing the output... I will try the output_json attribute, which I "hope" will raise a Pydantic error when the schema doesn't match, resulting in another LLM call. But to be honest, this should just be fixed; it would save a lot of useless LLM calls.
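
What I tried looks roughly like this (the model and field names are just illustrative):

```python
from pydantic import BaseModel
from crewai import Task

class ToolOutput(BaseModel):
    value: str

# The hope: if the LLM mangles the tool's JSON, schema validation fails loudly
# (or triggers a retry) instead of silently passing a changed result through.
checked_task = Task(
    description="Run the tool and return its result unchanged.",
    expected_output="JSON matching the ToolOutput schema.",
    output_json=ToolOutput,
)
```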

update: so, that approach doesn't work, as it breaks the loop. A summary:

  • LLMs (except for GPT, I think) don't produce predictable JSON output
  • the CrewAgentExecutor loop doesn't stop when a tool has the have_forced_answer=True attribute set, meaning the JSON returned by a tool gets passed through an LLM again, producing an unreliable result

I am not sure how to move on from here, except moving away from open-source models and using a paid GPT model that lets us force JSON output. If someone else has a workaround for this, let me know.

I also checked the code, but I don't see a quick fix, as my knowledge of Python and the CrewAI lib isn't that great; otherwise I would have been more than happy to provide a PR...

cblokland90 avatar Oct 11 '24 11:10 cblokland90