crewAI
[BUG] Arguments validation failed: 2 validation errors for DelegateWorkToolSchema
Description
I encountered an error while trying to use the tool. This was the error: Arguments validation failed: 2 validation errors for DelegateWorkToolSchema
task
Input should be a valid string [type=string_type, input_value={'description': "Identify...ontent.", 'type': 'str'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.10/v/string_type
context
Input should be a valid string [type=string_type, input_value={'description': "As part ...ration.", 'type': 'str'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.10/v/string_type.
Tool Delegate work to coworker accepts these inputs: Tool Name: Delegate work to coworker
Tool Arguments: {'task': {'description': 'The task to delegate', 'type': 'str'}, 'context': {'description': 'The context for the task', 'type': 'str'}, 'coworker': {'description': 'The role/name of the coworker to delegate to', 'type': 'str'}}
Tool Description: Delegate a specific task to one of the following coworkers: FastAPI Backend Developer
The input to this tool should be the coworker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.
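For reference, the failure is reproducible with a plain pydantic v2 model of the advertised argument schema. This is a minimal sketch, not crewAI code; the payload strings are placeholders standing in for the truncated values in the log above:

```python
from pydantic import BaseModel, ValidationError

# Mirror of the advertised DelegateWorkToolSchema argument types.
class DelegateWorkToolSchema(BaseModel):
    task: str
    context: str
    coworker: str

# The manager LLM emits schema-shaped dicts instead of plain strings
# (placeholder text; the real values are truncated in the log above).
bad_args = {
    "task": {"description": "placeholder task text", "type": "str"},
    "context": {"description": "placeholder context text", "type": "str"},
    "coworker": "FastAPI Backend Developer",
}

try:
    DelegateWorkToolSchema(**bad_args)
except ValidationError as exc:
    n_errors = exc.error_count()
    print(n_errors)  # 2, matching "2 validation errors" in the report
```

The model is echoing the tool's argument schema back as the argument value, which is why pydantic rejects `task` and `context` while `coworker` (a plain string) passes.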
Steps to Reproduce
- main.py:
def run():
    """
    Run the crew.
    """
    inputs = {
        'project_path': '/my/path/1',
        'new_project_path': '/my/path/2',
        'base_structure_path': '/my/path/3'
    }
    result = SdCrew().crew().kickoff(inputs=inputs)
    print(result)
- crew.py:
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task, before_kickoff
from crewai_tools import FileReadTool, FileWriterTool, DirectoryReadTool
from langchain_openai import ChatOpenAI


@CrewBase
class SdCrew():
    """SdCrew crew"""
    tools = [DirectoryReadTool(), FileReadTool(), FileWriterTool()]
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def tech_lead(self) -> Agent:
        return Agent(
            config=self.agents_config['tech_lead'],
            tools=[DirectoryReadTool(), FileReadTool()],
            verbose=True
        )

    @agent
    def back_end_developer(self) -> Agent:
        return Agent(
            config=self.agents_config['back_end_developer'],
            tools=self.tools,
            allow_code_execution=True,
            verbose=True
        )

    @agent
    def front_end_developer(self) -> Agent:
        return Agent(
            config=self.agents_config['front_end_developer'],
            tools=self.tools,
            allow_code_execution=True,
            verbose=True
        )

    @agent
    def qa_engineer(self) -> Agent:
        return Agent(
            config=self.agents_config['qa_engineer'],
            tools=self.tools,
            allow_code_execution=True,
            verbose=True
        )

    @agent
    def database_specialist(self) -> Agent:
        return Agent(
            config=self.agents_config['database_specialist'],
            allow_code_execution=True,
            verbose=True
        )

    @task
    def architecture_planning(self) -> Task:
        return Task(
            config=self.tasks_config['architecture_planning']
        )

    @task
    def api_migration(self) -> Task:
        return Task(
            config=self.tasks_config['api_migration']
        )

    @task
    def business_logic_migration(self) -> Task:
        return Task(
            config=self.tasks_config['business_logic_migration']
        )

    @task
    def component_migration(self) -> Task:
        return Task(
            config=self.tasks_config['component_migration']
        )

    @task
    def state_management_migration(self) -> Task:
        return Task(
            config=self.tasks_config['state_management_migration']
        )

    @task
    def testing_automation(self) -> Task:
        return Task(
            config=self.tasks_config['testing_automation']
        )

    @task
    def postgresql_queries_migration(self) -> Task:
        return Task(
            config=self.tasks_config['postgresql_queries_migration']
        )

    @task
    def system_validation(self) -> Task:
        return Task(
            config=self.tasks_config['system_validation']
        )

    @before_kickoff
    def before_kickoff_function(self, inputs):
        print(f"Before kickoff: Project to be migrated: {inputs.get('project_path')}; New project directory: {inputs.get('new_project_path')}; Base structure directory: {inputs.get('base_structure_path')}")
        return inputs

    @crew
    def crew(self) -> Crew:
        """Creates the SdCrew crew"""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.hierarchical,
            manager_llm=ChatOpenAI(temperature=0.1, model_name="gpt-4o-mini"),
            respect_context_window=True,
            memory=True,
            planning=True,
            verbose=True
        )
- Run:
crewai run
I'm a bit reluctant to share agents.yaml and tasks.yaml. If you really need the files, let me know.
Expected behavior
No delegation error between agents.
Screenshots/Code snippets
Operating System
Other (specify in additional context)
Python Version
3.10
crewAI Version
0.86.0
crewAI Tools Version
0.17.0
Virtual Environment
Conda
Evidence
See the error output printed above.
Possible Solution
None.
Additional context
I'm using Linux Mint 21.2 Victoria.
Same here while using the example code from https://docs.crewai.com/how-to/custom-manager-agent
Error:
I encountered an error while trying to use the tool. This was the error: Arguments validation failed: 2 validation errors for DelegateWorkToolSchema
task
Input should be a valid string [type=string_type, input_value={'description': 'Generate... notes.', 'type': 'str'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.10/v/string_type
context
Input should be a valid string [type=string_type, input_value={'description': 'The task...xplore.', 'type': 'str'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.10/v/string_type.
Tool Delegate work to coworker accepts these inputs: Tool Name: Delegate work to coworker
Tool Arguments: {'task': {'description': 'The task to delegate', 'type': 'str'}, 'context': {'description': 'The context for the task', 'type': 'str'}, 'coworker': {'description': 'The role/name of the coworker to delegate to', 'type': 'str'}}
Tool Description: Delegate a specific task to one of the following coworkers: Researcher, Senior Writer
The input to this tool should be the coworker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them..
Moving on then. I MUST either use a tool (use one at time) OR give my best final answer not both at the same time. To Use the following format:
Used code:
import os
from crewai import Agent, Task, Crew, Process
from dotenv import load_dotenv

load_dotenv()

# Define your agents
researcher = Agent(
    role="Researcher",
    goal="Conduct thorough research and analysis on AI and AI agents",
    backstory="You're an expert researcher, specialized in technology, software engineering, AI, and startups. You work as a freelancer and are currently researching for a new client.",
    allow_delegation=False,
)
writer = Agent(
    role="Senior Writer",
    goal="Create compelling content about AI and AI agents",
    backstory="You're a senior writer, specialized in technology, software engineering, AI, and startups. You work as a freelancer and are currently writing content for a new client.",
    allow_delegation=False,
)

# Define your task
task = Task(
    description="Generate a list of 5 interesting ideas for an article, then write one captivating paragraph for each idea that showcases the potential of a full article on this topic. Return the list of ideas with their paragraphs and your notes.",
    expected_output="5 bullet points, each with a paragraph and accompanying notes.",
)

# Define the manager agent
manager = Agent(
    role="Project Manager",
    goal="Efficiently manage the crew and ensure high-quality task completion",
    backstory="You're an experienced project manager, skilled in overseeing complex projects and guiding teams to success. Your role is to coordinate the efforts of the crew members, ensuring that each task is completed on time and to the highest standard.",
    allow_delegation=True,
)

# Instantiate your crew with a custom manager
crew = Crew(
    agents=[researcher, writer],
    tasks=[task],
    manager_agent=manager,
    process=Process.hierarchical,
    verbose=True
)

# Start the crew's work
result = crew.kickoff()
Same here. Any idea how to fix that?
Should I downgrade so that I don't get this error? Which version is stable?
Same error on macOS using Flow. I'm unable to use the hierarchical process at all, whether I set manager_llm or manager_agent.
I encountered an error while trying to use the tool. This was the error: Arguments validation failed: 2 validation errors for DelegateWorkToolSchema
task
Input should be a valid string [type=string_type, input_value={'description': "Write a ...ughout.", 'type': 'str'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.10/v/string_type
context
Input should be a valid string [type=string_type, input_value={'description': 'The chap...pments.', 'type': 'str'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.10/v/string_type.
Tool Delegate work to coworker accepts these inputs: Tool Name: Delegate work to coworker
Tool Arguments: {'task': {'description': 'The task to delegate', 'type': 'str'}, 'context': {'description': 'The context for the task', 'type': 'str'}, 'coworker': {'description': 'The role/name of the coworker to delegate to', 'type': 'str'}}
Tool Description: Delegate a specific task to one of the following coworkers:
Text Writer
The input to this tool should be the coworker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.
Using the latest crewai==0.95.0.
Having the same problem here.
The problem appears when I use gpt-4o-mini, but it goes away when I change the OpenAI model to gpt-4o.
Input
OPENAI_MODEL_NAME=gpt-4o-mini
Output
I encountered an error while trying to use the tool.
This was the error: Arguments validation failed: 1 validation error for DelegateWorkToolSchema task
Input should be a valid string [type=string_type, input_value={'description': 'Define a... meals.', 'type': 'str'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.10/v/string_type.
Tool Delegate work to coworker accepts these inputs: Tool Name: Delegate work to coworker
Tool Arguments: {'task': {'description': 'The task to delegate', 'type': 'str'}, 'context': {'description': 'The context for the task', 'type': 'str'}, 'coworker': {'description': 'The role/name of the coworker to delegate to', 'type': 'str'}}
Having the same error. Anyone with a fix or an update on progress?
Same problem here. Still no news on a fix?
Same error, waiting for a solution.
Same error, waiting for a solution.
Same error here too. I can't tell whether it's just a warning and the process continues fine once the AI retries, or whether the whole exchange between the agents stops.
I get this same problem with using GuardRails:
pydantic_core._pydantic_core.ValidationError: 1 validation error for GuardrailResult
error
Input should be a valid string [type=string_type, input_value={}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.10/v/string_type
I've tried switching from gpt-4o-mini to gpt-4o and it hasn't made a difference for me. For now, I've had to remove the guardrail. It would be good to have a way to debug this to determine what inputs are being sent. If I figure out how to dive further into it then I'll report more details.
Considering it's been weeks without a response, I'd suggest moving on to another project, as this one is likely abandoned.
The default LLM is gpt-4o-mini; change it to gpt-4o (I can fix the error when I change the LLM model to gpt-4o).
Same error. I am sure this project will be terminated soon.
Alright, here's another shot at actually getting some interest in this error message. Now open for three months.
Here is my workaround when using gpt-4o-mini (I have not seen any errors since):
<note>
When delegating tasks, ensure the following fields are provided:
- task: A string describing the specific task to be delegated
- context: A string providing the relevant context for the task
- coworker: A string specifying the role/name of the team member to delegate to
Example:
{
    "task": "Extract job details from the LinkedIn",
    "context": "The job ad is located at the following URL xxxxx",
    "coworker": "Data Extraction Specialist"
}
</note>
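One way to wire such a note in (a hypothetical sketch; the helper, note text, and task string below are mine, not crewAI API) is to append it to the task description before building the Task, so the manager LLM sees the reminder on every delegation attempt:

```python
# Hypothetical sketch: append a delegation-format reminder to a task
# description. The note and the example description are illustrative.
DELEGATION_NOTE = (
    "When delegating, pass plain strings for 'task', 'context' and "
    "'coworker' -- not nested objects."
)

def with_delegation_note(description: str) -> str:
    """Return the task description with the reminder appended."""
    return f"{description}\n\n{DELEGATION_NOTE}"

patched = with_delegation_note("Plan the migration and delegate subtasks.")
```

The patched string can then be passed as the `description` of the Task that the manager runs.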
This issue seems to be a limitation of the LLM itself, which cannot handle the input schema. Using a better LLM makes the error appear less often, of course, but does anyone know how to configure the prompt manually when calling a tool?
I have the same problem using 4o-mini.
So after some research I found that the problem lies in the LLM not understanding your tools well. I addressed it by better defining my docstring and input schema. Here are some notes on how to handle it:
- Your docstring should state exactly what your function does, the inputs it requires, and the output it returns
- You need to describe each input property specifically: for example, whether it is (1) case sensitive, (2) optional or required, and (3) what type it has
Here is the code that got the tool working well (note that the input schema definition is very important!):
import asyncio
from typing import Type

from pydantic import BaseModel, Field
from crewai.tools import BaseTool  # location may vary by crewai version

# QueryDataSources comes from my own codebase (import not shown here).


class RAGPipelineToolSchema(BaseModel):
    """Input schema for RAGPipelineTool."""

    query: str = Field(
        ...,
        description=(
            "The input query string provided by the user. The name is case sensitive. "
            "Please provide a value of type string. This parameter is required."
        ),
    )


class RAGPipelineTool(BaseTool):
    name: str = "RAG pipeline tool"
    description: str = (
        "This tool implements a Retrieval-Augmented Generation (RAG) pipeline which "
        "queries available data sources to provide accurate answers to user queries. "
    )
    args_schema: Type[BaseModel] = RAGPipelineToolSchema

    @classmethod
    def setup_tools(cls, community_id: str, enable_answer_skipping: bool):
        """
        Setup the tool with the necessary community identifier and the flag
        to enable answer skipping.
        """
        cls.community_id = community_id
        cls.enable_answer_skipping = enable_answer_skipping
        return cls

    def _run(self, query: str) -> str:
        """
        Execute the RAG pipeline by querying the available data sources.

        Parameters
        ------------
        query : str
            The input query string provided by the user.

        Returns
        ----------
        response : str
            The response obtained after querying the data sources.
        """
        query_data_sources = QueryDataSources(
            community_id=self.community_id,
            enable_answer_skipping=self.enable_answer_skipping,
        )
        response = asyncio.run(query_data_sources.query(query))
        return response
@amindadgar love it, tks!
Hello! 🤚 The reported issue concerns the agent's delegation (the boolean allow_delegation flag) triggering a tool within the CrewAI library and failing. So the solution is to monkey patch the library? It's still a bug. All my tools work correctly because, as you say, they are properly described. But the issue is triggered when trying to use the hierarchical process type, and that is what fails.
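For anyone who needs a stopgap until this is fixed upstream, here is a minimal sketch of what such a patch could look like. This is my own workaround, not crewAI code, and assumes pydantic v2: a before-validator that unwraps the schema-shaped dict the model sometimes emits into the plain string the field expects.

```python
from pydantic import BaseModel, field_validator

# Lenient stand-in for the delegation schema (name is hypothetical).
class LenientDelegateWorkToolSchema(BaseModel):
    task: str
    context: str
    coworker: str

    @field_validator("task", "context", "coworker", mode="before")
    @classmethod
    def _unwrap_schema_dict(cls, value):
        # Weaker models sometimes echo the argument schema itself,
        # e.g. {'description': '<actual text>', 'type': 'str'};
        # unwrap it to the plain string the field expects.
        if isinstance(value, dict) and "description" in value:
            return value["description"]
        return value

# A dict-shaped 'task' now validates instead of raising.
args = LenientDelegateWorkToolSchema(
    task={"description": "Summarize chapter one", "type": "str"},
    context="The chapter covers recent AI developments.",
    coworker="Text Writer",
)
print(args.task)  # Summarize chapter one
```

How to swap this schema into the library's delegation tool depends on the crewAI version, so I'm only showing the validation side here.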