crewAI
[BUG] Issue in LiteLLM: call failed | litellm.BadRequestError | VertexAIException
Description
I successfully developed a Data Loss Protection tool with crewAI; crewai run and crewai test complete successfully, but during training, when I provide the human feedback "looks good", I get the following error:
looks good
Human feedback: looks good
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
ERROR:root:LiteLLM call failed: litellm.BadRequestError: VertexAIException BadRequestError - {
"error": {
"code": 400,
"message": "* GenerateContentRequest.contents: contents is not specified\n",
"status": "INVALID_ARGUMENT"
}
}
Error during LLM call to classify human feedback: litellm.BadRequestError: VertexAIException BadRequestError - {
"error": {
"code": 400,
"message": "* GenerateContentRequest.contents: contents is not specified\n",
"status": "INVALID_ARGUMENT"
}
}
. Retrying... (1/3)
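For reference, LiteLLM's verbose mode can be enabled before running crewai train to inspect the exact request that fails (a minimal sketch; litellm.set_verbose is the flag the log message above refers to):

import litellm

# Print the raw POST requests/responses, including the "contents"
# payload sent to the Gemini API, to see where it becomes empty.
litellm.set_verbose = True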
Steps to Reproduce
- crewai init
- select gemini-1.5-flash
- Paste API key
- crewai run
- crewai test -n 5
- crewai train -n 10 (error here)
Expected behavior
It was expected to successfully run all 10 iterations during training.
Screenshots/Code snippets
Operating System
Ubuntu 20.04
Python Version
3.10
crewAI Version
0.86.0
crewAI Tools Version
0.17.0
Virtual Environment
Conda
Evidence
Possible Solution
The issue is probably in crew_agent_executor.py:
self.messages.append(self._format_msg(f"Feedback: {human_feedback}"))
and/or llm.py
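A hedged guess at the root cause: during training, the feedback-classification prompt may reach LiteLLM as a system message only, which LiteLLM maps to system_instruction for gemini-1.5 models, leaving contents empty (see the POST requests in the comments below). A minimal sketch of the idea; the helper and its names are illustrative, not the actual crewAI internals:

def classify_feedback(llm, human_feedback: str) -> bool:
    # Hypothetical helper: send the classification prompt with role
    # "user" so Gemini 1.5 receives a non-empty "contents" array,
    # instead of a system-only message mapped to "system_instruction".
    prompt = (
        "Determine if the following feedback indicates that the user is "
        "satisfied or if further changes are needed. Respond with 'True' "
        "if further changes are needed, or 'False' if the user is satisfied.\n\n"
        f'Feedback: "{human_feedback}"'
    )
    response = llm.call(messages=[{"role": "user", "content": prompt}])
    return str(response).strip().lower() == "true"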
Additional context
Related errors appear with other providers:
- When using the Amazon Bedrock Claude 3 Opus model, it raises litellm.exceptions.RateLimitError: BedrockException - {"message":"Too many requests, please wait before trying again."}
- When using Claude Sonnet: ERROR:root:LiteLLM call failed: litellm.BadRequestError: AnthropicException - Invalid first message=[]. Should always start with 'role'='user' for Anthropic. System prompt is sent separately for Anthropic. Set 'litellm.modify_params = True' or 'litellm_settings:modify_params = True' on proxy, to insert a placeholder user message - '.' as the first message. Received Messages=[] (see the sketch after this list)
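For the Anthropic case, the error message itself points at the workaround: set the flag below before kicking off the crew so LiteLLM inserts a placeholder user message (a sketch based on the log text above):

import litellm

# Per the AnthropicException above: insert a placeholder user message
# '.' when the message list would otherwise not start with role "user".
litellm.modify_params = True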
I got a similar error when using Gemini for human feedback.
Hi, I have the same problem. If you debug the code, you can see the POST requests sent by LiteLLM:
- using gemini-pro, it works because the request is:
POST Request Sent from LiteLLM:
curl -X POST \
https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=yourAPIkey \
-H 'Content-Type: *****' \
-d '{'contents': [{'role': 'user', 'parts': [{'text': 'Determine if the following feedback indicates that the user is satisfied or if further changes are needed. Respond with \'True\' if further changes are needed, or \'False\' if the user is satisfied. **Important** Do not include any additional commentary outside of your \'True\' or \'False\' response.\n\nFeedback: "looks good"'}]}], 'generationConfig': {'temperature': 0.7, 'stop_sequences': ['\nObservation:'], 'max_output_tokens': 4096}}'
- using gemini-1.5-flash, the contents field is an empty list:
POST Request Sent from LiteLLM:
curl -X POST \
https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=yourAPIkey \
-H 'Content-Type: *****' \
-d '{'contents': [], 'system_instruction': {'parts': [{'text': 'Determine if the following feedback indicates that the user is satisfied or if further changes are needed. Respond with \'True\' if further changes are needed, or \'False\' if the user is satisfied. **Important** Do not include any additional commentary outside of your \'True\' or \'False\' response.\n\nFeedback: "looks good"'}]}, 'generationConfig': {'temperature': 0.7, 'stop_sequences': ['\nObservation:'], 'max_output_tokens': 4096}}'
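This suggests the failure can be reproduced outside crewAI whenever the prompt arrives as a system message only: gemini-1.5 models get it moved into system_instruction, leaving contents empty. A minimal sketch of this hypothesis (assumes GEMINI_API_KEY is set in the environment):

import litellm

# A system-only message list: for gemini-1.5 models LiteLLM moves it
# into "system_instruction", sending 'contents': [] and triggering the
# INVALID_ARGUMENT error shown in this thread.
litellm.completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "system", "content": 'Feedback: "looks good"'}],
)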
This issue was closed because it has been stalled for 5 days with no activity.
I may have found the solution:
import os
from crewai import Agent, LLM

# Configure the Gemini model explicitly through crewAI's LLM class,
# using the "gemini/" provider prefix so LiteLLM routes it correctly.
my_llm = LLM(
    api_key=os.getenv("GEMINI_API_KEY"),
    model="gemini/gemini-1.5-flash",
    temperature=0.8,
    max_tokens=512,
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"]
)
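If this works for you, the explicitly configured LLM can then be passed to the agent so the training feedback call also goes through it (a usage sketch continuing the snippet above; the role/goal/backstory values are placeholders):

my_agent = Agent(
    role="Data Loss Protection Analyst",
    goal="Classify documents for sensitive data",
    backstory="An analyst focused on DLP policy enforcement.",
    llm=my_llm,  # the explicitly configured Gemini LLM from above
)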