
[BUG] Issue in LiteLLM: call failed | litellm.BadRequestError | VertexAIException

Open · RubensZimbres opened this issue 11 months ago

Description

I successfully developed a Data Loss Protection tool with crewAI. The run and test commands complete successfully, but during training, when I provide the human feedback "looks good", I get the following error:

looks good
Human feedback:  looks good


LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

ERROR:root:LiteLLM call failed: litellm.BadRequestError: VertexAIException BadRequestError - {
  "error": {
    "code": 400,
    "message": "* GenerateContentRequest.contents: contents is not specified\n",
    "status": "INVALID_ARGUMENT"
  }
}

 Error during LLM call to classify human feedback: litellm.BadRequestError: VertexAIException BadRequestError - {
  "error": {
    "code": 400,
    "message": "* GenerateContentRequest.contents: contents is not specified\n",
    "status": "INVALID_ARGUMENT"
  }
}
. Retrying... (1/3)
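
As the info line above suggests, the raw request can be inspected by enabling LiteLLM's verbose mode before running the training command (a minimal sketch):

import litellm

# Print LiteLLM's debug output, including the exact POST payload
# sent to the Gemini endpoint.
litellm.set_verbose = True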

Steps to Reproduce

  1. crewai init
  2. select gemini-1.5-flash
  3. Paste API key
  4. crewai run
  5. crewai test -n 5
  6. crewai train -n 10 (error here)

Expected behavior

It was expected to successfully run all 10 iterations during training.

Screenshots/Code snippets

Screenshot from 2024-12-23 19-14-10

Operating System

Ubuntu 20.04

Python Version

3.10

crewAI Version

0.86.0

crewAI Tools Version

0.17.0

Virtual Environment

Conda

Evidence

Screenshot from 2024-12-23 19-15-30

Screenshot from 2024-12-23 19-19-20

Screenshot from 2024-12-23 19-19-57

Possible Solution

The issue is probably in crew_agent_executor.py, where the human feedback is appended to the message list:

self.messages.append(self._format_msg(f"Feedback: {human_feedback}"))

and/or in llm.py.
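
A plausible stand-alone reproduction, assuming the root cause is that LiteLLM maps a lone "system" message into Gemini 1.5's system_instruction field and leaves contents empty; this matches the request dumps posted further down in this thread. Requires GEMINI_API_KEY and the versions pinned above:

import litellm

prompt = (
    "Determine if the following feedback indicates that the user is "
    'satisfied or if further changes are needed.\n\nFeedback: "looks good"'
)

# Expected to fail with "contents is not specified": the lone system
# message is routed into system_instruction, so contents stays empty.
try:
    litellm.completion(
        model="gemini/gemini-1.5-flash",
        messages=[{"role": "system", "content": prompt}],
    )
except Exception as e:
    print("system-only message failed:", e)

# Expected to succeed: a user-role message populates contents.
resp = litellm.completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)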

Additional context

No other issues.

RubensZimbres avatar Dec 23 '24 22:12 RubensZimbres

When using the Amazon Bedrock Claude 3 Opus model, it raises a RateLimitError: litellm.exceptions.RateLimitError: litellm.RateLimitError: BedrockException - {"message":"Too many requests, please wait before trying again."}

Yeswanth-gif avatar Dec 25 '24 04:12 Yeswanth-gif

When using Claude Sonnet: ERROR:root:LiteLLM call failed: litellm.BadRequestError: AnthropicException - Invalid first message=[]. Should always start with 'role'='user' for Anthropic. System prompt is sent separately for Anthropic. set 'litellm.modify_params = True' or 'litellm_settings:modify_params = True' on proxy, to insert a placeholder user message - '.' as the first message, Received Messages=[]
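
The error text itself points at a possible workaround; a minimal sketch, untested against the crewAI training path:

import litellm

# Per the Anthropic error message: have LiteLLM insert a placeholder
# user message ('.') when the message list would otherwise not start
# with a user-role message.
litellm.modify_params = True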

Sourav-Goyal19 avatar Jan 11 '25 07:01 Sourav-Goyal19

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Feb 10 '25 12:02 github-actions[bot]

I got a similar error when using Gemini for human feedback.

[Image attached]

huypl53 avatar Feb 16 '25 02:02 huypl53

Hi, I have the same problem. If you debug the code, you can see the POST requests sent by LiteLLM (a direct check against the endpoint follows after the two dumps):

  • using gemini-pro it works because the request is:
POST Request Sent from LiteLLM:
curl -X POST \
https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=yourAPIkey \
-H 'Content-Type: *****' \
-d '{'contents': [{'role': 'user', 'parts': [{'text': 'Determine if the following feedback indicates that the user is satisfied or if further changes are needed. Respond with \'True\' if further changes are needed, or \'False\' if the user is satisfied. **Important** Do not include any additional commentary outside of your \'True\' or \'False\' response.\n\nFeedback: "looks good"'}]}], 'generationConfig': {'temperature': 0.7, 'stop_sequences': ['\nObservation:'], 'max_output_tokens': 4096}}'
  • using gemini-1.5-flash it fails because 'contents' is an empty list:
POST Request Sent from LiteLLM:
curl -X POST \
https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=yourAPIkey \
-H 'Content-Type: *****' \
-d '{'contents': [], 'system_instruction': {'parts': [{'text': 'Determine if the following feedback indicates that the user is satisfied or if further changes are needed. Respond with \'True\' if further changes are needed, or \'False\' if the user is satisfied. **Important** Do not include any additional commentary outside of your \'True\' or \'False\' response.\n\nFeedback: "looks good"'}]}, 'generationConfig': {'temperature': 0.7, 'stop_sequences': ['\nObservation:'], 'max_output_tokens': 4096}}'
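
To confirm the diagnosis, the same endpoint can be called directly with a non-empty 'contents'; a sketch assuming the requests library and a GEMINI_API_KEY environment variable, with the URL and payload shape copied from the dumps above:

import os
import requests

url = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "gemini-1.5-flash:generateContent?key=" + os.environ["GEMINI_API_KEY"]
)
# A request whose contents list is non-empty succeeds, unlike the
# empty-contents payload LiteLLM generated above.
payload = {
    "contents": [
        {"role": "user", "parts": [{"text": 'Feedback: "looks good"'}]}
    ]
}
print(requests.post(url, json=payload).json())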

nickprock avatar Feb 19 '25 16:02 nickprock

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Mar 22 '25 12:03 github-actions[bot]

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] avatar Mar 27 '25 12:03 github-actions[bot]

(Quoting huypl53's comment above, "I got a similar error when using Gemini for human feedback", with its attached image.) :heart:

Maybe I found the solution:

import os

from crewai import Agent, LLM

# Configure the Gemini model explicitly, including the "gemini/" provider
# prefix, rather than relying on the model name the CLI wizard stores.
my_llm = LLM(
    api_key=os.getenv("GEMINI_API_KEY"),
    model="gemini/gemini-1.5-flash",
    temperature=0.8,
    max_tokens=512,
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
)
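
For completeness, a minimal sketch of passing this LLM to an agent; the role, goal, and backstory values are placeholders:

agent = Agent(
    role="DLP analyst",                  # placeholder
    goal="Classify outgoing documents",  # placeholder
    backstory="Example only",            # placeholder
    llm=my_llm,  # the explicitly configured Gemini LLM from above
)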

RubensZimbres avatar Jun 11 '25 20:06 RubensZimbres