
[BUG]

Open luisandino opened this issue 1 year ago • 3 comments

Description

I am trying to connect Bedrock to crewAI. Even though the model ID is correct and the agents are configured correctly, whenever an agent calls Bedrock it gets a 400 Bad Request error caused by something LiteLLM does internally.

Steps to Reproduce

  1. Set a custom LLM class for the agents
  2. Provide the model_id
  3. Set up the agents, manager agent, tasks, and crew
  4. Use the crew.kickoff() method and wait for the output (a rough reproduction sketch follows this list)
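
For reference, a minimal reproduction sketch, assuming a single agent and task (the agent, task, and question below are placeholder examples, not taken from the original report):

from crewai import Agent, Task, Crew, Process, LLM

# Any Bedrock Llama model ID shows the same behaviour; this one is an example.
bedrock_llm = LLM(
    model="bedrock/meta.llama3-1-70b-instruct-v1:0",
    temperature=0.1,
)

researcher = Agent(
    role="Researcher",
    goal="Answer the user's question",
    backstory="An analyst that answers questions concisely.",
    llm=bedrock_llm,
)

answer_task = Task(
    description="Answer the question: {question}",
    expected_output="A short, direct answer to the question.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[answer_task],
    process=Process.sequential,
    max_rpm=10,
)

# Step 4: kick off and wait for the output; with Bedrock, this call fails with the 400 error.
result = crew.kickoff(inputs={"question": "What is Amazon Bedrock?"})
print(result)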

Expected behavior

The user receives an output that answers the provided question.

Actual behaviour: the agent receives a 400 error from LiteLLM, the crew hits the max_rpm threshold, waits a minute, and keeps retrying in the same cycle unless I interrupt it.

Screenshots/Code snippets

(screenshot attached in the original issue, not reproduced here)

Operating System

Other (specify in additional context)

Python Version

3.10

crewAI Version

0.80.0

crewAI Tools Version

0.14.0

Virtual Environment

Venv

Evidence

I am using the following snippet to instantiate the crewAI LLM class:

LLM(
    model=chat_provider + "/" + chat_model,
    temperature=chat_temperature,
)

where chat_provider = "bedrock" and chat_model = "meta.llama3-1-70b-instruct-v1:0". Following the YAML configuration files, the resulting string would be something like:

llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0

Possible Solution

Adapt the code that calls the boto3 client internally so that it passes the user message received as the question from the crew.kickoff() method.
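
As a rough illustration of what that could look like (a sketch only, not the actual crewAI or LiteLLM internals; ensure_trailing_user_message is a hypothetical helper name): before the request is handed to the Bedrock client, make sure the message list ends with a user turn, since the Bedrock Llama models reject conversations whose last turn is not a user message.

def ensure_trailing_user_message(messages: list[dict]) -> list[dict]:
    # Hypothetical helper: Bedrock Llama requires the final turn to be a user message.
    if not messages:
        return [{"role": "user", "content": ""}]
    if messages[-1].get("role") != "user":
        # Re-send the last message's content as an explicit user turn.
        messages = messages + [
            {"role": "user", "content": messages[-1].get("content", "")}
        ]
    return messages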

Additional context

Linux pop-os 6.9.3-76060903-generic #202405300957~1726766035~22.04~4092a0e SMP PREEMPT_DYNAMIC Thu S x86_64 x86_64 x86_64 GNU/Linux

luisandino avatar Nov 17 '24 03:11 luisandino

Able to reproduce. Python 3.12

crewai                                   0.83.0
crewai-tools                             0.14.0

LLM

LLM(
    model='bedrock/meta.llama3-70b-instruct-v1:0',
    temperature=0.1,
    region_name='us-west-2'
  )

Error:

ERROR:root:LiteLLM call failed: litellm.BadRequestError: BedrockException - {"message":"The model that you are using requires the last turn in the conversation to be a user message. Add a user message to the conversation and try again."}
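
The error indicates that the payload LiteLLM ends up sending has a system or assistant message as its final turn, which the Bedrock Llama models reject. Illustrative message lists (not the actual payload crewAI builds):

# Rejected by Bedrock Llama: the conversation ends with a non-user turn.
bad_messages = [
    {"role": "system", "content": "You are a helpful research agent."},
    {"role": "assistant", "content": "Understood."},
]

# Accepted: the conversation ends with a user turn.
good_messages = [
    {"role": "system", "content": "You are a helpful research agent."},
    {"role": "user", "content": "What is Amazon Bedrock?"},
]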

HaithamMaya avatar Nov 27 '24 18:11 HaithamMaya

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Dec 28 '24 12:12 github-actions[bot]

This issue still occurs as of crewAI version 0.86.0. Bedrock Llama does not work well at all. @joaomdmoura, is there a workaround for this? So far I've connected a tool directly using boto3 (a rough sketch of that approach is included below), but the problem remains for any agent whose LLM uses Bedrock Llama. Please advise.
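
For anyone who wants the same stop-gap, the direct boto3 call looks roughly like this (a sketch using the bedrock-runtime Converse API; the region, model ID, and function name are examples, and it would still need to be wrapped as a crewAI tool):

import boto3

# Assumes AWS credentials are already configured for the environment.
bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

def ask_bedrock_llama(question: str) -> str:
    # Call a Bedrock Llama model directly, bypassing LiteLLM entirely.
    response = bedrock.converse(
        modelId="meta.llama3-1-70b-instruct-v1:0",
        messages=[{"role": "user", "content": [{"text": question}]}],
        inferenceConfig={"temperature": 0.1},
    )
    return response["output"]["message"]["content"][0]["text"]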

luisandino avatar Dec 29 '24 02:12 luisandino

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Jan 28 '25 12:01 github-actions[bot]

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] avatar Feb 03 '25 12:02 github-actions[bot]

It still doesn't work. I'm using:

crewai                                   0.100.1
crewai-tools                             0.33.0

dragutin-oreski avatar Feb 10 '25 12:02 dragutin-oreski

@dragutin-oreski As a temporary workaround, I am using the following script to fix the conversation-turn handling with Llama 3.3:

from crewai import LLM as BaseLLM

class PatchedBedrockLLM(BaseLLM):
    # Flattens whatever prompt crewAI passes into a single user message,
    # so the last (and only) turn sent to Bedrock is always a user turn.
    def call(self, prompt, **kwargs):
        if isinstance(prompt, str):
            messages = [{"role": "user", "content": prompt}]
        elif isinstance(prompt, list):
            # Join the content of every message into one user turn.
            flattened_content = "\n\n".join(
                f"{msg.get('content', '')}" for msg in prompt
            )
            messages = [{"role": "user", "content": flattened_content}]
        else:
            raise ValueError("Invalid prompt format passed to Bedrock model")
        kwargs.pop("prompt", None)
        kwargs["messages"] = messages
        return super().call(**kwargs)

crew_llm = PatchedBedrockLLM(
    model="bedrock/us.meta.llama3-3-70b-instruct-v1:0",
    temperature=0.2
)
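
The patched instance is then used exactly like the stock LLM, e.g. passed to an agent via its llm parameter (the agent below is a placeholder example):

from crewai import Agent

analyst = Agent(
    role="Analyst",
    goal="Answer questions using the Bedrock Llama model",
    backstory="Uses the patched wrapper above to avoid the last-turn error.",
    llm=crew_llm,  # the PatchedBedrockLLM instance defined above
)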

kyawthetkt avatar Apr 03 '25 12:04 kyawthetkt

same here

crewai                                   0.134.0
crewai-tools                             0.48.0

anton-b avatar Jul 03 '25 15:07 anton-b

The PatchedBedrockLLM workaround above helped, thanks!

anton-b avatar Jul 03 '25 16:07 anton-b