[BUG]
Description
I am trying to connect Bedrock to CrewAI. Even though the model id is correct and the agents are configured correctly, whenever an agent tries to call Bedrock it gets a 400 Bad Request error caused by something LiteLLM does internally.
Steps to Reproduce
- Set a custom LLM class for the agents
- Provide the model_id
- Set up the agents, manager agent, tasks, and crew
- Call crew.kickoff() and wait for the output (a minimal sketch follows below)
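For reference, here is a minimal reproduction sketch; the agent, task, and question strings are placeholders, and only the LLM(...) call mirrors my actual setup:

from crewai import Agent, Crew, Task, LLM

llm = LLM(
    model="bedrock/meta.llama3-1-70b-instruct-v1:0",
    temperature=0.7,  # placeholder value
)

researcher = Agent(
    role="Researcher",                  # placeholder
    goal="Answer the user's question",  # placeholder
    backstory="Reproduction agent",     # placeholder
    llm=llm,
)

task = Task(
    description="Answer the question: {question}",
    expected_output="A short answer",   # placeholder
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task], max_rpm=10)
result = crew.kickoff(inputs={"question": "What is CrewAI?"})  # fails with the 400 described below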
Expected behavior
The user receives an output that answers the provided question.
Actual behavior: the agent receives a 400 error from LiteLLM; the crew then hits the max_rpm threshold, waits a minute to keep trying, and stays in the same cycle unless I interrupt it.
Screenshots/Code snippets
Operating System
Other (specify in additional context)
Python Version
3.10
crewAI Version
0.80.0
crewAI Tools Version
0.14.0
Virtual Environment
Venv
Evidence
I am using the following snippet to instantiate the crewai LLM class:
LLM(
    model=chat_provider + "/" + chat_model,
    temperature=chat_temperature,
)
where chat_provider = "bedrock" and chat_model = "meta.llama3-1-70b-instruct-v1:0". Following the YAML configuration file convention, the string would be something like:
llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
Possible Solution
Adapt the code that internally calls the boto3 client so it passes along the user_message received as the question from the crew.kickoff() method.
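A rough sketch of what that could look like, assuming the Bedrock Converse API is what gets called under the hood (the helper name and the message contents here are hypothetical):

import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

def ensure_trailing_user_turn(messages, user_message):
    # Bedrock rejects conversations whose last turn is not a user message,
    # so append the question received from crew.kickoff() when needed.
    if not messages or messages[-1]["role"] != "user":
        messages.append({"role": "user", "content": [{"text": user_message}]})
    return messages

messages = [
    {"role": "user", "content": [{"text": "Original question"}]},
    {"role": "assistant", "content": [{"text": "Intermediate agent output"}]},
]

response = client.converse(
    modelId="meta.llama3-1-70b-instruct-v1:0",
    messages=ensure_trailing_user_turn(messages, "Original question"),
)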
Additional context
Linux pop-os 6.9.3-76060903-generic #202405300957~1726766035~22.04~4092a0e SMP PREEMPT_DYNAMIC Thu S x86_64 x86_64 x86_64 GNU/Linux
Able to reproduce. Python 3.12
crewai 0.83.0
crewai-tools 0.14.0
LLM
LLM(
    model='bedrock/meta.llama3-70b-instruct-v1:0',
    temperature=0.1,
    region_name='us-west-2'
)
Error:
ERROR:root:LiteLLM call failed: litellm.BadRequestError: BedrockException - {"message":"The model that you are using requires the last turn in the conversation to be a user message. Add a user message to the conversation and try again."}
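For anyone else hitting this, the error text itself describes the constraint; schematically, something like the following is presumably happening (contents are illustrative, inferred only from the error message):

# Conversation LiteLLM ends up sending: the last turn is not "user",
# which Bedrock rejects with the 400 above.
rejected = [
    {"role": "user", "content": "Task prompt"},
    {"role": "assistant", "content": "Agent reasoning"},
]

# Conversation Bedrock accepts: the last turn is a user message.
accepted = [
    {"role": "user", "content": "Task prompt"},
    {"role": "assistant", "content": "Agent reasoning"},
    {"role": "user", "content": "Follow-up question"},
]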
This issue still occurs as of CrewAI version 0.86.0. Bedrock Llama does not work well at all. @joaomdmoura is there a workaround for this? So far I've connected a tool directly using boto3, but the problem lies with any agent using a Bedrock Llama LLM. Please advise.
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.
It still doesn't work. I'm using:
crewai 0.100.1
crewai-tools 0.33.0
@dragutin-oreski As a temporary workaround, I am using the following script to fix the conversational turn handling for Llama 3.3:
from crewai import LLM as BaseLLM

class PatchedBedrockLLM(BaseLLM):
    def call(self, prompt: str | list, **kwargs):
        if isinstance(prompt, str):
            # A plain string is already a single user turn.
            messages = [{"role": "user", "content": prompt}]
        elif isinstance(prompt, list):
            # Flatten the whole conversation into one user message so the
            # request always ends with a user turn, as Bedrock requires.
            flattened_content = "\n\n".join(
                f"{msg.get('content', '')}" for msg in prompt
            )
            messages = [{"role": "user", "content": flattened_content}]
        else:
            raise ValueError("Invalid prompt format passed to Bedrock model")
        kwargs.pop("prompt", None)
        kwargs["messages"] = messages
        return super().call(**kwargs)

crew_llm = PatchedBedrockLLM(
    model="bedrock/us.meta.llama3-3-70b-instruct-v1:0",
    temperature=0.2
)
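One caveat: flattening every prior turn into a single user message throws away the role structure (system and assistant turns included), so it satisfies Bedrock's last-turn-must-be-user rule at the possible cost of some multi-turn prompt quality.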
Same here:
crewai 0.134.0
crewai-tools 0.48.0
(Quoting the PatchedBedrockLLM workaround from above.)
This helped, thanks!