
[BUG/FEAT]: AWS Bedrock reasoning models with `@agent`

Open timothycarambat opened this issue 9 months ago • 6 comments

How are you running AnythingLLM?

All versions

What happened?

With the refactor of AWS Bedrock to move away from Langchain in https://github.com/Mintplex-Labs/anything-llm/pull/3537, the same work needs to be extended to the agent execution provider: the current Langchain-based implementation makes it impossible to use reasoning models for agent execution.

Current workaround: do not use reasoning models for AWS Bedrock agent execution, since these models return their content as an array of blocks rather than the plain string that all other models return (see the sketch below).
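
For illustration, a minimal TypeScript sketch of the normalization the agent provider would need. The `BedrockContentBlock` shape and `flattenBedrockContent` helper are hypothetical, modeled on the block-array responses described above rather than on AnythingLLM's actual code:

```ts
// A sketch only, not AnythingLLM's actual fix. Assumed block shape, mirroring
// the Converse-style responses that Bedrock reasoning models produce:
type BedrockContentBlock = {
  text?: string;
  reasoningContent?: { reasoningText?: { text?: string } };
};

function flattenBedrockContent(
  content: string | BedrockContentBlock[]
): string {
  // Non-reasoning models already return a plain string; pass it through.
  if (typeof content === "string") return content;

  // Reasoning models interleave reasoning blocks with text blocks. Keep only
  // the answer text so downstream agent code sees the string it expects.
  return content
    .flatMap((block) => (typeof block.text === "string" ? [block.text] : []))
    .join("")
    .trim();
}
```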

Are there known steps to reproduce?

Use a reasoning model with AWS Bedrock and send a single agent chat. The failure manifests as a `jsonString?.startsWith` error, which is a red herring: the real problem is that the response's output format is mishandled by Langchain.
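
To make the failure concrete: `startsWith` exists only on strings, so when the provider hands back the block array, the optional chaining passes (the value is non-null) and the call throws. A hedged sketch of the symptom and a guard, reusing the hypothetical helper sketched above; `rawContent` is an illustrative stand-in, not a real AnythingLLM variable:

```ts
// Hypothetical reproduction of the red herring. `rawContent` stands in for
// whatever the agent handler receives back from the Langchain provider.
const rawContent: unknown = [{ text: '{"name":"web-search","args":{}}' }];

// Optional chaining doesn't save us here: the array is non-null, so
// `.startsWith` is looked up on Array.prototype, isn't found, and throws.
// (rawContent as any)?.startsWith("{"); // TypeError: ... is not a function

// Guarded path: coerce to a string first, then parse the tool call as before.
const jsonString =
  typeof rawContent === "string"
    ? rawContent
    : flattenBedrockContent(rawContent as BedrockContentBlock[]);

if (jsonString.startsWith("{")) {
  // ...existing tool-call parsing...
}
```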

timothycarambat avatar Mar 27 '25 18:03 timothycarambat

@timothycarambat I think my PR #3714 resolved this; can you check?

tristan-stahnke-GPS avatar Apr 24 '25 19:04 tristan-stahnke-GPS

With my PR applied to the Bedrock provider, I was able to create agents using Bedrock: one parsed the content of a Google search and sent it back to the LLM. I'm not sure if there's another part of the @agent functionality that's still needed, but it would be cool to test it out. I definitely want to get agents working fully with Bedrock 💯

tristan-stahnke-GPS avatar Apr 25 '25 13:04 tristan-stahnke-GPS

@tristan-stahnke Amazon Bedrock has multiple models, and not every model works correctly even with the right IAM access permissions. I tried an agent with Deepseek and it errors out. Which Bedrock model did you test this against?

Chan9390 avatar Jun 25 '25 08:06 Chan9390

@Chan9390 I primarily use the Claude Sonnet / Opus models (as well as the Amazon Nova models). I haven't had a chance to look at Deepseek; that would definitely be something to chase down! Making the provider more model-agnostic would be ideal as well, so we leave room for additional functionality down the road (maybe support for thought-process tokens?). I'll take a look!

tristan-stahnke-GPS avatar Jun 25 '25 12:06 tristan-stahnke-GPS

I also get `Invalid message content: empty string. 'ai' must contain non-empty content.` when I invoke the @agent with Bedrock (a Claude model) using an IAM role (I hosted it on AWS via Docker).
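
That error appears to be a provider-side validation: assistant ("ai") turns whose content ends up as an empty string are rejected, which fits the thread's diagnosis that the answer text is being lost from the block array. A minimal sketch of a pre-flight filter over the chat history (hypothetical names, not a confirmed fix):

```ts
// Hypothetical message shape and pre-flight filter; a sketch of working
// around the validation error, not a confirmed fix for AnythingLLM.
type ChatMessage = { role: "system" | "user" | "ai"; content: string };

// Drop assistant turns whose content is empty (e.g. when a reasoning model's
// answer text was lost while flattening the block array) so the provider's
// "'ai' must contain non-empty content" check doesn't reject the history.
function dropEmptyAssistantTurns(history: ChatMessage[]): ChatMessage[] {
  return history.filter(
    (msg) => msg.role !== "ai" || msg.content.trim().length > 0
  );
}
```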

bhasmang-tri avatar Oct 07 '25 15:10 bhasmang-tri

Hello, is there any update on this issue? I'm having similar problems when trying to call @agent (MCP server) using Bedrock models, hosted on an AWS EC2 instance using Docker.

Input: @agent blah blah blah

Output (it's not always the same):

  1. Nothing, just a blank response
  2. `AWSBedrock::streamGetChatCompletion failed during setup. Bedrock is unable to process your request.`
  3. `"ai" must contain non-empty content`
  4. `Bedrock is unable to process your request.`

I have tried this with Bedrock Claude 3.5, Claude 3.7, Nova Pro, and Llama 3 70B. I'm using IAM roles and granted "bedrock:*" just to rule out any permissions problems with Bedrock.

As a control, I used the free Grok models and got a proper response from them.

SquadUpSquid avatar Nov 03 '25 20:11 SquadUpSquid