The Bedrock Converse example is not working
I'm encountering an error when trying to run the example code from the Instructor documentation for the Amazon Bedrock integration. The example fails with parameter validation errors.
Error
Traceback (most recent call last):
  File "C:\Work Stuff\New folder\swo-ai-canvas-financial-agent\intstrucotr.py", line 22, in <module>
    request_dict = self._convert_to_request_dict(
  File "C:\Users\ext.shreyash.patel\AppData\Local\miniconda3\envs\swo-env-1\lib\site-packages\botocore\client.py", line 1055, in _convert_to_request_dict
    request_dict = self._serializer.serialize_to_request(
  File "C:\Users\ext.shreyash.patel\AppData\Local\miniconda3\envs\swo-env-1\lib\site-packages\botocore\validate.py", line 381, in serialize_to_request
    raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Unknown parameter in input: "response_model", must be one of: modelId, messages, system, inferenceConfig, toolConfig, guardrailConfig, additionalModelRequestFields, promptVariables, additionalModelResponseFieldPaths, requestMetadata, performanceConfig
Invalid type for parameter messages[0].content, value: Extract: Jason is 25 years old, type: <class 'str'>, valid types: <class 'list'>, <class 'tuple'>
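For reference, here is a minimal reproduction in the shape the docs example appears to use. The snippet below is reconstructed from the errors above rather than quoted from the docs; note the plain-string content and the response_model kwarg, which match the two validation failures.

import boto3
import instructor
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

# Patch the boto3 Bedrock runtime client with instructor
client = instructor.from_bedrock(boto3.client("bedrock-runtime"))

# Fails with both validation errors above: content is a plain string
# rather than [{'text': ...}], and response_model reaches botocore
user = client.chat.completions.create(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "Extract: Jason is 25 years old"}],
    response_model=User,
)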
I was facing the same issue. I believe the docs are outdated. I found an alternative approach that works:
user = client.chat.completions.create(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    system=[{'text': 'You are a helpful assistant.'}],
    messages=[{'role': 'user', 'content': [{'text': "Extract: Jason is 22 years old"}]}],
    response_model=User,
)
system is optional.
The format of messages and system comes from the BedrockRuntime.Client.converse docs (see the Request Syntax section for details).
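For comparison, a raw boto3 Converse call with those shapes looks roughly like this (a sketch without instructor, hence no response_model; the model ID and region are placeholders):

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    # Converse request syntax: system and message content are lists of
    # content blocks, each block a dict with a 'text' key
    system=[{"text": "You are a helpful assistant."}],
    messages=[{"role": "user", "content": [{"text": "Extract: Jason is 22 years old"}]}],
)
print(response["output"]["message"]["content"][0]["text"])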
I tried to use client.converse directly with the above formats, but it does not accept the response_model input parameter.
Okay, thanks!
@vittoriop17 Were you able to make this work using Mode.BEDROCK_TOOLS?
Still not working with BEDROCK_TOOLS
I can confirm the error as well. Is there any way to support this, @jxnl? Unfortunately, the traceback doesn't indicate whether the error stems from the instructor library patching the client incorrectly. Any hints on where I could start?
For those using Anthropic through Bedrock: instead of boto3, it's possible to use the Anthropic Bedrock wrapper (from anthropic import AnthropicBedrock) to initialize the client and pass it to instructor.from_anthropic(client).
This also gives you async support out of the box, without having to use other AWS services.
import instructor
from anthropic import AnthropicBedrock

bedrock_anthropic = AnthropicBedrock(
    # Authenticate by either providing the keys below or use the default AWS credential providers, such as
    # using ~/.aws/credentials or the "AWS_SECRET_ACCESS_KEY" and "AWS_ACCESS_KEY_ID" environment variables.
    aws_access_key="<access key>",
    aws_secret_key="<secret key>",
    # Temporary credentials can be used with aws_session_token.
    # Read more at https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html.
    aws_session_token="<session_token>",
    # aws_region changes the aws region to which the request is made. By default, we read AWS_REGION,
    # and if that's not present, we default to us-east-1. Note that we do not read ~/.aws/config for the region.
    aws_region="us-west-2",
)

client = instructor.from_anthropic(bedrock_anthropic)

model_name = 'eu.anthropic.claude-3-7-sonnet-20250219-v1:0'

client.messages.create(
    model=model_name,  # use the AWS model name here
    system=system_prompt,
    messages=[{"role": "user", "content": user_content}],  # your content in the Anthropic message structure
    response_model=output_model,  # your pydantic model
    **params
)
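Since async is mentioned, here is a minimal sketch of the same pattern with the async wrapper, assuming instructor.from_anthropic accepts AsyncAnthropicBedrock the same way (credentials resolve through the same AWS providers as above):

import asyncio
import instructor
from anthropic import AsyncAnthropicBedrock
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

async def main():
    # Same patching pattern as above, but with the async Bedrock wrapper
    client = instructor.from_anthropic(AsyncAnthropicBedrock())
    user = await client.messages.create(
        model="anthropic.claude-3-sonnet-20240229-v1:0",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Extract: Jason is 22 years old"}],
        response_model=User,
    )
    print(user)

asyncio.run(main())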
Here are two options that work for me...
First using AnthropicBedrock:
import os
import anthropic
import instructor
from pydantic import BaseModel

client = anthropic.AnthropicBedrock(
    aws_access_key=os.getenv("AWS_ACCESS_KEY_ID"),
    aws_secret_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
    aws_session_token=os.getenv("AWS_SESSION_TOKEN"),
    aws_region=os.getenv("AWS_DEFAULT_REGION"),
)

instructor_client = instructor.from_anthropic(
    client,
    max_tokens=1024,
    model="anthropic.claude-3-haiku-20240307-v1:0",
)

class User(BaseModel):
    name: str
    age: int

user = instructor_client.chat.completions.create(
    system='You are a helpful assistant.',
    messages=[{'role': 'user', 'content': "Extract: Jason is 22 years old"}],
    response_model=User,
)
print(user)
# > User(name='Jason', age=22)
Then using boto3:
import os
import boto3
import instructor
from pydantic import BaseModel

# Initialize the Bedrock client
bedrock_client = boto3.client(
    'bedrock-runtime',
    aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
    aws_session_token=os.getenv("AWS_SESSION_TOKEN"),
    region_name=os.getenv("AWS_REGION"),
)

# Enable instructor patches for the Bedrock client
instructor_client = instructor.from_bedrock(bedrock_client)

class User(BaseModel):
    name: str
    age: int

user = instructor_client.chat.completions.create(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    system=[{'text': 'You are a helpful assistant.'}],
    messages=[{'role': 'user', 'content': [{'text': "Extract: Jason is 22 years old"}]}],
    response_model=User,
)
print(user)
# > User(name='Jason', age=22)
There are minor differences between the two, mainly in the structure of the system and messages fields.
We have a lot of instructor code that uses chat.completions.create that I don't want to migrate, especially if we move off Bedrock. In light of this, I've created a PR that handles the model and messages kwarg differences: https://github.com/567-labs/instructor/pull/1529
This would mean that you could transparently switch from from_openai (or whatever) to from_bedrock without having to change instances of client.chat.completions.create.
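To illustrate the intent: if #1529 lands as described, an OpenAI-style call along these lines would keep working after swapping only the client. The exact kwarg mapping is whatever the PR implements, so treat the details below as assumptions.

import boto3
import instructor
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

# Only this line changes when moving between providers
client = instructor.from_bedrock(boto3.client("bedrock-runtime"))

# OpenAI-style kwargs: model instead of modelId, plain-string content
user = client.chat.completions.create(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": "Extract: Jason is 22 years old"}],
    response_model=User,
)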
I haven't tested this with the newer from_provider stuff, so no idea if that works.
I separately created https://github.com/567-labs/instructor/pull/1528 to fix the documentation but this will be invalidated by #1529 once it's merged.