[FEATURE] Bedrock Prompt Integration
Problem Statement
AWS Bedrock offers "Prompt Management", which lets you create and version prompts that configure multiple aspects of an invocation: the prompt text itself, the model to use, temperature, caching, tools, and other metadata. Rather than separately retrieving a prompt via the Bedrock APIs and then setting `system_prompt`, model, etc., it would be very useful to simply reference the prompt by ARN, thereby getting most of the information currently set through individual properties for prompt text, model, temperature, and so on.
Proposed Solution
AWS Bedrock Prompt Management encapsulates much more than just prompt text:
- Prompt text/template - the actual prompt content
- Model configuration - which model to use (`model_id`)
- Inference parameters - `temperature`, `top_p`, `max_tokens`, etc.
- Additional metadata - versioning, tags, and other prompt-specific settings
1. Fetch Complete Prompt Configuration
Fetch the full prompt configuration including model settings:
```python
import boto3

class BedrockPromptManager:
    def __init__(self, region_name: str = "us-east-1"):
        # Prompt Management is served by the bedrock-agent service
        self.client = boto3.client("bedrock-agent", region_name=region_name)

    def get_prompt_config(self, prompt_id: str, prompt_version: str | None = None) -> dict:
        """Fetch the complete prompt configuration from Bedrock Prompt Management."""
        kwargs = {"promptIdentifier": prompt_id}
        if prompt_version is not None:
            kwargs["promptVersion"] = prompt_version  # omit to get the working draft
        response = self.client.get_prompt(**kwargs)

        # The model ID and inference parameters live on the prompt variant,
        # not at the top level of the response
        variant = response["variants"][0]
        inference = variant.get("inferenceConfiguration", {}).get("text", {})
        return {
            "prompt_text": self._extract_prompt_text(variant),
            "model_id": variant.get("modelId"),
            "inference_config": {
                "temperature": inference.get("temperature"),
                "top_p": inference.get("topP"),
                "max_tokens": inference.get("maxTokens"),
                # ... other inference parameters
            },
        }

    @staticmethod
    def _extract_prompt_text(variant: dict) -> str | None:
        return variant.get("templateConfiguration", {}).get("text", {}).get("text")
```
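For illustration, here is the variant-nested shape a `GetPrompt` response takes for a TEXT prompt (field names per the `bedrock-agent` API; the sample values are hypothetical) and how the pieces map onto the config dict described above:

```python
# Hypothetical sample payload, shaped like a bedrock-agent GetPrompt response
sample_response = {
    "name": "my-prompt",
    "variants": [{
        "name": "default",
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
        "templateConfiguration": {"text": {"text": "You are a helpful assistant."}},
        "inferenceConfiguration": {"text": {"temperature": 0.3, "topP": 0.9, "maxTokens": 512}},
    }],
}

# Everything interesting hangs off the variant, not the top level
variant = sample_response["variants"][0]
inference = variant.get("inferenceConfiguration", {}).get("text", {})
config = {
    "prompt_text": variant["templateConfiguration"]["text"]["text"],
    "model_id": variant["modelId"],
    "inference_config": {
        "temperature": inference.get("temperature"),
        "top_p": inference.get("topP"),
        "max_tokens": inference.get("maxTokens"),
    },
}
```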
2. Apply Configuration to BedrockModel
The `BedrockModel` already supports all of these configuration options through its `BedrockConfig` TypedDict. The integration would need to:
- Fetch the prompt configuration
- Update the `BedrockModel` config with the fetched parameters
- Pass the prompt text as `system_prompt`
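A minimal sketch of that mapping step (the helper name is hypothetical; the keys mirror the `BedrockConfig` parameters named above):

```python
def to_model_kwargs(config: dict) -> dict:
    """Flatten a fetched prompt config into BedrockModel-style keyword args,
    dropping any parameters the managed prompt leaves unset."""
    kwargs = {"model_id": config.get("model_id"), **config.get("inference_config", {})}
    return {k: v for k, v in kwargs.items() if v is not None}

model_kwargs = to_model_kwargs({
    "model_id": "anthropic.claude-3-haiku-20240307-v1:0",
    "inference_config": {"temperature": 0.3, "top_p": None, "max_tokens": 512},
})
# top_p was unset in the managed prompt, so it is omitted rather than passed as None
```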
3. Integration at Agent Level
```python
# Option A: at initialization
agent = Agent(
    bedrock_prompt_id="my-prompt-id",
    bedrock_prompt_version="1",
)
# This would internally:
# 1. Fetch the prompt config from Bedrock
# 2. Create a BedrockModel with the fetched model_id and inference params
# 3. Set system_prompt from the fetched prompt text

# Option B: override at runtime
agent(
    "user query",
    bedrock_prompt_override={
        "prompt_id": "my-prompt-id",
        "prompt_version": "1",
    },
)
```
4. Model Configuration Override
The key is that `BedrockModel.update_config()` already supports updating all of these parameters, so the implementation would:
- Fetch the prompt configuration from Bedrock Prompt Management
- Call `model.update_config(model_id=..., temperature=..., top_p=..., max_tokens=...)`
- Pass the prompt text through the existing `system_prompt` flow
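To show the update step in isolation, here is a sketch using a stand-in model object (the real `BedrockModel.update_config` accepts the same keyword style; the stub below is purely illustrative):

```python
class StubModel:
    """Stand-in for BedrockModel: accumulates config updates like update_config does."""

    def __init__(self, **config):
        self.config = config

    def update_config(self, **kwargs):
        self.config.update(kwargs)

model = StubModel(model_id="old-model", temperature=0.7)
# Apply the configuration fetched from Prompt Management over the existing config
model.update_config(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",
    temperature=0.2,
    top_p=0.9,
    max_tokens=1024,
)
```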
5. Request Formatting
The `format_request` method already handles all of these parameters, and the inference config gets properly formatted into the Bedrock request.
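For reference, Bedrock's Converse-style requests expect these settings under `inferenceConfig` with camelCase keys; a minimal mapping (helper name hypothetical) might look like:

```python
def format_inference_config(cfg: dict) -> dict:
    """Map snake_case inference params to the Converse API's camelCase
    inferenceConfig keys, skipping anything left unset."""
    mapping = {"temperature": "temperature", "top_p": "topP", "max_tokens": "maxTokens"}
    return {bedrock_key: cfg[key] for key, bedrock_key in mapping.items()
            if cfg.get(key) is not None}

inference_config = format_inference_config(
    {"temperature": 0.5, "top_p": None, "max_tokens": 512}
)
# → {"temperature": 0.5, "maxTokens": 512}
```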
Notes
The critical insight is that Bedrock Prompts are not just text - they're a complete configuration bundle that includes model selection and inference parameters. The solution must fetch and apply all of these settings, not just the prompt text. The good news is that BedrockModel already supports all the necessary configuration options through its BedrockConfig, so the integration would primarily involve fetching the prompt configuration and applying it to the model.
Use Case
- Realize the benefits of prompt management outside the AWS ecosystem by integrating it into Strands.
- Make it easier to test different prompt configurations with different versions of prompts, different prompt text and model configurations.
- Run agentic workflows with different configurations.
- Run evals with different configurations.
Alternative Solutions
Integrate with AWS Prompt Management using the APIs outside of Strands, then use the fetched data to set the individual Strands properties (`system_prompt`, model, etc.).
Hi @linvald,
Thank you for raising this idea - it's indeed a valuable feature request. After careful evaluation, we recommend implementing your own prompt manager by leveraging our Hook system. This approach would allow you to access the `invocation_state` object when firing the `BeforeModelCallEvent`.
@JackYPCOnline I’d like to chime in and strongly support this feature request as well. Our organization is fully AWS-based, and having native integration with Bedrock Prompt Management would be incredibly valuable for our workflow. While Strands is open-source and flexible, being able to reference a Bedrock prompt ARN and automatically inherit its model settings, parameters, and prompt configuration would eliminate a lot of duplicated effort and custom code we’d otherwise need to build and maintain.
Implementing this natively would be a big quality-of-life improvement for AWS-centric teams like ours, and would help Strands fit more naturally into an existing Bedrock-based ecosystem.
@shawn-albert I understand this decision may be frustrating, and I appreciate your feedback. After careful evaluation of technical complexity, resource availability, and customer priorities, the team decided to defer this feature for now.
We periodically review our backlog and re-prioritize based on customer needs and community interest. We'll continue monitoring feedback on this request. Additionally, we welcome community contributions—if you'd like to submit a PR addressing your use case, we're happy to review it and provide guidance.
Just going to add my concurring opinion with @shawn-albert that this should definitely be implemented. Prompt management is critically important to any production-grade agentic workflows.
I don't necessarily agree with the proposed implementation method though as I think it ties the core Agent too closely to a specific implementation system (Bedrock). Instead, in my opinion, it would be ideal to have a construct like ManagedPromptAgent where you can define a managed prompt resource and overrides (as needed):
```python
my_agent = ManagedPromptAgent(
    managed_prompt=BedrockManagedPrompt("id", "version"),
    additional_tools=[...],  # extra tools here
    # ...other existing Agent props like session_manager, state, etc.
)
```
Hey folks,
I definitely see the value in integrating with the BedrockManagedPrompt feature, but I think this is possible today with hooks. You could create a hook that looks like this:
```python
class BedrockManagedPrompt(HookProvider):
    def __init__(self):
        # just pseudocode for the Bedrock client
        self.client = bedrock_managed_prompt_client()

    def register_hooks(self, registry: HookRegistry) -> None:
        registry.add_callback(BeforeModelCallEvent, self.bedrock_managed_prompt)

    def bedrock_managed_prompt(self, event: BeforeModelCallEvent) -> None:
        response = self.client.get_prompt()
        event.agent.system_prompt = response.system_prompt
        event.agent.model.update_config(model_id=response.model_id)

# Passed in via the hooks parameter
agent = Agent(hooks=[BedrockManagedPrompt()])
```
I'm a bit hesitant to build something directly into the top-level agent class for this, as a `BedrockManagedPrompt` is pretty specific to AWS and Bedrock, while Strands supports models from any provider.
Another path forward might be to allow callbacks on the agent's system prompt. That way, when we try to call a model provider, we can first invoke the callback so the system prompt can be generated on the fly.
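To make that idea concrete, a system-prompt callback could work roughly like this (a sketch of the proposed behavior, not an existing Strands API; the resolver name is hypothetical):

```python
from typing import Callable, Union

SystemPrompt = Union[str, Callable[[], str]]

def resolve_system_prompt(prompt: SystemPrompt) -> str:
    # If the prompt is callable, resolve it just before the model call,
    # so it can be fetched (or served from a cache) on demand
    return prompt() if callable(prompt) else prompt

# Static prompts keep working unchanged; callables are invoked lazily
dynamic = resolve_system_prompt(lambda: "fetched from Prompt Management")
static = resolve_system_prompt("static prompt")
```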
Let me know if the hook proposal above, or a potential new system prompt callback feature, would address what you're looking for from this feature request.