Feature Request: Enable default authentication method with Amazon EC2 Instance Profile for Amazon Bedrock LLM provider
Feature description
By default, all of the AWS SDKs (including boto3) attempt to dynamically retrieve temporary credentials using a "metadata endpoint" on EC2 instances, Lambda Functions, and AWS Fargate tasks. This authentication mechanism avoids the requirement of providing static credentials from an AWS IAM User account, in favor of creating an IAM Role identity.
According to the documentation for MetaGPT, the only supported authentication mechanism for Amazon Bedrock is to create an IAM User, with a static access key and secret key.
Could you please support using the default IAM Instance Profile, which requires no additional configuration, provided that MetaGPT is running on AWS Fargate or EC2 instances?
In AWS Fargate, it's called a "Task Role": https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
For EC2 instances it's called the "IAM Instance Profile": https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
Here's the boto3 documentation that describes the process that the AWS SDK uses to "find" credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
Boto3 will look in several locations when searching for credentials. The mechanism in which Boto3 looks for credentials is to search through a list of possible locations and stop as soon as it finds credentials. The order in which Boto3 searches for credentials is:
- Passing credentials as parameters in the boto3.client() method
- Passing credentials as parameters when creating a Session object
- Environment variables
- Assume role provider
- Assume role with web identity provider
- AWS IAM Identity Center credential provider
- Shared credential file (~/.aws/credentials)
- AWS config file (~/.aws/config)
- Boto2 config file (/etc/boto.cfg and ~/.boto)
- Container credential provider
- Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
I'm referring to the last two items in the list above (the container credential provider and the instance metadata service), rather than specifying static credentials.
Honestly, it's quite possible that MetaGPT already supports this authentication mechanism for Bedrock, but I don't see documentation (for MetaGPT) on how to configure it to use the default credential chain.
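For illustration, this is the kind of call I mean. It's a minimal boto3 sketch (the region and model ID are placeholders, not anything from MetaGPT): no access key or secret key appears anywhere, and the SDK resolves credentials from the Task Role or Instance Profile at runtime.

```python
import json

import boto3

# No static access key / secret key anywhere: boto3 walks its default
# credential chain (env vars, shared config, container credential provider,
# EC2 instance metadata, ...) and picks up the Task Role or Instance Profile.
client = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is a placeholder

# Placeholder model ID; any Bedrock model enabled in the account works the same way.
response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello from an instance profile"}],
    }),
)
print(json.loads(response["body"].read()))
```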
Yes it is supported for all Hopper GEMMs. And yes that's the right scheduler to use.
As Vijay mentioned, that is the right scheduler to use. Here's a diff that I just used to adapt example 67 (groupwise) to use the stream-K scheduler:
index d6de7f89..556e74c7 100644
--- a/examples/67_hopper_fp8_warp_specialized_gemm_with_blockwise_scaling/67_hopper_fp8_warp_specialized_gemm_with_groupwise_scaling.cu
+++ b/examples/67_hopper_fp8_warp_specialized_gemm_with_blockwise_scaling/67_hopper_fp8_warp_specialized_gemm_with_groupwise_scaling.cu
@@ -168,7 +168,8 @@ using CollectiveMainloopWithBlockWiseScaling = typename cutlass::gemm::collectiv
using GemmKernel = cutlass::gemm::kernel::GemmUniversal<
Shape<int,int,int,int>, // Indicates ProblemShape
CollectiveMainloopWithBlockWiseScaling,
- CollectiveEpilogue
+ CollectiveEpilogue,
+ cutlass::gemm::StreamKScheduler
>;
using Gemm = cutlass::gemm::device::GemmUniversalAdapter<GemmKernel>;
@@ -691,6 +692,7 @@ int run(Options<RasterOrderOptions> &options)
GpuTimer timer;
timer.start();
for (int iter = 0; iter < options.iterations; ++iter) {
+ CUTLASS_CHECK(gemm.initialize(arguments, workspace.get()));
CUTLASS_CHECK(gemm.run());
}
timer.stop();
Are there any parameters we can use to get the best performance out of stream-K, like a split-K factor?
If you'd like to use a split-K decomposition, you can set the splits argument as done in the Blackwell stream-K example here.
You can also consider using non-deterministic reduction, which may help performance at the expense of losing the guarantee of deterministic reduction order. See how to set this here and further description here.
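In code, and purely as a sketch of where those knobs live (the exact type and field names may differ between CUTLASS versions, so check the linked example), they are set through the scheduler member of the GEMM arguments once the StreamKScheduler from the diff above is in use:

```cpp
// Sketch only, not taken from the linked example: field and enum names follow
// PersistentTileSchedulerSm90StreamKParams and may differ slightly across
// CUTLASS versions.
#include "cutlass/gemm/kernel/tile_scheduler_params.h"

template <class Gemm>
void tune_streamk(typename Gemm::Arguments &arguments, int splits) {
  using StreamKParams =
      cutlass::gemm::kernel::detail::PersistentTileSchedulerSm90StreamKParams;

  // Ask the scheduler for a fixed number of splits along K (split-K style
  // decomposition) instead of letting the heuristic decide.
  arguments.scheduler.splits = splits;

  // Optionally trade the deterministic accumulation order for performance.
  arguments.scheduler.reduction_mode =
      StreamKParams::ReductionMode::Nondeterministic;
}
```

You would apply this to the example's arguments right before the gemm.initialize(arguments, workspace.get()) call that the diff above adds.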
Thanks! It works for me now.
Hi @jackkosaian,
I observed in Nsight that using stream-K introduces a Memset op, which results in a lot of gaps between GEMM kernels in CUDA graph mode.
Could you please explain why this happens? Is there any way to optimize it?
Thanks for your help!
We can see gaps between GEMM kernels.
I guess it's related to this code:
//struct PersistentTileSchedulerSm90StreamKParams
if (barrier_workspace_size > 0) {
  if (workspace == nullptr) {
    return Status::kErrorWorkspaceNull;
  }

  // Only the barrier workspace needs to be cleared for stream-K.
  // Barrier workspace follows reduction workspace.
  uint8_t* barrier_workspace = reinterpret_cast<uint8_t*>(workspace) + reduction_workspace_size;
  return zero_workspace(static_cast<void*>(barrier_workspace), barrier_workspace_size, stream, cuda_adapter);
}
which then calls zero_workspace:
https://github.com/NVIDIA/cutlass/blob/833f6990e031b48b4cd2fcf55e0849c51ef6bac2/include/cutlass/workspace.h#L69-L73
Is this necessary, and is there any way to optimize it? @jackkosaian
Yes, the memset is necessary. Stream-K uses counters in global memory to determine the order in which CTAs can accumulate their partial results. These counters need to be initialized to zero before each invocation of the kernel.