Support AWS Bedrock as an Assistant provider
### Check for existing issues

- [x] Completed

### Describe the feature

Add AWS Bedrock as an Assistant provider.

### If applicable, add mockups / screenshots to help present your vision of the feature

_No response_
Looking forward to this!
I would be very excited for this feature! It would enable us to push for and recommend using Zed at my company!
++
I would like to take this on; however, I would need guidance on what integration points we'd need to build out for Zed.
I've started working on this. With Bedrock's model list getting longer each quarter, I think a procedural macro might be the way to go for generating the list of models: each build would pick up the most up-to-date Bedrock models without anyone having to curate the list by hand, letting the user pick the best model for their purpose.
Does Zed have a particular policy for or against proc_macros?
For the extension itself, I'm going to try to follow the Anthropic crate, but instead of raw HTTP requests I'll use the AWS Bedrock Runtime client. For auth, I'll start with the standard AWS basics (access key ID and secret access key), then eventually add SSO and other methods relevant to end users.
@5herlocked how is this used? 😄
Hey @alvgaona,
This feature hasn't fully been released yet, so you'll have to build main from source to access it.
Then you can create an AWS user with permission to access Bedrock and generate credentials (Access Key ID, Secret Access Key) for that user.
Drop those credentials into Zed under the configure providers menu and you'll be set 😁
Actually to that end, I'm going to make a quick change so that it's easier for people to understand and use the UI
Yes it was actually the idea; to run it from main and access it. ~Do you have a quick screenshot showing where to put the AWS credentials?~
EDIT: Nevermind. I see the screenshot in the PR.
Is it possible for this to support an AWS Profile instead of Access Key ID, Secret Access Key for credentials?
https://github.com/zed-industries/zed/pull/26734
The integration isn't working and the logs are empty. It seems nobody tested anything before marking this as closed.
@darkbroodzed 👍🏼
Should be fixed with #28350
When adding Bedrock credentials, there appears to be no save button? Also, I'm not sure how I would format the creds in settings.json.
@sharkfin-909 If you're in the UI, just hit enter and it saves them (same behavior as all the other providers).
Don't put your long-lived credentials in the settings.json file; that's for short-lived credentials managed by the AWS CLI.
Before I open a fresh new ticket, I'd like to ask a related question.
I have my Bedrock setup using the below:

```json
"language_models": {
  "bedrock": {
    "authentication_method": "named_profile",
    "region": "us-east-1",
    "profile": "sandbox-us-east-1"
  }
}
```
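For reference, the named profile has to match one defined in my AWS config. A minimal sketch of what that looks like on my side (the SSO details here are placeholders, not my actual values):

```ini
# ~/.aws/config -- hypothetical profile matching the settings above
[profile sandbox-us-east-1]
region = us-east-1
# credentials can come from SSO, a credential_process, or a matching
# entry in ~/.aws/credentials
sso_session = my-sso
sso_account_id = 111111111111
sso_role_name = BedrockUser
```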
From the Bedrock page in the AWS console, pretty much all of the Anthropic Claude 4 models are available to the same login profile as above.
However, for some reason I haven't figured out yet, the list of models in the selector is limited.
I'm hoping to be able to use the missing Claude 3.7 Sonnet, Claude Sonnet 4, and Claude Opus 4 (not the Thinking variant), but they don't seem to be available.
The issue also occurs with static credentials. It kind of feels like the model list is incomplete (based on some criteria). Is anyone else experiencing this, or does anyone have an idea as to why, or how to debug further?
@stv-io Thank you for using the Bedrock LM provider!
It's an issue that should be resolved by #31600 -- whenever it ends up in stable/whatever branch you're on!
The model list right now is hard-coded into the provider so one of us has to go in and update it.
I do have a plan to migrate it to use the user credentials to derive what models they have access to, but it's just a little bit messy.
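Roughly, the idea would be to take a `ListFoundationModels`-style response and keep only the models the user can actually invoke. A sketch of that filtering (the field names follow the Bedrock API shape, but the sample data is made up and this is illustrative, not the provider's actual code):

```python
# Hypothetical sketch: derive the selectable model list from what the
# user's credentials can see, instead of hard-coding it. In the real
# provider this data would come from the Bedrock ListFoundationModels
# API; here it's inline sample data.

SAMPLE_RESPONSE = {
    "modelSummaries": [
        {
            "modelId": "anthropic.claude-3-5-sonnet-20240620-v1:0",
            "outputModalities": ["TEXT"],
            "inferenceTypesSupported": ["ON_DEMAND"],
        },
        {
            "modelId": "anthropic.claude-sonnet-4-20250514-v1:0",
            "outputModalities": ["TEXT"],
            "inferenceTypesSupported": ["INFERENCE_PROFILE"],
        },
        {
            # image model: not useful for the assistant, filtered out
            "modelId": "amazon.titan-image-generator-v1",
            "outputModalities": ["IMAGE"],
            "inferenceTypesSupported": ["ON_DEMAND"],
        },
    ]
}

def usable_text_models(response, region_group="us"):
    """Keep text models; prefix a cross-region inference-profile
    group where on-demand invocation isn't supported."""
    models = []
    for summary in response["modelSummaries"]:
        if "TEXT" not in summary["outputModalities"]:
            continue
        model_id = summary["modelId"]
        if "ON_DEMAND" not in summary["inferenceTypesSupported"]:
            # This model needs an inference profile, e.g. a "us." prefix.
            model_id = f"{region_group}.{model_id}"
        models.append(model_id)
    return models

print(usable_text_models(SAMPLE_RESPONSE))
```

The messy part is that the prefix depends on the user's region, and not every model is in every region group, which is why it hasn't landed yet.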
Thanks for the update @5herlocked - appreciate it 🙇🏼
Hi, following this up. I apologise upfront if this is not the right place to discuss this, so if I should move my discussion elsewhere please do let me know.
I've installed Zed Preview (specifically 0.190.3) and tried this out. I can now indeed see the list of models, but I still couldn't interact with, for example, claude-sonnet-4:
```
Error interacting with language model

client error: DisplayErrorContext(ServiceError(ServiceError { source: ValidationException(ValidationException { message: Some("Invocation of model ID anthropic.claude-sonnet-4-20250514-v1:0 with on-demand throughput isn’t supported. Retry your request with the ID or ARN of an inference profile that contains this model."), meta: ErrorMetadata { code: Some("ValidationException"), message: Some("Invocation of model ID anthropic.claude-sonnet-4-20250514-v1:0 with on-demand throughput isn’t supported. Retry your request with the ID or ARN of an inference profile that contains this model."), extras: Some({"aws_request_id": "zzzzzzzz-XXXX-XXXX-XXXX-YYyyyyYYYyyy"}) } }), raw: Response { status: StatusCode(400), headers: Headers { headers: {"date": HeaderValue { _private: H1("Mon, 09 Jun 2025 10:06:26 GMT") }, "content-type": HeaderValue { _private: H1("application/json") }, "content-length": HeaderValue { _private: H1("209") }, "connection": HeaderValue { _private: H1("keep-alive") }, "x-amzn-requestid": HeaderValue { _private: H1("zzzzzzzz-XXXX-XXXX-XXXX-YYyyyyYYYyyy") }, "x-amzn-errortype": HeaderValue { _private: H1("ValidationException:http://internal.amazon.com/coral/com.amazon.bedrock/") }} }, body: SdkBody { inner: Once(Some(b"{\"message\":\"Invocation of model ID anthropic.claude-sonnet-4-20250514-v1:0 with on-demand throughput isn\xe2\x80\x99t supported. Retry your request with the ID or ARN of an inference profile that contains this model.\"}")), retryable: true }, extensions: Extensions { extensions_02x: Extensions, extensions_1x: Extensions } } }))
```
.. debugging this further, outside the editor, I came to the conclusion that I get the same error with:
```
❯ aws bedrock-runtime invoke-model --model-id anthropic.claude-sonnet-4-20250514-v1:0 --body '{"prompt": "Hello, world!", "max_tokens": 100}' --region us-east-1 output.json

An error occurred (ValidationException) when calling the InvokeModel operation: Invocation of model ID anthropic.claude-sonnet-4-20250514-v1:0 with on-demand throughput isn’t supported. Retry your request with the ID or ARN of an inference profile that contains this model.
```
.. whereas specifying an `inferenceProfileId` will work:
```
❯ aws bedrock-runtime invoke-model \
  --model-id us.anthropic.claude-sonnet-4-20250514-v1:0 \
  --body $(echo -n '{"anthropic_version":"bedrock-2023-05-31","messages":[{"role":"user","content":[{"type":"text","text":"Hello, world!"}]}],"max_tokens":100}' | base64) \
  --region us-east-1 \
  output.json
```
Is this something expected, or configurable, or a human error on my part? 😅
It might be because you're using anthropic.claude-sonnet-4-20250514-v1:0 instead of us.anthropic.claude-sonnet-4-20250514-v1:0. The cross-region inference profile IDs have the region group prefixed: `us.`, `eu.`, etc.
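In other words, the profile ID is just the foundation-model ID with a region-group prefix. A throwaway sketch of that mapping (the region-to-group table below is an assumption based on the common `us.` / `eu.` / `apac.` prefixes, not an exhaustive or authoritative list):

```python
# Hypothetical helper: build a cross-region inference profile ID from a
# foundation-model ID and the caller's AWS region. The mapping here is
# illustrative only -- check the Bedrock console for your account.

REGION_GROUPS = {
    "us-east-1": "us",
    "us-west-2": "us",
    "eu-west-1": "eu",
    "eu-central-1": "eu",
    "ap-northeast-1": "apac",
}

def inference_profile_id(model_id: str, region: str) -> str:
    group = REGION_GROUPS.get(region)
    if group is None:
        raise ValueError(f"no known inference-profile group for {region}")
    return f"{group}.{model_id}"

print(inference_profile_id("anthropic.claude-sonnet-4-20250514-v1:0", "us-east-1"))
# -> us.anthropic.claude-sonnet-4-20250514-v1:0
```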
@stv-io I don't know why that's happening, are you providing the model through settings.json?
I've used Sonnet 4 through the Bedrock provider in us-east-1 before so it should be fine.
I'll take a peek at this in depth when I get remotely closer to a keyboard.
https://github.com/zed-industries/zed/blob/main/crates/bedrock/src/models.rs#L449
Yeah, looks like the CRIS (cross-region inference) configuration didn't make it to main.
@maxdeviant / @osyvokon could you take a look at this? Basically, the Claude 4 family of models needs to be added to the US-only match.
#32235 will solve it.
Amazing, thanks for the follow up!
We're facing the same issue on the Windows version.
Ollama models are not connecting; I've pressed the button multiple times, restarted, and everything.