amplify-backend
Deployment uses too many S3 requests
Environment information
System:
  OS: macOS 14.1.2
  CPU: (8) arm64 Apple M2
  Memory: 151.31 MB / 16.00 GB
  Shell: /bin/zsh
Binaries:
  Node: 20.11.0 - /usr/local/bin/node
  Yarn: undefined - undefined
  npm: 10.2.4 - /usr/local/bin/npm
  pnpm: undefined - undefined
NPM Packages:
  @aws-amplify/backend: 0.12.1
  @aws-amplify/backend-cli: 0.11.1
  aws-amplify: 6.0.16
  aws-cdk: 2.128.0
  aws-cdk-lib: 2.128.0
  typescript: 5.3.3
AWS environment variables:
  AWS_STS_REGIONAL_ENDPOINTS = regional
  AWS_NODEJS_CONNECTION_REUSE_ENABLED = 1
  AWS_SDK_LOAD_CONFIG = 1
No CDK environment variables
Description
After barely starting to test Gen 2, my S3 PUT requests are already at 2,000, which is the Free Tier limit.
I didn't count exactly, but a full sandbox deployment probably creates more than a hundred requests, and a full deployment happens quite often when changes can't be hotswapped.
Would it be possible to optimize or reduce the number of requests a sandbox deployment uses? I can't imagine how many requests a full-fledged app would need. Thanks.
Hey @hum-n, thank you for reaching out. Could you provide us with information on the resources present in the project, or the operations/changes made to it?
It's a default create-next-app with a couple of test pages for auth and data. No styles or images added. Just standard auth with some user attributes and a simple posts data schema.
Hey @hum-n, thank you for providing this information. Marking this as a feature request for improvements and for documenting this behavior. Note: Amplify Gen 2 depends on the AWS CDK to build assets and upload them to the CDK-managed S3 bucket.
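To see what a sandbox deployment actually uploads, one option is to list the objects in the CDK bootstrap asset bucket before and after a deploy. Below is a minimal TypeScript sketch using the AWS SDK v3; the bucket name assumes the default bootstrap qualifier (`hnb659fds`) with a placeholder account ID and region, so substitute your own values:

```ts
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

// Assumption: default CDK bootstrap bucket naming scheme.
// Replace the account ID and region with your own values.
const bucket = "cdk-hnb659fds-assets-123456789012-us-east-1";
const client = new S3Client({});

async function listAssets(): Promise<void> {
  let count = 0;
  let token: string | undefined;
  do {
    // Page through the bucket, printing each asset object.
    const page = await client.send(
      new ListObjectsV2Command({ Bucket: bucket, ContinuationToken: token })
    );
    for (const obj of page.Contents ?? []) {
      count += 1;
      console.log(obj.LastModified?.toISOString(), obj.Size, obj.Key);
    }
    token = page.NextContinuationToken;
  } while (token);
  console.log(`${count} asset objects in ${bucket}`);
}

listAssets().catch(console.error);
```

Comparing the `LastModified` timestamps before and after a single deploy gives a rough count of how many objects that deploy pushed.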
Hi @ykethan,
I just got notified by email that my S3 Free Tier for this month is at 85%, and I've only been using Gen 2 for maybe three weeks. I have a Gen 2 app with only a schema and auth. I probably changed the schema a few times last week to test different things. Why so many S3 PUT requests for each schema change? To learn Gen 2 we need to practice, test, and iterate. How can we do this if we run out of the Free Tier after only a few changes? Can this be made a higher priority? I understand that it would be lower priority for AWS, since it starts making money as soon as the 2,000 PUT requests from the Free Tier are gone, but I was hoping that this is not the approach AWS takes when trying to promote a new service (or maybe I'm wrong).
I'm a long way from completing my project, and if this starts costing money before I've even decided that I will definitely go with Amplify, I might as well stop now.
Please help, or if there is any way to avoid running into the limits so fast, please let us know.
Thanks!
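If you want to watch the burn rate instead of waiting for the Free Tier email, S3 request metrics can be summed from CloudWatch. This is a sketch rather than an official Amplify feature, and it assumes you have first enabled request metrics on the bucket with a metrics configuration named "EntireBucket" (request metrics are opt-in and billed as custom CloudWatch metrics):

```ts
import {
  CloudWatchClient,
  GetMetricStatisticsCommand,
} from "@aws-sdk/client-cloudwatch";

const cw = new CloudWatchClient({});

// Sum PutRequests over the last 7 days for one bucket.
// Assumption: a request-metrics configuration named "EntireBucket"
// has already been created on the bucket (it is opt-in).
async function putRequests(bucket: string): Promise<void> {
  const end = new Date();
  const start = new Date(end.getTime() - 7 * 24 * 3600 * 1000);
  const res = await cw.send(
    new GetMetricStatisticsCommand({
      Namespace: "AWS/S3",
      MetricName: "PutRequests",
      Dimensions: [
        { Name: "BucketName", Value: bucket },
        { Name: "FilterId", Value: "EntireBucket" },
      ],
      StartTime: start,
      EndTime: end,
      Period: 86400, // one datapoint per day
      Statistics: ["Sum"],
    })
  );
  const total = (res.Datapoints ?? []).reduce((s, d) => s + (d.Sum ?? 0), 0);
  console.log(`${total} PUT requests against ${bucket} in the last 7 days`);
}

putRequests("cdk-hnb659fds-assets-123456789012-us-east-1").catch(console.error);
```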
@ykethan
I have also encountered this issue. I wonder if it is something in my configuration or just how the Amplify deployment works.
If you'd like to take a look at my repo, I can share it with you (it's currently private). Otherwise, I'd love to hear any tips for how users can optimize the calls to S3. Thanks!
Same here, still contemplating whether to use the credit that I got from them.
The same issue is happening on my side as well. Is there any workaround or configuration to limit requests to S3?
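I'm not aware of a configuration that caps the request count itself, but you can at least get alerted before the Free Tier runs out. The sketch below creates a CloudWatch alarm on the bucket's PutRequests metric; it makes the same assumption as the earlier snippet (request metrics enabled with FilterId "EntireBucket"), and the threshold of 65 PUTs per day is simply the 2,000-request monthly Free Tier spread over roughly 30 days:

```ts
import {
  CloudWatchClient,
  PutMetricAlarmCommand,
} from "@aws-sdk/client-cloudwatch";

const cw = new CloudWatchClient({});

// Alarm when daily PUTs on the CDK asset bucket exceed the pro-rated
// Free Tier budget (~2,000 per month / 30 days ≈ 65 per day).
// Assumptions: request metrics with FilterId "EntireBucket" are enabled,
// and the bucket name below is a placeholder.
async function createPutAlarm(bucket: string): Promise<void> {
  await cw.send(
    new PutMetricAlarmCommand({
      AlarmName: `${bucket}-daily-put-requests`,
      Namespace: "AWS/S3",
      MetricName: "PutRequests",
      Dimensions: [
        { Name: "BucketName", Value: bucket },
        { Name: "FilterId", Value: "EntireBucket" },
      ],
      Statistic: "Sum",
      Period: 86400, // evaluate over one day
      EvaluationPeriods: 1,
      Threshold: 65,
      ComparisonOperator: "GreaterThanThreshold",
      TreatMissingData: "notBreaching",
    })
  );
  console.log(`Alarm created for ${bucket}`);
}

createPutAlarm("cdk-hnb659fds-assets-123456789012-us-east-1").catch(console.error);
```

Attach an SNS action to the alarm if you want an email instead of just a console state change.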