Deployment keeps failing at frontend build
Before opening, please confirm:
- [X] I have checked to see if my question is addressed in the FAQ.
- [X] I have searched for duplicate or closed issues.
- [X] I have read the guide for submitting bug reports.
- [X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
App Id
d656n35c0hsyb
Region
us-west-2
Amplify Hosting feature
No response
Describe the bug
I can't tell what is causing the failure, since the only error is a generic `spawn ENOMEM`. I know there is nothing wrong with the build I am pushing, since it builds correctly on my local machine.
I also get the build error when I try to redeploy a previously deployed commit. This started happening yesterday. The error log is below.
```
2022-07-14T19:27:13.788Z [WARNING]: error spawn ENOMEM
2022-07-14T19:27:13.813Z [INFO]: Error: spawn ENOMEM
  - child_process:415  ChildProcess.spawn        node:internal/child_process:415:11
  - node:child_process:698  spawn                node:child_process:698:9
  - node:child_process:167  fork                 node:child_process:167:10
  - index.js:95   WorkerPool.startAll            [bettermeant]/[gatsby-worker]/dist/index.js:95:46
  - index.js:215  WorkerPool.restart             [bettermeant]/[gatsby-worker]/dist/index.js:215:10
  - runMicrotasks
  - task_queues:96  processTicksAndRejections    node:internal/process/task_queues:96:5
  - build.ts:410  build                          [bettermeant]/[gatsby]/src/commands/build.ts:410:3
2022-07-14T19:27:14.157Z [WARNING]: error Command failed with exit code 1.
2022-07-14T19:27:14.157Z [INFO]: info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
2022-07-14T19:27:14.164Z [ERROR]: !!! Build failed
2022-07-14T19:27:14.164Z [ERROR]: !!! Non-Zero Exit Code detected
```
Expected behavior
The deployment should complete without errors.
Reproduction steps
Simply push a new commit, or redeploy a previously deployed commit.
Build Settings
```yaml
version: 1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - export NODE_OPTIONS=--max-old-space-size=8192
        - nvm install 16.5.0
        - yarn install
    build:
      commands:
        - export NODE_OPTIONS=--max-old-space-size=8192
        - nvm install 16.5.0
        - yarn run build
        - echo "GATSBY_S3_BUCKET=$GATSBY_S3_BUCKET"
        - echo "GATSBY_STRIPE_KEY=$GATSBY_STRIPE_KEY"
  artifacts:
    baseDirectory: public
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
```
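One possible mitigation, sketched here as an assumption rather than a confirmed fix: since the crash happens while `gatsby-worker` forks child processes, capping Gatsby's worker parallelism may keep the total memory footprint inside the build container's limit. `GATSBY_CPU_COUNT` is a documented Gatsby environment variable; the value `2` below is illustrative and would need tuning.

```yaml
# Hypothetical variant of the frontend build phase. GATSBY_CPU_COUNT caps how
# many worker processes gatsby-worker forks during the build; fewer workers
# means lower peak memory at the cost of a slower build.
frontend:
  phases:
    build:
      commands:
        - export NODE_OPTIONS=--max-old-space-size=8192
        - export GATSBY_CPU_COUNT=2
        - nvm install 16.5.0
        - yarn run build
```

Note that `NODE_OPTIONS=--max-old-space-size` only raises the V8 heap ceiling of each process; it does not prevent `ENOMEM`, which occurs when the container has no memory left to fork another process at all.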
Additional information
No response
I was able to push one build through after more than 30 attempts over a two-day span. This is a serious issue: we provide healthcare services and need to be able to push updates right away when a bug is reported.
Hi @hhemmati81, thanks for reaching out to us. We understand the urgency of the issue and are investigating it further.
Hi @Jay2113, I haven't experienced any issues for a couple of weeks now. I'm not sure whether this has been fixed, but it seems to be working well now.
Thanks for confirming that @hhemmati81. Can you please send us an email summarizing your issue at [REDACTED]? We will respond to it with next steps.
⚠️COMMENT VISIBILITY WARNING⚠️
Comments on closed issues are hard for our team to see. If you need more assistance, please either tag a team member or open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.