swift-aws-lambda-runtime
`AWSLambdaPackager` inconsistently fails
Expected behavior
No failures.
Actual behavior
The archive step sometimes fails in GitHub Actions when Penny is trying to deploy the lambdas. A retry is usually enough to get things working. Penny doesn't use any caching in CI, so two otherwise identical runs can end up with different results.
Steps to reproduce
Example CI run: https://github.com/vapor/penny-bot/actions/runs/11441929639/job/31968609354
Failure logs (first try): logs_29885706142.zip
Success logs (second try): logs_29885706142.1.zip
This is what the CI is doing:
```shell
for name in ${{ steps.find_package_names.outputs.names }}; do
  swift package archive \
    --output-path ./zips \
    --products "${name}"

  aws s3api put-object \
    --bucket penny-lambdas-store \
    --key "${name}.zip" \
    --body "./zips/${name}/${name}.zip"
done
```
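Since a retry is usually enough to get things working, one possible workaround (a sketch, not part of Penny's actual CI; the `retry` helper and attempt count are assumptions) is to wrap the flaky archive step in a small retry loop:

```shell
#!/bin/sh
# Retry a command up to a maximum number of attempts before giving up.
# Usage: retry <max_attempts> <command...>
retry() {
  max="$1"; shift
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "Command failed after ${max} attempts: $*" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    echo "Retrying (attempt ${attempt}/${max})..." >&2
  done
}

# Hypothetical usage in the loop above, retrying the archive step up to 3 times:
# retry 3 swift package archive --output-path ./zips --products "${name}"
```

This only papers over the intermittent failure rather than fixing its root cause, but it keeps the deploy job green while the underlying issue is investigated.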
If possible, minimal yet complete reproducer code (or URL to code)
No response
What version of this project (swift-aws-lambda-runtime) are you using?
1.0.0-alpha.3
Swift version
swift:6.0-amazonlinux2
Amazon Linux 2 docker image version
No response
Hey @MahdiBM. I'm sorry you're experiencing such an intermittent error.
From what I understand, your CI runs in a swift:6.0-amazonlinux2 container and uses the 1.0.0-alpha.3 release of the runtime.
Is this correct?
Did you observe this behaviour with Swift 5.x as well?
Would it be possible to submit another trace produced with the --verbose flag?
```shell
swift package archive --verbose \
  --output-path ./zips \
  --products "${name}"
```
Hey @MahdiBM. I'm sorry you're experiencing such an intermittent error. Can you try with the new 2.0.0-beta.1 release? There is no change in the archiver plugin, but compiling with the 6.x toolchain might solve this issue.
@sebsto As a reminder, I think we talked about this a while ago on the OpenSource Slack, and I did mention that I have enabled the verbose flag and will be looking out for any new failures to report.
The thing is, no failures have happened since then. So I'm not sure; maybe this issue got resolved somehow, unintentionally? The issue might also have been somewhere in the toolchains, since I've been bumping into toolchain problems, specifically on Amazon Linux 2 images.
We have already moved to swift-aws-lambda-runtime v2 (the main branch, a few months ago), so that's also already done. And again, I haven't seen this issue even since before we moved to v2.
Thank you for the details. I'm closing this now. Feel free to reopen with more details if you are still affected.