deno_registry2
Errors when trying to publish aws_sdk (again)
I tried to publish v3.0.0.1, which resulted in the following error:
https://deno.land/status/5ff429e900e3b4f800ed718d Failed to clone git repository https://github.com/christophgysin/aws-sdk-js-v3 at tag v3.0.0.1
Yet the link on the page seems to work fine: https://github.com/christophgysin/aws-sdk-js-v3/tree/v3.0.0.1/deno/
When trying to redeliver the same webhook event, I'm getting a 500:
X-GitHub-Delivery: 00960ee0-4f34-11eb-992f-e5f713d27210
{"message":"Internal Server Error"}
Hey @christophgysin, we're looking into the issue. The "failed to clone" message is a red herring; there is an error happening before that, where a file fails to upload to S3. The clone error happens in a subsequent retry from Lambda because the filesystem isn't cleaned up on failure.
So with @lucacasonato's help we were able to get this stack trace:
2021-01-05T09:57:47.331+01:00 error: Uncaught (in promise) Http: error sending request for url (https://REDACTED.s3.us-east-1.amazonaws.com/aws_sdk/versions/v3.0.0.1/raw/client-codebuild/pagination/ListBuildBatchesForProjectPaginator.ts): connection closed before message completed
2021-01-05T09:57:47.331+01:00 at processResponse (deno:core/core.js:223:11)
2021-01-05T09:57:47.331+01:00 at Object.jsonOpAsync (deno:core/core.js:240:12)
2021-01-05T09:57:47.331+01:00 at async fetch (deno:op_crates/fetch/26_fetch.js:1278:29)
2021-01-05T09:57:47.331+01:00 at async S3Bucket.putObject (https://deno.land/x/[email protected]/src/bucket.ts:352:1)
2021-01-05T09:57:47.331+01:00 at async uploadVersionRaw (file:///var/task/utils/storage.ts:82:1)
2021-01-05T09:57:47.331+01:00 at async file:///var/task/api/async/publish.ts:170:1
I did a bit of research but couldn't find anything meaningful. There was one Stack Overflow entry that mentioned the same error, but the resolution was just "it do be like that sometimes, build some retries into your code".
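For reference, the "build some retries" advice could be sketched roughly like this. This is a generic illustration, not the registry's actual code; the function name and backoff parameters are made up:

```typescript
// Retry an async operation with exponential backoff. A transient
// failure like "connection closed before message completed" succeeds
// on a later attempt instead of failing the whole publish.
async function withRetries<T>(
  op: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      // Back off before the next attempt: 100ms, 200ms, 400ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

The failing upload could then be wrapped as `await withRetries(() => bucket.putObject(path, body))`, assuming a `putObject` call like the one in the stack trace.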
I also wasn't able to reproduce this on my staging environment, so my theory is that it was a one-off error in the connection to S3. I'll add a cleanup step at the end of the Lambda - the Internal Server Error was caused by the filesystem not being clean on retry. That should allow Lambda to retry correctly if the error ever happens again.
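A cleanup step along those lines could look roughly like this. This is only a sketch; the helper name and the injected `removeDir` callback are assumptions, not the actual deno_registry2 code:

```typescript
// Run a task against a scratch directory and always remove the
// directory afterwards, even when the task throws. A Lambda retry
// then starts from a clean filesystem, so the subsequent `git clone`
// doesn't hit a stale checkout from the failed invocation.
async function withCleanWorkDir<T>(
  workDir: string,
  task: (dir: string) => Promise<T>,
  removeDir: (dir: string) => Promise<void>,
): Promise<T> {
  try {
    return await task(workDir);
  } finally {
    // Runs on success *and* on failure; swallow cleanup errors so
    // they don't mask the original one.
    await removeDir(workDir).catch(() => {});
  }
}
```

In a Deno Lambda, `removeDir` would presumably be something like `(dir) => Deno.remove(dir, { recursive: true })`.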
This is good enough for me. One somewhat related question though: I have since published 3.1.0.0 successfully. If I were now to redeliver 3.0.0.1, would it show up as the latest version? In other words, is the latest version the highest version according to some ordering, or simply the last published version?
I think so, yes - whenever the publish process runs, it sets "latest" to whatever version it is processing right now. I'll open another issue so we can improve how we track version numbers.
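For context, picking the highest version instead of the last-published one would need an ordering over tags like `v3.0.0.1`, which have four numeric components and so aren't plain semver. A naive comparison might look like this (an illustration only, not the registry's algorithm):

```typescript
// Compare dotted numeric version tags like "v3.0.0.1" component by
// component. Returns a negative number, zero, or a positive number,
// as Array.prototype.sort comparators expect. Missing components
// count as 0, so "v3.1" and "v3.1.0.0" compare equal.
function compareVersions(a: string, b: string): number {
  const parse = (v: string) => v.replace(/^v/, "").split(".").map(Number);
  const [pa, pb] = [parse(a), parse(b)];
  const len = Math.max(pa.length, pb.length);
  for (let i = 0; i < len; i++) {
    const diff = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (diff !== 0) return diff;
  }
  return 0;
}
```

Under such an ordering, redelivering v3.0.0.1 after v3.1.0.0 would not make it "latest", since `compareVersions("v3.0.0.1", "v3.1.0.0")` is negative.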
Dependent on upstream issue: https://github.com/denoland/deno/issues/9070
This should be fixed now.