S3: Pipe Readable to PutObject with File param
- [x] Forked the repo and created your branch from `master`
- [x] Made sure tests pass (run `npm it` from the repo root)
- [ ] Expanded test coverage related to your changes:
  - [ ] Added and/or updated unit tests (if appropriate)
  - [ ] Added and/or updated integration tests (if appropriate)
- [ ] Updated relevant documentation:
  - [ ] Internal to this repo (e.g. `readme.md`, help docs, inline docs & comments, etc.)
  - [ ] Architect docs (arc.codes)
- [ ] Summarized your changes in `changelog.md`
- [x] Linked to any related issues, PRs, etc. below that may relate to, consume, or necessitate these changes
Stream files to PutObject
(See #65)
Basically:
- Don't chunk uploads
- Don't sign payloads
- When there's a `File` arg, use `fs.createReadStream(File)` as the payload
The AWS SDK doesn't sign payloads by default (`s3DisableBodySigning` in v2 defaults to `true`; `applyChecksum` is the v3 option). AFAICT, the only time you need the payload signed is when it's required by bucket policy.
I'm sure there's plenty to discuss here, but I was kinda mystified when I started looking at the wire traffic when using the AWS SDK...there wasn't anything fancy going on at all. There are code paths for signing chunked payloads...but they weren't running. Basically, all the tools needed to make this work were already present.
This first draft is a breaking change:
- If your bucket requires signed payloads, this won't work
- The `MinChunkSize` param is removed because it's no longer relevant (only TS will really care about this)
I tested this with something like:

```shell
systemd-run --scope -p MemoryMax=50M -p MemorySwapMax=50M --user node ./awesome-script
```
...that uploaded a 5GB file to a private bucket successfully. So this works for me AFAICT; I've patched it into one of my projects and will be running it in the real world.