Announcement: S3 default integrity change
In AWS SDK for JavaScript v3.729.0, we released changes to the S3 client that adopt new default integrity protections. For more information on the default integrity behavior, please refer to the official SDK documentation. From this version onward, clients default to calculating an additional checksum on all Put calls and to validating checksums on Get calls.
You can disable the default integrity protections for S3. We do not recommend this, because checksums are important to S3's integrity posture. Integrity protections can be disabled by setting the relevant config flag to WHEN_REQUIRED, or by using the related AWS shared config file settings or environment variables.
Disclaimer: the AWS SDKs and CLI are designed for usage with official AWS services. We may introduce and enable new features by default, such as these new default integrity protections, prior to them being supported or otherwise handled by third-party service implementations. You can disable the new behavior with the WHEN_REQUIRED value for the request_checksum_calculation and response_checksum_validation configuration options covered in Data Integrity Protections for Amazon S3.
This breaks support for Cloudflare R2's S3 API, which returns `501 NotImplemented: Header 'x-amz-checksum-crc32' with value (***) not implemented` on every object upload request as of v3.729.0.
Completely unusable with Cloudflare R2 since this release. How can this be disabled directly in js (without env vars or file settings)?
The AWS SDK for JavaScript is designed to work with AWS services - in this case Amazon S3.
As mentioned in the original post, we recommend not to disable default data integrity protections with Amazon S3.
But if you want to disable them for some reason, it can be done by setting the value `WHEN_REQUIRED` for:
- put calls in:
  - client configuration: `requestChecksumCalculation`
  - shared ini: `request_checksum_calculation`
  - environment variable: `AWS_REQUEST_CHECKSUM_CALCULATION`
- get calls in:
  - client configuration: `responseChecksumValidation`
  - shared ini: `response_checksum_validation`
  - environment variable: `AWS_RESPONSE_CHECKSUM_VALIDATION`
When using "@aws-sdk/lib-storage" in the browser, I always get the following error when uploading a large file:
InvalidRequest: The upload was created using a crc32 checksum. The complete request must include the checksum for each part. It was missing for part 1 in the request.
Hey @Mette1982 ,
It works on my end -
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client, ChecksumAlgorithm } from "@aws-sdk/client-s3";
import { readFile } from "fs/promises";
async function uploadLargeFile() {
try {
const fileContent = await readFile("./large-file.txt"); // file is 10MB
const parallelUploads3 = new Upload({
client: new S3Client({
region: "us-east-1",
}),
params: {
Bucket: "new-bucket-maggie-ma",
Key: "large-file.txt",
Body: fileContent,
// Ensure proper content type
ContentType: "text/plain",
ChecksumAlgorithm: ChecksumAlgorithm.CRC32,
},
// part size 5MB
partSize: 1024 * 1024 * 5,
queueSize: 1,
leavePartsOnError: true,
});
// Log upload progress
parallelUploads3.on("httpUploadProgress", (progress) => {
console.log(`Uploaded ${progress.loaded} of ${progress.total} bytes`);
});
await parallelUploads3.done();
console.log("File uploaded successfully");
} catch (error) {
console.error("Upload error:", error);
// Log more details about the error
if (error.message) console.error("Error message:", error.message);
if (error.code) console.error("Error code:", error.code);
}
}
uploadLargeFile();
Result I got -
Uploaded 5242880 of 10485760 bytes
Uploaded 10485760 of 10485760 bytes
File uploaded successfully
Please check the version of @aws-sdk/client-s3 in node_modules and package-lock.json and confirm the version for me. If you are on an older version, then it might not be related to this release. If you are on the latest version, please kindly share a minimal code reproduction.
Thanks!
@trivikr Right, but this library doesn't live in a vacuum and is used extensively outside of AWS. This was poorly handled on your part and resulted in a ton of major services breaking. This should have been released as a major breaking change, not a minor point release.
It broke access to DigitalOcean Spaces (https://status.digitalocean.com/incidents/zbrpd3j7hrrd), Cloudflare R2 and Min.io just to name a few.
This wasted 3 hours of my time. Imagine testing things in production at the AWS / CloudFlare scale. Disappointing. Had to dig through some random forum thread to even know this is happening. And people there said they were just as blindsided and found out through some Discord conversation.
The only workaround I found so far for CloudFlare R2 for anyone dealing with this was to
A) Pin the dependency to a fixed older version like 3.712.0 (since package managers may install a higher minor version even if you specify a range like ^3.712.0).
"@aws-sdk/client-s3": "3.712.0",
"@aws-sdk/s3-request-presigner": "3.712.0",
B) Remove the headers manually from the AWS uploader.
import {
DeleteObjectCommand,
PutObjectCommand,
S3Client,
S3ClientConfig,
} from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
const s3Client = new S3Client({
region: "auto",
endpoint: `https://${process.env.CLOUDFLARE_ACCOUNT_ID}.r2.cloudflarestorage.com`,
credentials: {
accessKeyId: process.env.CLOUDFLARE_ACCESS_KEY_ID,
secretAccessKey: process.env.CLOUDFLARE_SECRET_ACCESS_KEY,
},
requestChecksumCalculation: "WHEN_REQUIRED",
} as S3ClientConfig);
s3Client.middlewareStack.add(
  (next) =>
    async (args: any) => {
      // Remove the checksum headers the SDK now adds by default
      const headers = args.request.headers as Record<string, string>;
      delete headers["x-amz-checksum-crc32"];
      delete headers["x-amz-checksum-crc32c"];
      delete headers["x-amz-checksum-sha1"];
      delete headers["x-amz-checksum-sha256"];
      return next(args);
    },
  { step: "build", name: "customHeaders" }
);
Hope this helps!
This solution does not work for DELETE calls, which, for some reason, still include a checksum header even when both are "WHEN_REQUIRED" in the client configuration. How can I address this without sketchy header-cleaning middleware?
This indeed works perfectly in a Node.js environment, but if I execute the same code in a Next.js app (running in the browser), for instance, it no longer works.
I created a small NextJs app to demonstrate my issue using the code sample you provided => https://github.com/Mette1982/aws-s3-error
You only have to change values in the https://github.com/Mette1982/aws-s3-error/blob/main/src/app/upload.tsx file.
@Mette1982 I can reproduce this issue using the Next.js app you provided.
Created a separate bug report for this issue - https://github.com/aws/aws-sdk-js-v3/issues/6818 CC @Mette1982
This solution does not work for DELETE calls, which, for some reason, still include a checksum header even when both are "WHEN_REQUIRED" in the client configuration.
This issue will be discussed in https://github.com/aws/aws-sdk-js-v3/issues/6819 cc @OIRNOIR
This breaks support for backblaze b2 storage
InvalidArgument: Unsupported header 'x-amz-checksum-crc32' received for this API call.
In package.json we had "@aws-sdk/client-s3": "^3.645.0", so it did auto-upgrade in our build pipeline, then the code broke.
Took me a while to figure out what had happened. Googling this error message didn't lead me to this announcement, and I thought the package versioning followed semver, so I didn't suspect a minor version upgrade would cause a behaviour change.
Since this is a package used by many systems and many of them had auto-upgrade enabled, we need to think of a way to do this better.
I agree. In my opinion, it was very unprofessional to include such a breaking change in a minor version upgrade. The maintainers of this library are fully aware that their SDK is used with different types of cloud storage services, not just first-party Amazon Web Services, so they should have at least announced this change as breaking for third-party services that might not implement support for the x-amz-checksum-crc32 header.
I fully understand that this repository is primarily targeted to serve AWS customers using first-party AWS, but that doesn't waive even the common courtesy of acknowledging that a particular change may be breaking for some people (which is the definition of a breaking change—breaking changes don't always affect everyone, but they potentially affect some people).
That said, I wouldn't complain about this release as much if it weren't for the fact that Integrity protections CANNOT be fully disabled for every type of request.
The official client options to disable the new behavior are incomplete. Setting both to WHEN_REQUIRED allows uploading and downloading content, but deleting still sends checksum headers. My attempt to report this issue (#6819) was closed because the new behavior doesn't cause an error for AWS customers, but the fact remains that there may be valid reasons for AWS customers to want to completely disable the new behavior, even if checksums technically don't evoke a 501 response for them.
And it seems to have been the intent of the maintainers to include a way to disable the new behavior, at least for now, via the request_checksum_calculation and response_checksum_validation configuration options. This is admirable; however, these options are insufficient to fully disable the breaking changes and leave object deletion (among potentially other operations I haven't tested) still broken. I personally believe AWS should commit one way or the other: either give us an official mechanism to disable this new behavior completely, or don't provide us any choice at all.
The choice is yours, Amazon.
AWS SDK for JavaScript is designed to work with AWS services. If you're using a different cloud service, please refer to their documentation on compatibility.
This issue was created for a pinned announcement for discoverability. If you use AWS SDK for JavaScript with Amazon S3 and are impacted, please create a new feature request or a bug report.
Hi,
Please see the documentation here on how to use MD5 checksums for S3 in the AWS SDK for JavaScript v3.