GetObject for s3 object lambda access point results in invalid inputS3Url value
Checkboxes for prior research
- [x] I've gone through Developer Guide and API reference
- [x] I've checked AWS Forums and StackOverflow.
- [x] I've searched for previous similar issues and didn't find any solution.
Describe the bug
s3 `GetObject` calls to s3 object lambda access points fail with `403 ERR_BAD_REQUEST` starting with v3.729.0 because `event.getObjectContext.inputS3Url` has an `X-Amz-SignedHeaders` value of `host%3Bx-amz-checksum-mode`. Similar calls to the same s3 object lambda access points succeed when made using `getSignedUrl` from `@aws-sdk/s3-request-presigner` with the same `GetObjectCommand`, because `X-Amz-SignedHeaders` still has a value of `host`.
Regression Issue
- [x] Select this option if this issue appears to be a regression.
SDK version number
@aws-sdk/[email protected]
Which JavaScript Runtime is this issue in?
Node.js
Details of the browser/Node.js/ReactNative version
v20.19.0
Reproduction Steps
- Create an s3 object lambda access point that processes `inputS3Url` (e.g. with axios)
- Make a `GetObject` call to a valid object for the access point
- `inputS3Url` will have an `X-Amz-SignedHeaders` value of `host%3Bx-amz-checksum-mode`
- The axios call should fail with `403 ERR_BAD_REQUEST`
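To confirm which headers a presigned URL was actually signed with, the `X-Amz-SignedHeaders` query parameter can be decoded directly. This is a minimal sketch using Node's built-in `URL` class; the example URL below is a placeholder standing in for a real `inputS3Url`, not an actual presigned URL:

```js
// Sketch: inspect which headers a presigned URL was signed with.
// The example URL is a redacted placeholder for event.getObjectContext.inputS3Url.
function signedHeaders(presignedUrl) {
  const params = new URL(presignedUrl).searchParams;
  // The parameter is percent-encoded in the raw URL (host%3Bx-amz-checksum-mode);
  // searchParams.get() returns it decoded, so splitting on ';' yields the list.
  return (params.get('X-Amz-SignedHeaders') || '').split(';');
}

const example =
  'https://example.s3-accesspoint.us-east-2.amazonaws.com/file' +
  '?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-SignedHeaders=host%3Bx-amz-checksum-mode';
console.log(signedHeaders(example));
// → [ 'host', 'x-amz-checksum-mode' ]
```

Before v3.729.0 this returned only `[ 'host' ]`, which matches the URLs produced by `getSignedUrl`.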
```js
// s3 object lambda access point code
const { S3Client, WriteGetObjectResponseCommand } = require('@aws-sdk/client-s3');
const axios = require('axios').default;

const s3 = new S3Client();

exports.handler = async (event) => {
  const fileUrl = event.getObjectContext.inputS3Url;
  const requestRoute = event.getObjectContext.outputRoute;
  const requestToken = event.getObjectContext.outputToken;
  try {
    await s3.send(new WriteGetObjectResponseCommand({
      RequestRoute: requestRoute,
      RequestToken: requestToken,
      Body: (await axios.get(fileUrl, { responseType: 'stream' })).data,
      ContentType: 'application/json'
    }));
  } catch (e) {
    console.log(e);
    throw e;
  }
};
```
```js
// calling code
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client();

const response = await s3.send(new GetObjectCommand({
  Bucket: s3ObjectLambdaAccessPointArn,
  Key: key
}));
```
Observed Behavior
In the object lambda access point, the call fails with:
```json
{
  "message": "Request failed with status code 403",
  "name": "AxiosError",
  "stack": "AxiosError: Request failed with status code 403\n at settle (/var/task/index.js:50819:16)\n at IncomingMessage.handleStreamEnd (/var/task/index.js:51636:15)\n at IncomingMessage.emit (node:events:536:35)\n at endReadableNT (node:internal/streams/readable:1698:12)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21)\n at Axios.request (/var/task/index.js:52427:45)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async exports.handler (/var/task/index.js:63609:26)",
  "config": {
    "transitional": {
      "silentJSONParsing": true,
      "forcedJSONParsing": true,
      "clarifyTimeoutError": false
    },
    "adapter": ["xhr", "http", "fetch"],
    "transformRequest": [null],
    "transformResponse": [null],
    "timeout": 0,
    "xsrfCookieName": "XSRF-TOKEN",
    "xsrfHeaderName": "X-XSRF-TOKEN",
    "maxContentLength": -1,
    "maxBodyLength": -1,
    "env": {},
    "headers": {
      "Accept": "application/json, text/plain, */*",
      "range": "bytes=0-0",
      "User-Agent": "axios/1.8.2",
      "Accept-Encoding": "gzip, compress, deflate, br"
    },
    "responseType": "arraybuffer",
    "method": "get",
    "url": "https://<access-point>.s3-accesspoint.us-east-2.amazonaws.com/<file-name>?X-Amz-Security-Token=<security-token>&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20250623T211144Z&X-Amz-SignedHeaders=host%3Bx-amz-checksum-mode&X-Amz-Expires=61&X-Amz-Credential=<key>%2F20250623%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Signature=<signature>",
    "allowAbsoluteUrls": true
  },
  "code": "ERR_BAD_REQUEST",
  "status": 403
}
```
Expected Behavior
We expected the calls to succeed as they did before this update.
Possible Solution
The internal call to the s3 object lambda access point should be made with a presigned URL whose `X-Amz-SignedHeaders` value is the original `host`.
Additional Information/Context
Calls do succeed when the S3 client is initialized with `responseChecksumValidation` set to `'WHEN_REQUIRED'`, but it seems like this shouldn't be a required workaround for working with a standard AWS service such as s3 object lambda access points.
Hi @jaube-litify - thanks for reaching out!
Back in January, we adopted the S3 default data integrity change starting in v3.729.0, where clients now default to enabling an additional checksum on all Put calls and enabling validation on Get calls.
Although disabling these new default integrity protections from S3 is not recommended, it can be done by setting the value `WHEN_REQUIRED` for Get calls in:
- client configuration: `responseChecksumValidation`
- shared ini: `response_checksum_validation`
- environment variable: `AWS_RESPONSE_CHECKSUM_VALIDATION`
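The three options above could be applied as follows; this is a configuration sketch based on the option names listed in this thread, not a recommendation to disable the protections:

```js
// Sketch: opting Get calls out of response checksum validation.
const { S3Client } = require('@aws-sdk/client-s3');

// 1. Client configuration:
const s3 = new S3Client({
  responseChecksumValidation: 'WHEN_REQUIRED'
});

// 2. Equivalent shared config file (~/.aws/config) entry:
//    response_checksum_validation = when_required

// 3. Equivalent environment variable:
//    AWS_RESPONSE_CHECKSUM_VALIDATION=when_required
```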
Regarding your specific error, you mentioned the calls work when checksum validation is disabled, but seeing an AxiosError is unexpected in this context. Here are my recommendations:
- Update to the latest AWS SDK version
- Consider switching from Axios to Node's native fetch API
Let me know if the issue persists! For more details about this data integrity change, see this blog post: https://aws.amazon.com/blogs/aws/introducing-default-data-integrity-protections-for-new-objects-in-amazon-s3/
@aBurmeseDev I'm getting a similar error with fetch and the latest AWS SDK version (3.837.0):
```json
{
  "errorType": "Error",
  "errorMessage": "Failed to fetch: Forbidden",
  "stack": [
    "Error: Failed to fetch: Forbidden",
    " at exports.handler (/var/task/index.js:50740:37)",
    " at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
  ]
}
```
@aBurmeseDev just wanted to check and see if there was an update on this. As mentioned in the previous comment, the issue persists after updating to the latest AWS SDK and using fetch. Would prefer not to have to disable the new default integrity protections by S3, but right now that's our only working option.
@kuhe saw that you've responded to some other issues recently. Are you able to look at this one?