
S3: Domain overwrites custom bucket name

Open · agnoam opened this issue 3 years ago · 1 comment

Describe the bug

I'm using the package to push and pull objects (files) from self-hosted, S3-compatible blob storage. The problem occurs when I initialize the client with the configuration shown below.

Expected Behavior

When I initialize the client with the parameters:

s3ForcePathStyle: true,
s3BucketEndpoint: false

The endpoint's domain name should be used only as the host (not as a bucket), and the object should be uploaded to the bucket named in the request.

Current Behavior

When I initialize the package with those parameters:

const client: AWS.S3 = new AWS.S3({
    apiVersion: '2006-03-01',
    region: '',
    s3ForcePathStyle: true,
    s3BucketEndpoint: false,
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    endpoint: new AWS.Endpoint('http://<domain-name>:<port>')
});

await client.upload({
    Key: <file-name>,
    Bucket: 'UploadsBucket',
    Body: fs.createReadStream(<file-path>)
}).promise()

The bucket the object is pushed into is <domain-name>, and the object key becomes UploadsBucket/<file-name>.
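For illustration, the difference between the expected and the reported behavior can be sketched as request shapes. These URLs are assumptions built from the placeholders above, not captured SDK traffic:

// Expected (path-style: bucket in the path, endpoint host untouched):
//   PUT http://<domain-name>:<port>/UploadsBucket/<file-name>
//   -> object stored in bucket "UploadsBucket" with key "<file-name>"
//
// Reported behavior:
//   -> object stored in bucket "<domain-name>" with key "UploadsBucket/<file-name>"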

Reproduction Steps

I used the fake-s3 Docker image to deploy S3-compatible object storage.

import fs from 'fs';
import AWS from 'aws-sdk';

(async () => {
    const client: AWS.S3 = new AWS.S3({
        apiVersion: '2006-03-01',
        region: '',
        s3ForcePathStyle: true,
        s3BucketEndpoint: false,
        accessKeyId: <access-key>,
        secretAccessKey: <secret-access-key>,
        endpoint: new AWS.Endpoint('http://fake-s3:4569') // On my docker-compose
    });

    // `file` is the incoming upload (an object with `filename` and `path`)
    await client.upload({
        Key: file.filename,
        Bucket: 'UploadsBucket',
        Body: fs.createReadStream(file.path)
    }).promise();
})();

The created object ends up in a bucket named fake-s3 instead of UploadsBucket.
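To confirm where the object actually landed, a small check like the following can be appended inside the same async block; listBuckets and listObjectsV2 are standard SDK v2 calls, and the bucket name fake-s3 comes from the behavior reported above (this is a verification sketch, not part of the original report):

    // List the buckets the fake-s3 server knows about; the report implies
    // this returns a bucket named "fake-s3" instead of "UploadsBucket".
    const { Buckets } = await client.listBuckets().promise();
    console.log('Buckets:', Buckets?.map((b) => b.Name));

    // Inspect the keys inside that bucket; the report implies the key is
    // "UploadsBucket/<file-name>" rather than just "<file-name>".
    const { Contents } = await client.listObjectsV2({ Bucket: 'fake-s3' }).promise();
    console.log('Keys:', Contents?.map((o) => o.Key));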

Possible Solution

No response

Additional Information/Context

No response

SDK version used

"^2.913.0"

Environment details (OS name and version, etc.)

Node.js docker image v16

agnoam · Jul 12 '22 18:07

Hi @agnoam - apologies for the long wait. The behavior you're seeing is likely due to the way the AWS SDK is handling the s3ForcePathStyle and s3BucketEndpoint options when used in combination with a custom endpoint.

s3ForcePathStyle is used to force the request to use path-style addressing, which is useful for certain scenarios like using non-AWS services or when dealing with bucket names that don't follow the DNS naming conventions.

The s3BucketEndpoint option, when set to true, tells the SDK that the endpoint you provided already addresses a single bucket, so the bucket name is not added to the request at all. When set to false (the default), the SDK resolves the bucket itself, sending requests to "bucketName.s3Endpoint" (virtual-hosted style), or to "s3Endpoint/bucketName" when s3ForcePathStyle is true.
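To make the difference concrete, the combinations roughly map to the following request shapes. These URLs are illustrative, assuming the UploadsBucket bucket and the fake-s3 endpoint from this issue, and are not output from the SDK:

// s3BucketEndpoint: true  -> the endpoint is assumed to already address one bucket,
//                            so no bucket name is added to the host or path:
//   PUT http://fake-s3:4569/<file-name>
//
// s3BucketEndpoint: false, s3ForcePathStyle: true  -> path-style addressing:
//   PUT http://fake-s3:4569/UploadsBucket/<file-name>
//
// s3BucketEndpoint: false, s3ForcePathStyle: false -> virtual-hosted-style addressing:
//   PUT http://UploadsBucket.fake-s3:4569/<file-name>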

In your case, since you're using a custom endpoint (http://fake-s3:4569) and have set s3ForcePathStyle to true and s3BucketEndpoint to false, the SDK is likely treating the custom endpoint host as the bucket name (fake-s3) and using the bucket name you provided (UploadsBucket) as a prefix of the object key.
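If that diagnosis is right, a configuration along the following lines is worth trying: keep s3ForcePathStyle and leave s3BucketEndpoint unset so the endpoint is treated as the service root. This is a sketch based on the explanation above, not a confirmed fix, and the non-empty region is an assumption (fake-s3 generally does not validate it):

const client = new AWS.S3({
    apiVersion: '2006-03-01',
    region: 'us-east-1',                                 // any non-empty region
    s3ForcePathStyle: true,                              // bucket goes into the path, not the host
    // s3BucketEndpoint intentionally not set: the endpoint is the service root
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    endpoint: new AWS.Endpoint('http://fake-s3:4569')
});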

Hope that helps, John

aBurmeseDev · Aug 21 '24 07:08

This issue has not received a response in 1 week. If you still think there is a problem, please leave a comment to prevent the issue from closing automatically.

github-actions[bot] · Sep 01 '24 00:09