
feat(medusa-file-s3): Support for any s3 compatible service

Open ashutoshpw opened this issue 3 years ago • 3 comments

This change lets users pass config options to the plugin that are forwarded directly to the aws-sdk library. Beyond enabling any S3-compatible service, this approach also exposes the rest of the aws-sdk client configuration.
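To illustrate the idea (a hypothetical sketch, not the plugin's actual API — `buildS3ClientConfig` and the option names are assumptions): the plugin could peel off its own options and forward everything else verbatim to the aws-sdk S3 constructor.

```javascript
// Hypothetical helper: split plugin-specific options from aws-sdk options.
// Everything not consumed by the plugin passes straight through to aws.S3.
const buildS3ClientConfig = (pluginOptions = {}) => {
  const { bucket, ...awsConfig } = pluginOptions
  return {
    // Defaults that most S3-compatible services understand (illustrative)
    s3ForcePathStyle: true,
    signatureVersion: "v4",
    // endpoint, credentials, region, etc. are forwarded untouched
    ...awsConfig,
  }
}

// e.g. options from medusa-config.js:
const cfg = buildS3ClientConfig({
  bucket: "my-bucket",
  endpoint: "https://s3.us-west-002.backblazeb2.com",
  region: "us-west-002",
})
// cfg would then be passed to `new aws.S3(cfg)`
```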

ashutoshpw avatar Aug 22 '22 22:08 ashutoshpw

⚠️ No Changeset found

Latest commit: 36d11db936333cc61154206a389b450ab23b4a93

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types
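For context, a changeset is a small markdown file committed under `.changeset/`. A minimal one for this PR might look like this (the `minor` bump type is an assumption):

```md
---
"medusa-file-s3": minor
---

Forward plugin config options to aws-sdk so any S3-compatible service can be used.
```

Running `npx changeset` generates such a file interactively.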


changeset-bot[bot] avatar Aug 22 '22 22:08 changeset-bot[bot]

@srindom I have rebased it onto the develop branch. Is there anything else I should do to get this merged into develop?

ashutoshpw avatar Aug 25 '22 10:08 ashutoshpw

@shahednasser docs are updated now for this one.

ashutoshpw avatar Aug 26 '22 18:08 ashutoshpw

Hey, any news on this PR? It's been open for quite some time now and is overdue...

Thanks!

RegisHubelia avatar Dec 05 '22 20:12 RegisHubelia

@RegisHubelia Sorry for keeping this one hanging. Will attend to it as soon as conflicts are resolved.

@ashutoshpw Any chance you have time to fix the last couple of things, so we can run another round of reviews?

olivermrbl avatar Dec 08 '22 06:12 olivermrbl

@olivermrbl, All good - I ended up modifying the current plugin so it works with S3 providers in general. But since this PR is coming soon, I'll simply switch back to the plugin once it's merged.

RegisHubelia avatar Dec 08 '22 12:12 RegisHubelia

Hey @RegisHubelia! Do you mind sharing how you did it? I need to test S3 integration with a non-AWS provider. Thanks!

Charlio99 avatar Dec 11 '22 23:12 Charlio99

Yeah, sure - here you go. I didn't reinvent the wheel here; it works with Backblaze, though I haven't actually tried it with AWS. The `public-read` ACL will give you trouble - I need to fix that, as it isn't handled the same way by every S3 provider - but a private bucket on Backblaze works fine.

```js
import fs from "fs"
import stream from "stream"
import aws from "aws-sdk"
import { AbstractFileService } from "@medusajs/medusa"

class S3Service extends AbstractFileService {
  // eslint-disable-next-line no-empty-pattern
  constructor({}, options) {
    super({}, options)
    options.isProtected = true
    this.bucket_ = process.env.S3_BUCKET
    this.s3Url_ = process.env.S3_URL
    this.accessKeyId_ = process.env.S3_ACCESS_KEY_ID
    this.secretAccessKey_ = process.env.S3_SECRET_ACCESS_KEY
    this.region_ = process.env.S3_REGION
    this.endpoint_ = process.env.S3_ENDPOINT || process.env.S3_URL
    this.s3_ = new aws.S3({
      endpoint: this.endpoint_,
      accessKeyId: this.accessKeyId_,
      secretAccessKey: this.secretAccessKey_,
    })
  }

  upload(file) {
    this.updateAwsConfig()

    return this.uploadFile(file)
  }

  uploadProtected(file) {
    this.updateAwsConfig()

    return this.uploadFile(file, { acl: "private" })
  }

  uploadFile(file, options = { isProtected: false, acl: undefined }) {
    const s3 = this.s3_
    const params = {
      ACL: options.acl ?? (options.isProtected ? "private" : "public-read"),
      Bucket: this.bucket_,
      Body: fs.createReadStream(file.path),
      Key: `${file.originalname}`,
    }

    return new Promise((resolve, reject) => {
      s3.upload(params, (err, data) => {
        if (err) {
          reject(err)
          return
        }

        resolve({ url: data.Location, key: data.Key })
      })
    })
  }

  async delete(file) {
    this.updateAwsConfig()

    const s3 = this.s3_
    const params = {
      Bucket: this.bucket_,
      Key: `${file}`,
    }

    return new Promise((resolve, reject) => {
      s3.deleteObject(params, (err, data) => {
        if (err) {
          reject(err)
          return
        }
        resolve(data)
      })
    })
  }

  async getUploadStreamDescriptor(fileData) {
    this.updateAwsConfig()

    const pass = new stream.PassThrough()

    const fileKey = `${fileData.name}.${fileData.ext}`
    const params = {
      ACL: fileData.acl ?? "private",
      Bucket: this.bucket_,
      Body: pass,
      Key: fileKey,
    }

    const s3 = this.s3_
    return {
      writeStream: pass,
      promise: s3.upload(params).promise(),
      url: `${this.s3Url_}/${fileKey}`,
      fileKey,
    }
  }

  async getDownloadStream(fileData) {
    this.updateAwsConfig()

    const s3 = this.s3_
    const params = {
      Bucket: this.bucket_,
      Key: `${fileData.fileKey}`,
    }

    return s3.getObject(params).createReadStream()
  }

  async getPresignedDownloadUrl(fileData) {
    this.updateAwsConfig({
      signatureVersion: "v4",
    })

    const s3 = this.s3_
    const params = {
      Bucket: this.bucket_,
      Key: `${fileData.fileKey}`,
      Expires: 320,
    }

    return await s3.getSignedUrlPromise("getObject", params)
  }

  // Push credentials/endpoint into the global aws-sdk config before each call
  updateAwsConfig(additionalConfiguration = {}) {
    aws.config.setPromisesDependency(null)
    aws.config.update(
      {
        accessKeyId: this.accessKeyId_,
        secretAccessKey: this.secretAccessKey_,
        region: this.region_,
        endpoint: this.endpoint_,
        ...additionalConfiguration,
      },
      true
    )
  }
}

export default S3Service
```

RegisHubelia avatar Dec 11 '22 23:12 RegisHubelia

@RegisHubelia I'm getting: Malformed Access Key Id. Did this happen to you? If so, how did you solve it?

Charlio99 avatar Dec 12 '22 10:12 Charlio99

I have not... What S3 provider are you using? Did you set the env variables?

```
S3_BUCKET=thebucket
S3_REGION=us-west-002
S3_ACCESS_KEY_ID=theaccesskeyid
S3_SECRET_ACCESS_KEY=thesecretaccesskey
S3_URL=https://s3.us-west-002.backblazeb2.com
```

RegisHubelia avatar Dec 12 '22 11:12 RegisHubelia

@RegisHubelia I think I found the issue: the package is not being patched. I'm using `npx patch-package medusa-file-s3`, and it doesn't seem to be working. How did you patch the package?

Charlio99 avatar Dec 12 '22 11:12 Charlio99

@Charlio99, you'll need to create a service and not use the actual package included with medusa. You can have a look here: https://docs.medusajs.com/advanced/backend/services/create-service/

Basically, create a file `services/s3.js` and paste in the code I sent you.

RegisHubelia avatar Dec 12 '22 11:12 RegisHubelia

@RegisHubelia that did the trick for Backblaze. Now I'm trying to get it to work with Storj DCS, but it's not working: no errors are shown in the UI or logs, it just reports success as if the image was uploaded.

Charlio99 avatar Dec 12 '22 11:12 Charlio99

It will need debugging... I'm not familiar with Storj, but I came across this: https://www.storj.io/blog/what-is-s3-compatibility

RegisHubelia avatar Dec 12 '22 11:12 RegisHubelia

I know, but it doesn't work out of the box like Backblaze with the code you sent. Do you know if there's any option to turn on verbose logging only for that component?

Charlio99 avatar Dec 12 '22 12:12 Charlio99
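Regarding verbose output scoped to just the S3 client: aws-sdk v2 accepts a `logger` option (any object with a `log` method) on the service constructor, so request-level logging can be enabled for this one service without touching global logging. A minimal sketch, where `makeVerboseLogger` is an illustrative helper, not part of any library:

```javascript
// Illustrative helper: collects and prefixes aws-sdk request log lines.
// aws-sdk v2 calls logger.log(...) once per request when `logger` is set.
const makeVerboseLogger = (sink = console) => ({
  messages: [],
  log(...args) {
    const line = `[s3-debug] ${args.join(" ")}`
    this.messages.push(line)
    sink.log(line)
  },
})

// Usage (assuming aws-sdk v2 is installed):
//   const s3 = new aws.S3({ endpoint: this.endpoint_, logger: makeVerboseLogger() })

// Demo without aws-sdk: feed the logger a fake request log line.
const logger = makeVerboseLogger({ log: () => {} })
logger.log("[AWS s3 200 0.5s] putObject(...)")
```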

Hey, thanks for the PR! Since v2 brought a lot of architectural and API changes to the backend, we will be closing this ticket.

riqwan avatar Jul 05 '24 10:07 riqwan