
Using public s3 URL from Storage.get

Open Dongw1126 opened this issue 3 years ago • 29 comments

Is this related to a new or existing framework?

No response

Is this related to a new or existing API?

Storage

Is this related to another service?

S3

Describe the feature you'd like to request

I want to get the plain public object URL when I use Storage.get. Currently, I receive only a signed URL.

I'm going to use S3 as image storage that all users can write to and read from. However, because the signed URL changes every time users refresh, image caching doesn't work and resources are wasted.

Describe the solution you'd like

I'd like to get a public object URL without a signature from Storage.get. I wish Amplify would provide an option for public objects.
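For reference, the unsigned URL for an object uploaded at Amplify's public access level has a predictable shape. A minimal sketch that builds it from the key (the bucket and region values below are placeholders, and the URL only resolves if the bucket policy allows public reads of that prefix):

```typescript
// Build the unsigned S3 object URL for an object uploaded at the "public"
// access level. BUCKET and REGION are placeholder values.
const BUCKET = "my-app-storage";
const REGION = "us-east-1";

function publicObjectUrl(key: string): string {
  // Amplify Storage prefixes public-level objects with "public/".
  // Encode each path segment, but keep "/" separators intact.
  const encodedKey = key.split("/").map(encodeURIComponent).join("/");
  return `https://${BUCKET}.s3.${REGION}.amazonaws.com/public/${encodedKey}`;
}
```

Because this URL carries no signature, it never expires and is stable across refreshes, which is exactly what browser caching needs.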

Describe alternatives you've considered

It would be nice if the signed URL didn't change on every refresh.
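One client-side mitigation along those lines (a sketch, not an Amplify API; `getSignedUrl` below stands in for a call like `Storage.get(key)`) is to memoize the signed URL for slightly less than its validity window, so repeated lookups within a session reuse the same URL and the browser's HTTP cache stays effective:

```typescript
// Cache signed URLs for slightly less than their validity window so repeated
// lookups reuse the same URL instead of generating a fresh signature each time.
// getSignedUrl is a placeholder for a call like Storage.get(key).
type Entry = { url: string; fetchedAt: number };

const urlCache = new Map<string, Entry>();
const TTL_MS = 14 * 60 * 1000; // just under the default 15-minute expiry

async function cachedSignedUrl(
  key: string,
  getSignedUrl: (k: string) => Promise<string>,
  now: () => number = Date.now
): Promise<string> {
  const hit = urlCache.get(key);
  if (hit && now() - hit.fetchedAt < TTL_MS) return hit.url;
  const url = await getSignedUrl(key);
  urlCache.set(key, { url, fetchedAt: now() });
  return url;
}
```

This only helps within one page session (the in-memory cache does not survive a full reload), so it is a partial workaround rather than a fix.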

Additional context

No response

Is this something that you'd be interested in working on?

  • [ ] 👋 I may be able to implement this feature request
  • [ ] ⚠️ This feature might incur a breaking change

Dongw1126 avatar Dec 31 '21 18:12 Dongw1126

related to https://github.com/aws-amplify/amplify-js/issues/960

jamesaucode avatar Jan 05 '22 18:01 jamesaucode

Having the exact same issue. I've tried to use the image URL directly, but I regularly get 403s unless the entire bucket is public.

mdarche avatar Jan 12 '22 16:01 mdarche

related to #6935

stocaaro avatar Jun 01 '22 21:06 stocaaro

same here.

ilia-luk avatar Jun 05 '22 21:06 ilia-luk

It's been a while, @nadetastic.

Any progress on this?

kimfucious avatar Jul 01 '23 08:07 kimfucious

Hi @kimfucious - we are working on this feature, and have identified a couple of options for moving forward. We will provide updates on this ticket when we have timelines figured out!

abdallahshaban557 avatar Jul 03 '23 15:07 abdallahshaban557

Hi @abdallahshaban557,

Thanks for the follow up.

Something to consider:

As it stands, we need to call const s3Url = await Storage.get(imageKey) to get the URL for an image in storage.

As far as I can tell, there is no way for the browser to cache these, which results in unnecessary fetches. I could be wrong.

I wound up creating a caching mechanism to deal with this, but--to be blunt--we really shouldn't have to jump through such hoops.

I'll share it here, in case someone comes across this thread and might find use for it.

import React, { createContext, useContext, useEffect } from "react";
import { Storage } from "aws-amplify";
import chalk from "chalk";
import config from "../../config/config.json"

const isDebug = config.site.IS_DEBUG;
const isDev = process.env.NODE_ENV === "development";

interface ImageCacheContextType {
    getImageWithCache: (id: string, imageKey: string) => Promise<string>;
}

const ImageCacheContext = createContext<ImageCacheContextType | null>(null);

export const useImageCache = (): ImageCacheContextType => {
    const context = useContext(ImageCacheContext);
    if (!context) {
        throw new Error(
            "useImageCache must be used within an ImageCacheProvider"
        );
    }
    return context;
};

interface ImageCacheProviderProps {
    children: React.ReactNode;
}

const imageCache: Record<string, string> = {};

export default function ImageCacheProvider({
    children,
}: ImageCacheProviderProps): JSX.Element {
    async function getImageWithCache(
        id: string,
        imageKey: string
    ): Promise<string> {
        if (imageCache[id]) {
            if (isDebug || isDev) {
                console.log(chalk.green("Image cache hit!"));
            }
            return imageCache[id];
        } else {
            if (isDebug || isDev) {
                console.log(chalk.yellow("Image cache miss!"));
            }
            const s3Url = await Storage.get(imageKey);
            const response = await fetch(s3Url);
            const blob = await response.blob();
            const objectUrl = URL.createObjectURL(blob);
            imageCache[id] = objectUrl;
            return objectUrl;
        }
    }

    useEffect(() => {
        return () => {
            for (const id in imageCache) {
                if (imageCache.hasOwnProperty(id)) {
                    URL.revokeObjectURL(imageCache[id]);
                }
            }
        };
    }, []);

    const contextValue: ImageCacheContextType = {
        getImageWithCache,
    };

    return (
        <ImageCacheContext.Provider value={contextValue}>
            {children}
        </ImageCacheContext.Provider>
    );
}

I look forward to a solution that allows us to store a non-expiring URL somewhere.

kimfucious avatar Jul 03 '23 15:07 kimfucious

Hi @kimfucious - As part of enabling this, we are also considering giving you the ability to cache all your content behind a CDN such as CloudFront. So we are thinking about that as well!

abdallahshaban557 avatar Jul 03 '23 16:07 abdallahshaban557

Great to hear @abdallahshaban557.

Kindly consider that we're not all hosted on Amazon.

While I have apps that are, this particular app is hosted on Vercel.

kimfucious avatar Jul 03 '23 16:07 kimfucious

Hi @abdallahshaban557,

Here's another scenario to consider:

<meta property="og:image" content="https://myapp-storage.s3.us-region-2.amazonaws.com/images/thumb-123.png">

kimfucious avatar Jul 04 '23 08:07 kimfucious

@kimfucious - just to make sure I understand, can you elaborate on that please?

abdallahshaban557 avatar Jul 05 '23 17:07 abdallahshaban557

HI @abdallahshaban557,

Thanks for the follow-up. I realize my comment was a bit vague in hindsight. Let me try to clarify.

We are talking about non-expiring URLs that are retrieved via the Storage.get(key, { options }) pattern.

If we are in a Next.js app, and we are dynamically generating routes/pages, and we want those routes/pages to have an og:image meta tag, we want a URL that does not expire.

We can use the NPM package next-seo to populate a meta tag like the one below if we have a non-expiring URL.

<meta property="og:image" content="https://<non-expiring-url-to-article-image>">

The easiest way to do this would be--I could be wrong--when the image is uploaded to Storage, we can get a non-expiring URL, using Storage.get() right after the PUT, and save that somewhere (e.g. in a db).

Another way would be to handle this in something like getStaticProps/getServerSideProps.

Or it could be a little of both, regardless...

As it stands, any URLs we put in meta tags with the above methods will expire. If someone sends a link to a page with an expired URL, the preview image is broken, and Google search results will not show the preview image either.

I know that I could just create a public S3 bucket and not be hindered by such things, but I'm trying to work within the Amplify way, assuming these are best practices.
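One possible stopgap sketch for the meta-tag case (not an official Amplify API, and it only works when the object is publicly readable; otherwise S3 answers the unsigned URL with a 403): strip the signature query parameters from the URL that Storage.get returns, leaving a stable base URL:

```typescript
// A presigned URL is the plain object URL plus signature query parameters
// (X-Amz-Algorithm, X-Amz-Signature, X-Amz-Expires, ...). Dropping the query
// string yields a stable, non-expiring URL -- but S3 returns 403 on it unless
// the object itself is publicly readable.
function stripSignature(signedUrl: string): string {
  const url = new URL(signedUrl);
  url.search = ""; // remove all X-Amz-* signature parameters
  return url.toString();
}
```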

@dabit3 is a master at this stuff, so he may have some ideas.

kimfucious avatar Jul 05 '23 18:07 kimfucious

Hi @kimfucious - that makes sense! We really appreciate all this feedback!

abdallahshaban557 avatar Jul 05 '23 19:07 abdallahshaban557

My app is also dependent upon this, in a similar way to @kimfucious: I have avatars for users which are stored in my S3 Amplify Storage in a bucket that has public access. I need to pass the URLs for these avatars to a 3rd party service (integrated into my app) which stores them in their database. Because the URLs expire so quickly, the avatars do not work. Rather than make my bucket publicly-facing, I would like to get non-expiring URLs from Storage.get().

So, @abdallahshaban557 I would like to confirm: is your upcoming solution (or part of it) that we'll be able to get the public URL from S3 for a resource that is publicly facing?

I'm asking because @kimfucious said "I know that I could just create a public S3 bucket [...]" and I'm wondering if that will be "the Amplify way" as part of your solution?

DarylBeattie avatar Jul 17 '23 18:07 DarylBeattie

Hi, @DarylBeattie,

I'm asking because @kimfucious said "I know that I could just create a public S3 bucket [...]" and I'm wondering if that will be "the Amplify way" as part of your solution?

While I can't respond for the Amplify team, I'll add my two cents here for clarity:

  1. I already have a public-facing s3 bucket. My app started off this way before adding Amplify Storage.
  2. After implementing Amplify Storage, I noticed the problem of expiring URLs.
  3. As this seriously affected the UI/UX of my app, I kept the public-facing s3 bucket intact and reverted the Amplify Storage implementation.
  4. I personally don't think this is the "Amplify way," but as it stands, it's the only way until this fix is implemented.
  5. When the fix is implemented, we should be able to use an s3 bucket with Amplify without having to worry about these pesky expiring URLs.
  6. I am hopeful that the images will be publicly accessible so that they can be used in things like SEO meta tags and Google structured data.

Kindly confirm, @abdallahshaban557.

kimfucious avatar Jul 18 '23 18:07 kimfucious

Hi @kimfucious and @DarylBeattie - we actually want to support a solution that works for both! So you can either use a public S3 bucket that we then enable through Amplify Storage, or - if you are an existing Amplify Storage developer - use a new prefix that creates long-lived, public-facing URLs.

Let me know your thoughts!

abdallahshaban557 avatar Jul 18 '23 18:07 abdallahshaban557

Hi @abdallahshaban557,

Thanks for the follow up.

A solution that handles both sounds great!

I hope this happens soon.

kimfucious avatar Jul 19 '23 17:07 kimfucious

Please add this feature, it will be very useful!

asawyers avatar Aug 30 '23 21:08 asawyers

One more use case for having a public S3 URL from Storage.get: the signed URLs with access tokens are way too long. They break Stripe's image URL length limit (2048 characters). As things stand, Amplify does not support sending an image URL to Stripe.

StripeInvalidRequestError: Invalid URL: URL must be 2048 characters or less.
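A defensive check before handing a URL to Stripe makes that failure mode explicit (a sketch; the 2048-character limit comes from the error above, and the helper name is hypothetical). Presigned URLs issued with temporary Cognito credentials include a long X-Amz-Security-Token parameter, which is what pushes them past the limit:

```typescript
// Stripe rejects URLs longer than 2048 characters. Presigned S3 URLs signed
// with temporary credentials carry a long X-Amz-Security-Token, which can
// easily exceed that limit. Fail fast instead of waiting for Stripe's error.
const STRIPE_MAX_URL_LENGTH = 2048;

function assertStripeUrl(url: string): string {
  if (url.length > STRIPE_MAX_URL_LENGTH) {
    throw new Error(
      `URL is ${url.length} characters; Stripe allows at most ${STRIPE_MAX_URL_LENGTH}`
    );
  }
  return url;
}
```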

jackshi0912 avatar Sep 01 '23 10:09 jackshi0912

Hi @abdallahshaban557, while this is getting fundamentally fixed, is it possible to increase the max value for the expires attribute to some higher number? Currently, the max value is 1 hour, but something significantly higher would be helpful.

await Storage.get(imageKey, { expires: 86400 });

EDIT: No longer needed in favor of the workaround I posted below

yunchanpaik avatar Sep 14 '23 07:09 yunchanpaik

FWIW, here's the workaround I did until this feature gets implemented. Somewhat hacky, but it is forward compatible once the actual feature arrives.

  1. Make the public folder of the Amplify Storage S3 bucket publicly accessible. (urgh... but should have the same side effect as creating a separate public facing S3 bucket)

  2. Create a custom plugin for Storage that returns a non-signed url of the asset.

import { Storage, StorageProvider } from "@aws-amplify/storage";

const BUCKET_NAME = "yourbucketname";
const REGION = "your region";

class TempStorageProvider implements StorageProvider {
  static category = "Storage";
  static providerName = "TempStorage";

  get(key: string): Promise<string> {
    // Return the unsigned public URL instead of a presigned one.
    const url = `https://${BUCKET_NAME}.s3.${REGION}.amazonaws.com/public/${key}`;
    return Promise.resolve(url);
  }

  getCategory(): string {
    return TempStorageProvider.category;
  }

  getProviderName(): string {
    return TempStorageProvider.providerName;
  }

  configure = Storage.configure;
  put = Storage.put;
  remove = Storage.remove;
  list = Storage.list;
}

Storage.addPluggable(new TempStorageProvider());

  3. Then, you can get the file URL by using the same Storage.get API with extra config:

const url = await Storage.get(key, { provider: "TempStorage" });

Once the feature arrives, you can simply do the following steps to make it "properly" work:

  1. Block public access to the Amplify Storage S3 bucket
  2. Remove the provider config when calling Storage.get

yunchanpaik avatar Sep 14 '23 09:09 yunchanpaik

I am also looking for solution for this scenario. Any update on this @abdallahshaban557 regarding the timeline?

himanshugupta0007 avatar Oct 02 '23 15:10 himanshugupta0007

Hello @himanshugupta0007 - we do not have an update yet. We will keep this issue updated once we have made more progress in this area.

abdallahshaban557 avatar Oct 25 '23 17:10 abdallahshaban557

Any updates here? Also hoping to get a nonexpiring URL for some of my bucket content.

@abdallahshaban557 ?

nprabala avatar Nov 16 '23 22:11 nprabala

Hey guys, any progress on this one? Having the same issue on a Nuxt App. I want to use S3 images on og:image tags

Los avatar Mar 25 '24 20:03 Los

Hello, any updates on this?

Also, is there a workaround for this using CloudFront?

lafeer avatar Jul 20 '24 10:07 lafeer

Hello everyone, has there been any progress on this issue yet? Could you give us an update please?

michaelkroll avatar Oct 13 '24 07:10 michaelkroll

+1 for this issue.

I spent a lot of time trying to figure out why my image wasn't loading from my bucket when using the Amplify Cognito picture field and Amplify Storage to write the image to an S3 bucket. My storage/resource.ts is set to allow authenticated users, so it should really be possible to upload the image to S3 and use the identity pool to validate that the user has access to the bucket object. I guess an alternative is to resize the image to < 64 KB and keep it in a DynamoDB table.

Watching this thread for updates.

craigxgibbons avatar May 11 '25 18:05 craigxgibbons

For the record, I've moved more than one project off AWS to Firebase, chiefly due to this issue and the fact that there has been zero movement here since Oct 2023.

Just sayin'...

@abdallahshaban557

kimfucious avatar May 18 '25 12:05 kimfucious

Hello, is there any update on this? I'm unable to fetch files after using a CDN together with an S3 bucket.

Locaided avatar Sep 01 '25 13:09 Locaided