Signed URLs for upload

etiennedupont opened this issue 3 years ago

Feature request

Is your feature request related to a problem? Please describe.

At Labelflow, we developed a tool to upload images to our Supabase storage, based on a Next.js API route. The goal is to abstract the storage method away from the client side by querying a generic upload route for any file, and to simplify permission management: in the server-side function, a service-role Supabase client performs the actual upload. We use next-auth to secure the route (and to manage authentication in the app in general).

Client-side upload looks like this:

await fetch("https://labelflow.ai/api/upload/[key-in-supabase]", {
  method: "PUT",
  body: file,
});

The server-side API route looks more or less like this (I don't show the permission-management part):

import { createClient } from "@supabase/supabase-js";
import nextConnect from "next-connect";

const apiRoute = nextConnect({});
// Service-role client, used server-side only
const client = createClient(
  process.env.SUPABASE_API_URL as string,
  process.env.SUPABASE_API_KEY as string
);
const bucket = "labelflow-images";

apiRoute.put(async (req, res) => {
  const key = (req.query.id as string[]).join("/");
  // req.file is populated by a multipart body parser (omitted here)
  const { file } = req;
  const { error } = await client.storage.from(bucket).upload(key, file.buffer, {
    contentType: file.mimetype,
    upsert: false,
    cacheControl: "public, max-age=31536000, immutable",
  });
  if (error) return res.status(404).end();
  return res.status(200).end();
});

export default apiRoute;

The problem is that we face a serious limitation on upload size: we deploy on Vercel, which doesn't allow serverless functions to handle requests larger than 5 MB. Since we send the images in the upload request from the client to the server, we're likely to hit that limit quite often.

Describe the solution you'd like

As we don't want to manipulate Supabase clients on the client side, we think the ideal solution would be to let us upload directly to Supabase using a signed upload URL. The upload route above would then take only a key as input and return a signed URL to upload to.

Client-side upload would now be in two steps:

// Get a Supabase signed URL
const { signedURL } = await (
  await fetch("https://labelflow.ai/api/upload/[key-in-supabase]", {
    method: "GET",
  })
).json();

// Upload the file
await fetch(signedURL, {
  method: "PUT",
  body: file,
});

And our API route would look more or less like this:

import { createClient } from "@supabase/supabase-js";
import nextConnect from "next-connect";

const apiRoute = nextConnect({});
// Service-role client, used server-side only
const client = createClient(
  process.env.SUPABASE_API_URL as string,
  process.env.SUPABASE_API_KEY as string
);
const bucket = "labelflow-images";

apiRoute.get(async (req, res) => {
  const key = (req.query.id as string[]).join("/");
  const { signedURL } = await client.storage
    .from(bucket)
    .createUploadSignedUrl(key, 3600); // <= this is the missing feature

  if (signedURL) {
    res.setHeader("Content-Type", "application/json");
    return res.status(200).json({ signedURL });
  }

  return res.status(404).end();
});

export default apiRoute;
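
If the SDK also shipped a matching client-side helper, the browser would only need the public anon-key client instead of issuing a raw PUT. Below is a minimal sketch of that variant; the uploadToSignedUrl(path, token, file) helper, the token-based response shape, and all names and env vars are assumed here for illustration:

import { createClient } from "@supabase/supabase-js";

// Browser-side sketch: only the public anon-key client is needed here, and the file
// body never passes through the serverless function. All names are illustrative.
const browserClient = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL as string,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY as string
);

async function uploadDirect(file: File) {
  // 1. Ask our API route for a signed upload grant (token + storage path)
  const { token, path } = await (
    await fetch("https://labelflow.ai/api/upload/[key-in-supabase]")
  ).json();

  // 2. Upload straight to Supabase storage using that grant
  const { error } = await browserClient.storage
    .from("labelflow-images")
    .uploadToSignedUrl(path, token, file);
  if (error) throw error;
}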

Describe alternatives you've considered

I described them in our related issue:

  • Use the Supabase client on the client side (see the sketch after this list). We'd need to take extra care about security, and the fact that we don't use Supabase auth doesn't help. Some references for doing that: https://supabase.io/docs/guides/auth#row-level-security & https://github.com/supabase/supabase/tree/master/examples/nextjs-ts-user-management#postgres-row-level-security
  • Use another storage provider (Google Cloud, AWS) that supports signed upload URLs, end to end, instead of Supabase
  • Use another storage provider (Google Cloud, AWS) that supports signed upload URLs as an intermediate step, e.g. upload to a Google Cloud bucket, then download it from the API serverless function, and finally upload it to Supabase using the client
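
For reference, a minimal sketch of the first alternative above, assuming row-level-security policies on storage.objects gate writes to the bucket (bucket name and env vars are illustrative):

import { createClient } from "@supabase/supabase-js";

// Sketch of alternative 1: upload directly from the browser with the public anon key,
// relying on RLS policies on storage.objects for write authorization.
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL as string,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY as string
);

export async function uploadFromBrowser(key: string, file: File) {
  const { error } = await supabase.storage
    .from("labelflow-images")
    .upload(key, file, { contentType: file.type, upsert: false });
  if (error) throw error;
}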

Additional context

We're happy to work on developing this feature at Labelflow if you think this is the best option!

etiennedupont · Nov 19 '21

I have the same issue for https://capgo.app: I allow users to upload from my CLI with an API key, so they aren't logged in from the CLI. My current workaround is to split the file into 1 MB chunks, upload them in a loop, and edit the file in storage, but it often fails for big files: https://github.com/Cap-go/capgo-cli/issues/12
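
A rough sketch of that chunked workaround, as one possible interpretation only (bucket, key, and env var names are assumptions, and reassembling the parts is left out):

import { createClient } from "@supabase/supabase-js";

// Rough sketch of the chunked workaround described above (one interpretation only):
// each 1 MB slice is uploaded as its own object; reassembly is left to a later step.
const supabase = createClient(
  process.env.SUPABASE_URL as string,
  process.env.SUPABASE_ANON_KEY as string
);

const CHUNK_SIZE = 1024 * 1024; // 1 MB

export async function uploadInChunks(file: Blob, key: string) {
  const chunkCount = Math.ceil(file.size / CHUNK_SIZE);
  for (let i = 0; i < chunkCount; i++) {
    const chunk = file.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
    const { error } = await supabase.storage
      .from("apps")
      .upload(`${key}.part-${i}`, chunk, { upsert: true });
    if (error) throw error; // a single failed chunk aborts the whole upload
  }
  return chunkCount;
}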

riderx · Apr 25 '22

Hello! Apologies for the late reply,

I really like the idea of a signed URL for upload; I will add this to the backlog for discovery & prioritization.

fenos · Aug 25 '22

@fenos thanks for that. For my part, I don't need the feature anymore.

I was able to do the API key check with RLS.

If you want to do it too:

First create key_mode, the enum type for API key modes:

CREATE TYPE "public"."key_mode" AS ENUM (
    'read',
    'write',
    'all',
    'upload'
);

Then create the table:

CREATE TABLE "public"."apikeys" (
    "id" bigint NOT NULL,
    "created_at" timestamp with time zone DEFAULT "now"(),
    "user_id" "uuid" NOT NULL,
    "key" character varying NOT NULL,
    "mode" "public"."key_mode" NOT NULL,
    "updated_at" timestamp with time zone DEFAULT "now"()
);

Then create the Postgres function:

CREATE OR REPLACE FUNCTION public.is_allowed_apikey(apikey text, keymode key_mode[])
 RETURNS boolean
 LANGUAGE plpgsql
 SECURITY DEFINER
AS $function$
BEGIN
  RETURN (SELECT EXISTS (
    SELECT 1
    FROM apikeys
    WHERE key = apikey
      AND mode = ANY(keymode)
  ));
END;
$function$;

Then add the RLS check to the tables you want to give access to:

is_allowed_apikey(((current_setting('request.headers'::text, true))::json ->> 'apikey'::text), '{all,write}'::key_mode[])

In SDK v1 you can add your API key like this:

const supabase = createClient(hostSupa, supaAnon, {
    headers: {
        apikey: apikey,
    }
})

In SDK v2:

const supabase = createClient(hostSupa, supaAnon, {
  global: {
    headers: {
      apikey: apikey,
    },
  },
})
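
A short usage sketch: with the apikey header attached, a query against an RLS-protected table only returns rows when the policy's is_allowed_apikey check passes (the table name here is illustrative):

// Illustrative usage of the client created above: rows come back only when the
// RLS policy's is_allowed_apikey(request.headers ->> 'apikey', ...) check is true.
const { data, error } = await supabase.from("apps").select("*");
if (error) console.error("query failed:", error.message);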

riderx · Aug 26 '22

That would be very much appreciated. Thank you.

kfields · Aug 28 '22

+1 for this; signed upload URLs would solve a lot of my own implementation issues around using Supabase storage with Next.js.

n-glaz · Sep 07 '22

➕ 💯 This would be great.

th-m · Sep 29 '22

+1 would really like this

chitalian · Nov 03 '22

I updated my comment above for people who want the same API key system as me.

riderx · Nov 04 '22

+1

413n · Nov 15 '22

+1

c3z · Dec 19 '22

+1

huntedman · Jan 11 '23

Is this still prioritized? Our DB is set up so that we can use middleware to handle auth, but that isn't the case for storage uploads. If we can't create a signed URL, we have to use RLS to control upload authorization, which doesn't work in all of our cases. This would be extremely useful for letting some access control live in middleware for file uploads.

yoont4 · Jan 18 '23

I'm also interested in this feature. I would love to create presigned URLs for uploads to save bandwidth and avoid file size limitations, while using our own server for most of the business logic. It looks like @etiennedupont has fixed their issue by using S3 directly, unfortunately.

ccssmnn · Mar 03 '23

I can share my solution: I deployed a proxy server using fly.io to circumvent the issue. However, it's not ideal, and I'm still waiting for this feature too.

c3z · Mar 03 '23

(Quoting @riderx's API key + RLS instructions from above.)

Anyone else having trouble with the custom headers? Tried logging the request headers and my custom headers are never attached.

Eerkz · Aug 07 '23