Make object upload size limit at bucket level configurable

Open rahul3v opened this issue 2 years ago • 4 comments

Feature request

Make the object upload size limit configurable at the bucket level (even better if you extend it to the folder level)

Is your feature request related to a problem? Please describe.

It would let us limit the file size users can upload to a bucket/folder (for example, profile pics need at most 1MB). Since a frontend file-size check can be bypassed with tools, it would be better to have checks on the backend before the upload.

Even though we have a configurable setting to limit the max file size, that does not help much: it lets every user upload up to that global limit, which, depending on the project/product structure, might not be needed everywhere. The issue here is that a user could fill the storage bucket with raw data.

Describe the solution you'd like

  • Make the existing global file-size check you already run on every upload configurable at the bucket level.
  • Any extension
  • Any policy check that can help (like with storage.filesize())

A clear and concise description of what you want to happen.

When a user uploads a file to a bucket, I want the file size checked on the backend, before it is stored in the bucket.

Say your global limit is 50MB. I should then be able to configure each of my buckets somewhere between 0-50MB as my project needs, like:

 avtar_bucket  (max_file_limit :1MB)
 project_bucket  (max_file_limit :5MB)
 large_bucket    (max_file_limit :50MB)
  ..........
  ..........
  ..........

That would help me use my storage precisely within my storage volume limits.
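Until a per-bucket setting like this exists, the mapping above can only be approximated on the client. A minimal sketch (purely illustrative; `BUCKET_LIMITS` and `checkUploadSize` are hypothetical names, and a client-side check can still be bypassed, which is exactly why the request asks for backend enforcement):

```javascript
// Hypothetical client-side guard mirroring the per-bucket limits
// proposed above (bucket names and sizes taken from the examples).
const BUCKET_LIMITS = {
  avtar_bucket: 1 * 1024 * 1024,    // 1MB
  project_bucket: 5 * 1024 * 1024,  // 5MB
  large_bucket: 50 * 1024 * 1024,   // 50MB
};

// Returns true when a file of sizeBytes may go into the bucket.
function checkUploadSize(bucket, sizeBytes) {
  const limit = BUCKET_LIMITS[bucket];
  if (limit === undefined) throw new Error(`unknown bucket: ${bucket}`);
  return sizeBytes <= limit;
}
```

One would call this before `supabase.storage.from(bucket).upload(...)` and refuse oversized files early, while still relying on the (requested) server-side check as the real gate.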

Describe alternatives you've considered

Otherwise we have to route every upload through a server function just for this check, effectively doing the same work twice (my per-bucket limit check running in my server code, plus your (Supabase) global limit check running in yours).

That defeats the real point of the Supabase Storage API: uploading directly from the client side.

rahul3v avatar Apr 09 '22 09:04 rahul3v

@kiwicopple, @thebengeu, @alaister, Any comments...!

rahul3v avatar May 03 '22 11:05 rahul3v

Hey @rahul3v, Really sorry for the delayed response - we've been busy!

We discussed this in our most recent storage meeting, and we think it's a great idea!

I've added it to our internal to-do list but can't give you any timeline on when this might be implemented.

Thanks again, and as always, PRs are welcome :)

alaister avatar May 23 '22 07:05 alaister

@alaister, if possible, extend it to the folder level. No deep nesting needed; one folder level would be OK, or at most two :)

rahul3v avatar May 24 '22 13:05 rahul3v

This would be awesome. Having the max storage size for all buckets set to one size is rough.

rlee1990 avatar Jul 25 '22 01:07 rlee1990

This is now being shipped with https://github.com/supabase/storage-api/pull/277

fenos avatar Mar 06 '23 10:03 fenos

Is this functionality ready to use now? If so, how do we use it?

jopfre avatar Mar 09 '23 22:03 jopfre

@fenos Related to this: could we define a max for the bucket itself? (I want to set a maximum for the entire bucket, i.e. the sum of all files inside it.)

KhaledGabr avatar Mar 14 '23 15:03 KhaledGabr

These options are available when you create or edit a bucket in the dashboard, and in the API and client library too.

(screenshot: bucket settings in the dashboard)

inian avatar Apr 18 '23 07:04 inian

Hi @inian. I think what @KhaledGabr means is to set a limit on the bucket itself, not on the individual file size. And I support that idea. It would be really nice if we were able to set a maximum bucket size. For example, I want to give a user a bucket which can be at most 1 GB. I don't care how many files the user uploads, nor how large each individual file is, as long as the user can't upload more than 1 GB into his bucket.

haexhub avatar Nov 15 '23 05:11 haexhub

Hi @haexhub, that is exactly what the feature does - https://supabase.com/docs/guides/storage/buckets/creating-buckets#restricting-uploads

inian avatar Nov 15 '23 06:11 inian
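For anyone landing here later: per the docs linked above, the shipped feature is a per-file limit set when creating (or updating) a bucket. A sketch with supabase-js, assuming an initialized `supabase` client as in the snippet further down this thread (`fileSizeLimit` and `allowedMimeTypes` are the documented option names; note `fileSizeLimit` caps each individual file, not the bucket total):

```javascript
// Creates a public bucket whose uploads are limited to images
// of at most 1MB each. Takes the supabase-js client as a parameter
// so the sketch stays self-contained.
async function createAvatarBucket(supabase) {
  const { data, error } = await supabase.storage.createBucket('avatars', {
    public: true,
    allowedMimeTypes: ['image/*'],
    fileSizeLimit: '1MB', // per-file cap, not a cap on the whole bucket
  });
  if (error) throw error;
  return data;
}
```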

Hi @inian, I am looking for a maxBucketSize flag as opposed to setting a limit for each file.

const { data, error } = await supabase.storage.createBucket('avatars', {
  public: true,
  allowedMimeTypes: ['image/*'],
  maxFileSize: '1MB',     <--  max size for each file in the bucket
  maxBucketSize:'1GB'   <-- max size of the entire bucket. 
})

So, in the example above, if the user uploaded 1000 images of 1MB each, it would consume the limit.

KhaledGabr avatar Nov 15 '23 06:11 KhaledGabr

Ah you want a limit on the total size of the files in the bucket. What is your use case for this feature?

inian avatar Nov 15 '23 06:11 inian

@inian I want to set an upper limit primarily to control cost. Each user in my system gets a bucket, and I need some restriction on how much they upload, plus the ability to change the bucket size based on whether they are paid users or not. Having no restrictions carries significant cost risk.

This could potentially be solved with RLS; the only issue is that I would need to tell the user in an error message that they exceeded the limit, which is not possible with RLS.

KhaledGabr avatar Nov 15 '23 07:11 KhaledGabr

Exactly. It's not just the cost itself; it's also the limit of available space. We always have a limited amount of disk space, and I want to be able to share it evenly. I want to give each user a bucket of a specific size, so I can divide the available space evenly instead of having one user consume it all.

haexhub avatar Nov 15 '23 08:11 haexhub

Since we can use SQL to operate on storage buckets, one can create insert/delete object triggers that will aggregate the total size of objects in a bucket. Then, if necessary, the maxFileSize can be set to min(maxFileSize, maxBucketSize - totalFileSize). This will prevent users from uploading files exceeding the max bucket size. I didn't implement this logic myself, but it sounds reasonable :-)

li4man0v avatar Feb 19 '24 13:02 li4man0v
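The effective-limit arithmetic in the suggestion above can be sketched as a pure function (hypothetical name; all sizes in bytes, with `totalFileSize` being the running aggregate the triggers would maintain):

```javascript
// min(maxFileSize, maxBucketSize - totalFileSize), clamped at 0:
// the largest next upload that fits both the per-file cap and the
// remaining space in the bucket.
function effectiveFileLimit(maxFileSize, maxBucketSize, totalFileSize) {
  return Math.max(0, Math.min(maxFileSize, maxBucketSize - totalFileSize));
}
```

For example, with a 1MB per-file cap and a 1GB bucket that has only 500 bytes of space left, the next upload is capped at 500 bytes; once the bucket is full, the effective limit drops to 0 and all uploads are rejected.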