django-filebrowser
Compatibility with S3
This is a container for all tickets related to S3 storage. Please note that we currently do not support S3 with the filebrowser, but we do appreciate hints on how to make this work.
Is this being worked on currently? I was going to take a stab at it.
If it's not being worked on, can you summarize what remains? Is it just those two issues? Is #40 complete?
IMO all tickets are pointing to the same issues, but none of them is resolved. We do not work on S3 compatibility, but we are happy to implement well-tested pull requests.
@johndevor please note that stable/3.6.x is ahead of master (until the next major release). so – if you wanna add a PR, please use that branch.
Hello :D Is this being worked on ?
@emilpriver we don't actively work on this issue. see my previous comments.
It would be appreciated if you could add a (prominent?) note somewhere in this repo's README to reflect that there's no S3 support. I just upgraded my local app to use Grappelli, then installed django-filebrowser, then saw `'S3Storage' object has no attribute 'isdir'` and ended up on this issue. I have no expectation that you make S3 integration a feature, but given how popular it is, it would save future developers time if you could link to this issue in the README. Thanks!
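For context, that `AttributeError` comes from FileBrowser calling directory-style helpers (`isdir`, `isfile`, `move`, ...) that a plain Django storage backend does not define; FileBrowser declares them on `filebrowser.storage.StorageMixin`, which the workaround below subclasses. A minimal, self-contained sketch of the failure mode (class and function names invented for illustration):

```python
# Illustration only, not FileBrowser's actual code: FileBrowser's browsing
# views call isdir()/isfile() on the configured storage, and the stock
# django-storages S3 backend has no such methods.

class PlainS3Storage:
    """Stands in for django-storages' S3Storage: no isdir() defined."""
    def exists(self, name):
        return False

def browse(storage, path):
    # FileBrowser-style call site: assumes the storage has isdir().
    return storage.isdir(path)

try:
    browse(PlainS3Storage(), "uploads/")
except AttributeError as err:
    print(err)  # 'PlainS3Storage' object has no attribute 'isdir'
```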
im waiting for this tooooo
I happened to be joining django-filebrowser with django-storages for s3, and figured I'd put in an afternoon of work getting it going. I have it working, not sure if it's good enough for a PR yet as I feel it's a bit hacky but I'll post some relevant code below.
My Versions are
Django==5.0.3
django-filebrowser-no-grappelli==4.0.2 (should work the same with the original grappelli version)
django-storages==1.14.2
I have this in a storages.py within my app, then I just import this as my storage backend.
from storages.backends.s3 import (
    S3Storage as OriginalS3Storage,
    S3File as OriginalS3File,
)
from storages.utils import clean_name

from filebrowser.storage import StorageMixin


class S3BotoStorageMixin(StorageMixin):

    def isfile(self, name):
        return self.exists(name)

    def isdir(self, name):
        # Inefficient, but S3 has no real directories: if some keys have
        # 'name' as their prefix, the name is considered to be a directory.
        if not name:  # Empty name is the bucket root, i.e. a directory
            return True
        if self.isfile(name):
            return False
        name = self._normalize_name(clean_name(name)).strip("/")
        split_path = name.split("/")
        parent = "/".join(split_path[:-1])
        tail = split_path[-1]
        directories, files = self.listdir(parent)
        return tail in directories
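The prefix-based `isdir` logic can be exercised without S3 at all; here is a small in-memory stand-in (key names and the `FakeListingStorage` class are made up for illustration) implementing the same parent/tail check against a fake `listdir`:

```python
# In-memory stand-in that mimics S3 key listing, to show how the
# prefix-based isdir() decision above behaves.

class FakeListingStorage:
    def __init__(self, keys):
        self.keys = set(keys)

    def exists(self, name):
        return name in self.keys

    def isfile(self, name):
        return self.exists(name)

    def listdir(self, parent):
        # Returns (directories, files) directly under 'parent',
        # like the django-storages S3 backend does.
        prefix = parent + "/" if parent else ""
        dirs, files = set(), []
        for key in self.keys:
            if not key.startswith(prefix):
                continue
            rest = key[len(prefix):]
            if "/" in rest:
                dirs.add(rest.split("/", 1)[0])
            else:
                files.append(rest)
        return sorted(dirs), files

    def isdir(self, name):
        # Same decision procedure as the mixin above.
        if not name:
            return True
        if self.isfile(name):
            return False
        parts = name.strip("/").split("/")
        parent, tail = "/".join(parts[:-1]), parts[-1]
        directories, _files = self.listdir(parent)
        return tail in directories

storage = FakeListingStorage(["uploads/a.txt", "uploads/img/logo.png"])
print(storage.isdir("uploads"))        # True: keys exist under the prefix
print(storage.isdir("uploads/a.txt"))  # False: it is a file
print(storage.isdir("uploads/img"))    # True
print(storage.isdir("missing"))        # False
```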
    def move(self, old_file_name, new_file_name, allow_overwrite=False):
        if self.exists(new_file_name):
            if allow_overwrite:
                self.delete(new_file_name)
            else:
                raise FileExistsError(
                    "The destination file '%s' exists and allow_overwrite is False"
                    % new_file_name
                )
        old_key_name = self._normalize_name(clean_name(old_file_name))
        new_key_name = self._normalize_name(clean_name(new_file_name))
        # boto3: server-side copy, then delete the original. A failed copy
        # raises ClientError, so there is no falsy return value to check
        # (the boto2-era bucket.copy_key no longer exists).
        self.bucket.copy(
            {"Bucket": self.bucket.name, "Key": old_key_name}, new_key_name
        )
        self.delete(old_file_name)
    def makedirs(self, name):
        # S3 has no real directories, so there is nothing to create.
        pass

    def rmtree(self, name):
        name = self._normalize_name(clean_name(name))
        directories, files = self.listdir(name)
        for item in files:
            self.delete(item)
        for item in directories:
            self.delete(item)

    def setpermission(self, name):
        # Permissions for S3 uploads with django-storages are set in
        # settings.py with AWS_DEFAULT_ACL. More info:
        # http://django-common-configs.readthedocs.org/en/latest/configs/storage.html
        pass
from datetime import datetime

from botocore.exceptions import ClientError


class S3File(OriginalS3File):

    def get_modified_time(self, name):
        modified_time = OriginalS3File.get_modified_time(self, name)
        if not modified_time:
            return datetime.now()
        return modified_time


class S3Storage(S3BotoStorageMixin, OriginalS3Storage):

    def _open(self, name, mode="rb"):
        name = self._normalize_name(clean_name(name))
        try:
            f = S3File(name, mode, self)
        except ClientError as err:
            if err.response["ResponseMetadata"]["HTTPStatusCode"] == 404:
                raise FileNotFoundError("File does not exist: %s" % name)
            raise  # Let it bubble up if it was some other error
        return f

    def path(self, name):
        return self.get_available_name(name)
In settings.py
FILEBROWSER_DEFAULT_PERMISSIONS = None
FILEBROWSER_DEFAULT_SORTING_BY = "name"
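For completeness, here is one way the wiring in settings.py might look on Django >= 4.2, which uses the `STORAGES` dict; the module path `myapp.storages` and the bucket name are placeholders for wherever you put the subclass above:

```python
# settings.py sketch ("myapp.storages" and the bucket name are placeholders).
STORAGES = {
    "default": {"BACKEND": "myapp.storages.S3Storage"},
    "staticfiles": {
        "BACKEND": "django.contrib.staticfiles.storage.StaticFilesStorage"
    },
}

AWS_STORAGE_BUCKET_NAME = "my-bucket"  # placeholder bucket name
AWS_DEFAULT_ACL = None  # object permissions handled bucket-side

FILEBROWSER_DEFAULT_PERMISSIONS = None
FILEBROWSER_DEFAULT_SORTING_BY = "name"
```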
Some Notes:
- I haven't tested `rmtree`; I may test it later, but it's not something I really need.
- `S3File.get_modified_time` is a hacky answer to any attempt at sorting by date (the default sort attribute) when sorting the folders, as S3 'folders' don't have dates. FileBrowser throws an exception with null values because '<' doesn't work with two Nones.
- I overrode `S3Storage._open` just to get the custom `S3File` object loaded instead.
- `S3Storage.path` is a hacky solution to fix thumbnail generation; I'm sure this may cause issues down the line, but for now it seems to be okay. Anything that uses `storage.path` to open a file will throw an error. FileBrowser uses it when comparing thumbnail versions and compares it with `get_available_name`; by returning that value, the comparison won't throw a NotImplementedError and won't trigger the version delete. Not sure if that will cause issues; in my use case I'm not likely to have new versions of the same file, and in the rare case I do, the thumbnail probably won't matter.
- `FILEBROWSER_DEFAULT_PERMISSIONS` must be None, otherwise an error is thrown as FileBrowser attempts to set the permissions of a non-existing object (see the note on `S3Storage.path`). Since it's an S3 bucket, it won't matter.
- `FILEBROWSER_DEFAULT_SORTING_BY` is not strictly necessary with the `S3File.get_modified_time` hack, but it's probably more useful.
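The `datetime.now()` fallback in `S3File.get_modified_time` exists because date sorting compares entries with '<', and in Python 3 comparing `None` values raises TypeError. A quick demonstration:

```python
from datetime import datetime

# Sorting by modified time with None entries fails outright:
entries = [datetime(2024, 1, 1), None]
try:
    sorted(entries)  # mimics FileBrowser sorting folders by date
except TypeError as err:
    print("sort failed:", err)

# With a fallback, every entry has a comparable datetime:
patched = [d if d is not None else datetime.now() for d in entries]
patched.sort()
print(len(patched))  # 2
```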
Hopefully this helps some people until a PR is made. I'm not likely to make one soon.
I have done some testing, though probably not all the cases; I'll update this post if I find bugs in the above code.