[Feature Request] Handle S3-compatible storage
One thing that's frustrating about all existing open source media management systems is the lack of S3-compatible storage options. I think it's especially ironic given that they all go out of their way to provide a nice Docker container, but then ignore the biggest (or maybe 2nd biggest) problem of the container approach, which is provisioning and accessing persistent, file-based storage.
Dim could really differentiate itself by providing first-class support for S3 interfaces:
- Scan, tag, & organize media on S3-compatible object storage systems.
- Provide the usual HTTP-based streaming APIs to clients for media located on S3-compatible object storage systems.
Wouldn't it make more sense to have some sort of script that can mount an S3 bucket with FUSE?
This is usually handled by something like rclone, which FUSE-mounts an S3 bucket, then you point the media server at that FUSE mount.
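For reference, the rclone workaround described above looks roughly like this (the remote name, bucket, and mount point are illustrative, and the flags are just one reasonable choice for read-mostly media libraries):

```shell
# Mount an S3-compatible remote (previously defined via `rclone config`)
# at /mnt/media, then point the media server's library path at /mnt/media.
rclone mount s3remote:media-bucket /mnt/media \
    --read-only \
    --vfs-cache-mode writes \
    --daemon
```

This works, but as discussed further down in the thread, it is a workaround layered on top of the filesystem abstraction rather than native S3 support.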
Agreed, this is one of the very basic things missing at the moment. I set everything up only to realise at the end that my rclone-mounted Google Drive wasn't being detected at the library creation stage. More than half of the people I know use rclone-mounted cloud storage for their media servers. I hope this gets prioritised. Looking forward to trying it out once FUSE mounts are supported.
@xd003 When you mount google drive with rclone, does the path simply not show up in the library creation modal? I'd be happy to prioritize this issue if I can get more details on your setup and environment.
@vgarleanu
Since I had tried it some months ago, I thought it would be best to try again now and see how it goes. I'm not sure whether I made a mistake back then or something changed in the code since, but now I can see my mounted Google Drive folder in the library creation modal. It doesn't really seem to work, though: after I select my mounted directory in the modal and click "Add library", the frontend simply stops working and doesn't load at all, as if something has crashed it.
Docker logs have the following at the bottom:
Jan 11 13:55:20.291 INFO GET, mod: warp, route: /api/v1/filebrowser//GD, status: 200 OK, ip: MY_IP:34715, duration: 1, duration_tag: ms
Jan 11 13:55:21.591 INFO GET, mod: warp, route: /api/v1/filebrowser//GD/Movies, status: 200 OK, ip: MY_IP:34715, duration: 1, duration_tag: ms
Jan 11 13:55:30.813 INFO POST, mod: warp, route: /api/v1/library, status: 201 Created, ip: MY_IP:34926, duration: 9, duration_tag: ms
Jan 11 13:55:30.902 INFO Scanning library, mod: scanner, library_id: 1
Environment: Ubuntu 20.04 on a VPS, rclone-mounted Google Drive.
Seems like the scanner is stalling on something. I'll look into it, thanks for reporting.
Just an update: after some time, the frontend finally loaded. It says no media has been found.
If you need anything else, I am happy to share. Thanks.
The same thing happens to me. The library stays empty.
start_custom{library_id=3 media_type=Movie provider=TMDBMetadataProviderOf<K> { key: Movie }}: dim::scanner: Walked all target directories. elapsed_ms=89 files=287
We don't have to use the object storage layer from a cloud provider; we can simply host it ourselves. An object layer, i.e. S3-compatible storage, should be the next-generation storage for homelab or production environments because of:
- Built-in object tagging
- Built-in versioning at the object level
- Third-party integrated backup solutions
- Built-in ACLs
- Redundancy (not just at the disk level, but also the instance level)
- Easy horizontal scalability (not just at the disk level, but also the instance level)
- Easy size expansion (we don't have to copy files if we want to expand the storage)
- Ability to share a file for a limited time (e.g. via a pre-signed URL)
- Easy direct uploads from the client side (e.g. via a pre-signed URL)
If we have a true object storage layer, the Docker container can become truly stateless, which means we can run multiple k8s pods and spin them up and down as needed across different instances. It is painful to share a persistent volume between different pods/projects... Yes, we can hack it, so I mean painful to watch ;)
Is it overkill? Probably, but we could also simply use Midnight Commander to watch videos if we are minimalists, right? ;)
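To make the pre-signed URL point concrete: a time-limited share link is just an ordinary GET URL with an AWS Signature V4 query string attached, so anyone (a browser, a video player) can fetch the object without credentials until the link expires. Below is a minimal, self-contained sketch of the SigV4 query presigning algorithm; the endpoint, bucket, and keys in the usage example are made up, and a real deployment would normally use an SDK (boto3, aws-sdk-rust, etc.) rather than hand-rolling this. It also assumes a simple object key without characters needing extra path encoding.

```python
import datetime
import hashlib
import hmac
from urllib.parse import quote

def presign_get(endpoint, bucket, key, access_key, secret_key,
                region="us-east-1", expires=3600, now=None):
    """Build an AWS Signature V4 pre-signed GET URL (query-string auth)."""
    now = now or datetime.datetime.utcnow()
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = endpoint.split("://", 1)[1]
    scope = f"{datestamp}/{region}/s3/aws4_request"

    # Canonical query string: keys sorted, values URL-encoded.
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(f"{k}={quote(v, safe='')}"
                     for k, v in sorted(params.items()))

    # Canonical request: method, path, query, headers, signed headers, payload.
    canonical_request = "\n".join([
        "GET",
        f"/{bucket}/{key}",
        query,
        f"host:{host}\n",   # canonical headers block ends with a blank line
        "host",
        "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    # Derive the signing key via the HMAC chain over date, region, service.
    def _hmac(key_bytes, msg):
        return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()

    signing_key = _hmac(_hmac(_hmac(_hmac(
        b"AWS4" + secret_key.encode(), datestamp), region), "s3"),
        "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()

    return f"{endpoint}/{bucket}/{key}?{query}&X-Amz-Signature={signature}"
```

Because the signature covers the expiry and the object path, the server can hand these URLs to clients and let the object store enforce both access and the time limit, with no proxying required.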
Wouldn't it make more sense to have some sort of script that can mount a S3 bucket with fuse?
No.
FUSE is not a replacement for native S3 support. It's a hack.
RIP
Alright, so GitHub's sorting by least recently active isn't exactly accurate.
As @dhess points out, one of the main benefits of network-based storage is:
Provide the usual HTTP-based streaming APIs to clients for media located on S3-compatible object storage systems.
That is, a client can pull blocks directly from the upstream source, at least when transcoding is not required. And if transcoding is required, there are services such as AWS Elastic Transcoder that do just this.
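"Pulling blocks directly" in practice means HTTP Range requests: the player asks for a byte window (seeking, progressive download) and the object store serves just that slice with a 206 response. The sketch below shows the server-side half of that contract, i.e. turning a `Range` header into the `(start, end)` slice and `Content-Range` value a streaming endpoint would return; it handles only single ranges, and the function name is illustrative rather than anything from Dim's codebase.

```python
def parse_range(header, size):
    """Resolve a single-range 'bytes=start-end' header against an object
    of `size` bytes. Returns an inclusive (start, end) pair, or None when
    the range is unsupported or unsatisfiable (HTTP 416)."""
    unit, _, spec = header.partition("=")
    if unit != "bytes" or "," in spec:
        return None                      # unknown unit or multi-range
    start_s, _, end_s = spec.partition("-")
    if start_s == "":                    # suffix form: bytes=-500 -> last 500 bytes
        length = int(end_s)
        start, end = max(size - length, 0), size - 1
    else:
        start = int(start_s)
        end = int(end_s) if end_s else size - 1  # open form: bytes=4000-
        end = min(end, size - 1)
    if start > end or start >= size:
        return None                      # unsatisfiable
    return start, end

def content_range(start, end, size):
    """Header value accompanying a 206 Partial Content response."""
    return f"bytes {start}-{end}/{size}"
```

With S3-compatible storage the server can skip even this and simply redirect the client to a pre-signed object URL, since S3 endpoints honour Range requests natively.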