Added support for 3rd party s3-compatible object storage
Regarding issue #7243, I was able to modify the CarrierWave settings to let my pod use MinIO as an S3 backend. For some reason the 'bucketname.example.com' form of uploading was throwing an SSLv3 error, so I had to enable :path_style => true to get around this, which uses 'example.com/bucketname' instead.
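For reference, the change looks roughly like the following in config/initializers/carrierwave.rb; the endpoint, bucket name and keys are placeholders rather than the exact values from my pod, and the option names may vary with the CarrierWave/fog versions in use:

```ruby
# Sketch of the CarrierWave/fog settings described above; placeholder values.
CarrierWave.configure do |config|
  config.fog_provider    = "fog/aws"
  config.fog_credentials = {
    provider:              "AWS",
    aws_access_key_id:     "MINIO_ACCESS_KEY",
    aws_secret_access_key: "MINIO_SECRET_KEY",
    region:                "us-east-1",
    endpoint:              "https://minio.example.com", # S3-compatible endpoint
    path_style:            true # 'example.com/bucketname' instead of 'bucketname.example.com'
  }
  config.fog_directory = "bucketname"
end
```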
Does path_style also affect/change the current S3 behavior? Should we also create an option for that in diaspora.yml?
AFAIK it should still work; it just accesses the bucket by an alternate method. Someone with an S3 account should probably verify, though.
It appears that AssetSync's defaults aren't so easy to override, so while this patch makes uploads work, assets still try to go to S3.
What I meant with my comment above was: is path_style only used for uploads, or does it also change the URLs through which users access the files?
Just uploads, I think. Doesn't D* get the URL path from configuration > assets > host?
I think environment.assets.host is only for assets. What I meant was: does path_style affect the URL generated for uploaded photos when using S3?
Also, if this PR doesn't work with AssetSync, we should add a comment somewhere saying that you can't enable asset upload if you change the S3 host. We should probably also add a check so that asset upload can only be enabled while the host is still the default.
Yes, I believe it does change the URLs from 'bucket.example.com' to 'example.com/bucket', but AFAIK Amazon supports both styles. Would you like me to add comments and resubmit, or do you want that done on your end?
> Yes, I believe it does change the URLs from 'bucket.example.com' to 'example.com/bucket', but AFAIK Amazon supports both styles.
I think we should expose this option in the config then and use the previous behavior as default.
> Would you like me to add comments and resubmit, or do you want that done on your end?
Please add a comment and also add a check that blocks if someone tries to enable asset upload with a changed S3 host. You can also print a warning then.
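For the config side, something along these lines would keep the previous behavior as the default, mirroring the existing environment.s3 block in diaspora.yml; the path_style key is only a suggested name:

```ruby
# Sketch only: a hypothetical environment.s3.path_style setting, defaulting
# to false so the current 'bucket.example.com' style stays the default.
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider:              "AWS",
    aws_access_key_id:     AppConfig.environment.s3.key,
    aws_secret_access_key: AppConfig.environment.s3.secret,
    region:                AppConfig.environment.s3.region,
    path_style:            AppConfig.environment.s3.path_style? # new, hypothetical option
  }
  config.fog_directory = AppConfig.environment.s3.bucket
end
```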
Ok. What's the best way to deal with that? Throw an exception?
I think so, because it wouldn't work anyway.
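Something like this, for illustration; the setting names (environment.assets.upload and a configurable environment.s3.host) are assumptions used for the sketch:

```ruby
# Hypothetical guard, e.g. in an initializer: refuse to start with asset
# upload enabled while the S3 host has been changed from the AWS default,
# since AssetSync would not follow the override.
if AppConfig.environment.assets.upload? &&
   AppConfig.environment.s3.host.present? &&
   AppConfig.environment.s3.host != "s3.amazonaws.com"
  raise "Asset upload (environment.assets.upload) cannot be combined with a " \
        "non-default S3 host; disable one of the two settings."
end
```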
I'm researching this topic to potentially pick it up. Looking at the documentation for Fog, it seems that the credentials are highly provider-dependent. I'm surprised the MinIO thing worked, but perhaps since it is S3-compatible it just takes the same access key/secret/region pairs and "just works". In the Fog library the logins for DigitalOcean and GCS are different, and the CarrierWave source code actually shows different credentials for GCS and Rackspace. We'll have to look at this more carefully.
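For example, the credential hashes CarrierWave/fog expect look quite different per backend (these follow the fog/CarrierWave documentation rather than anything in this PR):

```ruby
# AWS and S3-compatible backends
aws_credentials = {
  provider:              "AWS",
  aws_access_key_id:     "KEY",
  aws_secret_access_key: "SECRET",
  region:                "eu-west-1"
}

# Google Cloud Storage (interoperability mode)
google_credentials = {
  provider:                         "Google",
  google_storage_access_key_id:     "KEY",
  google_storage_secret_access_key: "SECRET"
}

# Rackspace Cloud Files
rackspace_credentials = {
  provider:           "Rackspace",
  rackspace_username: "USER",
  rackspace_api_key:  "KEY"
}
```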
I added these comments to the Discourse forum as well but want to capture them here:
- I have this working with DigitalOcean for both upload and download.
- The path_style setting as used in the file either isn't set correctly or doesn't change anything; I need to investigate more.
- Big one: the full URL is being stored in the table, so requests don't go through an image redirect to the image host but use the original upload URL. This causes a problem, at least in the DigitalOcean case, because the direct link can only take 200 requests per second before throttling starts; GETs should be using the CDN endpoint in situations like that. The redirect isn't triggering, however, because the remote location contains the full URL rather than /uploads/images. Should we be storing the full URL if we know we have an image redirect? I'm thinking we should add logic to Photo.update_remote_path (see the sketch right after this list). That has the added advantage that a podmin could migrate their image hosting platform without major database manipulation. I don't want to make this change without discussion, however.
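To make that idea concrete, something like the following in Photo#update_remote_path would persist only the pod-relative path so the /uploads/images redirect (and any CDN host) is applied at request time. This is speculative and only for discussion; the attribute names mirror the existing remote_photo_path/remote_photo_name columns, and it glosses over the bucket prefix that path-style URLs would add:

```ruby
require "uri"

# Speculative sketch: store only the path portion of the processed image URL
# so lookups go through the image redirect instead of the raw storage URL.
def update_remote_path
  remote_path = URI.parse(processed_image.url).path # e.g. "/uploads/images/abc123.jpg"

  name_start = remote_path.rindex("/")
  self.remote_photo_path = "#{remote_path.slice(0, name_start)}/"
  self.remote_photo_name = remote_path.slice(name_start + 1, remote_path.length)
end
```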
@HankG How's the progress on this? Are you still planning on getting this work in? :)
Bump on conflicting files