Andrew Gaul
Lack of server-side encryption (SSE) support prevents some kinds of testing, e.g., s3fs integration tests with `-o use_sse`. Full support requires jclouds work like apache/jclouds#130. Is there some shortcut here instead...
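For context, this is the kind of s3fs mount that cannot currently be tested against S3Proxy (the bucket name, mountpoint, and endpoint below are illustrative):

```
s3fs mybucket /mnt/mybucket -o url=http://127.0.0.1:8080 -o use_sse
```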
Using [jlink](https://docs.oracle.com/javase/9/tools/jlink.htm) may allow shrinking S3Proxy's JDK dependency to a minimal runtime image, making it easier to deploy, e.g., in Docker (#397).
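A rough sketch of how that could work, assuming S3Proxy ships as a single jar; the module list below is illustrative and would need to be derived from the real dependency graph:

```
# Determine which JDK modules S3Proxy actually uses.
jdeps --print-module-deps --ignore-missing-deps s3proxy.jar

# Build a trimmed runtime image containing only those modules.
jlink --add-modules java.base,java.logging,java.naming,java.xml \
    --strip-debug --no-header-files --no-man-pages --compress=2 \
    --output /opt/s3proxy-runtime
```

The resulting image could then serve as a small Docker base layer instead of a full JDK.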
AWS requires support for multi-object delete (DeleteObjects): https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html. Does s3-tests have coverage?
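For reference, the request is a POST to `/?delete` whose XML body lists the keys to remove; a minimal example (the keys are illustrative):

```
<Delete>
    <Object><Key>photos/a.jpg</Key></Object>
    <Object><Key>photos/b.jpg</Key></Object>
    <Quiet>true</Quiet>
</Delete>
```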
Minio will implement an unpack RPC that S3Proxy could also benefit from: https://twitter.com/abperiasamy/status/1376998079249833986. This is likely inspired by a similar Swift feature (bulk upload with archive auto-extraction). One of the potential use cases is uploading...
This would allow traditional web clients to talk to S3Proxy.
In #352 a user reported this HTTP error:

```
HTTP ERROR: 400
Problem accessing /. Reason:
    com.google.common.io.BaseEncoding$DecodingException: Unrecognized character: .
```

S3Proxy should return the full stack...
One of the slowest s3fs operations is `readdir`, which requires a ListObjects call followed by a HeadObject for each object. S3Proxy could improve performance over WANs by locating an S3Proxy instance closer...
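A sketch of the pattern in question, expressed with the jclouds BlobStore API that S3Proxy builds on (the method is illustrative, not s3fs's actual code):

```
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.BlobMetadata;
import org.jclouds.blobstore.domain.PageSet;
import org.jclouds.blobstore.domain.StorageMetadata;

// readdir as s3fs performs it: one ListObjects call, then one
// HeadObject (blobMetadata) round trip per entry -- N+1 requests.
static void readdir(BlobStore blobStore, String container) {
    PageSet<? extends StorageMetadata> page = blobStore.list(container);
    for (StorageMetadata entry : page) {
        // Each call is a separate round trip; an S3Proxy instance near
        // the backend would pay only LAN latency for these.
        BlobMetadata metadata =
                blobStore.blobMetadata(container, entry.getName());
        // ... populate the directory entry from metadata ...
    }
}
```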
[JCLOUDS-1554](https://issues.apache.org/jira/browse/JCLOUDS-1554) and Azure API version 2019-12-12 will allow up to 5 GB blobs. This allows removing the > 256 MB workaround, which should give better support for listing a multipart upload.
S3Proxy could provide a read cache by saving every object during GET and issuing conditional GETs for revalidation, which most backends support. We would need to provide knobs for cache...
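A minimal sketch of the revalidation step in plain HTTP terms (java.net.http, Java 11+); in S3Proxy this would actually go through the jclouds BlobStore layer, so treat the helper below as illustrative:

```
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Revalidate a cached object against the backend: send the cached ETag
// and let the backend answer 304 Not Modified if the object is unchanged.
static boolean isCacheStillValid(HttpClient client, URI objectUri,
        String cachedETag) throws Exception {
    HttpRequest request = HttpRequest.newBuilder(objectUri)
            .header("If-None-Match", cachedETag)
            .GET()
            .build();
    HttpResponse<Void> response =
            client.send(request, HttpResponse.BodyHandlers.discarding());
    return response.statusCode() == 304;  // 304 => serve from cache
}
```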
S3 implementations do not implement all of the AWS functionality. For testing it might be useful to disable the following (a possible toggle sketch follows below):

* Copy Objects
* List Objects V2
* Multipart Uploads

I suspect...
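A minimal sketch of what such toggles could look like; the enum, the class, and the idea of wiring them to configuration are all assumptions, not existing S3Proxy knobs:

```
import java.util.EnumSet;

// Hypothetical feature toggles; S3Proxy does not currently expose these.
enum S3Feature { COPY_OBJECT, LIST_OBJECTS_V2, MULTIPART_UPLOAD }

final class FeatureGate {
    private final EnumSet<S3Feature> disabled;

    FeatureGate(EnumSet<S3Feature> disabled) {
        this.disabled = disabled;
    }

    // Called before dispatching a request. Real S3 reports unsupported
    // operations with HTTP 501 and the NotImplemented error code, so a
    // handler catching this would translate it accordingly.
    void checkEnabled(S3Feature feature) {
        if (disabled.contains(feature)) {
            throw new UnsupportedOperationException(
                    feature + " is disabled for testing");
        }
    }
}
```

Requests hitting a disabled feature would then be answered the way AWS answers genuinely unsupported operations, letting clients exercise their fallback paths.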