datafusion-objectstore-s3
Investigate using S3 Select
It seems support was added for this based on https://github.com/awslabs/aws-sdk-rust/releases/tag/v0.0.17-alpha
Look into integrating this into S3FileSystem or using it to create a TableProvider.
What are you trying to achieve? It looks like SELECT only queries JSON structures?
It was raised on Slack (https://the-asf.slack.com/archives/C01QUFS30TD/p1645989728729579?thread_ts=1645245240.528129&cid=C01QUFS30TD); I don't have any particular insight at this stage. I just created this issue to log the request and will look into it later when I have some more time.
If I recall correctly, S3 Select works on CSV, JSON, and Parquet, but I read about it a while ago, so don't hold me to that. Doing zero research, I thought maybe we could add something like a select method to S3FileSystem.
Honestly, though, I haven't used it before or had time to look into this, so I'll come back to it or see if someone else (maybe the person who raised it) looks into it.
It was raised on slack
Hi, I raised this as an idea only.
it looks like SELECT only queries JSON structures?
As of 2022-02, from the source:
- For input, Amazon S3 Select works on objects stored in CSV, JSON, or Apache Parquet format.
- There are also other limitations on Parquet, e.g. only columnar compression using GZIP or Snappy is supported; Amazon S3 Select doesn't support whole-object compression.
- For output, Amazon S3 Select only supports CSV or JSON.
What are you trying to achieve?
S3 Select supports aggregation pushdown and predicate pushdown, so it could improve performance depending on the use case, e.g. Using S3 Select Pushdown with Presto to improve performance.
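To make the pushdown idea concrete, here is a minimal Rust sketch of how pushed-down filters could be rendered into the SQL expression that an S3 Select request accepts. The `ColumnPredicate` type and `build_select_expression` helper are hypothetical illustrations, not part of DataFusion or the AWS SDK; the one real convention used is that S3 Select addresses the object as `s3object`.

```rust
/// A minimal, hypothetical predicate representation; DataFusion's real
/// `Expr` type is much richer. This only sketches turning pushed-down
/// filters into an S3 Select SQL expression string.
pub struct ColumnPredicate {
    pub column: String,
    pub op: String,    // e.g. "=", ">", "<="
    pub value: String, // already rendered as an SQL literal
}

/// Build the SQL string that would be sent with the S3 Select request.
/// S3 Select queries address the object as `s3object` (aliased as `s`).
pub fn build_select_expression(columns: &[&str], preds: &[ColumnPredicate]) -> String {
    let projection = if columns.is_empty() {
        "*".to_string()
    } else {
        columns
            .iter()
            .map(|c| format!("s.{}", c))
            .collect::<Vec<_>>()
            .join(", ")
    };
    let mut sql = format!("SELECT {} FROM s3object s", projection);
    if !preds.is_empty() {
        let clauses = preds
            .iter()
            .map(|p| format!("s.{} {} {}", p.column, p.op, p.value))
            .collect::<Vec<_>>()
            .join(" AND ");
        sql.push_str(" WHERE ");
        sql.push_str(&clauses);
    }
    sql
}
```

Aggregates could be pushed the same way by widening the projection builder, but that also requires the engine to stop re-applying the filter locally, which is where the integration work really lives.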
I am now looking into this. Let me share my investigation and opinion.
S3 Select itself
- Presto and Ceph only support S3 Select for CSV. There are several reasons:
  - Parquet has column metadata, and we are already doing predicate pushdown with it.
  - As for JSON, the odd type `MISSING` exists and breaks predicate pushdown consistency. Let's assume the following data. On S3 Select, missing fields are treated as `MISSING`; in this case, the second row's `c` is `MISSING`. The result set of `SELECT * FROM s WHERE c IS NULL` is empty because, unlike `UNKNOWN`, `MISSING` is not the same as `NULL`.

        {"a": "foo", "b": 1, "c": "aaa"}
        {"a": "bar", "b": 3}
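To illustrate why `MISSING` breaks pushdown consistency, here is a small Rust simulation of how `c IS NULL` evaluates over rows like the ones above. The types and helpers (`SelectValue`, `count_is_null`) are invented for illustration, not SDK code; the point is that only an explicit null matches, so a row where the field is absent is never returned.

```rust
/// A toy value model mirroring S3 Select's JSON semantics: a field that
/// is absent from a record is MISSING, which we model as the lookup
/// returning `None`, distinct from an explicit `SelectValue::Null`.
#[derive(Debug, Clone, PartialEq)]
pub enum SelectValue {
    Null,
    Str(&'static str),
    Int(i64),
}

/// Look a field up in a record; an absent key is S3 Select's MISSING.
pub fn get<'a>(record: &'a [(&'a str, SelectValue)], field: &str) -> Option<&'a SelectValue> {
    record.iter().find(|(k, _)| *k == field).map(|(_, v)| v)
}

/// Count rows matched by `WHERE field IS NULL` under S3 Select
/// semantics: only an explicit null matches; MISSING does not.
pub fn count_is_null(rows: &[Vec<(&'static str, SelectValue)>], field: &str) -> usize {
    rows.iter()
        .filter(|row| matches!(get(row, field), Some(SelectValue::Null)))
        .count()
}
```

A local engine that treats an absent field as NULL would return the second row for `c IS NULL`, while S3 Select returns nothing, so blindly pushing the predicate down silently changes results.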
- We can do a parallel scan of a single text file by using `ScanRange`. It enables us to accelerate reads of large files. Please note that `ScanRange` does not support compressed text data.
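As a rough sketch of how such a parallel scan could be planned, the hypothetical helper below splits an object into contiguous, non-overlapping byte windows to use as `ScanRange` values. S3 Select processes every record that starts inside a range (reading past its end if needed), so contiguous ranges cover each record exactly once; this assumes uncompressed text data, per the limitation above.

```rust
/// Split an object of `len` bytes into up to `n` contiguous,
/// non-overlapping (start, end) scan ranges. Hypothetical planning
/// helper for illustration; it is not part of the AWS SDK.
pub fn split_scan_ranges(len: u64, n: u64) -> Vec<(u64, u64)> {
    assert!(n > 0, "need at least one range");
    let chunk = (len + n - 1) / n; // ceiling division
    (0..n)
        .map(|i| (i * chunk, ((i + 1) * chunk).min(len)))
        .filter(|(start, end)| start < end) // drop empty trailing ranges
        .collect()
}
```

Each range would then be issued as its own S3 Select request and the result streams merged, which is where the speedup on large files comes from.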
How to achieve the S3 Select acceleration
As per the previous two points, we should integrate S3 Select into the CSV scan. Since we need to pass predicates down and build the SQL query from them, I believe this is not an ObjectStore concern.
Actually, I did an implementation as a physical_plan; it can be switched on via the URL scheme. While I have already written the integration tests, I am not fully sure this is the best approach available.
@Licht-T Hi, thanks for raising this. This repo will be archived soon; object_store is now the preferred crate. I recommend raising this request there.