
Investigate using S3 Select

Open matthewmturner opened this issue 3 years ago • 6 comments

It seems support was added for this based on https://github.com/awslabs/aws-sdk-rust/releases/tag/v0.0.17-alpha

Look into integrating this into S3FileSystem or using it to create a TableProvider.

matthewmturner avatar Feb 28 '22 00:02 matthewmturner

What are you trying to achieve? It looks like SELECT only queries JSON structures?

seddonm1 avatar Feb 28 '22 01:02 seddonm1

It was raised on slack (https://the-asf.slack.com/archives/C01QUFS30TD/p1645989728729579?thread_ts=1645245240.528129&cid=C01QUFS30TD). I don't have any particular insight at this stage. I just created this to log the request and will look into it later when I have some more time.

matthewmturner avatar Feb 28 '22 01:02 matthewmturner

If I recall correctly, S3 Select works on CSV, JSON, and Parquet. But I read about it a while ago, so don't hold me to that. Without doing any research, I thought maybe we could add something like a select method to S3FileSystem.

Honestly, though, I haven't used it before or had time to look into this, so I'll come back to it or see if someone else (maybe the person who raised it) looks into it.

matthewmturner avatar Feb 28 '22 02:02 matthewmturner

It was raised on slack

Hi, I raised this as an idea only.

it looks like SELECT only queries JSON structures?

As of 2022-02, from the source:

  1. For input, Amazon S3 Select works on objects stored in CSV, JSON, or Apache Parquet format.
    • There are also other limitations on Parquet, e.g. only columnar compression using GZIP or Snappy is supported; Amazon S3 Select doesn't support whole-object compression.
  2. For output, Amazon S3 Select only supports CSV or JSON.

What are you trying to achieve?

S3 Select supports aggregation pushdown and predicate pushdown, which could improve performance depending on the use case, e.g. Using S3 Select Pushdown with Presto to improve performance.
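To make the pushdown idea concrete, here is a minimal sketch of building an S3 Select SQL expression that pushes both an aggregate and a predicate down to S3, so only the aggregated result crosses the network instead of the whole object. The helper function is hypothetical, not an API of this crate or the AWS SDK:

```rust
/// Build an S3 Select expression that pushes a COUNT aggregate plus a
/// simple equality predicate down to S3. S3 Select queries always read
/// from the implicit `S3Object` table. (Hypothetical helper for
/// illustration only.)
fn count_where(column: &str, value: &str) -> String {
    format!(
        "SELECT COUNT(*) FROM S3Object s WHERE s.{} = '{}'",
        column,
        // Escape embedded single quotes per SQL literal rules.
        value.replace('\'', "''")
    )
}
```

Calling `count_where("c", "aaa")` yields `SELECT COUNT(*) FROM S3Object s WHERE s.c = 'aaa'`, which S3 evaluates server-side.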

jychen7 avatar Mar 01 '22 02:03 jychen7

I am now looking into this. Let me share my investigation and opinion.

S3 Select itself

  • Presto and Ceph only support CSV S3 Select. There are several reasons:

    • Parquet has column metadata, and we already do predicate pushdown with it.
    • As for JSON, the odd MISSING type exists and breaks predicate-pushdown consistency. Assume the following data. In S3 Select, absent fields are treated as MISSING; here, the second row's c is MISSING. The result set of SELECT * FROM s WHERE c IS NULL is empty because, unlike UNKNOWN, MISSING is not the same as NULL.
      {"a": "foo", "b": 1, "c": "aaa"}
      {"a": "bar", "b": 3}
      
  • We can do a parallel scan of a single text file by using ScanRange.

    This lets us accelerate reading large files. Note that ScanRange does not support compressed text data.
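A parallel scan with ScanRange boils down to tiling the object's byte length into non-overlapping (start, end) ranges and issuing one SelectObjectContent request per range; S3 Select processes every record that starts inside a given range, so the tiles do not need to align with record boundaries. A sketch of the range computation (the function is hypothetical, not part of this repo):

```rust
/// Split an uncompressed text object of `len` bytes into at most `parts`
/// contiguous, non-overlapping (start, end) byte ranges for parallel
/// S3 Select ScanRange requests. Ranges only need to tile the object,
/// since S3 Select handles records that straddle a boundary by assigning
/// each record to the range in which it starts. (Illustrative sketch.)
fn scan_ranges(len: u64, parts: u64) -> Vec<(u64, u64)> {
    // Ceiling division so the last chunk absorbs the remainder.
    let chunk = (len + parts - 1) / parts;
    (0..parts)
        .map(|i| (i * chunk, ((i + 1) * chunk).min(len)))
        .filter(|(start, end)| start < end) // drop empty trailing ranges
        .collect()
}
```

For a 100-byte object split four ways, this yields (0, 25), (25, 50), (50, 75), (75, 100), each of which could be scanned by a separate request.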

How to achieve the S3 Select acceleration

Given the two points above, we should integrate S3 Select into the CSV scan. Since we need to pass predicates down and build the SQL query from them, I believe this is not an ObjectStore matter.
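The "build the SQL query from predicates" step could look roughly like the following sketch, which translates a list of simplified column filters into an S3 Select query over a CSV object whose header row supplies column names (i.e. FileHeaderInfo set to USE in the input serialization). The `Filter` type and function are hypothetical stand-ins; a real integration would translate DataFusion `Expr`s instead:

```rust
/// A simplified pushed-down filter: column, comparison operator, and a
/// literal rendered as a SQL string. (Hypothetical type for illustration.)
struct Filter<'a> {
    column: &'a str,
    op: &'a str,      // e.g. "=", ">", "<="
    literal: &'a str,
}

/// Build an S3 Select query for a CSV object with a header row, ANDing
/// all pushed-down filters into the WHERE clause.
fn csv_select(filters: &[Filter]) -> String {
    let mut sql = String::from("SELECT * FROM S3Object s");
    let preds: Vec<String> = filters
        .iter()
        .map(|f| {
            format!(
                "s.\"{}\" {} '{}'",
                f.column,
                f.op,
                f.literal.replace('\'', "''") // escape single quotes
            )
        })
        .collect();
    if !preds.is_empty() {
        sql.push_str(" WHERE ");
        sql.push_str(&preds.join(" AND "));
    }
    sql
}
```

For example, filters a = 'foo' and b > '1' would produce `SELECT * FROM S3Object s WHERE s."a" = 'foo' AND s."b" > '1'`, letting S3 drop non-matching rows before they leave the bucket.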

Actually, I implemented this as a physical_plan, switched on by the URL scheme. While I have already written integration tests, I am not fully sure this is the best approach.

Licht-T avatar Oct 16 '22 09:10 Licht-T

@Licht-T Hi, thanks for raising this. This repo will be archived soon; object_store is now the preferred implementation. I recommend raising this request there.

matthewmturner avatar Oct 17 '22 13:10 matthewmturner