Zero-copy conversion of a location with many Parquet files to a fuse engine table
- Load data in the background: users query as normal while the data is copied to Databend Cloud at the same time. Once the load is ready, users can query in a more efficient way.
There is no COPY here; we can transform the Parquet files into fuse engine files directly. For example:
Users can create a table:
```sql
CREATE TABLE xx ... location='s3://<user-bucket-path>' CONNECTION=...
```
If the location contains Parquet files that were not created by the fuse engine, we can query them in the normal way:
- list all the Parquet files
- query them without any optimization (since they have no fuse indexes)
If the user runs an optimization such as:
```sql
optimize table xx; -- this statement syntax is a demo
```
We can:
- create min/max and all the other fuse indexes for the Parquet files without loading them
- convert all Parquet files into fuse engine files, and store some metadata in metasrv
I think @dantengsky has some ideas on it.
Originally posted by @BohuTANG in https://github.com/datafuselabs/databend/issues/7211#issuecomment-1229847434
Note: This task should wait until https://github.com/datafuselabs/databend/issues/7211 is finished.
It would be awesome to support generating/loading indices from fuse while the actual data stays in remote storage. This would be very powerful for most analytics solutions. Data in analytics changes rather slowly compared to an OLTP system, so even computing the indices at a periodic interval would be a tremendous improvement.
That's absolutely right, thank you!