Offload file listing and stats inference workload to Executors for large tables (>10K files)
Is your feature request related to a problem or challenge? Please describe what you are trying to do.
Listing files and inferring statistics becomes a bottleneck for large tables, so this work needs to be offloaded to Executors. Ballista needs to extend the current ListingTable implementations to support this.
@andygrove @yahoNanJing @Dandandan Please share your thoughts on this.
Presto solves this issue by decoupling listing from task scheduling: it lists partitions asynchronously and assigns them to source tasks as they arrive.
I think async listing/planning tasks feel like a good solution. Listing implementations already support returning 1000 files per call, which should be enough to keep a cluster busy (with 1000 tasks) before the next page arrives.
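For illustration, here is a minimal sketch of that pipelining in Rust. It is not Ballista's actual API: `list_page`, `PageToken`, and `schedule_scan_task` are hypothetical stand-ins, and the page size matches the 1000-files-per-call figure above.

```rust
// Hypothetical sketch: page-at-a-time listing that schedules scan tasks
// as soon as each page arrives, instead of waiting for the full listing.
use std::time::Duration;

const PAGE_SIZE: usize = 1000; // matches the 1000-files-per-call page size

struct PageToken(usize);

struct Page {
    files: Vec<String>,
    next: Option<PageToken>,
}

// Simulates one paged listing call against an object store.
async fn list_page(token: Option<PageToken>) -> Page {
    let start = token.map(|t| t.0).unwrap_or(0);
    tokio::time::sleep(Duration::from_millis(50)).await; // simulated listing latency
    let total = 3_500; // pretend the table has 3,500 files
    let end = (start + PAGE_SIZE).min(total);
    Page {
        files: (start..end).map(|i| format!("part-{i:05}.parquet")).collect(),
        next: (end < total).then(|| PageToken(end)),
    }
}

// Hypothetical scheduler hook: hand one file to a source task on an executor.
async fn schedule_scan_task(file: String) {
    // In Ballista this would enqueue a task for an executor; here it is a no-op.
    let _ = file;
}

#[tokio::main]
async fn main() {
    let mut token = None;
    let mut handles = Vec::new();
    loop {
        let page = list_page(token).await;
        // Schedule the whole page before fetching the next one, so up to
        // PAGE_SIZE tasks keep the cluster busy while listing continues.
        for file in page.files {
            handles.push(tokio::spawn(schedule_scan_task(file)));
        }
        match page.next {
            Some(t) => token = Some(t),
            None => break,
        }
    }
    for h in handles {
        let _ = h.await;
    }
}
```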
Any real inspection of data (gathering parquet metadata / stats) should preferably move to executors for larger tables.
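A rough sketch of the per-file work that would move to an executor, using the `parquet` crate's footer-only metadata read. `FileStats` and `gather_stats` are illustrative names, not existing DataFusion/Ballista types, and the scheduler would fan these calls out across executors and merge the results.

```rust
use std::fs::File;

use parquet::errors::{ParquetError, Result};
use parquet::file::reader::{FileReader, SerializedFileReader};

#[derive(Debug)]
struct FileStats {
    path: String,
    num_rows: i64,
    num_row_groups: usize,
}

// Runs on an executor: reads only the footer (not the data pages) of one file.
fn gather_stats(path: &str) -> Result<FileStats> {
    let file = File::open(path).map_err(|e| ParquetError::General(e.to_string()))?;
    let reader = SerializedFileReader::new(file)?;
    let meta = reader.metadata();
    Ok(FileStats {
        path: path.to_string(),
        num_rows: meta.file_metadata().num_rows(),
        num_row_groups: meta.num_row_groups(),
    })
}

fn main() -> Result<()> {
    // Serially here for illustration; in Ballista this would be one task per file.
    for path in ["data/part-00000.parquet", "data/part-00001.parquet"] {
        println!("{:?}", gather_stats(path)?);
    }
    Ok(())
}
```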
For partitioned data, it is also possible to parallelize the listing (one call per partition) if all the partition values are known.
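A minimal sketch of that per-partition fan-out, assuming the partition values come from a catalog; `list_prefix` and the bucket path are hypothetical placeholders for a real object-store listing call.

```rust
use futures::future::join_all;

// Placeholder: a real implementation would page through the object store.
async fn list_prefix(prefix: String) -> Vec<String> {
    vec![format!("{prefix}/part-00000.parquet")]
}

#[tokio::main]
async fn main() {
    // Known partition values, e.g. date=... directories under the table root.
    let partitions = ["date=2022-01-01", "date=2022-01-02", "date=2022-01-03"];

    // One concurrent listing call per partition prefix.
    let futures = partitions
        .iter()
        .map(|p| list_prefix(format!("s3://bucket/table/{p}")));
    let files: Vec<String> = join_all(futures).await.into_iter().flatten().collect();

    println!("listed {} files", files.len());
}
```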
Additionally, table formats like Delta Lake / Iceberg avoid this problem by already having the file list, metadata, and stats available in the format itself.