Provide different read interfaces for the reader
Is your feature request related to a problem or challenge?
For now, our arrow reader accepts a FileScanTask and returns a RecordBatchStream to the user. After #630, the reader can process the delete files and merge them with the data file, which makes it ready to use out of the box. However, some compute engines want to process delete files themselves so that they can utilize their existing join executors and storage to spill data. This requires reading the delete files directly rather than processing them internally.
Based on this, I suggest providing different read interfaces to satisfy different requirements (a rough sketch follows the list):
- read: process the data and delete files of a FileScanTask internally (current behavior)
- read_data: read only the data file of a FileScanTask, without applying deletes
- read_pos_delete: read the position delete files of a FileScanTask and return the result directly
- read_eq_delete: read the equality delete files of a FileScanTask and return the result directly
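To make this concrete, here is a minimal sketch of what the split could look like. Everything below is illustrative: the stand-in types are placeholders for the crate's actual Result, stream, and task types, and the method names simply mirror the list above.

```rust
use arrow_array::RecordBatch;
use futures::stream::BoxStream;

// Self-contained stand-ins so the sketch compiles on its own; in the crate
// these would be iceberg's own Result, stream, and task types.
type Result<T> = std::result::Result<T, Box<dyn std::error::Error + Send + Sync>>;
type ArrowRecordBatchStream = BoxStream<'static, Result<RecordBatch>>;
struct FileScanTask; // stand-in: the real one carries the data file plus its delete files

#[async_trait::async_trait]
trait TaskReader {
    /// Current behavior: data file with position/equality deletes applied.
    async fn read(&self, task: FileScanTask) -> Result<ArrowRecordBatchStream>;
    /// Data file only; delete files are left to the caller.
    async fn read_data(&self, task: FileScanTask) -> Result<ArrowRecordBatchStream>;
    /// Position delete files of the task, returned as-is.
    async fn read_pos_delete(&self, task: FileScanTask) -> Result<ArrowRecordBatchStream>;
    /// Equality delete files of the task, returned as-is.
    async fn read_eq_delete(&self, task: FileScanTask) -> Result<ArrowRecordBatchStream>;
}
```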
Describe the solution you'd like
No response
Willingness to contribute
- [ ] I can contribute to this feature independently
- [ ] I would be willing to contribute to this feature with guidance from the Iceberg Rust community
- [ ] I cannot contribute to this feature at this time
What do you think? cc @liurenjie1024 @Xuanwo @Fokko @sdd
Hi, I believe that's related to https://github.com/apache/iceberg-rust/issues/1036
Seems like a reasonable idea to me. If my 5 open PRs for delete file read support get reviewed and merged, then implementing what you need would be pretty trivial on top of them :-)
Thanks @ZENOTME for raising this. I think what's missing is a FileReader which accepts the following arguments:
- File path
- File range
- Expected schema
- Arrow batch size
This reader needs to convert files (parquet, orc, avro) into arrow record batches, handling things like missing columns and type promotion, which are caused by schema evolution.
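A minimal sketch of such a FileReader, assuming an arrow SchemaRef for the expected schema (the real API would more likely take an iceberg schema) and a byte range for splits:

```rust
use arrow_array::RecordBatch;
use arrow_schema::SchemaRef;
use futures::stream::BoxStream;
use std::ops::Range;

// Stand-in Result so the sketch is self-contained.
type Result<T> = std::result::Result<T, Box<dyn std::error::Error + Send + Sync>>;

#[async_trait::async_trait]
trait FileReader {
    /// Read one physical file (parquet/orc/avro) as a stream of arrow
    /// record batches, adapting it to `expected_schema` (filling missing
    /// columns, applying type promotion, etc.).
    async fn read(
        &self,
        file_path: &str,
        // Byte range of the file to scan; None means the whole file.
        file_range: Option<Range<u64>>,
        // An arrow schema is used here for brevity; the real API would
        // likely take an iceberg schema instead.
        expected_schema: SchemaRef,
        batch_size: usize,
    ) -> Result<BoxStream<'static, Result<RecordBatch>>>;
}
```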
With this API, it would be easy to implement the read_data, read_pos_delete, read_eq_delete you mentioned. But I'm not sure if we actually need to provide these APIs. I think FileReader + FileScanTask provides enough flexibility for compute engines. For example, an engine can choose to join the data file with pos deletes and eq deletes in its logical plan, or it can implement its own file scan operator.
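For example, continuing the FileReader sketch above, read_pos_delete could be a thin wrapper like the following; the DeleteFile and FileScanTask shapes here are hypothetical stand-ins, not the crate's actual types:

```rust
// Hypothetical task/file shapes, for illustration only.
struct DeleteFile { path: String }
struct FileScanTask { pos_delete_files: Vec<DeleteFile> }

// read_pos_delete as a thin wrapper over the FileReader sketch above:
// read each position delete file of the task and hand the streams back
// to the caller untouched.
async fn read_pos_delete(
    reader: &dyn FileReader,
    task: &FileScanTask,
    pos_delete_schema: SchemaRef,
    batch_size: usize,
) -> Result<Vec<BoxStream<'static, Result<RecordBatch>>>> {
    let mut streams = Vec::with_capacity(task.pos_delete_files.len());
    for delete_file in &task.pos_delete_files {
        streams.push(
            reader
                .read(&delete_file.path, None, pos_delete_schema.clone(), batch_size)
                .await?,
        );
    }
    Ok(streams)
}
```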
In this design, does ArrowReader reuse FileReader?
- If so, I think we may need to refactor some logic of ArrowReader.
- Otherwise, FileReader is an independent component and it may be more convenient to maintain.
And for delete files (pos delete, equality delete), do we need to handle things like missing columns and type promotion? 🤔 It seems that for pos deletes, and for eq deletes that don't carry values, we can't fill in a value if it is missing. So here we may need read_data, read_pos_delete, and read_eq_delete to separate the handling.
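For what it's worth, my understanding of the spec is that position delete files always have a fixed two-column schema, which is part of why their handling differs from data files; a sketch in arrow terms (the reserved field ids in the comment are quoted from memory, so treat them as an assumption):

```rust
use arrow_schema::{DataType, Field, Schema};

// Position delete files have a fixed shape per the Iceberg spec (reserved
// field ids 2147483546 for file_path and 2147483545 for pos, if I recall
// correctly), so no missing-column or type-promotion handling applies to
// them; equality delete files instead carry a subset of the table schema
// and do need schema-evolution handling.
fn pos_delete_schema() -> Schema {
    Schema::new(vec![
        Field::new("file_path", DataType::Utf8, false),
        Field::new("pos", DataType::Int64, false),
    ])
}
```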
This issue has been automatically marked as stale because it has been open for 180 days with no activity. It will be closed in the next 14 days if no further activity occurs. To permanently prevent this issue from being considered stale, add the label 'not-stale'; commenting on the issue is preferred when possible.
This issue has been closed because it has not received any activity in the last 14 days since being marked as 'stale'.