
Tracking issues of iceberg-rust v0.3.0


Iceberg-rust 0.3.0

The main objective of 0.3.0 is to have a working read path (non-exhaustive list):

  • [x] Scan API Added by @liurenjie1024 in https://github.com/apache/iceberg-rust/pull/129
  • [x] Predicate pushdown into the Parquet reader, worked on by @viirya in https://github.com/apache/iceberg-rust/pull/295
  • [x] Parquet projection into Arrow streams, worked on by @viirya in https://github.com/apache/iceberg-rust/pull/245; still some limitations, see the PR
  • [x] Manifest pruning using the field_summary: skipping data at the highest level by pruning away manifests (a conceptual sketch follows this list):
    • [x] Transforms added by @marvinlanhenke in https://github.com/apache/iceberg-rust/pull/309
    • [x] ManifestEvaluator added by @sdd in https://github.com/apache/iceberg-rust/pull/322
      • [x] Implement TODOs: some of the expressions still need to be implemented. Issue in https://github.com/apache/iceberg-rust/issues/350
      • [ ] Tests: port the test suite from Python to Rust
    • [x] Filter in TableScan in flight by @sdd in https://github.com/apache/iceberg-rust/pull/323
  • [x] Skipping manifest entries within a manifest based on the partition struct (field 102):
    • [x] Accessors added by @sdd in https://github.com/apache/iceberg-rust/pull/317
    • [x] Projection added by @marvinlanhenke in https://github.com/apache/iceberg-rust/pull/309
    • [x] ExpressionEvaluator: implement the evaluator, worked on by @marvinlanhenke: https://github.com/apache/iceberg-rust/issues/358
    • [x] Binds the partition-spec schema to the partition struct (field 102) and evaluates it.
    • [ ] Filter in TableScan
  • [x] Skip data-files using the metrics evaluator
    • [x] InclusiveMetricsEvaluator worked on by @sdd in https://github.com/apache/iceberg-rust/pull/347
    • [x] InclusiveProjection added by @sdd in https://github.com/apache/iceberg-rust/pull/322
      • [x] Refactored in https://github.com/apache/iceberg-rust/pull/360
      • [x] Refactored in https://github.com/apache/iceberg-rust/pull/362
    • [ ] Filter in TableScan
  • [x] DataFusion: integration with Apache DataFusion to add SQL support: https://github.com/apache/iceberg-rust/issues/357
    • [x] Initial groundwork in https://github.com/apache/iceberg-rust/pull/324.
  • [ ] Runtime
    • [ ] Parallel loading https://github.com/apache/iceberg-rust/issues/124
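
To make the manifest-pruning item above concrete, here is a minimal sketch of the decision it boils down to: compare the predicate against a partition field's lower/upper bound summary and skip the manifest when they cannot overlap. The type and function names are hypothetical, not the actual iceberg-rust API.

```rust
/// Hypothetical, simplified view of a manifest's per-partition-field summary.
/// The real iceberg-rust types are richer; this only demonstrates the pruning
/// decision itself.
struct FieldSummary {
    lower_bound: Option<i64>,
    upper_bound: Option<i64>,
}

/// A toy predicate of the form `partition_field = value`.
struct EqPredicate {
    value: i64,
}

/// Returns true when the manifest *may* contain matching entries and has to be
/// read; false means the whole manifest can be pruned without opening it.
fn manifest_may_match(summary: &FieldSummary, pred: &EqPredicate) -> bool {
    match (summary.lower_bound, summary.upper_bound) {
        (Some(lo), Some(hi)) => pred.value >= lo && pred.value <= hi,
        // Without bounds nothing can be proven, so keep the manifest.
        _ => true,
    }
}

fn main() {
    let summary = FieldSummary { lower_bound: Some(10), upper_bound: Some(20) };
    let pred = EqPredicate { value: 42 };
    // 42 falls outside [10, 20], so this manifest is skipped entirely.
    assert!(!manifest_may_match(&summary, &pred));
}
```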

Blocking issues:

  • [ ] Field-IDs related:
    • [x] https://github.com/apache/iceberg-rust/issues/338
    • [ ] https://github.com/apache/iceberg-rust/issues/131
    • [ ] https://github.com/apache/iceberg-rust/issues/353
  • [ ] https://github.com/apache/iceberg-rust/issues/352

Nice to have (related to the query plan optimizations above):

  • [ ] Implement skipping based on sequence number: skip DELETE manifests that contain unrelated delete files (see the sketch below).
  • [ ] Add support for more FileIO implementations (https://github.com/apache/iceberg-rust/issues/408)
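
As a rough sketch of the sequence-number idea above (hypothetical types, not the iceberg-rust API): a DELETE manifest whose highest sequence number is lower than a data file's sequence number cannot contain deletes that apply to it, so it can be skipped.

```rust
/// Hypothetical, simplified manifest-list entry for a DELETE manifest; the
/// real fields live in the iceberg crate's manifest-list structs.
struct DeleteManifestEntry {
    /// Highest data sequence number of any delete file in this manifest.
    max_sequence_number: i64,
}

/// A DELETE manifest can only affect a data file if at least one of its delete
/// files has a sequence number >= the data file's sequence number (positional
/// deletes) or > it (equality deletes). If even the manifest's maximum is
/// smaller, the whole manifest is irrelevant for this data file.
fn delete_manifest_applies(entry: &DeleteManifestEntry, data_file_seq: i64) -> bool {
    entry.max_sequence_number >= data_file_seq
}

fn main() {
    let old_deletes = DeleteManifestEntry { max_sequence_number: 3 };
    // The data file was written at sequence number 5, so deletes committed at
    // sequence number 3 can never remove rows from it: skip the manifest.
    assert!(!delete_manifest_applies(&old_deletes, 5));
}
```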

State of catalog integration:

  • [ ] Catalog support
    • [x] REST Catalog: First stab by @liurenjie1024 in https://github.com/apache/iceberg-rust/pull/78
      • [ ] @Fokko will follow up with integration tests
    • [x] Glue Catalog Added by @marvinlanhenke in:
      • [x] https://github.com/apache/iceberg-rust/pull/294
      • [x] https://github.com/apache/iceberg-rust/pull/304
      • [x] https://github.com/apache/iceberg-rust/pull/314
    • [ ] SQL Catalog Worked on by @JanKaul in https://github.com/apache/iceberg-rust/pull/229
    • [x] Hive Catalog Added by @Xuanwo.
      • [ ] Do we want integration tests similar to those in PyIceberg?

For the release after that, I think the commit path is going to be important.

Iceberg-rust 0.4.0 and beyond

Nice to have for the 0.3.0 release, but not required. Of course, open for debate.

  • [ ] Support for positional deletes: entails matching the deletes to the data files based on the statistics (see the sketch below).
  • [ ] Support for equality deletes: entails ordering the delete files so they are applied in the correct sequence.
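
To illustrate what applying positional deletes involves (hypothetical types, not the iceberg-rust API): each delete record names a data file and a row position, and the reader drops those positions when scanning that file.

```rust
use std::collections::HashSet;

/// Hypothetical positional-delete record: a data file path plus the 0-based
/// position of the row to remove from that file.
struct PositionalDelete {
    file_path: String,
    pos: u64,
}

/// Filters a data file's row positions, dropping the ones that a matching
/// positional delete marks as removed.
fn live_rows(data_file: &str, row_count: u64, deletes: &[PositionalDelete]) -> Vec<u64> {
    let deleted: HashSet<u64> = deletes
        .iter()
        .filter(|d| d.file_path == data_file)
        .map(|d| d.pos)
        .collect();
    (0..row_count).filter(|pos| !deleted.contains(pos)).collect()
}

fn main() {
    let deletes = vec![
        PositionalDelete { file_path: "s3://bucket/data/a.parquet".to_string(), pos: 1 },
        PositionalDelete { file_path: "s3://bucket/data/b.parquet".to_string(), pos: 0 },
    ];
    // Only the delete that targets a.parquet applies here, so row 1 disappears.
    assert_eq!(live_rows("s3://bucket/data/a.parquet", 3, &deletes), vec![0, 2]);
}
```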

Commit path

The commit path entails writing a new metadata JSON.

  • [ ] Applying updates to the metadata: updating the metadata is important both for writing a new version of the JSON in the case of a non-REST catalog, and for keeping an up-to-date version in memory. It is highly recommended to reuse the Updates/Requirements objects provided by the REST catalog protocol (a sketch follows this list).
  • [ ] Update table properties: sets properties on the table. Probably the best place to start, since it doesn't require a complicated API.
  • [ ] Schema evolution: API to update the schema and produce new metadata.
  • [ ] Partition spec evolution: API to update the partition spec and produce new metadata.
  • [ ] Sort order evolution: API to update the sort order and produce new metadata.
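
A minimal sketch of what applying such updates could look like, reusing the shape of the REST protocol's update objects. The enum and struct below are hypothetical and heavily trimmed; they are not the final iceberg-rust API.

```rust
use std::collections::HashMap;

/// Hypothetical, trimmed-down version of the REST catalog's TableUpdate objects.
enum TableUpdate {
    SetProperties(HashMap<String, String>),
    RemoveProperties(Vec<String>),
}

/// Hypothetical in-memory metadata; the real table metadata has many more fields.
#[derive(Default)]
struct TableMetadata {
    properties: HashMap<String, String>,
}

impl TableMetadata {
    /// Applying an update produces the next metadata version; the same code
    /// path serves both writing a new metadata JSON (non-REST catalogs) and
    /// keeping the in-memory table up to date.
    fn apply(mut self, update: TableUpdate) -> TableMetadata {
        match update {
            TableUpdate::SetProperties(props) => self.properties.extend(props),
            TableUpdate::RemoveProperties(keys) => {
                for key in &keys {
                    self.properties.remove(key);
                }
            }
        }
        self
    }
}

fn main() {
    let mut props = HashMap::new();
    props.insert("commit.retry.num-retries".to_string(), "10".to_string());

    let metadata = TableMetadata::default()
        .apply(TableUpdate::SetProperties(props))
        .apply(TableUpdate::RemoveProperties(vec!["commit.retry.num-retries".to_string()]));

    assert!(metadata.properties.is_empty());
}
```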

Metadata tables

Metadata tables are used to inspect the table. Having these tables also makes the maintenance procedures straightforward to implement, since you can list all snapshots and expire the ones that are older than a certain threshold.
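
For example, once a snapshots metadata table exists, snapshot expiration is little more than a filter over it. The row type below is hypothetical, just to illustrate the maintenance use case.

```rust
/// Hypothetical row of a `snapshots` metadata table.
struct SnapshotRow {
    snapshot_id: i64,
    /// Commit time in milliseconds since the Unix epoch.
    committed_at_ms: i64,
}

/// Selects the snapshot IDs that are older than the given threshold and are
/// therefore candidates for expiration.
fn expired_snapshots(rows: &[SnapshotRow], older_than_ms: i64) -> Vec<i64> {
    rows.iter()
        .filter(|row| row.committed_at_ms < older_than_ms)
        .map(|row| row.snapshot_id)
        .collect()
}

fn main() {
    let rows = vec![
        SnapshotRow { snapshot_id: 1, committed_at_ms: 1_000 },
        SnapshotRow { snapshot_id: 2, committed_at_ms: 2_000 },
    ];
    assert_eq!(expired_snapshots(&rows, 1_500), vec![1]);
}
```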

Write support

Most of the work in write support is around generating the correct Iceberg metadata. Some decisions can be made, for example first supporting only FastAppends, and only V2 metadata.

It is common to have multiple snapshots in a single commit to the catalog. For example, an overwrite operation on a partition can be a delete + append operation. This makes the implementation easier, since you can separate the problems and tackle them one by one. It also makes the roadmap easier, since these operations can be developed in parallel.
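
A small sketch of that composition, with hypothetical types rather than the iceberg-rust API: a partition overwrite is expressed as a DELETE snapshot followed by an APPEND snapshot within a single commit.

```rust
/// The snapshot operations defined by the Iceberg spec; only two are
/// constructed below, the others are listed for completeness.
#[allow(dead_code)]
#[derive(Debug)]
enum Operation {
    Append,
    Replace,
    Overwrite,
    Delete,
}

/// Hypothetical, stripped-down snapshot: just the operation and the files it touches.
#[derive(Debug)]
struct Snapshot {
    operation: Operation,
    data_files: Vec<String>,
}

/// Builds the snapshots for a partition overwrite by composing the two simpler
/// operations, so each can be implemented and tested on its own.
fn overwrite_partition(removed: Vec<String>, added: Vec<String>) -> Vec<Snapshot> {
    vec![
        Snapshot { operation: Operation::Delete, data_files: removed },
        Snapshot { operation: Operation::Append, data_files: added },
    ]
}

fn main() {
    let commit = overwrite_partition(
        vec!["s3://bucket/data/old-1.parquet".to_string()],
        vec!["s3://bucket/data/new-1.parquet".to_string()],
    );
    // One commit to the catalog, two snapshots: DELETE then APPEND.
    println!("{commit:#?}");
}
```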

  • [ ] Commit semantics
    • [ ] MergeAppend: merges new manifest entries into existing manifest files. This reduces the amount of metadata produced, but commits take longer since existing metadata has to be rewritten, and retries are also more costly.
    • [ ] FastAppend: generates a new manifest per commit, which allows fast commits but generates more metadata in the long run. PR by @ZENOTME in https://github.com/apache/iceberg-rust/pull/349
  • [ ] Snapshot generation: manipulation of data within a table is done by appending snapshots to the metadata JSON.
    • [ ] APPEND: only data files were added and no files were removed.
    • [ ] REPLACE: data and delete files were added and removed without changing table data, i.e. compaction, changing the data file format, or relocating data files.
    • [ ] OVERWRITE: data and delete files were added and removed in a logical overwrite operation.
    • [ ] DELETE: data files were removed and their contents logically deleted, and/or delete files were added to delete rows.
  • [ ] Add files: add existing Parquet files to a table. Issue in https://github.com/apache/iceberg-rust/issues/345
    • [ ] Name mapping, in case the files don't have field IDs set.
  • [ ] Summary generation: the part of the snapshot that indicates what's in the snapshot.
  • [ ] Metrics collection. There are two situations:
    • [ ] Collect metrics when writing: as done with the Java API, where the upper and lower bounds are tracked during writing, and the numbers of null and NaN records are counted (see the sketch after this list).
    • [ ] Collect metrics from the footer: when an existing file is added, the footer of the Parquet file is opened to reconstruct all the metrics Iceberg needs.
  • [ ] Deletes: this mainly relies on strict projection to check whether the data files cannot match the predicate.
    • [ ] Strict projection needs to be added to the transforms.
    • [ ] Strict metrics evaluator to determine whether the predicate cannot match.
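
To make the "collect metrics when writing" item concrete, here is a minimal sketch (not the actual writer API) of a per-column accumulator that tracks bounds and null counts as rows are appended:

```rust
/// Hypothetical per-column metrics accumulator, mirroring what a data-file
/// writer would track so the Iceberg manifest entry can be populated on close.
#[derive(Default, Debug)]
struct ColumnMetrics {
    value_count: u64,
    null_count: u64,
    lower_bound: Option<i64>,
    upper_bound: Option<i64>,
}

impl ColumnMetrics {
    /// Updates the running statistics with one value from the column.
    fn update(&mut self, value: Option<i64>) {
        self.value_count += 1;
        match value {
            None => self.null_count += 1,
            Some(v) => {
                self.lower_bound = Some(self.lower_bound.map_or(v, |lo| lo.min(v)));
                self.upper_bound = Some(self.upper_bound.map_or(v, |hi| hi.max(v)));
            }
        }
    }
}

fn main() {
    let mut metrics = ColumnMetrics::default();
    for value in [Some(5), None, Some(-3), Some(12)] {
        metrics.update(value);
    }
    // value_count: 4, null_count: 1, bounds: [-3, 12]
    println!("{metrics:?}");
}
```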

Future topics

  • [ ] Python bindings
  • [ ] WASM to run Iceberg-rust in the browser

Contribute

If you want to contribute to the upcoming milestone, feel free to comment on this issue. If there is anything unclear or missing, feel free to reach out here as well 👍

Fokko · Apr 24 '24

@Fokko thanks for your effort here

marvinlanhenke · Apr 24 '24

@marvinlanhenke No problem, thank you for all the work on the project. While compiling this I realized how much work has been done 🚀

Fokko · Apr 24 '24

Thanks for putting this together @Fokko! It's great to have this clarity on where we're heading. Let's go! 🙌

sdd · Apr 24 '24

Hi, @Fokko. About the read projection part: currently we can convert Parquet files into Arrow streams, but there are some limitations: it only supports primitive types, and schema evolution is not supported yet. Our discussion is in this issue: https://github.com/apache/iceberg-rust/issues/244 And here is the first step of projection by @viirya: https://github.com/apache/iceberg-rust/pull/245

liurenjie1024 · Apr 25 '24

About the Glue, Hive, and REST catalogs, I think we already have integration tests: https://github.com/apache/iceberg-rust/blob/2018ffc87625bdff939aac791784d8eabc4eda38/crates/catalog/glue/tests/glue_catalog_test.rs https://github.com/apache/iceberg-rust/blob/ffd76eb41594416b366a17cdbc85112c68c01a17/crates/catalog/hms/tests/hms_catalog_test.rs https://github.com/apache/iceberg-rust/blob/d6703df40b24477d0a5a36939746bb1b36cc6933/crates/catalog/rest/tests/rest_catalog_test.rs

liurenjie1024 · Apr 25 '24

Also, as we discussed in this doc, do you mind adding the DataFusion integration, Python bindings, and WASM bindings to the future topics?

liurenjie1024 · Apr 25 '24

Hi, @Fokko. About the read projection part: currently we can convert Parquet files into Arrow streams, but there are some limitations: it only supports primitive types, and schema evolution is not supported yet. Our discussion is in this issue: https://github.com/apache/iceberg-rust/issues/244 And here is the first step of projection by @viirya: https://github.com/apache/iceberg-rust/pull/245

Thanks for the context, I've just added this to the list.

About the Glue, Hive, and REST catalogs, I think we already have integration tests:

Ah yes, I forgot to check those marks, thanks!

Also, as we discussed in this doc, do you mind adding the DataFusion integration, Python bindings, and WASM bindings to the future topics?

Certainly! Great suggestions! I'm less familiar with some of these topics (like DataFusion), so feel free to edit the post if you feel something is missing.

Fokko · Apr 25 '24

Certainly! Great suggestions! I'm less familiar with some of these topics (like DataFusion), so feel free to edit the post if you feel something is missing.

...for DataFusion I have provided a basic design proposal and an implementation of some of the DataFusion traits, like the catalog & schema providers; perhaps we can also move forward on this: #324

marvinlanhenke · Apr 25 '24

Certainly! Great suggestions! I'm less familiar with some of these topics (like DataFusion), so feel free to edit the post if you feel something is missing.

...for DataFusion I have provided a basic design proposal and an implementation of some of the DataFusion traits, like the catalog & schema providers; perhaps we can also move forward on this: #324

Yeah, I'll review it later.

liurenjie1024 · Apr 25 '24

Hi, most of the issues in our 0.3 milestone have been closed. I plan to clean up the remaining issues and initiate the release process. Any ideas or comments?

Xuanwo · Aug 14 '24

@Xuanwo Thanks for driving this. It would be good to get everything that we have on main out to the users 👍

Fokko · Aug 14 '24

I have created https://github.com/apache/iceberg-rust/issues/543 to track the release process; please let me know if you think anything is missing.

Xuanwo · Aug 14 '24