Prashant Singh
> For (2): We have not discussed incremental refresh plans in the Iceberg community, but [there is some relevant work here](https://www.slideshare.net/walaa_eldin_moustafa/incremental-view-maintenance-with-coral-dbt-and-iceberg). You can review some of the test cases [here](https://github.com/linkedin/coral/blob/0d5dd3f300f48e48cd2404a49dbb799d7f4ce190/coral-incremental/src/test/java/com/linkedin/coral/incremental/RelToIncrementalSqlConverterTest.java#L28)....
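For readers unfamiliar with the technique, the core rewrite behind incremental view maintenance (the idea the Coral work above is built around) can be sketched as follows. This is a hedged illustration only; the table and delta names are invented and this is not Coral's actual output:

```java
class IncrementalRewriteSketch {
  // A view defined over a join is normally refreshed by recomputing it fully:
  static final String FULL_REFRESH =
      "SELECT * FROM a JOIN b ON a.id = b.id";

  // For insert-only deltas, the same result can be maintained incrementally by
  // appending the union of each side's delta joined with the other side's
  // previous snapshot, plus the delta-delta join:
  static final String INCREMENTAL_DELTA =
      "SELECT * FROM a_delta JOIN b_prev ON a_delta.id = b_prev.id "
          + "UNION ALL "
          + "SELECT * FROM a_prev JOIN b_delta ON a_prev.id = b_delta.id "
          + "UNION ALL "
          + "SELECT * FROM a_delta JOIN b_delta ON a_delta.id = b_delta.id";
}
```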
Can this fit your use case: https://github.com/apache/iceberg/pull/9818/files
Breaking this change into 3 logical changes; I have all 3 working locally and will send them as we go!
- [X] Request / Response Models, Parsers
- [ ]...
Seems like an unrelated failure:
```
TestRewriteDataFilesAction > testParallelPartialProgressWithMaxFailedCommitsLargerThanTotalFileGroup() > formatVersion = 2 FAILED
    java.lang.RuntimeException: partial-progress.enabled is true but 1 rewrite commits failed. This is more than the maximum allowed failures of...
```
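For context on the options involved in that failure, here is a minimal sketch, assuming Iceberg's `SparkActions` compaction API, of how partial progress is enabled; the option value is illustrative, not the test's configuration:

```java
import org.apache.iceberg.Table;
import org.apache.iceberg.actions.RewriteDataFiles;
import org.apache.iceberg.spark.actions.SparkActions;

class CompactionExample {
  // Sketch, not the test itself: with partial progress enabled, file groups
  // are committed in batches, and only a bounded number of failed rewrite
  // commits is tolerated before the whole action fails (the limit the error
  // message above complains about).
  static RewriteDataFiles.Result compact(Table table) {
    return SparkActions.get()
        .rewriteDataFiles(table)
        .option(RewriteDataFiles.PARTIAL_PROGRESS_ENABLED, "true")
        .option(RewriteDataFiles.PARTIAL_PROGRESS_MAX_COMMITS, "3")
        .execute();
  }
}
```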
> We first find matching files and then plan splits so the split size can be dynamic, we just need a good way to estimate it correctly.

+1 on @aokolnychyi's...
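To make the estimation problem concrete, here is a hedged sketch of the idea; the method, parameters, and clamping bounds are invented for illustration and are not Iceberg's planner code:

```java
import java.util.List;

class SplitSizeEstimator {
  // Hypothetical sketch: once matching files are known, derive the split size
  // from their total size and the desired parallelism, clamped to sane bounds,
  // instead of using a static target split size.
  static long estimateSplitSize(List<Long> matchedFileSizes, int desiredParallelism,
                                long minSplitSize, long maxSplitSize) {
    long totalBytes = matchedFileSizes.stream().mapToLong(Long::longValue).sum();
    long dynamic = Math.max(1, totalBytes / Math.max(1, desiredParallelism));
    return Math.min(maxSplitSize, Math.max(minSplitSize, dynamic));
  }
}
```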
IMHO it is because `RowLevelCommandDynamicPruning` is not supported for MOR: https://github.com/apache/iceberg/blob/50ca63bde82547c42475591455c00a429c854d4b/spark/v3.4/spark-extensions/src/main/scala/org/apache/spark/sql/execution/dynamicpruning/RowLevelCommandDynamicPruning.scala#L66-L68
Should we include the snapshot ID in the cache key to mitigate this situation?
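For illustration, a hedged sketch of what a snapshot-aware key could look like; the type and field names are invented and this is not the actual cache in question:

```java
import java.util.Objects;

// Hypothetical sketch: keying the cache by (table, snapshot) instead of just
// the table, so an entry cached against an older snapshot can never be served
// after the table advances.
final class TableCacheKey {
  private final String tableIdentifier;
  private final long snapshotId;

  TableCacheKey(String tableIdentifier, long snapshotId) {
    this.tableIdentifier = tableIdentifier;
    this.snapshotId = snapshotId;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof TableCacheKey)) return false;
    TableCacheKey that = (TableCacheKey) o;
    return snapshotId == that.snapshotId && tableIdentifier.equals(that.tableIdentifier);
  }

  @Override
  public int hashCode() {
    return Objects.hash(tableIdentifier, snapshotId);
  }
}
```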
@Tishj I don't think the expectation is that we can read Trino or Spark SQL; what I think is important is that DuckDB can store its own dialect in the SQL text, and...
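To illustrate the point, a hedged sketch assuming Iceberg's `ViewCatalog`/`ViewBuilder` API: each engine records the view query under its own dialect tag, so other engines are never required to parse it. The view name, dialect tag, and query text below are made up:

```java
import org.apache.iceberg.Schema;
import org.apache.iceberg.catalog.Namespace;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.catalog.ViewCatalog;

class DialectViewExample {
  // Sketch: a view carries one SQL representation per dialect, so DuckDB can
  // store its own dialect's text without other engines having to read it.
  static void createView(ViewCatalog catalog, Schema schema) {
    catalog.buildView(TableIdentifier.of("db", "event_counts"))
        .withSchema(schema)
        .withDefaultNamespace(Namespace.of("db"))
        // Hypothetical dialect tag and query text, for illustration only.
        .withQuery("duckdb", "SELECT event, count(*) AS n FROM db.events GROUP BY event")
        .create();
  }
}
```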
@Tishj any thoughts?
Thanks @gaborkaszab, I am not sure what is causing the flakiness since my CI was green when we merged; I will take a deeper look and put up a fix ASAP.