Shawn Chang
@xicm I agree, and I think we should remove the precombine field from the datasource config if we don't want to allow users to change the precombine fields of their tables
I posted a quick patch to fix the issue, but ideally I think Hudi should remove all write configs that users are not allowed to change. Or we could point those...
@mzheng-plaid have you tried `spark.sql("set hoodie.datasource.write.precombine.field=")` in your session?
Hi @ZENOTME @jonathanc-n , I was wondering if there is any active work on this issue? I'm planning to look into the conflict detection logic and would be happy to contribute/collaborate on...
I've created https://github.com/apache/iceberg-rust/issues/1344 to add validation logic, which should be a prerequisite for this issue
Thanks for taking this up!
After this change, there should be only test usages of Hadoop classes in `hudi-client-common`
EMR 7.1 uses Java 17 by default, but older Java versions should still be available there. I think you can try Java 8/11 in this case: https://docs.aws.amazon.com/emr/latest/ReleaseGuide/configuring-java8.html#configuring-java8-override-spark
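A rough sketch of what the override in the linked AWS guide looks like: an EMR configuration JSON that exports `JAVA_HOME` for the Hadoop and Spark environments. The exact JVM install path below is an assumption and varies by EMR release and architecture, so check what's actually on the cluster nodes first.

```json
[
  {
    "Classification": "hadoop-env",
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "JAVA_HOME": "/usr/lib/jvm/java-1.8.0"
        }
      }
    ]
  },
  {
    "Classification": "spark-env",
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "JAVA_HOME": "/usr/lib/jvm/java-1.8.0"
        }
      }
    ]
  }
]
```

This can be passed at cluster creation time (e.g. via the `--configurations` flag of `aws emr create-cluster`).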
Yes, I think this can be fixed in two ways: [1] as mentioned in this thread, add auto rollback in Hudi when metasync fails; [2] raise a ticket to...
Hi @lukekim , we are still making progress on adopting a custom storage layer like object-store. Please see:
- https://github.com/apache/iceberg-rust/issues/1314
- https://github.com/apache/iceberg-rust/pull/1755
- Design doc: https://docs.google.com/document/d/1-CEvRvb52vPTDLnzwJRBx5KLpej7oSlTu_rg0qKEGZ8/edit?tab=t.dgr4vjtmzh92#heading=h.xhzuq2u2mr64