Danny Chan
Paimon does not do that; it just detects the schema the first time it starts.
cc @linliu-code, maybe he could investigate the Spark incremental read.
@jonvex Can you take a look at this issue?
So cc @jonvex, you mean that Spark SQL still fails on the latest master code, and things are not as described in #2657, because it said that...
We should not use the plain Hive catalog; that's why we introduced `HoodieHiveCatalog`, which performs many extra tasks in `createTable`.
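For reference, a Flink SQL sketch of registering the Hudi-aware catalog instead of the generic Hive one; the catalog name, paths, and conf directory below are illustrative placeholders:

```sql
-- Register a Hudi catalog (backed by HoodieHiveCatalog when mode = 'hms')
-- instead of Flink's generic Hive catalog. Names and paths are examples.
CREATE CATALOG hoodie_catalog WITH (
  'type' = 'hudi',
  'mode' = 'hms',                      -- store metadata in the Hive Metastore
  'hive.conf.dir' = '/etc/hive/conf',  -- directory containing hive-site.xml
  'catalog.path' = 'hdfs:///warehouse/hudi'
);

USE CATALOG hoodie_catalog;
-- CREATE TABLE issued through this catalog lets Hudi run its extra
-- createTable work rather than plain Hive DDL.
```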
Probably. Can you show the table parameters read from the Hudi Hive catalog for the problematic ro table?
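One way to dump those parameters is via Hive/Spark SQL; `my_table_ro` below is a placeholder for the actual problematic ro table name:

```sql
-- List all table-level parameters the catalog stored for the ro view.
SHOW TBLPROPERTIES my_table_ro;

-- Or fetch a single property; the property name here is just an example.
SHOW TBLPROPERTIES my_table_ro ('spark.sql.sources.provider');
```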
I don't think so; it is just a code refactoring, right?
Sure, my fault. This is a critical fix for Flink streaming read; let's cherry-pick it into branch-0.x.
Can you also update our site to remove the key requirements: https://hudi.apache.org/docs/flink-quick-start-guide#streaming-query?
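For context, a sketch of the kind of Flink streaming-read table that page documents; the table name, schema, and path are illustrative, and the options are the usual Hudi Flink streaming-read settings:

```sql
-- Illustrative Flink streaming read of a MERGE_ON_READ table.
-- The schema and path are placeholders; with the key requirement
-- removed, a record-key column would no longer be mandatory here.
CREATE TABLE hudi_src (
  uuid VARCHAR(20),
  name VARCHAR(10),
  ts   TIMESTAMP(3)
) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs:///tmp/hudi_src',
  'table.type' = 'MERGE_ON_READ',
  'read.streaming.enabled' = 'true',      -- enable incremental streaming read
  'read.start-commit' = 'earliest',       -- consume from the first commit
  'read.streaming.check-interval' = '4'   -- poll for new commits every 4s
);
```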
Can you elaborate what is the purpose of this change?