FANNG
> There are known limitations with the Hive Parquet reader implementation; you may want to try enabling `spark.sql.parquet.writeLegacyFormat` when generating the TPC-DS data using Spark.

Yes, it works after setting `spark.sql.parquet.writeLegacyFormat` to `true`....
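For reference, a minimal sketch of how that flag could be set when generating the TPC-DS data; only the `spark.sql.parquet.writeLegacyFormat` setting comes from the comment above, the app name and everything else are placeholders:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: enable the legacy Parquet format so Hive's Parquet reader can read
// the files Spark writes. App name and other settings are hypothetical.
val spark = SparkSession.builder()
  .appName("tpcds-datagen")
  .config("spark.sql.parquet.writeLegacyFormat", "true")
  .getOrCreate()
```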
I'm working on this and will propose a PR to support all expressions.
Thanks for the PR, could you add a related document? And besides MySQL, does the Flink JDBC connector support other RDBMSes?
@hdygxsj could you summarize the processing logic for the JDBC URL properties when loading catalogs and tables? I'm confused by it.
@hdygxsj LGTM except for a few minor comments. There is some other work, like adding documentation and an integration test for PG; would you like to create an issue to track it or add...
@hdygxsj merged to main, thanks for your work
> Great! I think we can work in this way. WDYT? @jerryshao @FANNG1

I think it's OK, because this method seems extensible and doesn't only work for filtering Iceberg tables.
> Generally LGTM, @FANNG1, do you have any further comments?

It's not proper to distribute the AK/SK in `trino.bypass`; is there any other way?
For loadTable operations, we may need `METALAKE::SELECT_TABLE || CATALOG::SELECT_TABLE || SCHEMA::SELECT_TABLE || TABLE::SELECT_TABLE || METALAKE::CREATE_TABLE || CATALOG::CREATE_TABLE || SCHEMA::CREATE_TABLE || TABLE::CREATE_TABLE || METALAKE::CREATE_METALAKE || CATALOG::CREATE_CATALOG || SCHEMA::CREATE_SCHEMA && CATALOG::USE_CATALOG &&...
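A rough sketch of how such a check could be expressed, to make the grouping explicit; the `Grant`/`canLoadTable` names and the reading of the `&&` as applying to the whole ORed group are my assumptions, and the trailing `&&...` part of the expression above is left out:

```scala
// Hypothetical privilege model: the caller holds a set of (securable object, privilege) grants.
// loadTable is allowed when at least one of the ORed privileges above is granted, AND the
// conjunctive USE_* conditions hold (only CATALOG::USE_CATALOG is shown; the rest is elided
// in the original comment).
case class Grant(securable: String, privilege: String)

def canLoadTable(grants: Set[Grant]): Boolean = {
  val anyTablePrivilege = Seq(
    Grant("METALAKE", "SELECT_TABLE"), Grant("CATALOG", "SELECT_TABLE"),
    Grant("SCHEMA", "SELECT_TABLE"), Grant("TABLE", "SELECT_TABLE"),
    Grant("METALAKE", "CREATE_TABLE"), Grant("CATALOG", "CREATE_TABLE"),
    Grant("SCHEMA", "CREATE_TABLE"), Grant("TABLE", "CREATE_TABLE"),
    Grant("METALAKE", "CREATE_METALAKE"), Grant("CATALOG", "CREATE_CATALOG"),
    Grant("SCHEMA", "CREATE_SCHEMA")
  ).exists(grants.contains)

  val canUseCatalog = grants.contains(Grant("CATALOG", "USE_CATALOG"))

  anyTablePrivilege && canUseCatalog
}
```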
@yuqi1129, do you have time to review this PR?