[FLINK-35599] Introduce JDBC pipeline sink connector
This closes FLINK-35599 by implementing the long-awaited JDBC pipeline sink connector, largely based on @kissycn's work in #3433.
Compared to Zhou's original PR, some changes have been made to address @lvyanquan's comments in #3433:
- Supports batch writing for PK tables
- Added support for `TruncateTableEvent` and `DropTableEvent`
- Added missing SerializationSchemaTest and ITCase
- Simplified code structure, removed a few redundant classes
- Rebased with `master` and resolved conflicts
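As a rough sketch of what handling schema-change events in a JDBC sink involves, the snippet below maps truncate/drop events to their SQL statements. The event classes and method names here are simplified stand-ins for illustration, not the actual Flink CDC API:

```java
// Simplified, hypothetical event model -- real Flink CDC events carry a TableId.
interface Event {
    String tableId();
}

record TruncateTableEvent(String tableId) implements Event {}

record DropTableEvent(String tableId) implements Event {}

final class SchemaChangeSqlMapper {
    /** Translate a schema-change event into the SQL the sink would execute. */
    static String toSql(Event event) {
        if (event instanceof TruncateTableEvent e) {
            return "TRUNCATE TABLE " + e.tableId();
        } else if (event instanceof DropTableEvent e) {
            return "DROP TABLE IF EXISTS " + e.tableId();
        }
        throw new IllegalArgumentException("Unsupported event: " + event);
    }
}
```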
CI passed in my forked repo. Marking it ready for review.
Why not integrate with flink-cdc-connector? It has already implemented PostgreSQL/Oracle/MySQL.
Thanks for @ruanhang1993's kind review; I've addressed the comments in the latest commits.
Will PostgreSQL be supported? Many OLAP systems are compatible with PostgreSQL, so PostgreSQL support would be preferred.
Writing over JDBC is an I/O-bound operation, and scaling it by increasing Flink parallelism requires more resources. It would be better to use multiple threads to write tables within the same task, and to support configuring the thread count.
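One way to sketch this suggestion (the class and method names below are illustrative, not the connector's actual API): route each table to a fixed worker thread, so writes for a single table stay ordered while different tables are flushed in parallel.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: one single-threaded executor per worker slot;
// a table always maps to the same slot, preserving per-table write order.
final class TableRoutingWriter implements AutoCloseable {
    private final ExecutorService[] workers;

    TableRoutingWriter(int threads) {
        workers = new ExecutorService[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = Executors.newSingleThreadExecutor();
        }
    }

    /** Stable worker index for a table, so its writes never interleave. */
    int workerFor(String tableId) {
        return Math.floorMod(tableId.hashCode(), workers.length);
    }

    /** In the real sink, the Runnable would flush a JDBC batch for this table. */
    Future<?> write(String tableId, Runnable jdbcBatchWrite) {
        return workers[workerFor(tableId)].submit(jdbcBatchWrite);
    }

    @Override
    public void close() {
        for (ExecutorService w : workers) {
            w.shutdown();
        }
    }
}
```

The thread count here would come from a sink option, matching the "supports setting the number of threads" request.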
Thanks for @melin's suggestion! I've refactored the JDBC connector structure to mimic the flink-connector-jdbc project, like this:
- flink-cdc-pipeline-connector-jdbc-parent
- flink-cdc-pipeline-connector-jdbc
- flink-cdc-pipeline-connector-jdbc-core
- flink-cdc-pipeline-connector-jdbc-mysql
- ... (other sinks)
This makes it easier to add more JDBC-family sink connectors.
Thanks for @ruanhang1993's kind review; comments addressed in the latest commits.
Thanks for @ruanhang1993's kind review, all comments addressed.
When is it scheduled to be released?