flink-cdc
Flink CDC is a streaming data integration tool
Fix the inaccurate and unbalanced chunk splitting when sharding Oracle chunks by ROWID
This commit introduces comprehensive end-to-end tests for the MySQL to Elasticsearch data pipeline. The new tests cover multiple versions of Flink and Elasticsearch to ensure broad compatibility.
This change ensures that the log message is clear and free of typographical errors.
Currently, the startIndex of flink-cdc.substr starts from 0, while the startIndex of flink.substr starts from 1, and the startIndex of flink-cdc.substr cannot be less than 0. This is inconsistent with...
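A minimal sketch of the reported inconsistency. The two helper methods below are hypothetical stand-ins (the real functions live in Flink and Flink CDC); they only illustrate how a 0-based versus 1-based startIndex yields different results for the same arguments:

```java
public class SubstrExample {
    // Mimics Flink's SUBSTR: startIndex is 1-based.
    static String flinkSubstr(String s, int start, int len) {
        int begin = Math.max(start - 1, 0); // convert 1-based to 0-based
        if (begin >= s.length() || len <= 0) {
            return "";
        }
        return s.substring(begin, Math.min(begin + len, s.length()));
    }

    // Mimics flink-cdc's SUBSTR before the fix: startIndex is 0-based.
    static String cdcSubstr(String s, int start, int len) {
        if (start >= s.length() || len <= 0) {
            return "";
        }
        return s.substring(start, Math.min(start + len, s.length()));
    }

    public static void main(String[] args) {
        // Same arguments, different results: this is the inconsistency described above.
        System.out.println(flinkSubstr("flink", 1, 3)); // "fli"
        System.out.println(cdcSubstr("flink", 1, 3));   // "lin"
    }
}
```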
This closes FLINK-35762. As previously discussed offline with @lvyanquan, HashMap lookups could be slow since we must calculate the key's hash code on every lookup, even though most key objects are immutable. A...
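One common remedy for this (a sketch, not the actual FLINK-35762 patch; the class and field names are hypothetical) is to cache the hash code inside the immutable key, the same trick `java.lang.String` uses, so repeated HashMap lookups skip recomputation:

```java
// Immutable key whose hash code is computed at most once and then cached.
public final class CachedHashKey {
    private final String schema;
    private final String table;
    private int hash; // 0 means "not yet computed"; safe because fields never change

    public CachedHashKey(String schema, String table) {
        this.schema = schema;
        this.table = table;
    }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0) {
            h = 31 * schema.hashCode() + table.hashCode();
            hash = h; // cache for subsequent lookups
        }
        return h;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CachedHashKey)) return false;
        CachedHashKey that = (CachedHashKey) o;
        return schema.equals(that.schema) && table.equals(that.table);
    }
}
```

Since the fields are final, caching the hash is race-free: concurrent callers may compute it redundantly but always arrive at the same value.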
This closes FLINK-35980 by: * added test cases for pipeline transform features * corrected usage of `Cache` to avoid ClassLoaderLeakage failure Some cases are ignored for now, until ~~FLINK-35981~~, FLINK-35982,...
The `numRecordsOutBySnapshot` metric collects snapshot statistics; the `numRecordsOutByIncremental` metrics collect incremental statistics. `numRecordsOutByIncremental` is split into three metrics: `numRecordsOutByIncrementalInsert`, `numRecordsOutByIncrementalUpdate`, and `numRecordsOutByIncrementalDelete`. 2024.7.12: renamed the metrics, e.g. `numRecordsOutByIncrementalInsert` is now `numRecordsOutByDataChangeEventInsert`.
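The counting scheme above could be sketched as follows. This is an illustration, not the connector's code: the class is hypothetical, and the Update/Delete names are assumed to follow the same rename pattern as the Insert metric.

```java
import java.util.HashMap;
import java.util.Map;

// Routes each emitted record to the matching counter: snapshot records go to
// one counter, incremental records fan out by change-event type.
public class RecordCounters {
    private final Map<String, Long> counters = new HashMap<>();

    public void record(boolean snapshot, String op) {
        String name = snapshot
                ? "numRecordsOutBySnapshot"
                : "numRecordsOutByDataChangeEvent" + op; // op: "Insert" | "Update" | "Delete"
        counters.merge(name, 1L, Long::sum);
    }

    public long get(String name) {
        return counters.getOrDefault(name, 0L);
    }
}
```

In the real connector these would be registered as Flink `Counter`s on a `MetricGroup` rather than held in a map.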
Currently we don't support syncing the comments of tables or columns, so the pipeline sink cannot obtain them. Users must fetch the comments from the source themselves, which is a bad experience.
[FLINK-35968][cdc-connector] Remove dependency of flink-cdc-runtime from flink-cdc-source-connector
Currently, flink-cdc-source-connectors depends on flink-cdc-runtime, which is redundant and not ideal for the design. This issue aims to remove that dependency.