Hongshun Wang
Currently, when the sink is not an instance of TwoPhaseCommittingSink, `input.transform` is used rather than `stream`. This means the pre-write topology will be ignored.

```java
private void sinkTo(
        DataStream input, Sink sink, String...
```
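The control-flow bug can be sketched with a minimal model (hypothetical names standing in for the real Flink CDC types, not the actual `sinkTo` implementation): the pre-write topology is built into `stream`, but the non-two-phase-commit branch sinks the raw `input` instead, so the pre-write step is silently dropped.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the bug; all names are illustrative, not the real API.
public class SinkToSketch {
    // Stands in for the pre-write topology applied before the sink.
    static List<String> applyPreWrite(List<String> input) {
        List<String> out = new ArrayList<>();
        for (String s : input) out.add("prewrite:" + s);
        return out;
    }

    static List<String> sinkTo(List<String> input, boolean isTwoPhaseCommittingSink) {
        List<String> stream = applyPreWrite(input);
        if (isTwoPhaseCommittingSink) {
            return stream;  // correct: pre-write topology is applied
        } else {
            return input;   // bug: raw input is used, pre-write topology ignored
        }
    }

    public static void main(String[] args) {
        System.out.println(sinkTo(List.of("a"), true));   // pre-write applied
        System.out.println(sinkTo(List.of("a"), false));  // pre-write lost
    }
}
```

The fix direction implied by the issue is to make both branches consume `stream` rather than `input`.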
Currently, org.apache.flink.cdc.composer.flink.FlinkEnvironmentUtils#addJar is invoked for each source and sink.

```java
public static void addJar(StreamExecutionEnvironment env, URL jarUrl) {
    try {
        Class envClass = StreamExecutionEnvironment.class;
        Field field = envClass.getDeclaredField("configuration");
        field.setAccessible(true);
        ...
```
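A hedged sketch of the pattern and one way to make the per-source/per-sink invocations harmless: the private-field reflection trick from the snippet above, applied to a hypothetical `Env` class standing in for StreamExecutionEnvironment, with set semantics so adding the same jar URL twice is a no-op.

```java
import java.lang.reflect.Field;
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch only: Env is a stand-in for StreamExecutionEnvironment, and the
// "configuration" field stands in for the jar list the real code mutates.
public class AddJarSketch {
    static class Env {
        @SuppressWarnings("unused")
        private final Set<String> configuration = new LinkedHashSet<>();
    }

    @SuppressWarnings("unchecked")
    static Set<String> jars(Env env) throws Exception {
        Field field = Env.class.getDeclaredField("configuration");
        field.setAccessible(true); // same private-field trick as the original
        return (Set<String>) field.get(env);
    }

    static void addJar(Env env, String jarUrl) throws Exception {
        // Set semantics: the second add of the same URL is a no-op, so
        // invoking addJar once per source and once per sink is safe.
        jars(env).add(jarUrl);
    }

    public static void main(String[] args) throws Exception {
        Env env = new Env();
        addJar(env, "file:///tmp/connector.jar"); // invoked for the source
        addJar(env, "file:///tmp/connector.jar"); // invoked again for the sink
        System.out.println(jars(env).size()); // only one copy is registered
    }
}
```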
Currently, the subclasses of JdbcSourceChunkSplitter share almost the same code, but each keeps its own copy, which makes later maintenance hard. Thus, this Jira aims to move the shared code from the multiple subclasses to...
As shown in https://issues.apache.org/jira/browse/KAFKA-17025, the Producer throws an uncaught exception in the io-thread, rather than just logging and then ignoring it.
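The failure mode can be sketched with a plain background thread (illustrative names, not the real Kafka Sender internals): an exception that escapes the run loop kills the io-thread via the uncaught-exception path, whereas catching it inside the loop lets the thread continue.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of "log then ignore" vs. letting the exception escape the io-thread.
public class IoThreadSketch {
    static String runLoop(boolean catchInsideLoop) throws InterruptedException {
        AtomicReference<String> outcome = new AtomicReference<>("completed");
        Thread ioThread = new Thread(() -> {
            if (catchInsideLoop) {
                try {
                    throw new IllegalStateException("poll failed");
                } catch (RuntimeException e) {
                    // "log then ignore": the thread survives the error
                }
            } else {
                throw new IllegalStateException("poll failed"); // escapes run()
            }
        });
        // An uncaught exception lands here, terminating the io-thread.
        ioThread.setUncaughtExceptionHandler((t, e) -> outcome.set("uncaught: " + e.getMessage()));
        ioThread.start();
        ioThread.join();
        return outcome.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runLoop(true));  // completed
        System.out.println(runLoop(false)); // uncaught: poll failed
    }
}
```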
Currently, JdbcConnectionPools is held as a static instance, so the datasource pools in it won't be recycled when a reader closes. This causes a memory leak.

```java
public class JdbcConnectionPools implements ConnectionPools {
    private...
```
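One possible fix can be sketched with reference counting (illustrative names, not the real JdbcConnectionPools API): because the registry is process-wide and static, each reader must release its pool on close, and the entry is evicted only when the last reader is gone.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of ref-counted eviction from a static, process-wide pool registry.
public class PoolRegistrySketch {
    // poolId -> number of open readers; static, so it lives for the whole JVM.
    static final Map<String, Integer> REF_COUNTS = new HashMap<>();

    static synchronized void acquire(String poolId) {
        REF_COUNTS.merge(poolId, 1, Integer::sum);
    }

    // Without this release step, entries accumulate forever: the memory leak.
    static synchronized void release(String poolId) {
        // Returning null from the remapping function removes the entry.
        REF_COUNTS.computeIfPresent(poolId, (id, n) -> n > 1 ? n - 1 : null);
    }

    public static void main(String[] args) {
        acquire("jdbc:mysql://host/db"); // reader 1 opens
        acquire("jdbc:mysql://host/db"); // reader 2 opens
        release("jdbc:mysql://host/db"); // reader 1 closes: pool kept
        System.out.println(REF_COUNTS);
        release("jdbc:mysql://host/db"); // last reader closes: pool evicted
        System.out.println(REF_COUNTS);
    }
}
```

In the real fix the eviction would also close the underlying datasource, not just drop the map entry.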
A CDC YAML definition without a `pipeline` block should not throw an exception.
As shown in https://issues.apache.org/jira/browse/FLINK-34688: in MySQL CDC, MysqlSnapshotSplitAssigner splits snapshot chunks asynchronously (https://github.com/apache/flink-cdc/pull/931), but the CDC framework lacks this. If a table is too big to split, the enumerator will be stuck,...
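The asynchronous-splitting idea can be sketched as follows (illustrative names, not the real assigner API): the split work for a large table runs on a dedicated executor, so the enumerator thread is not blocked while chunks are computed.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

// Sketch of offloading snapshot-chunk splitting from the enumerator thread.
public class AsyncSplitSketch {
    // Splits [0, rows) into chunkSize-wide ranges; stands in for table splitting.
    static List<long[]> splitIntoChunks(long rows, long chunkSize) {
        return LongStream.iterate(0, lo -> lo < rows, lo -> lo + chunkSize)
                .mapToObj(lo -> new long[] {lo, Math.min(lo + chunkSize, rows)})
                .collect(Collectors.toList());
    }

    public static void main(String[] args) throws Exception {
        ExecutorService splitExecutor = Executors.newSingleThreadExecutor();
        // The enumerator thread stays responsive while this runs in the background.
        CompletableFuture<List<long[]>> pending =
                CompletableFuture.supplyAsync(() -> splitIntoChunks(250, 100), splitExecutor);
        List<long[]> chunks = pending.get(); // enumerator collects finished splits
        System.out.println(chunks.size());  // [0,100) [100,200) [200,250)
        splitExecutor.shutdown();
    }
}
```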
[FLINK-35968][cdc-connector] Remove dependency of flink-cdc-runtime from flink-cdc-source-connector
Currently, flink-cdc-source-connectors depends on flink-cdc-runtime, which is redundant and not ideal for the design. This issue aims to remove that dependency.
As described in https://issues.redhat.com/browse/DBZ-8003, the Postgres connector throws an exception from keepAliveExecutor.
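A likely shape of this failure mode can be sketched with a plain scheduled executor (illustrative names, not the real Debezium keepAliveExecutor): with `scheduleAtFixedRate`, one exception thrown from the task silently cancels all future runs, so a keep-alive task should handle errors inside its body to keep the schedule alive.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: an unguarded keep-alive task dies after its first error;
// a guarded one keeps beating.
public class KeepAliveSketch {
    static int runKeepAlive(boolean guardTask) throws InterruptedException {
        AtomicInteger beats = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(3);
        ScheduledExecutorService keepAliveExecutor = Executors.newSingleThreadScheduledExecutor();
        Runnable beat = () -> {
            try {
                beats.incrementAndGet();
                done.countDown();
                throw new IllegalStateException("connection hiccup");
            } catch (RuntimeException e) {
                if (!guardTask) throw e; // unguarded: cancels the whole schedule
                // guarded: log-and-continue, schedule stays alive
            }
        };
        keepAliveExecutor.scheduleAtFixedRate(beat, 0, 10, TimeUnit.MILLISECONDS);
        done.await(1, TimeUnit.SECONDS);
        keepAliveExecutor.shutdownNow();
        return beats.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runKeepAlive(true) >= 3);  // guarded: keeps beating
        System.out.println(runKeepAlive(false) == 1); // unguarded: one beat, then dead
    }
}
```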