flink-cdc
[mysql] Error when splitSize is set too small
Environment:
- Flink version: 1.13.5
- Flink CDC version: 2.2.0
- Database and version: MySQL 5.7.36-log
Code:

```java
MySqlSource<String> mySqlSource = MySqlSource.<String>builder()
        .splitSize(10)
        .hostname("192.168.3.111")
        .port(3306)
        .username("root")
        .password("root")
        .databaseList("flink")
        .tableList(new String[]{
                "flink.400w" // 4,000,000 rows
        })
        .startupOptions(StartupOptions.initial())
        .deserializer(new JsonDebeziumDeserializationSchema())
        .serverId("6000-6010")
        .build();
```
When splitSize = 10 is set, the job throws:
```
19:48:07,645 ERROR org.apache.flink.runtime.util.FatalExitExceptionHandler [] - FATAL: Thread 'SourceCoordinator-Source: source' produced an uncaught exception. Stopping the process...
java.lang.Error: Source Coordinator Thread already exists. There should never be more than one thread driving the actions of a Source Coordinator. Existing Thread: Thread[SourceCoordinator-Source: source,5,main]
at org.apache.flink.runtime.source.coordinator.SourceCoordinatorProvider$CoordinatorExecutorThreadFactory.newThread(SourceCoordinatorProvider.java:119) [flink-runtime_2.11-1.13.5.jar:1.13.5]
at java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:619) ~[?:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:932) ~[?:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1025) ~[?:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) ~[?:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_161]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
```
Setting splitSize = 8000 works fine, though.
I have the same problem.
A small split size produces many more splits and can lead to an OOM on the JobManager; see https://github.com/ververica/flink-cdc-connectors/issues/865 for more details.
You can work around the problem by increasing JobManager memory or upgrading the CDC version.
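A rough back-of-envelope sketch of why splitSize = 10 is problematic here (the exact chunking depends on key distribution, but the split count scales with rowCount / splitSize, and the coordinator keeps metadata for every split in JobManager memory):

```java
public class SplitCountEstimate {
    public static void main(String[] args) {
        long rowCount = 4_000_000L; // size of the flink.400w table
        long splitSize = 10;        // chunk size from the builder

        // Approximate number of snapshot splits the enumerator must
        // create and track: two orders of magnitude more than with
        // a split size in the thousands.
        long estimatedSplits = rowCount / splitSize;
        System.out.println(estimatedSplits); // prints 400000
    }
}
```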
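If upgrading is not an option, the JobManager memory can be raised in flink-conf.yaml; a sketch (the `2g` value is illustrative, not a recommendation from this thread):

```yaml
# flink-conf.yaml
# Give the JobManager enough memory to hold the metadata for all
# snapshot splits produced by a small splitSize.
jobmanager.memory.process.size: 2g
```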
Closing this issue because it was created before version 2.3.0 (2022-11-10). Please try the latest version of Flink CDC to see if the issue has been resolved. If the issue is still valid, kindly report it on Apache Jira under project Flink with component tag Flink CDC. Thank you!