flink-cdc
Flink CDC is a streaming data integration tool
**Describe the bug** A clear and concise description of what the bug is. 2021-09-16 14:41:58,859 ERROR io.debezium.connector.mysql.BinlogReader [] - Failed due to error: Error processing binlog event org.apache.kafka.connect.errors.ConnectException: com.github.shyiko.mysql.binlog.event.deserialization.EventDataDeserializationException: Failed...
**Environment :** - Flink version : 1.13.5 - Flink CDC version: 2.2.0 - Database and version: 5.7.25-log Caused by: io.debezium.DebeziumException: Received DML 'DELETE FROM `db`.`table`' for processing, binlog probably contains...
**Describe the bug** A clear and concise description of what the bug is. Oracle LogMiner monitoring output was normal; after the initial data load, the program shut down abnormally with the error: Caused by: java.lang.IllegalStateException: Retrieve schema history failed, the schema records for engine...
**Environment :** - Flink version : 1.14.0 - Flink CDC version: 2.2.3-SNAPSHOT - Database and version: MySQL 5.7 **Additional Description** Using Flink CDC 2.3-SNAPSHOT to synchronize MySQL data, multiple tables were synchronized...
Data precision loss
When flink-cdc reads a MySQL FLOAT column, the value loses precision; additionally, when a MySQL column is of DATE type, the value that comes out is a five-digit number.
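Both symptoms in the report above match how Debezium represents MySQL values: a DATE column is encoded as an INT32 counting days since the Unix epoch (hence the five-digit number), and a 32-bit FLOAT widened to a Java double exposes binary-representation noise that reads as "lost precision". A minimal sketch of decoding such values (the sample numbers are hypothetical, not taken from the issue):

```java
import java.time.LocalDate;

public class CdcValueDecoding {
    public static void main(String[] args) {
        // Debezium encodes a MySQL DATE as days since 1970-01-01,
        // which is why it surfaces as a five-digit integer.
        int epochDays = 18993; // hypothetical value read from a change event
        LocalDate date = LocalDate.ofEpochDay(epochDays);
        System.out.println(date); // 2022-01-01

        // A MySQL FLOAT is a 32-bit value; widening it to a Java double
        // exposes binary-representation noise rather than real data loss.
        float mysqlFloat = 1.1f;
        double widened = mysqlFloat;
        System.out.println(widened == 1.1); // false: not exactly 1.1 as a double
    }
}
```

If exact decimal semantics matter, declaring the column as DECIMAL on the MySQL side avoids the float-widening artifact entirely.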
#24 Question: my doubt is similar to this issue. When I actually run mysql-cdc, I find that on every restart the CDC source reads the full table again, even though I have already set `properties.setProperty("debezium.snapshot.mode", "never"); // schema_only behaves the same` in my code. Full code:

```java
import com.alibaba.ververica.cdc.connectors.mysql.MySQLSource;
import com.alibaba.ververica.cdc.debezium.DebeziumSourceFunction;
import com.alibaba.ververica.cdc.debezium.StringDebeziumDeserializationSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.util.Properties;

public class MySqlBinlogSourceExample {
    public static void main(String[] args) throws Exception {
        Properties...
```
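For the startup behavior asked about above, the Flink CDC 2.x connector exposes a dedicated startup-options API instead of passing raw `debezium.snapshot.mode` properties. A sketch against the `com.ververica` incremental source (connection parameters are placeholders; this is an assumption about the intended fix, not the issue author's code), shown as a configuration fragment:

```java
import com.ververica.cdc.connectors.mysql.source.MySqlSource;
import com.ververica.cdc.connectors.mysql.table.StartupOptions;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;

// StartupOptions.latest() skips the initial snapshot and reads only
// new binlog events; StartupOptions.initial() (the default) snapshots first.
MySqlSource<String> source = MySqlSource.<String>builder()
        .hostname("localhost")        // placeholder connection settings
        .port(3306)
        .databaseList("db")
        .tableList("db.table")
        .username("user")
        .password("password")
        .deserializer(new JsonDebeziumDeserializationSchema())
        .startupOptions(StartupOptions.latest())
        .build();
```

Note also that restoring from a savepoint/checkpoint (rather than a cold restart) is what lets the source resume from the recorded binlog position instead of re-reading the table.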
This PR supports chunk-level checkpointing during the snapshot phase.
**Describe the bug(Please use English)** The following error occurs when restarting the Flink cluster **Caused by: java.sql.SQLException: ORA-01292: no log file has been specified for the current LogMiner session** **Environment...
When the option to capture newly added tables is on, always discover captured tables.
**Describe the bug(Please use English)** The snapshot-phase source supports checkpointing at the chunk level. If a failure happens, the source can be restored and continue to read chunks...