lucas.wu

Results: 7 comments of lucas.wu

This is probably because HADOOP_HOME has not been set.

I found the reason: it is mainly because our binlog client was restarted during the consumption process, but the resume position landed on a binlog event of the Write_rows type. As a...

1. We can see the problem from the code below: when a MissingTableMapEventException occurs, the program does not throw an exception but keeps running, so Write_rows-type events are silently discarded along the way, ultimately causing data loss. In the BinaryLogClient class, I found that MissingTableMapEventException is caught but, unlike EOFException and SocketException, is never rethrown. This means that if our upper-layer listener's onEventDeserializationFailure implementation does nothing with it, the exception is never handled again.

```java
private void listenForEventPackets() throws IOException {
    .......
    } catch (Exception e) {
        Throwable cause = e instanceof EventDataDeserializationException ? e.getCause() : e;
        if (cause instanceof EOFException...
```
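A minimal sketch of the workaround this implies, assuming the com.github.shyiko.mysql.binlog API: a BinaryLogClient.LifecycleListener whose onEventDeserializationFailure checks for MissingTableMapEventException and disconnects the client instead of letting events be dropped. The class name and the fail-fast policy are illustrative, not taken from the original comment.

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

import com.github.shyiko.mysql.binlog.BinaryLogClient;
import com.github.shyiko.mysql.binlog.event.deserialization.EventDataDeserializationException;
import com.github.shyiko.mysql.binlog.event.deserialization.MissingTableMapEventException;

// Illustrative listener: fail fast instead of silently skipping Write_rows events.
public class FailFastListener implements BinaryLogClient.LifecycleListener {

    // Stashed so the owning thread can inspect why the client stopped.
    private final AtomicReference<Throwable> failure = new AtomicReference<>();

    @Override
    public void onEventDeserializationFailure(BinaryLogClient client, Exception ex) {
        // The client rethrows EOFException/SocketException itself, but a
        // MissingTableMapEventException only reaches this callback.
        Throwable cause = ex instanceof EventDataDeserializationException ? ex.getCause() : ex;
        if (cause instanceof MissingTableMapEventException) {
            failure.compareAndSet(null, cause);
            try {
                client.disconnect(); // stop consuming rather than drop data
            } catch (IOException e) {
                // best effort; the stashed failure already records the root cause
            }
        }
    }

    public Throwable getFailure() {
        return failure.get();
    }

    @Override
    public void onConnect(BinaryLogClient client) { }

    @Override
    public void onCommunicationFailure(BinaryLogClient client, Exception ex) { }

    @Override
    public void onDisconnect(BinaryLogClient client) { }
}
```

It would be registered with `client.registerLifecycleListener(new FailFastListener())` before calling `connect()`.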

> I think we should add this logic to this method to let the client stop receiving messages, so that the queue poll() method can throw an exception ![image](https://github.com/ververica/flink-cdc-connectors/assets/22993360/64189850-5445-407c-b896-a42b39b5976a) please...
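A sketch of that suggestion under stated assumptions (the hand-off class and method names below are hypothetical, not taken from flink-cdc-connectors): the binlog reader thread records the deserialization failure, and the consumer-side poll() rethrows it instead of blocking forever.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

import com.github.shyiko.mysql.binlog.event.Event;

// Hypothetical hand-off between the binlog reader thread and the consumer thread.
public class BinlogEventQueue {

    private final BlockingQueue<Event> queue = new LinkedBlockingQueue<>();
    private final AtomicReference<Exception> failure = new AtomicReference<>();

    // Called by the reader thread for each successfully deserialized event.
    public void put(Event event) throws InterruptedException {
        queue.put(event);
    }

    // Called from onEventDeserializationFailure: record the error so the
    // consumer side surfaces it instead of silently losing events.
    public void fail(Exception ex) {
        failure.compareAndSet(null, ex);
    }

    // Consumer-side poll that rethrows the reader's failure, mirroring the
    // "let poll() throw an exception" idea in the quoted comment.
    public Event poll(long timeout, TimeUnit unit) throws Exception {
        Exception ex = failure.get();
        if (ex != null) {
            throw ex;
        }
        return queue.poll(timeout, unit);
    }
}
```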

> @dylenWu The option to just ignore the exception is useful. I encountered a weird case where consuming from a later offset will throw the exception that the binlog file...

When consuming binlog events of the Write_rows type, the Table_map-type event must be consumed first to obtain the table metadata; otherwise the Write_rows event cannot be decoded. However, when the BinlogClient reconnects because of a heartbeat timeout, its resume position may fall in the middle of a transaction, so it consumes Write_rows data directly without having consumed the preceding Table_map event. See the screenshot I posted above for details.
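To make that invariant concrete, here is a toy tracker (using the com.github.shyiko.mysql.binlog event types; the class itself is illustrative, not part of the library): it only accepts a WRITE_ROWS event when the TABLE_MAP for the same table id has already been seen, and fails in exactly the mid-transaction reconnect case described above.

```java
import java.util.HashMap;
import java.util.Map;

import com.github.shyiko.mysql.binlog.event.Event;
import com.github.shyiko.mysql.binlog.event.EventType;
import com.github.shyiko.mysql.binlog.event.TableMapEventData;
import com.github.shyiko.mysql.binlog.event.WriteRowsEventData;

// Toy tracker mirroring the invariant above: a WRITE_ROWS event is only
// interpretable through the TABLE_MAP event that precedes it in the stream.
public class TableMapTracker {

    private final Map<Long, TableMapEventData> tableMapByTableId = new HashMap<>();

    public void onEvent(Event event) {
        EventType type = event.getHeader().getEventType();
        if (type == EventType.TABLE_MAP) {
            TableMapEventData data = event.getData();
            tableMapByTableId.put(data.getTableId(), data);
        } else if (EventType.isWrite(type)) {
            WriteRowsEventData data = event.getData();
            if (!tableMapByTableId.containsKey(data.getTableId())) {
                // This is the mid-transaction reconnect situation: the schema
                // needed to decode the row image was never received.
                throw new IllegalStateException(
                    "WRITE_ROWS for table " + data.getTableId()
                        + " arrived without a preceding TABLE_MAP event");
            }
            // ... decode rows using tableMapByTableId.get(data.getTableId())
        }
    }
}
```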