
Handle deletes on tombstone messages

Open ftisiot opened this issue 2 years ago • 1 comment

When using the JDBC sink connector, it would be useful to be able to delete rows on tombstone messages.

This would be very useful when doing CDC from another database. I implemented the case with PostgreSQL -> Debezium -> Kafka -> JDBC sink -> MySQL.

The pipeline works with "soft deletes" (adding a column `__is_deleted` and setting it to true) but not with hard deletes. Soft deletes can be achieved using Debezium's New Record State transformation by changing the `transforms.unwrap.delete.handling.mode` parameter.
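For reference, a minimal sketch of the source-side Debezium transform configuration being described (property names follow Debezium's ExtractNewRecordState documentation; the soft-delete column name and exact values depend on your setup):

```properties
# Debezium "New Record State" (unwrap) transformation
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
# "rewrite" keeps delete events and adds a soft-delete marker field;
# "drop" discards delete events entirely
transforms.unwrap.delete.handling.mode=rewrite
# keep tombstone records so a downstream sink could act on them
transforms.unwrap.drop.tombstones=false
```

With `rewrite`, downstream consumers only ever see soft-delete markers, which is why the sink never performs a hard delete.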

We could add this as an additional parameter so as not to break compatibility.

ftisiot avatar Jun 27 '22 10:06 ftisiot

I wonder if it is possible to achieve the same row-deletion capability as the Confluent JDBC connector. Having `delete.enabled=true` in combination with `pk.mode=record_key` is a well-documented way to process tombstones.
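For comparison, the documented Confluent JDBC sink combination looks roughly like this (a sketch of that connector's behavior, not this project's; the `pk.fields` value is a hypothetical key field name):

```properties
# Confluent JDBC sink: a tombstone record deletes the row matching the record key
delete.enabled=true
pk.mode=record_key
# hypothetical primary-key field taken from the record key
pk.fields=id
```

Adopting the same property names here would keep configurations portable between the two connectors.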

davengeo avatar Sep 19 '22 07:09 davengeo