Handle deletes on tombstone messages
When using the JDBC sink connector, it would be useful if we could delete rows on tombstone messages.
This would be especially useful when doing CDC from another database. I implemented this case with PG -> Debezium -> Kafka -> JDBC sink -> MySQL.
The pipeline works with "soft deletes" (adding a column `__is_deleted` and setting it to `true`), but not with hard deletes. The soft-delete behaviour is achieved using Debezium's New Record State transformation, by changing its `transforms.unwrap.delete.handling.mode` parameter.
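For reference, a minimal sketch of the sink-side configuration I am describing, assuming Debezium's `ExtractNewRecordState` SMT. The connector name, topic, connection details, and key field are placeholders; note that `delete.handling.mode=rewrite` adds a `__deleted` marker field by default, so the exact marker column name may vary:

```json
{
  "name": "mysql-jdbc-sink",
  "config": {
    "connector.class": "io.aiven.connect.jdbc.JdbcSinkConnector",
    "topics": "pg.public.customers",
    "connection.url": "jdbc:mysql://mysql:3306/target_db",
    "connection.user": "sink_user",
    "connection.password": "sink_password",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "pk.fields": "id",

    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.unwrap.delete.handling.mode": "rewrite",
    "transforms.unwrap.drop.tombstones": "true"
  }
}
```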
We could add this behaviour behind an additional configuration parameter so as not to break compatibility.
I wonder whether it is possible to reach the same capability to delete rows as the Confluent JDBC sink connector. There, setting `delete.enabled=true` in combination with `pk.mode=record_key` is a well-documented way to process tombstones.
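For comparison, this is roughly how that looks with the Confluent JDBC sink (names and connection details are placeholders; the option names follow Confluent's documentation). A tombstone record, i.e. a non-null key with a null value, is then translated into a DELETE against the primary-key columns:

```json
{
  "name": "mysql-jdbc-sink-hard-delete",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "pg.public.customers",
    "connection.url": "jdbc:mysql://mysql:3306/target_db",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "pk.fields": "id",
    "delete.enabled": "true"
  }
}
```

Note that for hard deletes the tombstones must actually reach the sink, so on the Debezium side `transforms.unwrap.drop.tombstones` would need to be set to `false`. Adopting the same `delete.enabled` semantics here, defaulting to `false`, would keep the current behaviour unchanged.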