kafka-connect-jdbc
Add Redshift dialect implementation
## Problem
The current implementation of kafka-connect-jdbc does not properly support Redshift. Adding multiple columns in a single ALTER TABLE statement does not work. Redshift converts the TEXT type to VARCHAR(256) by default, which is too short for many values. Dates coming from Debezium are stored as TEXT.
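The Debezium date issue comes from how Debezium encodes dates: with default settings, a DATE column arrives as an int32 counting days since the Unix epoch, under the schema name `io.debezium.time.Date`. A dialect that recognizes this schema name can emit a real DATE value instead of TEXT. A minimal sketch of the decoding, independent of the connector (class and method names here are illustrative):

```java
import java.time.LocalDate;

public class DebeziumDateDecode {
    // Debezium's io.debezium.time.Date encodes a date as the number of
    // days since 1970-01-01 (the Unix epoch), stored in an int32.
    public static LocalDate fromEpochDays(int epochDays) {
        return LocalDate.ofEpochDay(epochDays);
    }

    public static void main(String[] args) {
        System.out.println(fromEpochDays(0));      // 1970-01-01
        System.out.println(fromEpochDays(19723));  // 2024-01-01
    }
}
```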
## Solution
Create a dedicated Redshift dialect based on a modified PostgreSQL dialect.
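The core of such a dialect is two behavioral changes: emit an explicitly wide VARCHAR instead of TEXT (which Redshift silently narrows to VARCHAR(256)), and split a multi-column change into one ALTER TABLE per column, since Redshift only supports a single ADD COLUMN per statement. A self-contained sketch of that logic, with names that are illustrative rather than the actual kafka-connect-jdbc dialect API:

```java
import java.util.ArrayList;
import java.util.List;

public class RedshiftDialectSketch {
    // Hypothetical type mapping (not the real kafka-connect-jdbc API).
    // Redshift has no TEXT type and maps it to VARCHAR(256), so we emit
    // VARCHAR(65535) (Redshift's maximum) explicitly. Debezium's
    // io.debezium.time.Date schema becomes a proper DATE column.
    public static String sqlType(String schemaName, String schemaType) {
        if ("io.debezium.time.Date".equals(schemaName)) {
            return "DATE";
        }
        switch (schemaType) {
            case "STRING":  return "VARCHAR(65535)";
            case "INT32":   return "INTEGER";
            case "INT64":   return "BIGINT";
            case "BOOLEAN": return "BOOLEAN";
            default:        return "VARCHAR(65535)";
        }
    }

    // Redshift allows only one ADD COLUMN per ALTER TABLE, so a
    // multi-column change must be split into separate statements.
    // Each column is given as {name, schemaName, schemaType}.
    public static List<String> alterAddColumns(String table, List<String[]> columns) {
        List<String> statements = new ArrayList<>();
        for (String[] col : columns) {
            statements.add("ALTER TABLE " + table + " ADD COLUMN "
                    + col[0] + " " + sqlType(col[1], col[2]));
        }
        return statements;
    }
}
```

Splitting the DDL this way trades a single round trip for several, but it is the only form Redshift accepts when evolving a table schema with multiple new columns.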
Does this solution apply anywhere else?
- [x] yes
- [ ] no
If yes, where?
In the data mesh functionality of the EBAC organization (https://ebaconline.com.br/).
## Test Strategy
Added several unit test modules covering the Redshift dialect. Manual testing was carried out on the EBAC developer platform.
Testing done:
- [x] Unit tests
- [ ] Integration tests
- [ ] System tests
- [x] Manual tests
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.
Thanks for the review @jirufik. At this point we aren't looking at new dialects. @NathanNam, can you please confirm if this is something we want to add? Thanks!
I think there is no point in submitting improvements for this; as you can see, Confluent is not interested. There is a fork for Apache Kafka, maintained by another team, that we all need to switch to.
See https://github.com/aiven/jdbc-connector-for-apache-kafka