snowflake-kafka-connector
SNOW-630885 Schema Evolution
Schema evolution through alter table request sent from Kafka connector.
Add logic to alter the table when we see an "extra column" or "missing non-nullable value" error in the response from the Ingest SDK. We will then reopen the channel and retry insertion until the maximum number of attempts is reached.
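The alter-then-retry flow described above can be sketched roughly as follows. This is a hypothetical illustration only: `Channel`, `ConnectionService`, and the error-message matching are stand-ins, not the actual Ingest SDK or connector API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class SchemaEvolutionRetrySketch {

    static final int MAX_ATTEMPTS = 3; // illustrative retry cap

    // Stand-in for a streaming ingest channel; returns per-row error
    // messages, or an empty list when all rows were inserted.
    interface Channel {
        List<String> insertRows(List<Map<String, Object>> rows);
    }

    // Stand-in for the connector's connection service.
    interface ConnectionService {
        void alterTableAddColumns(String table, List<String> columns);
        void alterTableDropNonNullability(String table, List<String> columns);
        Channel reopenChannel(String table);
    }

    static boolean insertWithSchemaEvolution(
            ConnectionService conn, Channel channel, String table,
            List<Map<String, Object>> rows, List<String> extraColumns,
            List<String> nonNullableColumns) {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            List<String> errors = channel.insertRows(rows);
            if (errors.isEmpty()) {
                return true; // all rows landed
            }
            boolean evolved = false;
            for (String err : errors) {
                if (err.contains("extra column")) {
                    // Record carries columns the table lacks: add them.
                    conn.alterTableAddColumns(table, extraColumns);
                    evolved = true;
                } else if (err.contains("missing non-nullable value")) {
                    // Record omits a NOT NULL column: relax the constraint.
                    conn.alterTableDropNonNullability(table, nonNullableColumns);
                    evolved = true;
                }
            }
            if (!evolved) {
                return false; // not a schema error; do not retry
            }
            // Pick up the evolved table schema, then retry the same rows.
            channel = conn.reopenChannel(table);
        }
        return false; // gave up after MAX_ATTEMPTS
    }

    // Fake scenario: the first insert reports an "extra column" error,
    // the reopened channel succeeds.
    static boolean demo() {
        Channel failing = rows -> Arrays.asList("extra column: GENDER");
        Channel succeeding = rows -> new ArrayList<>();
        ConnectionService conn = new ConnectionService() {
            public void alterTableAddColumns(String t, List<String> c) {}
            public void alterTableDropNonNullability(String t, List<String> c) {}
            public Channel reopenChannel(String t) { return succeeding; }
        };
        return insertWithSchemaEvolution(conn, failing, "KAFKA_TABLE",
                new ArrayList<>(), Arrays.asList("GENDER"), new ArrayList<>());
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```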
Add tests related to schema evolution. Add tests related to the new behavior of automatic table creation with schema evolution enabled.
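For context, a rough sketch of the connector settings this feature is gated behind (property names follow the connector's documented streaming configuration; treat the exact keys and values as assumptions, not the tested setup from this PR):

```properties
# Schematization applies to the Snowpipe Streaming ingest path.
snowflake.ingestion.method=SNOWPIPE_STREAMING
# Enable schema detection/evolution so the connector can ALTER TABLE
# when records carry columns the target table does not yet have.
snowflake.enable.schematization=true
```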
Codecov Report
Merging #472 (99c3914) into master (edcdeb7) will decrease coverage by 3.56%. The diff coverage is 32.11%.
```diff
@@            Coverage Diff             @@
##           master     #472      +/-   ##
==========================================
- Coverage   87.13%   83.56%   -3.57%
==========================================
  Files          46       46
  Lines        3948     4199     +251
  Branches      414      444      +30
==========================================
+ Hits         3440     3509      +69
- Misses        350      522     +172
- Partials      158      168      +10
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| .../kafka/connector/SnowflakeSinkConnectorConfig.java | 86.54% <ø> (ø) | |
| ...owflake/kafka/connector/records/RecordService.java | 86.97% <ø> (+0.19%) | ⬆️ |
| ...snowflake/kafka/connector/SchematizationUtils.java | 35.05% <8.33%> (-14.95%) | ⬇️ |
| ...nnector/internal/SnowflakeConnectionServiceV1.java | 70.07% <26.36%> (-10.41%) | ⬇️ |
| ...ctor/internal/streaming/TopicPartitionChannel.java | 77.99% <41.32%> (-13.49%) | ⬇️ |
| ...tor/internal/streaming/SnowflakeSinkServiceV2.java | 75.58% <66.66%> (+0.28%) | ⬆️ |
| ...lake/kafka/connector/internal/SnowflakeErrors.java | 97.02% <100.00%> (+0.12%) | ⬆️ |
Please add a description to the PR explaining what the change is supposed to do. This PR is huge, and some background would go a long way.
Also, best practice is to split a PR this large into smaller ones so that you can get incremental feedback. (It is fine in this case, since this work is time-bounded.)
I made some changes related to the updates in the SDK. Since it won't pass the merge gate, I put them in a draft PR: https://github.com/snowflakedb/snowflake-kafka-connector/pull/479
Will try to see if we could use the schema embedded in the record instead of fetching it from the schema registry, which currently limits the converters that can be supported.