flink-cdc
[fix-issue-2676] Repair a snapshot-split bug:
When using splitOneUnevenlySizedChunk, we sometimes end up with one huge chunk (for example, when a database, table, or column uses the 'utf8mb4_general_ci' collation). For instance, with primary-key values like ['0000','1111','2222','3333','4444','aaaa','bbbb','cccc','ZZZZ',...], almost all values that start with a [a-zA-Z] character are lumped into a single big chunk.
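The skew described above can be reproduced with a small simulation. This is a hypothetical sketch, not the actual flink-cdc code: it assumes the database orders and filters keys by its case-insensitive collation (utf8mb4_general_ci), while the split loop on the driver side compares chunk boundaries in binary order. Since every lowercase letter sorts after 'Z' in binary, the driver thinks a boundary like 'bbbb' already exceeds the table maximum 'ZZZZ' and stops splitting, so all remaining letter-prefixed rows fall into one open-ended final chunk. All function and variable names here are invented for illustration.

```python
# Hypothetical sketch of the uneven-chunk-split skew (not the real flink-cdc code).
# Assumption: the DB compares keys case-insensitively (utf8mb4_general_ci),
# but the driver-side loop compares boundaries in binary (code-point) order.

CHUNK_SIZE = 2  # rows per chunk, kept tiny for the demo

def ci_sort_key(s):
    # Rough stand-in for utf8mb4_general_ci: case-insensitive comparison.
    return s.casefold()

def query_next_chunk_max(rows, chunk_start, chunk_size):
    """Mimics a DB query like:
    SELECT MAX(pk) FROM (SELECT pk ... WHERE pk > :start ORDER BY pk LIMIT :n)
    The database orders and filters by ITS collation (case-insensitive)."""
    remaining = sorted(
        (r for r in rows if ci_sort_key(r) > ci_sort_key(chunk_start)),
        key=ci_sort_key,
    )
    page = remaining[:chunk_size]
    return page[-1] if page else None

def split_chunks(rows):
    ordered = sorted(rows, key=ci_sort_key)
    chunk_max = ordered[-1]          # DB-side MAX(pk): 'ZZZZ' under ci collation
    splits, start = [], ordered[0]   # DB-side MIN(pk)
    end = query_next_chunk_max(rows, start, CHUNK_SIZE)
    # BUG: the driver compares in binary order (Python's default str order),
    # not in the column's collation. 'bbbb' <= 'ZZZZ' is False in binary,
    # so the loop exits as soon as a lowercase boundary appears.
    while end is not None and end <= chunk_max:
        splits.append((start, end))
        start = end
        end = query_next_chunk_max(rows, start, CHUNK_SIZE)
    splits.append((start, None))     # everything left becomes one big final chunk
    return splits

rows = ['0000', '1111', '2222', '3333', '4444', 'aaaa', 'bbbb', 'cccc', 'ZZZZ']
print(split_chunks(rows))
# → [('0000', '2222'), ('2222', '4444'), ('4444', None)]
```

With CHUNK_SIZE = 2 we would expect five chunks of roughly two rows each, but the binary boundary comparison stops the loop after two, leaving the last chunk ('4444', None) holding all five letter-prefixed rows. Comparing boundaries with the same (collation-aware) ordering the database uses removes the skew.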
Hi @AidenPerce, could you please rebase this PR onto the latest master branch before it can be merged? Renaming like com.ververica.cdc to org.apache.flink.cdc might be necessary.
I have rebased the branch of this PR, but there are some problems in the db2-connector test cases that prevent me from updating this PR. I don't know why the case doesn't pass; here is the check log: https://github.com/AidenPerce/flink-cdc-connectors/actions/runs/8963238039/job/24613239338
Hi @AidenPerce, sorry about the inconvenience. I think it's related to a glitch in Db2 incremental connector and should be fixed by #3283. Will try to get it merged asap.
Hi @AidenPerce, the Db2 CI fix has been merged into the master branch. Could you please rebase this PR and see if this problem persists? Thank you!
I hit a new problem: a MongoDB test case failed with NewlyAddedTableITCase.testRemoveAndAddCollectionsOneByOne:330->testRemoveAndAddCollectionsOneByOne:501 expected:<15> but was:<16>, see https://github.com/AidenPerce/flink-cdc-connectors/actions/runs/8965612344/job/24619438529
But when I re-ran the test case, it passed. Are there any issues with this test case?
Could @yuxiqian please help to merge it?
Hi @AidenPerce, are there any updates on this PR? Feel free to comment here if you need any help.
This pull request has been automatically marked as stale because it has not had recent activity for 60 days. It will be closed in 30 days if no further activity occurs.