*: ref all the jobs before sending to jobToWorkerCh
What problem does this PR solve?
Issue Number: close #64666
Problem Summary: In the generateAndSendJob function, we previously referenced one region job at a time and immediately sent it to jobToWorkerCh. If a job (especially an empty job) finished before the next job was referenced, the reference count could drop to 0, causing memoryIngestData to be cleaned unexpectedly. This prevented other jobs still being processed from accessing the data, leading to data inconsistency. In tidb-x scenarios, where region sizes are larger than the configured SplitRegionSize, some regions' jobs are empty. These empty jobs are processed quickly, which makes this issue more likely to occur.
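For illustration, here is a minimal, self-contained Go sketch of the old pattern. The types and names (`ingestData`, `regionJob`, the `refCnt` field) are simplified stand-ins for `memoryIngestData` and the region job, not the actual lightning code:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Hypothetical, simplified stand-ins for memoryIngestData and the region job;
// names and fields are illustrative, not the real TiDB types.
type ingestData struct{ refCnt atomic.Int64 }

func (d *ingestData) ref(wg *sync.WaitGroup) { wg.Add(1); d.refCnt.Add(1) }

func (d *ingestData) done(wg *sync.WaitGroup) {
	if d.refCnt.Add(-1) == 0 {
		fmt.Println("cleanup: ingestData released") // may run too early (see loop below)
	}
	wg.Done()
}

type regionJob struct{ data *ingestData }

func main() {
	data := &ingestData{}
	jobs := []*regionJob{{data: data}, {data: data}}
	var jobWg sync.WaitGroup
	jobToWorkerCh := make(chan *regionJob)

	// Worker that finishes (empty) jobs immediately.
	go func() {
		for j := range jobToWorkerCh {
			j.data.done(&jobWg)
		}
	}()

	// Old pattern: ref and send one job at a time. If the worker finishes the
	// first (empty) job before the loop refs the second one, refCnt drops back
	// to 0 and the cleanup above fires while the second job still needs data.
	for _, j := range jobs {
		j.data.ref(&jobWg)
		jobToWorkerCh <- j
	}
	close(jobToWorkerCh)
	jobWg.Wait()
}
```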
What changed and how does it work?
We modified the logic in generateAndSendJob to reference all jobs before sending them to jobToWorkerCh. Specifically, we first call job.ref(jobWg) for all generated jobs in a loop, and then send them to the channel one by one in a separate loop.
This ensures all jobs have their reference counts incremented before any of them are sent to the channel. Even if some jobs (like empty jobs) finish quickly and call done(), as long as other jobs still hold references, ingestData will not be cleaned prematurely. Only when all jobs complete and call done() will the reference count reach 0, at which point ingestData can be safely cleaned, preventing data inconsistency issues.
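Continuing the simplified sketch above (the function name `refThenSend` is illustrative, not the exact `generateAndSendJob` code), the fix splits the work into two loops:

```go
// Fixed pattern: take every reference before sending anything. The count can
// only return to 0 after all jobs have called done(), so cleanup cannot run
// while any job is still unsent or in flight.
func refThenSend(jobs []*regionJob, jobWg *sync.WaitGroup, jobToWorkerCh chan<- *regionJob) {
	for _, j := range jobs {
		j.data.ref(jobWg) // first loop: reference all jobs
	}
	for _, j := range jobs {
		jobToWorkerCh <- j // second loop: hand them to the workers
	}
}
```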
Check List
Tests
- [x] Unit test
- [x] Integration test: passed the tidb-x HA add index test
- [ ] Manual test (add detailed scripts or steps below)
- [ ] No need to test
- [ ] I checked and no code files have been changed.
Side effects
- [ ] Performance regression: Consumes more CPU
- [ ] Performance regression: Consumes more Memory
- [ ] Breaking backward compatibility
Documentation
- [ ] Affects user behaviors
- [ ] Contains syntax changes
- [ ] Contains variable changes
- [ ] Contains experimental features
- [ ] Changes MySQL compatibility
Release note
Please refer to Release Notes Language Style Guide to write a quality release note.
None
Codecov Report
:x: Patch coverage is 45.94595% with 20 lines in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 70.2114%. Comparing base (8aefecd) to head (7136ac0).
:warning: Report is 13 commits behind head on master.
Additional details and impacted files
@@ Coverage Diff @@
## master #64767 +/- ##
================================================
- Coverage 70.9290% 70.2114% -0.7176%
================================================
Files 1888 1890 +2
Lines 515954 522435 +6481
================================================
+ Hits 365961 366809 +848
- Misses 125582 132381 +6799
+ Partials 24411 23245 -1166
| Flag | Coverage Δ | |
|---|---|---|
| integration | 45.0599% <24.3243%> (-3.0946%) | :arrow_down: |
| unit | 66.6995% <21.6216%> (+1.0354%) | :arrow_up: |
Flags with carried forward coverage won't be shown. Click here to find out more.
| Components | Coverage Δ | |
|---|---|---|
| dumpling | 52.8700% <ø> (ø) | |
| parser | ∅ <ø> (∅) | |
| br | 39.3982% <ø> (-20.0267%) | :arrow_down: |
/retest
Please also note in the description that only partitioned tables can hit this issue: since we group keys by index ID rather than partition ID, a range may cross many partitions, and the region job created for such a range can be empty, which triggers this issue.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: D3Hunter, joechenrh
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~pkg/lightning/OWNERS~~ [D3Hunter]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/retest
@wjhuang2016: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-unit-test-ddlv1 | baf0d5483fbe3c247469b34659602c4e766b9ddc | link | true | /test pull-unit-test-ddlv1 |
| pull-br-integration-test | baf0d5483fbe3c247469b34659602c4e766b9ddc | link | true | /test pull-br-integration-test |
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/retest
In response to a cherrypick label: new pull request created to branch release-8.5: #65025.
But this PR has conflicts; please resolve them!