Fix flaky SpillPool channel test by synchronizing reader and writer tasks
## Which issue does this PR close?

- Closes #19058.

## Rationale for this change
The `spill_pool` channel test `test_reader_catches_up_to_writer` was flaky due to non-deterministic coordination between the reader and writer tasks. The test used time-based sleeps and polling on shared state to infer when the reader had started and when it had processed a batch. Under varying scheduler timing, this could cause the reader to miss events or observe them in a different order, leading to intermittent failures where the recorded event sequence did not match expectations (for example, observing 3 reads instead of 5).
Since this test verifies the correctness and wakeup behavior of the spill channel used by the spill pool, flakiness here undermines confidence in the spill mechanism and can cause spurious CI failures.
This PR makes the test coordination explicit and deterministic using oneshot channels, and also improves the usage example for the spill channel to show how to run writer and reader concurrently in a robust way.
## What changes are included in this PR?
- **Example: concurrent writer and reader usage**
  - Update the `spill_pool::channel` usage example to:
    - Spawn the writer and reader tasks concurrently instead of only spawning the writer.
    - Use `writer.push_batch(&batch)?` so the example returns a `Result` and propagates errors correctly.
    - Explicitly `drop(writer)` at the end of the writer task to finalize the spill file and wake the reader.
    - Use `tokio::join!` to await both tasks and map join errors into `DataFusionError::Execution`.
    - Assert that the reader sees all expected batches (`batches_read == 5`).
  - The updated example better demonstrates the intended concurrent usage pattern of the spill channel and ensures the reader is correctly woken when the writer finishes.
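The shape of the updated example can be sketched with std threads and an `mpsc` channel standing in for the spill channel and tokio tasks (all names below are illustrative, not the actual `spill_pool` API). The key point it shows is that dropping the writer handle is what closes the channel, letting the reader terminate after seeing every batch:

```rust
use std::sync::mpsc;
use std::thread;

// Returns how many batches the reader observed. A "batch" is just a Vec<i32>
// here, standing in for a RecordBatch.
fn run_writer_and_reader() -> usize {
    let (writer, reader) = mpsc::channel::<Vec<i32>>();

    // Writer task: push five batches, then drop the writer handle. Dropping
    // the writer closes the channel, mirroring the explicit drop(writer)
    // that finalizes the spill file and wakes the reader in the real example.
    let writer_task = thread::spawn(move || {
        for i in 0..5 {
            writer.send(vec![i; 4]).expect("reader hung up");
        }
        drop(writer);
    });

    // Reader task: drain batches until the writer side is closed.
    let reader_task = thread::spawn(move || {
        let mut batches_read = 0;
        while let Ok(_batch) = reader.recv() {
            batches_read += 1;
        }
        batches_read
    });

    writer_task.join().expect("writer panicked");
    reader_task.join().expect("reader panicked")
}

fn main() {
    // Mirrors the example's final assertion that all batches were seen.
    assert_eq!(run_writer_and_reader(), 5);
}
```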
- **Test: make `test_reader_catches_up_to_writer` deterministic**
  - Introduce two `oneshot` channels:
    - `reader_waiting_tx`/`rx` to signal when the reader has started and is pending on its first `next()` call.
    - `first_read_done_tx`/`rx` to signal when the reader has completed processing the first batch.
  - In the reader task:
    - Record `ReadStart` and send on `reader_waiting_tx` before awaiting `reader.next()`.
    - After successfully reading and recording the first batch, send on `first_read_done_tx`.
    - Then read the second batch as before.
  - In the test body:
    - Wait on `reader_waiting_rx` instead of sleeping for a fixed duration, ensuring the reader is actually pending before writing the first batch.
    - After the first write, wait on `first_read_done_rx` before issuing the second write.
  - This establishes a precise and documented sequence of events:
    1. Reader starts and pends on the first `next()`.
    2. First write occurs, waking the reader.
    3. Reader processes the first batch and signals completion.
    4. Second write occurs.
  - With this explicit synchronization, the event ordering in the test is stable and no longer depends on scheduler timing or arbitrary sleeps, eliminating the flakiness.
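The handshake pattern can be sketched outside of tokio with std threads and channels (std `mpsc` is used below as a stand-in for `tokio::sync::oneshot`; the channel names mirror the ones in the test, everything else is illustrative). Each write is gated on a signal from the reader, so the event log comes out the same on every run:

```rust
use std::sync::mpsc;
use std::thread;

// Returns the observed event log. It is deterministic because every step is
// gated on an explicit handshake rather than a sleep.
fn run_handshake() -> Vec<&'static str> {
    let (batch_tx, batch_rx) = mpsc::channel::<i32>(); // stand-in for the spill channel
    let (reader_waiting_tx, reader_waiting_rx) = mpsc::channel::<()>();
    let (first_read_done_tx, first_read_done_rx) = mpsc::channel::<()>();
    let (event_tx, event_rx) = mpsc::channel::<&'static str>();

    let reader = thread::spawn(move || {
        event_tx.send("ReadStart").unwrap();
        reader_waiting_tx.send(()).unwrap(); // "I am about to pend on next()"
        let _first = batch_rx.recv().unwrap(); // pends until the first write
        event_tx.send("ReadFirst").unwrap();
        first_read_done_tx.send(()).unwrap(); // "first batch fully processed"
        let _second = batch_rx.recv().unwrap();
        event_tx.send("ReadSecond").unwrap();
    });

    // Test body: each write waits for the matching reader-side signal.
    reader_waiting_rx.recv().unwrap(); // reader has started before the first write
    batch_tx.send(1).unwrap();
    first_read_done_rx.recv().unwrap(); // first batch processed before the second write
    batch_tx.send(2).unwrap();

    reader.join().unwrap();
    event_rx.iter().collect()
}

fn main() {
    assert_eq!(run_handshake(), ["ReadStart", "ReadFirst", "ReadSecond"]);
}
```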
## Are these changes tested?
Yes.
```bash
for i in {1..200}; do
  echo "Run #$i started"
  cargo test -p datafusion-physical-plan --profile ci --doc -q || break
  echo "Run #$i completed"
done
```
- The modified test `test_reader_catches_up_to_writer` continues to run as part of the existing `spill_pool` test suite, but now uses explicit synchronization instead of timing-based assumptions.
- The test has been exercised repeatedly to confirm that:
  - The expected read/write event sequence is stable across runs.
  - The intermittent assertion failures (e.g., mismatched read counts such as `3` vs `5`) no longer occur.
- The updated example code compiles and type-checks by returning `datafusion_common::Result` from both spawned tasks and from the combined `tokio::join!` result.
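The error-propagation shape described above (task results and join failures both funneled into a single `Result`) can be sketched with std threads and a placeholder error type in place of tokio's `JoinError` and `DataFusionError::Execution` (both substitutions are assumptions for the sake of a self-contained example):

```rust
use std::thread;

// Placeholder for DataFusionError::Execution.
#[derive(Debug, PartialEq)]
struct ExecError(String);

fn run_tasks() -> Result<usize, ExecError> {
    // Each task returns its own Result, as the updated example does.
    let writer = thread::spawn(|| -> Result<(), String> { Ok(()) });
    let reader = thread::spawn(|| -> Result<usize, String> { Ok(5) }); // batches_read

    // Joining can fail in two ways: the task panicked (join error) or the
    // task itself returned an error. Map both into the single error type,
    // mirroring how the example maps join errors into an execution error.
    writer
        .join()
        .map_err(|_| ExecError("writer task panicked".into()))?
        .map_err(ExecError)?;
    reader
        .join()
        .map_err(|_| ExecError("reader task panicked".into()))?
        .map_err(ExecError)
}

fn main() {
    assert_eq!(run_tasks(), Ok(5));
}
```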
## Are there any user-facing changes?
There are no behavior changes to the public API or spill pool semantics.
- The spill channel and spill pool behavior remains the same at runtime.
- Only the documentation/example and the internal test harness have been updated.
- No configuration flags or public methods were added, removed, or changed, so there are no breaking changes or documentation requirements beyond what is already updated inline.
## LLM-generated code disclosure
This PR includes LLM-generated code and comments. All LLM-generated content has been manually reviewed and tested.
Hi @xudong963, can you take a look at this PR?

@alamb Thanks for your review.