Push the runtime filter from HashJoin down to SeqScan or AM.
```
+----------+  AttrFilter   +------+  ScanKey   +------------+
| HashJoin | ------------> | Hash | ---------> | SeqScan/AM |
+----------+               +------+            +------------+
```
If "gp_enable_runtime_filter_pushdown" is on, three steps are performed:
Step 1. In ExecInitHashJoin(), try to find the mapping between the vars in the hash clauses and the vars in the SeqScan. If found, the mapping is saved in an AttrFilter and pushed down to the Hash node;
Step 2. While the hash table is being built, the range/bloom filters are created in the AttrFilter; when the build finishes, these filters are converted to a list of ScanKeys and pushed down to the SeqScan;
Step 3. If the AM supports SCAN_SUPPORT_RUNTIME_FILTER, these ScanKeys are pushed further down into the AM module; otherwise they are used to filter slots in the SeqScan;
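The bloom filtering in Steps 2 and 3 can be sketched as follows. This is a minimal, self-contained illustration only; the names `BloomFilter`, `bloom_add`, and `bloom_maybe_contains` (and the fixed filter size) are hypothetical, not the actual AttrFilter code. Keys from the hash-table build side are added to a bit array, and the probe side can discard early any tuple whose key definitely cannot match.

```c
#include <stdint.h>
#include <string.h>

/* Fixed size for illustration; the real filter is sized from row estimates. */
#define BLOOM_NBITS 1024

typedef struct BloomFilter
{
	uint8_t		bits[BLOOM_NBITS / 8];
} BloomFilter;

/* Two cheap integer hash mixes standing in for the real hash functions. */
static uint32_t
bloom_hash1(uint32_t v)
{
	v ^= v >> 16;
	v *= 0x85ebca6b;
	v ^= v >> 13;
	return v;
}

static uint32_t
bloom_hash2(uint32_t v)
{
	v *= 0xc2b2ae35;
	v ^= v >> 16;
	return v;
}

/* Build side: called for every join key while the hash table is built. */
static void
bloom_add(BloomFilter *bf, uint32_t key)
{
	uint32_t	a = bloom_hash1(key) % BLOOM_NBITS;
	uint32_t	b = bloom_hash2(key) % BLOOM_NBITS;

	bf->bits[a / 8] |= (uint8_t) (1 << (a % 8));
	bf->bits[b / 8] |= (uint8_t) (1 << (b % 8));
}

/*
 * Probe side: false means the key is definitely not in the hash table,
 * so the scan can skip the tuple; true means "maybe present" (bloom
 * filters allow false positives but never false negatives).
 */
static int
bloom_maybe_contains(const BloomFilter *bf, uint32_t key)
{
	uint32_t	a = bloom_hash1(key) % BLOOM_NBITS;
	uint32_t	b = bloom_hash2(key) % BLOOM_NBITS;

	return (bf->bits[a / 8] & (1 << (a % 8))) != 0 &&
		   (bf->bits[b / 8] & (1 << (b % 8))) != 0;
}
```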
perf: CPU E5-2680 v2 (10 cores), 32 GB memory, 3 segments
- tpcds 10s: off 865s, on 716s (~17% faster)
- tpcds 100s: off 4592s, on 3751s (~18% faster)
Fixes #ISSUE_Number
What does this PR do?
Type of Change
- [ ] Bug fix (non-breaking change)
- [ ] New feature (non-breaking change)
- [ ] Breaking change (fix or feature with breaking changes)
- [ ] Documentation update
Breaking Changes
Test Plan
- [ ] Unit tests added/updated
- [ ] Integration tests added/updated
- [ ] Passed `make installcheck`
- [ ] Passed `make -C src/test installcheck-cbdb-parallel`
Impact
Performance:
User-facing changes:
Dependencies:
Checklist
- [ ] Followed contribution guide
- [ ] Added/updated documentation
- [ ] Reviewed code for security implications
- [ ] Requested review from cloudberry committers
Additional Context
⚠️ To skip CI: Add [skip ci] to your PR title. Only use when necessary! ⚠️
Looks interesting. And I have some questions to discuss.
- Besides the seqscan, can the runtime filter apply to other types of scan, such as index scan?
- It looks like the runtime filter can be used only when the hashjoin node and seqscan node run in the same process, which means the tables should have the same distribution policy on the join columns, or one of the tables is replicated.
There is code changed in MultiExecParallelHash; please add some parallel tests with the runtime filter.
got it.
> Besides the seqscan, can the runtime filter apply to other types of scan, such as index scan?
Theoretically, it is feasible to apply runtime filters to operators such as Index Scan. However, because Index Scan already reduces the data volume by leveraging an optimized storage structure, the additional gains from a runtime filter there would likely be minimal, so I don't expect significant performance benefits from it.
In subsequent work, when we discover that other scan operators can achieve notable performance improvements from pushdown runtime filters, we will support these operators. Our focus will be on operators where runtime filters can substantially decrease the amount of data processed early in the query execution, leading to more pronounced performance enhancements.
> It looks like the runtime filter can be used only when the hashjoin node and seqscan node run in the same process, which means the tables should have the same distribution policy on the join columns, or one of the tables is replicated.
Yes, the current pushdown runtime filter only supports in-process pushdown, which means that the Hash Join and SeqScan need to be within the same process. The design and implementation of cross-process pushdown runtime filters are much more complex.
This limitation arises because coordinating and sharing data structures like Bloom filters or other runtime filters across different processes involves additional challenges such as inter-process communication (IPC), synchronization, and ensuring consistency and efficiency of the filters across process boundaries. Addressing these issues requires a more sophisticated design that can handle the complexities of distributed computing environments.
Hi, with gp_enable_runtime_filter_pushdown = on, execute SQL below will get a crash:
```sql
gpadmin=# show gp_enable_runtime_filter_pushdown;
 gp_enable_runtime_filter_pushdown
-----------------------------------
 on
(1 row)

CREATE TABLE test_tablesample (dist int, id int, name text) WITH (fillfactor=10) DISTRIBUTED BY (dist);
-- use fillfactor so we don't have to load too much data to get multiple pages
-- Changed the column length in order to match the expected results based on relation's blocksz
INSERT INTO test_tablesample SELECT 0, i, repeat(i::text, 875) FROM generate_series(0, 9) s(i) ORDER BY i;
INSERT INTO test_tablesample SELECT 3, i, repeat(i::text, 875) FROM generate_series(10, 19) s(i) ORDER BY i;
INSERT INTO test_tablesample SELECT 5, i, repeat(i::text, 875) FROM generate_series(20, 29) s(i) ORDER BY i;

EXPLAIN (COSTS OFF)
SELECT id FROM test_tablesample TABLESAMPLE SYSTEM (50) REPEATABLE (2);

FATAL:  Unexpected internal error (assert.c:48)
DETAIL:  FailedAssertion("IsA(planstate, SeqScanState)", File: "explain.c", Line: 4154)
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
psql (14.4, server 14.4)
```
Thanks, I'll reproduce the issue and fix it.
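The failed assertion suggests the EXPLAIN code assumed every node carrying runtime-filter instrumentation is a SeqScanState, while TABLESAMPLE produces a SampleScanState. One defensive pattern, sketched with simplified stand-ins for PostgreSQL's node-tag machinery (the types and function name here are hypothetical, not the actual explain.c code), is to skip inapplicable nodes instead of asserting:

```c
#include <stddef.h>

/* Simplified stand-ins for PostgreSQL's NodeTag / PlanState (hypothetical). */
typedef enum NodeTag
{
	T_SeqScanState,
	T_SampleScanState,
	T_IndexScanState
} NodeTag;

typedef struct PlanState
{
	NodeTag		type;
} PlanState;

typedef struct SeqScanState
{
	PlanState	ps;
	long		filtered_rows;	/* runtime-filter instrumentation */
} SeqScanState;

/*
 * Instead of Assert(IsA(planstate, SeqScanState)) -- which crashes when
 * TABLESAMPLE produces a SampleScanState -- return a sentinel for nodes
 * that cannot carry runtime-filter instrumentation.
 */
static long
runtime_filter_rows(const PlanState *planstate)
{
	if (planstate == NULL || planstate->type != T_SeqScanState)
		return -1;				/* not applicable: no assertion failure */
	return ((const SeqScanState *) planstate)->filtered_rows;
}
```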
Thanks for your detailed explanation.
> Besides the seqscan, can the runtime filter apply to other types of scan, such as index scan?

> Theoretically, it is feasible to apply runtime filters to operators such as Index Scan. However, because Index Scan already reduces data volume by leveraging an optimized storage structure, the performance gains from applying runtime filters to Index Scan would likely be minimal.
Makes sense. When doing a hashjoin, index scan or index-only scan is often not used on the probe side anyway.
> In subsequent work, when we discover that other scan operators can achieve notable performance improvements from pushdown runtime filters, we will support these operators.
> It looks like the runtime filter can be used only when the hashjoin node and seqscan node run in the same process, which means the tables should have the same distribution policy on the join columns, or one of the tables is replicated.

> Yes, the current pushdown runtime filter only supports in-process pushdown, which means that the Hash Join and SeqScan need to be within the same process. Cross-process pushdown is much more complex, since sharing structures like Bloom filters across processes involves IPC, synchronization, and consistency across process boundaries.
Exactly, and any lock used to solve that problem may even hurt performance.
```sql
explain analyze
SELECT count(t1.c3) FROM t1, t3 WHERE t1.c1 = t3.c1;
                                        QUERY PLAN
------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=1700.07..1700.08 rows=1 width=8) (actual time=32119.566..32119.571 rows=1 loops=1)
   ->  Gather Motion 3:1  (slice1; segments: 3)  (cost=1700.02..1700.07 rows=3 width=8) (actual time=30.967..32119.550 rows=3 loops=1)
         ->  Partial Aggregate  (cost=1700.02..1700.03 rows=1 width=8) (actual time=32119.131..32119.135 rows=1 loops=1)
               ->  Hash Join  (cost=771.01..1616.68 rows=33334 width=4) (actual time=14.059..32116.962 rows=33462 loops=1)
                     Hash Cond: (t3.c1 = t1.c1)
                     Extra Text: (seg0)   Hash chain length 1.0 avg, 3 max, using 32439 of 524288 buckets.
                     ->  Seq Scan on t3  (cost=0.00..387.34 rows=33334 width=4) (actual time=0.028..32089.490 rows=33462 loops=1)
                     ->  Hash  (cost=354.34..354.34 rows=33334 width=8) (actual time=13.257..13.259 rows=33462 loops=1)
                           Buckets: 524288  Batches: 1  Memory Usage: 5404kB
                           ->  Seq Scan on t1  (cost=0.00..354.34 rows=33334 width=8) (actual time=0.180..4.877 rows=33462 loops=1)
 Planning Time: 0.227 ms
```
The runtime_filter has been pushed down to the seqscan on t3, but 'explain analyze' doesn't print it.
```
\d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 c1     | integer |           |          |
 c2     | integer |           |          |
 c3     | integer |           |          |
 c4     | integer |           |          |
 c5     | integer |           |          |
Checksum: t
Indexes:
    "t1_c2" btree (c2)
Distributed by: (c1)

\d t3
                 Table "public.t3"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 c1     | integer |           |          |
 c2     | integer |           |          |
 c3     | integer |           |          |
 c4     | integer |           |          |
 c5     | integer |           |          |
Distributed by: (c1)
```
Thanks for your test case. Based on it, I rewrote the code to ensure that the debug info is always displayed, even when the number of filtered rows is zero, and added the test case to gp_runtime_filter.sql too. Fixed in https://github.com/apache/cloudberry/commit/98dac6dfc7d5e44e111aa16bdf5948d07ee2eb00
Thanks for your test case. I fixed it in https://github.com/apache/cloudberry/commit/98dac6dfc7d5e44e111aa16bdf5948d07ee2eb00 and added the test case to gp_runtime_filter.sql too.
Hi @zhangyue-hashdata I see that previous runtime filter implementation relies on some cost model at try_runtime_filter(). Do I understand it correctly, that this PR does not do any cost evaluation? Also for TPC-H/TPC-DS can you provide results for each query separately?
Asking mostly out of curiosity, I see here are quite a few reviewers here already :)
Basically, you're correct. Our goal is to filter out as much data as possible right at the point of data generation, but a full cost evaluation would be very complex, so we only make a simple estimation based on the row estimate and work memory when creating the Bloom filter. I have also placed the detailed per-query results for TPC-DS 10s in the PR description.
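A "simple estimation based on rows and work memory" might look roughly like this. This is a minimal sketch under illustrative assumptions; the function name and the 10-bits-per-row heuristic (about a 1% false-positive rate with a near-optimal number of hash functions) are hypothetical, not the actual implementation:

```c
#include <stdint.h>

/*
 * Size the bloom filter from the planner's build-side row estimate,
 * aiming for ~10 bits per expected row, but never exceed the
 * work-memory budget.
 */
static uint64_t
bloom_size_bits(uint64_t est_rows, uint64_t work_mem_bytes)
{
	uint64_t	wanted_bits = est_rows * 10;
	uint64_t	budget_bits = work_mem_bytes * 8;

	return (wanted_bits < budget_bits) ? wanted_bits : budget_bits;
}
```

With small inputs the row estimate dominates; with huge inputs the work-memory cap wins, trading a higher false-positive rate for bounded memory.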
> There is code changed in MultiExecParallelHash; please add some parallel tests with the runtime filter.
Fixed in https://github.com/apache/cloudberry/pull/724/commits/7ab040ae178e9bb616bd75e0488aa6a2293d4183
from tpcds 10s details table, there are some bad cases.
> from tpcds 10s details table, there are some bad cases: 21, 24, 30, 42, 49, 54, 68-1, 99
I retested the SQL statements that exhibited performance regression, and the latest results show no noticeable difference when toggling gp_enable_runtime_filter_pushdown. So I suspect the apparent regression was an artifact of the testing method: previously I ran the entire suite of 99 TPC-DS queries once with gp_enable_runtime_filter_pushdown enabled, and then once with it disabled.
A more appropriate method is to execute the same SQL statement multiple times with gp_enable_runtime_filter_pushdown enabled and disabled respectively, and then compare the averages of those runs. I will retest with this method and check whether any regression remains.
I have retested the performance of tpcds 10s using the previously mentioned testing method. Please see the description part for the latest results.
cool, what about tpcds 100 sf ?
Please see the description part for the results of tpcds 100s.
@zhangyue-hashdata have you tried benchmarks on the recent builds? The last time I ran TPC-H against cloudberry, 12/22 queries used legacy optimizer. I guess for TPC-DS it's even worse. But now, after cherry-picks from gpdb master, more queries should be using ORCA optimizer. I wonder if there is still the same benefit from runtime bloom filters.
Btw, can you guys share TPC-DS and TPC-H toolkit you're using to benchmark cloudberry?
Not yet recently. Here is the toolkit for TPC-DS; hope it helps :-) toolkit.tar.gz
@zhangyue-hashdata please squash the commits into one commit.
got it