perf: improve the performance of TPC-H Q9
This query takes 104s on the dataset of scale factor 1. DuckDB only needs <1s. We should investigate the main overhead and optimize it.
I cloned RisingLight and took a look today. I'm interested in the execution plans that DuckDB and RisingLight generate for Q9. I haven't had a chance to look closely yet, but I noticed that the tree generated by DuckDB is more "balanced", while the tree generated by RisingLight is somewhat "skewed". I'm happy to continue working on this issue :smiley:
Hi @xiaguan , thanks for your interest! The reason for the "skewed" tree in RisingLight is that:
- The binder generates a left-deep tree of join nodes in the execution plan.
- In the optimizer, although we have a join-reordering rule to rotate this tree, this optimization does not seem to work as expected.
https://github.com/risinglightdb/risinglight/blob/604b4a1a2b2e7bbc2ac81341a0f99d9c65be3faf/src/planner/rules/plan.rs#L95-L99 As you can see, this rule is conditional and may not work if the condition is not met after predicate pushdown.
We can change it to an unconditional rule so that all combinations can be covered.
rw!("join-reorder";
"(join ?type ?cond2 (join ?type ?cond1 ?left ?mid) ?right)" =>
"(join ?type (and ?cond1 ?cond2) ?left (join ?type true ?mid ?right))"
),
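For context, here is a minimal standalone sketch of how such a `rw!` rule feeds into egg's equality saturation and extraction. It uses a toy language defined with `define_language!` rather than RisingLight's actual plan language, and `AstSize` is only a placeholder cost function, so the names and node shapes below are assumptions, not the real optimizer code:

```rust
use egg::{rewrite as rw, *};

// Toy plan language: `join` takes (type, condition, left, right).
define_language! {
    enum JoinLang {
        "join" = Join([Id; 4]),
        "and" = And([Id; 2]),
        Bool(bool),
        Symbol(Symbol),
    }
}

fn rules() -> Vec<Rewrite<JoinLang, ()>> {
    vec![rw!("join-reorder";
        "(join ?type ?cond2 (join ?type ?cond1 ?left ?mid) ?right)" =>
        "(join ?type (and ?cond1 ?cond2) ?left (join ?type true ?mid ?right))"
    )]
}

fn main() {
    // A left-deep plan: (A join B) join C, with c1 on the inner join and c2 on the outer.
    let expr: RecExpr<JoinLang> = "(join inner c2 (join inner c1 A B) C)".parse().unwrap();
    // Saturate the e-graph so both join orders land in the same e-class, then extract the
    // best plan. With the placeholder AstSize cost the original plan wins; a real
    // cardinality-based cost function is where a better join order would be chosen.
    let runner = Runner::default().with_expr(&expr).run(&rules());
    let best = Extractor::new(&runner.egraph, AstSize).find_best(runner.roots[0]).1;
    println!("{best}");
}
```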
Another defect of the optimizer is that it does not currently swap the children of a join node. That is to say, (A join B) join C cannot be reordered to B join (A join C).
A possible solution may be like this:
rw!("join-swap";
"(proj ?exprs (join ?type ?cond ?left ?right))" =>
"(proj ?exprs (join ?type ?cond ?right ?left))"
),
Notice that we put a `proj` node above the join; otherwise the new plan would have a different column order from the old plan, which breaks the equality semantics.
These are some of my ideas. But they have not been proven to work. If you are interested, feel free to continue this work!
https://github.com/risinglightdb/risinglight/blob/a0882cd996cba3a0bff74da5550c4566ed778a42/src/planner/rules/rows.rs#L24
- The first issue is that the estimation of scans in the `cost()` function is not implemented correctly. This means that during construction of the e-graph, the number of rows of a data table defaults to 1000. We may be able to solve this by using a global binder?
- The second issue is that we cannot control our memory usage. When running Q9 on the 1 GB dataset on my machine, it can take up to 20 GB of memory.

I will continue to try to solve these two problems.
For the first issue, yes, we should provide row number information from storage to the optimizer. Currently, row number statistics are available in disk storage but not in memory storage.
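As a rough illustration of that first point, the scan estimate could consult per-table statistics instead of a hard-coded constant. The names below (`TableStats`, `scan_rows`) are hypothetical and not RisingLight's actual API; the idea is only that the current default of 1000 rows becomes a fallback for when no statistics exist (e.g. in-memory storage):

```rust
use std::collections::HashMap;

/// Hypothetical statistics handle, populated from disk-storage metadata
/// (not RisingLight's actual API).
struct TableStats {
    row_counts: HashMap<String, f32>, // table name -> estimated row count
}

impl TableStats {
    /// Estimated cardinality of a scan. Falls back to the current default of
    /// 1000 rows when no statistics are available (e.g. in-memory storage).
    fn scan_rows(&self, table: &str) -> f32 {
        self.row_counts.get(table).copied().unwrap_or(1000.0)
    }
}

fn main() {
    let stats = TableStats {
        row_counts: HashMap::from([("lineitem".to_string(), 6_000_000.0)]),
    };
    assert_eq!(stats.scan_rows("lineitem"), 6_000_000.0);
    assert_eq!(stats.scan_rows("unknown_table"), 1000.0);
}
```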
For the second issue, we can reduce the memory usage by optimizing the hash join executor. Currently it collects all input chunks from both sides at the beginning (code). This can be refactored into a streaming style. Besides, a better join order may also help reduce the memory usage.
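A rough sketch of that streaming style, with made-up `Row`/`Chunk` types standing in for the executor's actual data chunks: only the build side is materialized into a hash table, while probe chunks are consumed one at a time and joined rows are emitted immediately, so peak memory is roughly the build side plus one probe chunk.

```rust
use std::collections::HashMap;

// Placeholder types: a row is a vector of values, a chunk is a batch of rows.
type Row = Vec<i64>;
type Chunk = Vec<Row>;

/// Build phase: fully materialize the (smaller) build side, keyed by `key_col`.
fn build_table(build: impl Iterator<Item = Chunk>, key_col: usize) -> HashMap<i64, Vec<Row>> {
    let mut table: HashMap<i64, Vec<Row>> = HashMap::new();
    for chunk in build {
        for row in chunk {
            table.entry(row[key_col]).or_default().push(row);
        }
    }
    table
}

/// Probe phase: stream probe chunks one at a time and emit joined chunks
/// immediately, instead of collecting all input chunks up front.
fn probe_stream<'a>(
    table: &'a HashMap<i64, Vec<Row>>,
    probe: impl Iterator<Item = Chunk> + 'a,
    key_col: usize,
) -> impl Iterator<Item = Chunk> + 'a {
    probe.map(move |chunk| {
        let mut out = Chunk::new();
        for row in &chunk {
            if let Some(matches) = table.get(&row[key_col]) {
                for build_row in matches {
                    let mut joined = build_row.clone();
                    joined.extend_from_slice(row);
                    out.push(joined);
                }
            }
        }
        out
    })
}

fn main() {
    let build = vec![vec![vec![1, 10], vec![2, 20]]].into_iter();
    let probe = vec![vec![vec![1, 100]], vec![vec![3, 300]]].into_iter();
    let table = build_table(build, 0);
    for chunk in probe_stream(&table, probe, 0) {
        println!("{chunk:?}");
    }
}
```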
rw!("join-swap";
"(proj ?exprs (join ?type ?cond ?left ?right))" =>
"(proj ?exprs (join ?type ?cond ?right ?left))"
),
The rule you provided above works. My other attempts have all failed. I plan to take a look at other problems.