
[VL] TPCDS Performance drop after new operator "VeloxAppendBatches"

Open Surbhi-Vijay opened this issue 1 year ago • 16 comments

Backend

VL (Velox)

Bug description

We have observed a performance drop in TPCDS runs after patch #6009.

Top regressing Queries

| QueryId | New runtime | Previous runtime |
| --- | --- | --- |
| query64 | 50712 | 22841 |
| query24a | 44883 | 27452 |
| query24b | 45003 | 28742 |

When we disable the feature by setting "spark.gluten.sql.columnar.backend.velox.coalesceBatchesBeforeShuffle": "false", we see the same runtime as in the previous runs.
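A minimal sketch of how the feature can be toggled for an A/B comparison, assuming a SparkSession-based setup (the Gluten config key is the one above; everything else is illustrative):

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: disable batch coalescing before shuffle to compare against the
// previous behavior. The Gluten config key is taken from this report.
val spark = SparkSession.builder()
  .appName("tpcds-coalesce-batches-ab-test") // illustrative name
  .config("spark.gluten.sql.columnar.backend.velox.coalesceBatchesBeforeShuffle", "false")
  .getOrCreate()
```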

We are using an Azure cluster and reading data from a remote storage account. The regression shows up in VeloxAppendBatches, which in some instances takes a lot of time.

Below are the plan snippets from query64

[screenshots: query64 plan snippets]

Spark version

Spark-3.4.x

Spark configurations

No response

System information

No response

Relevant logs

No response

Surbhi-Vijay avatar Aug 02 '24 11:08 Surbhi-Vijay

cc @zhztheplayer @zhli1142015

Surbhi-Vijay avatar Aug 02 '24 11:08 Surbhi-Vijay

Thank you for reporting.

IIUC, the operator itself doesn't seem to be the one that slows down your query? For example, the slowest tasks took just 438ms and 166ms.

Would you like to share more of the DAG comparisons? In particular, could you check the shuffle write time as well?

zhztheplayer avatar Aug 02 '24 13:08 zhztheplayer

So Q64's elapsed time went from 22841 to 50712. The append operator's input is 4.15 rows per batch and its output is 3070 rows per batch, which benefits performance in our tests.

The operator's overhead is a sequential memcpy, which definitely can't directly cause the 2x elapsed-time increase on its own. Some side effect must be causing this.
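For context, a rough conceptual sketch of the coalescing idea (a hypothetical `Batch` type in Scala; the real operator is a memcpy-based concatenation in native code, not this):

```scala
// Conceptual sketch only, not the Velox implementation: accumulate small
// batches and emit one large batch once a target row count is reached.
// `Batch` is a hypothetical stand-in that carries just a row count.
case class Batch(numRows: Int)

def coalesceBatches(input: Iterator[Batch], targetRows: Int): Iterator[Batch] = {
  val output = scala.collection.mutable.ArrayBuffer.empty[Batch]
  var pendingRows = 0
  for (b <- input) {
    pendingRows += b.numRows
    if (pendingRows >= targetRows) {
      output += Batch(pendingRows) // stands in for the memcpy-based concat
      pendingRows = 0
    }
  }
  if (pendingRows > 0) output += Batch(pendingRows) // flush the remainder
  output.iterator
}
```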

@zhli1142015 Do you still have the tool I shared? Can you get the chart of each stage in traceview? Let's see which stage caused the issue and reproduce that stage natively.

[screenshot attached]

FelixYBW avatar Aug 02 '24 22:08 FelixYBW

Thanks @FelixYBW for explaining. I am trying to come up with a minimal query to showcase the impact. If I do not achieve the same then will post the detailed analysis of Q64 and Q24.

Surbhi-Vijay avatar Aug 05 '24 13:08 Surbhi-Vijay

We investigated the regressing queries. The regression is not directly caused by "VeloxAppendBatches" but rather by plan changes that occur when the reported data size is reduced by this feature.

Analysis for query24b

When the coalesceBatch feature is enabled, the data size gets reduced, which prompts AQE to change the join type from SHJ to BHJ.

Left side => with CoalesceBatch enabled
Right side => with CoalesceBatch disabled

[screenshot: query24b plan comparison]

Here, ColumnarBroadcastExchange takes an additional 8s after the ColumnarExchange. [screenshot: ColumnarBroadcastExchange metrics]

In another place, we observe that the build side of one of the shuffle hash joins changed for the same reason.
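One way to confirm that the SHJ -> BHJ switch, rather than the operator itself, drives the regression is to keep the feature on but forbid the broadcast conversion. A sketch using standard Spark settings (not Gluten-specific, and untested on this workload):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: keep batch coalescing enabled but disable automatic broadcast joins,
// so AQE cannot convert SHJ to BHJ when the reported data size shrinks.
// These are standard Spark configs; the values are illustrative.
val spark = SparkSession.builder()
  .appName("q24b-join-strategy-isolation")
  .config("spark.gluten.sql.columnar.backend.velox.coalesceBatchesBeforeShuffle", "true")
  .config("spark.sql.autoBroadcastJoinThreshold", "-1")          // no static BHJ
  .config("spark.sql.adaptive.autoBroadcastJoinThreshold", "-1") // no AQE BHJ conversion
  .getOrCreate()
```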

Surbhi-Vijay avatar Aug 09 '24 10:08 Surbhi-Vijay

@zhztheplayer @marin-ma Why did the patch cause the plan change? Looks like a bug. Do we use the number of batches instead of the number of rows when creating the plan?

FelixYBW avatar Aug 09 '24 19:08 FelixYBW

> So Q64's elapsed time went from 22841 to 50712. The append operator's input is 4.15 rows per batch and its output is 3070 rows per batch, which benefits performance in our tests.
>
> The operator's overhead is a sequential memcpy, which definitely can't directly cause the 2x elapsed-time increase on its own. Some side effect must be causing this.
>
> @zhli1142015 Do you still have the tool I shared? Can you get the chart of each stage in traceview? Let's see which stage caused the issue and reproduce that stage natively.
>
> [screenshot attached]

Hi! Could you share the profiling tool in this image? Thanks!

DamonZhao-sfu avatar Aug 12 '24 18:08 DamonZhao-sfu

@Surbhi-Vijay

Was there a large difference in shuffle write size?

zhztheplayer avatar Aug 14 '24 08:08 zhztheplayer

@Surbhi-Vijay Could you share the metrics details of ColumnarExchange with "VeloxAppendBatches" enabled/disabled?

marin-ma avatar Aug 14 '24 09:08 marin-ma

> @Surbhi-Vijay Could you share the metrics details of ColumnarExchange with "VeloxAppendBatches" enabled/disabled?

Below "ColumnarExchange" is for join (store_sales join customer) which converted to BHJ from SHJ in q24b when veloxAppendBatches is enabled.

Left side => with CoalesceBatch enabled
Right side => with CoalesceBatch disabled
[screenshot: ColumnarExchange metrics comparison]

Surbhi-Vijay avatar Aug 14 '24 18:08 Surbhi-Vijay

> @Surbhi-Vijay
>
> Was there a large difference in shuffle write size?

It is reporting the same numbers; there does not seem to be any difference in shuffle write size. [screenshot: shuffle write metrics]

Surbhi-Vijay avatar Aug 14 '24 19:08 Surbhi-Vijay

@Surbhi-Vijay The "data size" metric changed from 58.7M to 5.2M. This could cause a plan change since the Join operation relies on this value to decide whether to use BHJ. However, a 10x reduction in "data size" seems unreasonable to me.

Could you also share the Spark configurations? I've compared TPCDS q24b with VeloxResizeBatches enabled/disabled, but I don't see such a stage producing a different data size.
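A sketch of one way to capture the final post-AQE plan under both settings, to attach alongside the metrics (the config key is from this issue; the helper and query string are illustrative):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: run the same query with the flag flipped and capture the final
// (post-AQE) physical plan text so the two plans can be diffed.
def finalPlan(spark: SparkSession, coalesceEnabled: Boolean, query: String): String = {
  spark.conf.set(
    "spark.gluten.sql.columnar.backend.velox.coalesceBatchesBeforeShuffle",
    coalesceEnabled.toString)
  val df = spark.sql(query)
  df.collect() // materialize so AQE finalizes the plan
  df.queryExecution.executedPlan.toString
}
```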

marin-ma avatar Aug 16 '24 12:08 marin-ma

@Surbhi-Vijay Any update? It doesn't make sense that the merge-batch operator impacts the shuffle data size.

FelixYBW avatar Aug 19 '24 22:08 FelixYBW

@FelixYBW @marin-ma I see this behavior of reduced data size wherever VeloxAppendBatches is applied.

All other metrics (apart from data size) are almost the same, except #batches and #rows/batch, which are expected to change.

The shuffle stage also shows almost exactly the same metrics. At this point, I suspect there may be a bug in populating the data size when this feature is enabled.

Surbhi-Vijay avatar Aug 20 '24 11:08 Surbhi-Vijay

Do we have a solution for this? Does https://github.com/apache/incubator-gluten/pull/6670 solve this issue?

JunhyungSong avatar Oct 25 '24 22:10 JunhyungSong

> Do we have a solution for this? Does #6670 solve this issue?

I think so. Would you like to help give it a try? If it works, then we can close this issue.

cc @Surbhi-Vijay

zhztheplayer avatar Oct 26 '24 01:10 zhztheplayer