Results: 100 comments of Yuan

@akiradeveloper thanks for the detailed explanation. If I understand this correctly, under a power failure: - This will only affect the write-back policy - Since you're trying to aggregate io...

@akiradeveloper thanks a lot! One more question: if the application sends a flush request on each 4KB write, would the flush-job operate at the per-write (4KB) level or the 512KB level?

@akiradeveloper Thanks for the detailed answer! It helps a lot.

@haojinIntel thanks for the clarification! Ideally LLVM 9 should also work.

@huangxiaopingRD could you please try the conda-based env? https://github.com/oap-project/gazelle_plugin/tree/main/conda

Hi @Manoj-red-hat, are you testing with a **Partitioned** TPCH dataset? Using a partitioned table introduces more Spark scheduling overhead, as it generates too many tasks, especially on Q1 with small...

@Manoj-red-hat it looks like there are many small tasks in the 1st stage; could you please try a larger partition size? e.g., `spark.sql.files.maxPartitionBytes=1073741824`
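To see why raising `spark.sql.files.maxPartitionBytes` shrinks the task count in the scan stage, here is a minimal sketch of the rough relationship (for splittable files, the scan stage produces roughly one task per `maxPartitionBytes` of input; the 100 GiB dataset size below is an assumption purely for illustration):

```python
import math

def approx_scan_tasks(input_bytes: int, max_partition_bytes: int) -> int:
    """Rough estimate of scan-stage tasks: one per maxPartitionBytes chunk
    of splittable input (ignores file boundaries and openCostInBytes)."""
    return math.ceil(input_bytes / max_partition_bytes)

dataset = 100 * (1 << 30)  # assumed 100 GiB of splittable input

# Spark's default maxPartitionBytes is 128 MiB (134217728 bytes).
default_tasks = approx_scan_tasks(dataset, 128 * (1 << 20))

# The suggested value 1073741824 is exactly 1 GiB, i.e. 8x larger chunks.
tuned_tasks = approx_scan_tasks(dataset, 1 << 30)

print(default_tasks)  # → 800
print(tuned_tasks)    # → 100
```

Fewer, larger tasks reduce per-task scheduling overhead, which matters most for short queries like Q1 where the scheduling cost can dominate the actual scan work.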

@PHILO-HE thanks for the reminder; the patch is outdated and needs a rebase. I will add the missing code in the rebase.

@weiting-chen PTAL