
[wip] Add scripts for running benchmarks on EC2

andygrove opened this pull request 8 months ago • 3 comments

Which issue does this PR close?

Part of https://github.com/apache/datafusion-comet/issues/1636

Rationale for this change

Make it easier for anyone to run the 100 GB benchmark on EC2 with local disk.

What changes are included in this PR?

Scripts and documentation.
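
For context, a minimal sketch of what such a benchmark launch script might look like. This is illustrative only and not the actual contents of this PR; the data path, Comet jar location and version, driver script name, its flags, and the memory settings are all assumptions.

```python
#!/usr/bin/env python3
"""Illustrative sketch: launch a TPC-H-style benchmark run with Comet enabled,
reading Parquet data from local (instance-store) disk on an EC2 instance.
Paths, jar version, driver script, and memory settings are assumptions."""

import subprocess

DATA_DIR = "/mnt/data/tpch-sf100"  # hypothetical local-disk data path
COMET_JAR = "/opt/comet/comet-spark-spark3.4_2.12-0.8.0.jar"  # hypothetical jar path/version
BENCH_SCRIPT = "tpcbench.py"  # hypothetical benchmark driver script

cmd = [
    "spark-submit",
    "--master", "local[*]",
    "--driver-memory", "32g",
    # Comet is enabled through standard Spark configuration settings
    "--jars", COMET_JAR,
    "--conf", "spark.plugins=org.apache.spark.CometPlugin",
    "--conf", "spark.comet.enabled=true",
    "--conf", "spark.comet.exec.enabled=true",
    "--conf",
    "spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager",
    "--conf", "spark.comet.exec.shuffle.enabled=true",
    "--conf", "spark.memory.offHeap.enabled=true",
    "--conf", "spark.memory.offHeap.size=16g",
    BENCH_SCRIPT,
    "--data", DATA_DIR,      # hypothetical driver flag
    "--queries", "1-22",     # hypothetical driver flag
]

subprocess.run(cmd, check=True)
```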

How are these changes tested?

Manually.

andygrove · Apr 16 '25 13:04

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 58.65%. Comparing base (f09f8af) to head (297925e). Report is 180 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff              @@
##               main    #1654      +/-   ##
============================================
+ Coverage     56.12%   58.65%   +2.52%     
- Complexity      976     1142     +166     
============================================
  Files           119      129      +10     
  Lines         11743    12640     +897     
  Branches       2251     2363     +112     
============================================
+ Hits           6591     7414     +823     
- Misses         4012     4049      +37     
- Partials       1140     1177      +37     

View full report in Codecov by Sentry.

codecov-commenter · Apr 16 '25 14:04

The current status is that Comet is slower than Spark on EC2. Both Spark and Comet take an extremely long time to complete the benchmark there (~50 minutes, compared to 10 minutes for Spark and 5 minutes for Comet when running on my local workstation).

andygrove · Apr 16 '25 18:04

That is very odd. We don't see the same in an EKS cluster with S3 storage. Is this consistently bad? Not a noisy neighbor issue, I hope?

parthchandra · Apr 16 '25 21:04
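
One way to answer the consistency question would be to repeat the run several times and compare wall-clock times. A minimal sketch, assuming a wrapper script around the spark-submit invocation from the earlier example (the script name is hypothetical):

```python
import json
import subprocess
import time

# Hypothetical wrapper around the benchmark launch command; in practice this
# would be run once for Spark-only and once with Comet enabled.
RUN_CMD = ["./run-benchmark.sh"]

timings = []
for i in range(5):
    start = time.time()
    subprocess.run(RUN_CMD, check=True)
    elapsed = time.time() - start
    timings.append(elapsed)
    print(f"run {i + 1}: {elapsed / 60:.1f} minutes")

# A large spread across runs would point to environmental noise (e.g. a noisy
# neighbor); consistently slow runs would point to a configuration issue.
print(json.dumps({"min": min(timings), "max": max(timings),
                  "mean": sum(timings) / len(timings)}))
```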

Working on this has not been a priority lately, so I'll close this for now. Thanks for the collaboration @anuragmantri.

andygrove · Jun 24 '25 17:06