# db-benchmark
Repository for reproducible benchmarking of database-like operations in a single-node environment.

The benchmark report is available at [h2oai.github.io/db-benchmark](https://h2oai.github.io/db-benchmark).

We focus mainly on portability and reproducibility. The benchmark is routinely re-run to present up-to-date timings, and most of the solutions used are automatically upgraded to their stable or development versions.
This benchmark is meant to compare scalability both in data volume and data complexity.
Contribution and feedback are very welcome!
## Tasks
- [x] groupby
- [x] join
- [x] groupby2014
## Solutions
- [x] dask
- [x] data.table
- [x] dplyr
- [x] DataFrames.jl
- [x] pandas
- [x] (py)datatable
- [x] spark
- [x] cuDF
- [x] ClickHouse
- [x] Polars
- [x] Arrow
- [x] DuckDB
More solutions have been proposed. Their status can be tracked in the issue tracker of our project repository using the `new solution` label.
## Reproduce

### Batch benchmark run
- edit `path.env` and set the `julia` and `java` paths
- if a solution uses python, create a new `virtualenv` as `$solution/py-$solution`; for example, for `pandas` use `virtualenv pandas/py-pandas --python=/usr/bin/python3.6`
- install every solution, following the `$solution/setup-$solution.sh` scripts
- edit `run.conf` to define the solutions and tasks to benchmark
- generate data: for `groupby` use `Rscript _data/groupby-datagen.R 1e7 1e2 0 0` to create `G1_1e7_1e2_0_0.csv`, re-save to binary format where needed (see below), then create a `data` directory and keep all data files there
- edit `_control/data.csv` to define the data sizes to benchmark using the `active` flag
- ensure SWAP is disabled and the ClickHouse server is not yet running
- start the benchmark with `./run.sh`, as sketched below
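Putting these steps together, a typical batch run might look like the following shell session (a sketch only: `pandas` stands in for any solution, `$EDITOR` stands in for the manual edits described above, and `swapoff` is one way to disable SWAP on Linux):

```sh
# sketch of a batch run; adjust paths, solution names and versions to your setup
$EDITOR path.env                                          # set julia and java paths
virtualenv pandas/py-pandas --python=/usr/bin/python3.6   # per-solution python env
./pandas/setup-pandas.sh                                  # install the solution
$EDITOR run.conf                                          # define solutions and tasks to run
mkdir -p data
Rscript _data/groupby-datagen.R 1e7 1e2 0 0               # creates G1_1e7_1e2_0_0.csv
mv G1_1e7_1e2_0_0.csv data/
$EDITOR _control/data.csv                                 # set 'active' flag for data sizes
sudo swapoff -a                                           # ensure SWAP is disabled
./run.sh                                                  # start the benchmark
```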
### Single solution benchmark
- install the solution software
  - for python we recommend using `virtualenv` for better isolation
  - for R, ensure the library is installed in a solution subdirectory, so that `library("dplyr", lib.loc="./dplyr/r-dplyr")` or `library("data.table", lib.loc="./datatable/r-datatable")` works
  - note that some solutions may require another one to be installed to speed up csv data load; for example, `dplyr` requires `data.table`, and similarly `pandas` requires `(py)datatable`
- generate data using the `_data/*-datagen.R` scripts; for example, `Rscript _data/groupby-datagen.R 1e7 1e2 0 0` creates `G1_1e7_1e2_0_0.csv`; put the data files in the `data` directory
- run the benchmark for a single solution using `./_launcher/solution.R --solution=data.table --task=groupby --nrow=1e7`
- run other data cases by passing extra parameters, e.g. `--k=1e2 --na=0 --sort=0`
- use `--quiet=true` to suppress the script's output and print timings only; use `--print=question,run,time_sec` to specify the columns printed to console, or `--print=*` to print all
- use `--out=time.csv` to write timings to a file rather than the console; see the combined example below
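The launcher flags above can be combined; for example (a sketch, with illustrative flag values taken from the list above):

```sh
# single data case for data.table on the groupby task
./_launcher/solution.R --solution=data.table --task=groupby --nrow=1e7

# another data case, printing only selected timing columns to the console
./_launcher/solution.R --solution=data.table --task=groupby --nrow=1e7 \
  --k=1e2 --na=0 --sort=0 --quiet=true --print=question,run,time_sec

# same run, but writing the timings to a file instead of the console
./_launcher/solution.R --solution=data.table --task=groupby --nrow=1e7 \
  --out=time.csv
```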
### Running script interactively
- install the software in the expected location, as detailed above
- ensure the data name to be used in the env var below is present in the `./data` dir
- source the python virtual environment if needed
- call `SRC_DATANAME=G1_1e7_1e2_0_0 R`; if desired, replace `R` with `python` or `julia`
- proceed by pasting code from the benchmark script, as sketched below
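A minimal interactive session might look as follows (a sketch: the benchmark script path follows the repository's `$solution/` layout and is illustrative, as is the virtualenv path):

```sh
ls data/G1_1e7_1e2_0_0.csv          # confirm the data case is present in ./data
# . pandas/py-pandas/bin/activate   # source the python virtualenv if needed
SRC_DATANAME=G1_1e7_1e2_0_0 R       # or replace R with python or julia
# then paste code from the benchmark script, e.g. datatable/groupby-datatable.R,
# step by step into the interactive session
```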
### Extra care needed

`cudf` uses `conda` instead of `virtualenv`.
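For example, a conda environment for `cudf` could be created along these lines (a sketch: the channel names, package pins and env name are assumptions that change across RAPIDS releases, so check the RAPIDS install docs for current instructions):

```sh
# illustrative only: exact channels/versions depend on the RAPIDS release
conda create -n cudf -c rapidsai -c nvidia -c conda-forge cudf
conda activate cudf
```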
## Example environment

- setting up r3-8xlarge (244 GB RAM, 32 cores): Amazon EC2 for beginners
- (slightly outdated) full reproduce script on clean Ubuntu 16.04: `_utils/repro.sh`
## Acknowledgment

Timings for some solutions might be missing for particular data sizes or questions. Some functions are not yet implemented in all solutions, so we were unable to answer all questions in all solutions. Some solutions might also run out of memory when running the benchmark script, which results in the process being killed by the OS. Lastly, we set a timeout for each single benchmark script; once the timeout is reached, the script is terminated. Please check the `exceptions` label in our repository for a list of issues/defects in solutions that prevent us from providing all timings. There is also a `no documentation` label that lists issues blocked by missing documentation in the solutions we benchmark.