Update chunked parquet reader benchmarks
Description
This PR addresses issue #15057 with the following changes:
- Adds a new benchmark that provides nvbench axes for both `chunk_read_limit` and `pass_read_limit` (see the sketch after this list).
- Renames `byte_limit` to `chunk_read_limit` in `BM_parquet_read_chunks`.
- Adds an nvbench axis for `data_size` so the benchmarks can operate on tables larger than 536 MB.
Checklist
- [x] I am familiar with the Contributing Guidelines.
- [x] New or existing tests cover these changes.
- [x] The documentation is up to date with these changes.
/ok to test
/ok to test 8a38032a9359a52548867894b02f12cdee15859d
/ok to test 5932165b60ae2fba9e5da3e27412c527be56b7e3
/merge