Update chunked parquet reader benchmarks
Description
This PR addresses issue #15057 with the following changes:
- Adds a new benchmark that provides nvbench axes for both `chunk_read_limit` and `pass_read_limit`.
- Renames `byte_limit` to `chunk_read_limit` in `BM_parquet_read_chunks`.
- Adds an nvbench axis for `data_size` so the benchmarks can operate on tables larger than 536 MB.
Checklist
- [x] I am familiar with the Contributing Guidelines.
- [x] New or existing tests cover these changes.
- [x] The documentation is up to date with these changes.