Add automotive benchmarking setting
- Add the `server_constant_gen` parameter to `test_settings`
- Add the `server_constant_gen` parameter to the Python API
- Allow `server_constant_gen` to be loaded from the config files (`mlperf.conf` and `user.conf`) using `FromConfig`
- For the server scenario, switch the query scheduling distribution from Poisson to constant when the parameter `server_constant_gen` is set to `1` (see the sketch after this list), i.e.:
  - When `server_constant_gen = 0`, the number of queries in a fixed time window follows a Poisson distribution (with mean `qps`) and the time between queries follows an exponential distribution (with mean `1/qps`).
  - When `server_constant_gen = 1`, the time between queries is constant and equal to `1/qps`.
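The difference between the two modes can be illustrated with a short sketch. This is plain Python for illustration only, not the actual loadgen C++ implementation; the function name and seed below are hypothetical.

```python
import random

def generate_arrival_times(num_queries, qps, server_constant_gen, seed=12345):
    """Illustrative arrival-time generation for the Server scenario."""
    rng = random.Random(seed)
    arrival_times = []
    t = 0.0
    for _ in range(num_queries):
        if server_constant_gen:
            t += 1.0 / qps             # constant gap of exactly 1/qps
        else:
            t += rng.expovariate(qps)  # exponential gap with mean 1/qps (Poisson arrivals)
        arrival_times.append(t)
    return arrival_times
```

In `user.conf`/`mlperf.conf` the parameter would be enabled with a line such as `*.Server.server_constant_gen = 1` (assuming the usual `model.Scenario.key = value` format of the loadgen config files).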
- Custom Sample Grouping
  - In order to run a benchmark in a grouped setting (see the sketch after this list):
    - Construct a QSL using `ConstructGroupedQSL`
      - Pass a vector that contains the group sizes in the order they appear in the dataset
    - Start running the test using `StartTestWithGroupedQSL`
    - Set the variable `use_grouped_qsl` in the `mlperf.conf`
    - Run your benchmark in the server scenario
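A minimal end-to-end sketch of these steps through the Python bindings is shown below. The exact signatures of `ConstructGroupedQSL` and `StartTestWithGroupedQSL` (and whether they are exposed in the Python module) are assumptions modeled on the existing `ConstructQSL`/`StartTest` calls, and the model name, callbacks, and group sizes are placeholders; check the loadgen headers/bindings for the authoritative API.

```python
import mlperf_loadgen as lg

# Hypothetical dataset of 6 samples split into groups of sizes 2, 3 and 1,
# listed in the order the groups appear in the dataset.
group_sizes = [2, 3, 1]
total_samples = sum(group_sizes)

def load_samples(sample_indices):
    pass  # load the requested samples into memory

def unload_samples(sample_indices):
    pass  # release the samples

def issue_queries(query_samples):
    # Run inference on each sample; here we just report empty responses.
    responses = [lg.QuerySampleResponse(qs.id, 0, 0) for qs in query_samples]
    lg.QuerySamplesComplete(responses)

def flush_queries():
    pass

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Server
settings.FromConfig("mlperf.conf", "my_model", "Server")  # picks up use_grouped_qsl

sut = lg.ConstructSUT(issue_queries, flush_queries)

# Assumed signature: like ConstructQSL, plus the vector of group sizes.
qsl = lg.ConstructGroupedQSL(total_samples, total_samples,
                             load_samples, unload_samples, group_sizes)

# Assumed to mirror StartTest(sut, qsl, settings).
lg.StartTestWithGroupedQSL(sut, qsl, settings)

lg.DestroyQSL(qsl)
lg.DestroySUT(sut)
```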