Why are the results the same on both CPU and GPU?
When I run

```
cmr "run mlperf inference generate-run-cmds _submission" --quiet --submitter="MLCommons" --hw_name=default --model=bert-99 --implementation=reference --backend=onnxruntime --device=cuda --scenario=Offline --adr.compiler.tags=gcc --target_qps=1 --category=edge --division=open
```

I get:

```
default-reference-gpu-onnxruntime-v1.17.1-default_config
+---------+----------+----------+--------+-----------------+---------------------------------+
| Model   | Scenario | Accuracy | QPS    | Latency (in ms) | Power Efficiency (in samples/J) |
+---------+----------+----------+--------+-----------------+---------------------------------+
| bert-99 | Offline  | X ()     | 44.157 | -               |                                 |
+---------+----------+----------+--------+-----------------+---------------------------------+
```

And when I run

```
cmr "run mlperf inference generate-run-cmds _submission" --quiet --submitter="MLCommons" --hw_name=default --model=bert-99 --implementation=reference --backend=onnxruntime --device=cpu --scenario=Offline --adr.compiler.tags=gcc --target_qps=1 --category=edge --division=open
```

I get:

```
default-reference-gpu-onnxruntime-v1.17.1-default_config
+---------+----------+----------+--------+-----------------+---------------------------------+
| Model   | Scenario | Accuracy | QPS    | Latency (in ms) | Power Efficiency (in samples/J) |
+---------+----------+----------+--------+-----------------+---------------------------------+
| bert-99 | Offline  | X ()     | 44.157 | -               |                                 |
+---------+----------+----------+--------+-----------------+---------------------------------+
```

I only changed `--device`. Why are the results the same?
There may be several potential issues with the CUDA run: if CUDA is not installed properly or fails on your system, ONNX Runtime may silently fall back to the CPU. I haven't seen such a case before, but I assume that is what is happening here. Also, we usually do not mix CPU and CUDA installations, so you need to clean the CM cache between such runs:

```
cm rm cache -f
```

Maybe you can clean the cache, rerun the above command with `--device=cuda`, and submit the full log? We may need to handle such cases better ... Thanks a lot again for your feedback - that helps us improve CM for everyone!
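One quick way to check whether ONNX Runtime can use the GPU at all is to list its available execution providers from the same Python environment that CM uses. A minimal sketch (not part of the CM workflow itself):

```python
# Check whether ONNX Runtime can actually see the GPU.  If
# "CUDAExecutionProvider" is missing, onnxruntime silently falls back
# to the CPU, which would explain identical QPS for --device=cpu/cuda.
try:
    import onnxruntime as ort
    available = ort.get_available_providers()
except ImportError:
    available = []  # onnxruntime is not installed in this environment

cuda_ok = "CUDAExecutionProvider" in available
print("Available providers:", available)
print("CUDA usable by ONNX Runtime:", cuda_ok)
```

If `CUDAExecutionProvider` is not in the list, the CPU-only `onnxruntime` package is likely installed instead of `onnxruntime-gpu`.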
After running `cm rm cache -f`, I ran

```
cmr "run mlperf inference generate-run-cmds _submission" --quiet --submitter="MLCommons" --hw_name=default --model=bert-99 --implementation=reference --backend=onnxruntime --device=cuda --scenario=Offline --adr.compiler.tags=gcc --target_qps=1 --category=edge --division=open
```

and got the following output:
```
GPU Device ID: 0
GPU Name: NVIDIA GeForce RTX 4070 Laptop GPU
GPU compute capability: 8.9
CUDA driver version: 12.2
CUDA runtime version: 12.4
Global memory: 8585216000
Max clock rate: 1980.000000 MHz
Total amount of shared memory per block: 49152
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block X: 1024
Max dimension size of a thread block Y: 1024
Max dimension size of a thread block Z: 64
Max dimension size of a grid size X: 2147483647
Max dimension size of a grid size Y: 65535
Max dimension size of a grid size Z: 65535

Detected version: 24.0
       ! cd /home/zhaohc/CM/repos/local/cache/07081a5ef7a04a4a
       ! call /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/run.sh from tmp-run.sh
       ! call "postprocess" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/customize.py
       ! cd /home/zhaohc/CM/repos/local/cache/c7e571cb13e549e1
       ! call /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/run.sh from tmp-run.sh
       ! call "detect_version" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/customize.py
Detected version: 5.1
       ! cd /home/zhaohc/CM/repos/local/cache/c7e571cb13e549e1
       ! call /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/run.sh from tmp-run.sh
       ! call "postprocess" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/customize.py
Generating SUT description file for default-onnxruntime
HW description file for default not found. Copying from default!!!
       ! call "postprocess" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-mlperf-inference-sut-description/customize.py
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘void mlperf::logging::AsyncLog::RecordTokenCompletion(uint64_t, std::chrono::_V2::system_clock::time_point, mlperf::QuerySampleLatency)’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc:483:61: warning: unused parameter ‘completion_time’ [-Wunused-parameter]
  483 |                       PerfClock::time_point completion_time,
      |                       ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘std::vector
SUT: default-reference-gpu-onnxruntime-v1.17.1-default_config, model: bert-99, scenario: Offline, target_qps updated as 44.1568
New config stored in /home/zhaohc/CM/repos/local/cache/9039508f728b4d64/configs/default/reference-implementation/gpu-device/onnxruntime-framework/framework-version-v1.17.1/default_config-config.yaml
[2024-03-20 20:08:05,501 log_parser.py:50 INFO] Sucessfully loaded MLPerf log from /home/zhaohc/test_results/default-reference-gpu-onnxruntime-v1.17.1-default_config/bert-99/offline/performance/run_1/mlperf_log_detail.txt.
[2024-03-20 20:08:05,506 log_parser.py:50 INFO] Sucessfully loaded MLPerf log from /home/zhaohc/test_results/default-reference-gpu-onnxruntime-v1.17.1-default_config/bert-99/offline/performance/run_1/mlperf_log_detail.txt.
```
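For reference, a session can also request the CUDA execution provider explicitly; if it is unavailable, ONNX Runtime falls back to the next provider in the list rather than failing. A hedged sketch, where `model.onnx` is a placeholder path, not the actual MLPerf BERT checkpoint:

```python
# Ask ONNX Runtime explicitly for the CUDA provider when creating a
# session; the provider list sets the order of preference.
try:
    import onnxruntime as ort
except ImportError:
    ort = None  # onnxruntime is not installed in this environment

session_providers = []
if ort is not None:
    try:
        sess = ort.InferenceSession(
            "model.onnx",  # placeholder path for illustration only
            providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
        )
        session_providers = sess.get_providers()
    except Exception as exc:  # e.g. missing model file
        print("Could not create session:", exc)
print("Session providers:", session_providers)
```

Printing `sess.get_providers()` on a real session shows which providers were actually attached, so a CUDA run that reports only `CPUExecutionProvider` has silently fallen back to the CPU.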