Anton Lokhmotov
The observed behaviour is consistent across several runs:
- Raspberry Pi 4 with power measurements:
  - TFLite v2.4.1: 606.3 ms ([power/testing](https://github.com/mlcommons/inference_results_v1.0/blob/master/closed/Krai/results/rpi4coral-fan.on-tflite-v2.4.1-ruy/resnet50/singlestream/performance/run_1/mlperf_log_summary.txt#L7)), 602.4 ms ([power/ranging](https://github.com/mlcommons/inference_results_v1.0/blob/master/closed/Krai/results/rpi4coral-fan.on-tflite-v2.4.1-ruy/resnet50/singlestream/performance/ranging/mlperf_log_summary.txt#L7)).
- Raspberry Pi 4 without power...
@s-idgunji When running inference on the GPU, the CPU is typically not fully utilised, so you might not notice any difference. When running inference on the CPU at full blast, the...
The overhead seems to be consistent. What's worrying is that it shows up with ArmNN on the Xavier, which is roughly 8x more powerful than the RPi4.
[WT1800E](https://tmi.yokogawa.com/us/solutions/products/power-analyzers/wt1800e-high-performance-power-analyzer/) supports [up to 6 channels](https://cdn.tmi.yokogawa.com/2/267/images/WT1800E_Rear_no_BOX_M_copy_LG_1.jpg). 
## MaxQ

| Workload | Results | Offline Performance, QPS | Offline Power, W | SingleStream Performance, ms | SingleStream Energy, J/stream | MultiStream Performance, ms | MultiStream Energy, J/stream...
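For reference, the energy columns follow directly from the power and latency columns: joules per stream are average watts multiplied by per-stream seconds. A minimal sketch with purely illustrative numbers (not taken from the table):

```shell
# Energy per stream (J) = average power (W) * per-stream latency (s).
# The numbers below are illustrative, not actual MaxQ results.
power_w=50.0
latency_ms=20.0
awk -v p="$power_w" -v l="$latency_ms" 'BEGIN { printf "%.3f J/stream\n", p * l / 1000.0 }'
```

This prints `1.000 J/stream` for the illustrative inputs; the same conversion relates each Performance/Power column pair to its Energy column.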
> Can you please confirm if the Orin power runs are taken by connecting from a host machine?

Just about the only thing we didn't do according to the provided...
Great point, @arjunsuresh. But even if the files are removed now, the repo size will remain the same. This should be done at the point of converting the results repo...
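The point about repository size follows from how Git stores history: a later `git rm` only removes the file from the worktree, while its blob remains reachable from earlier commits, so every clone still downloads it. A minimal sketch in a throwaway repo (file and repo names are illustrative):

```shell
# Demonstrate that removing a file in a new commit does not remove it from history.
tmpdir=$(mktemp -d) && cd "$tmpdir"
git init -q demo && cd demo
git config user.email you@example.com && git config user.name demo
head -c 524288 /dev/urandom > big.bin          # a 512 KiB stand-in for a large results file
git add big.bin && git commit -qm "add big file"
git rm -q big.bin && git commit -qm "remove big file"
# The blob is gone from the worktree but still reachable from the previous commit:
git cat-file -e HEAD~1:big.bin && echo "blob still in history"
```

Actually shrinking the repo requires rewriting history (for example with `git filter-repo --path big.bin --invert-paths`) before the repo is published, which is why doing it at the point of conversion is the right moment.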
Can you be more specific please?
> except that the model must be trained

I think "trained" should be replaced with "validated".

> such as the submitters can choose their own training dataset

except the validation...
Which test? Which errors?