Anton Lokhmotov

Results: 77 issues by Anton Lokhmotov

I'm running into an error with both the server and the client on [1a16663](https://github.com/mlcommons/power-dev/commit/1a16663c46a3726fd15b3a48e255cb968543776e) (i.e. the very last change before updating the supported PTD versions). ## Server After the ranging...

bug

During the MLPerf Inference v1.0 round, I noticed that the power workflow when used with CPU inference _occasionally_ seemed to incur a rather high overhead (~10%), for example: - Xavier...

Investigate
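A hedged note on the ~10% figure above (this definition is an assumption, not taken from the issue itself): one natural way to quantify the overhead of the power workflow is to compare the score of a power-measured run against an otherwise identical performance-only run, e.g. Offline throughput in queries per second, where QPS_perf is the performance-only score and QPS_power the score of the power run:

$$
\text{overhead} = \frac{\mathrm{QPS}_{\mathrm{perf}} - \mathrm{QPS}_{\mathrm{power}}}{\mathrm{QPS}_{\mathrm{perf}}} \approx 10\%
$$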

A single-channel analyzer such as [YOKOGAWA WT310E](https://tmi.yokogawa.com/eu/solutions/products/power-analyzers/digital-power-meter-wt300e/) with a breakout box such as [VOLTCRAFT SMA-10](https://www.conrad.com/p/voltcraftsma-10test-lead-adapterpg-plug-4-mm-socket-pg-connectorscoop-proofblack-123980) can measure systems drawing up to 10 A (~2.5 kW). Similarly, a multi-channel analyzer such as...

documentation
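For context on the ~2.5 kW figure in the analyzer entry above: it follows directly from the power relation P = V × I, assuming a mains voltage of roughly 250 V (at 230 V the ceiling would be closer to 2.3 kW):

$$
P_{\max} = V_{\mathrm{mains}} \times I_{\max} \approx 250\ \mathrm{V} \times 10\ \mathrm{A} = 2500\ \mathrm{W} \approx 2.5\ \mathrm{kW}
$$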

We have [_very_ diligently followed](https://github.com/krai/ck-mlperf/blob/master/jnotebook/mlperf-inference-v3.0-reproduce-orin/reproduce-orin-inference-3-0-docker.ipynb) NVIDIA's [instructions for benchmarking Orin AGX](https://github.com/mlcommons/inference_results_v3.0/blob/main/closed/NVIDIA/README_Jetson.md), including flashing our unit with exactly the same images for the MaxP and MaxQ modes and getting exactly the...

Fixes #176. Consider Inference with and without Power measurements. Unify with Training (without Power).

A number of Preview systems in MLPerf Inference v4.0 used fewer cards than would be typical in production, due to limited card availability at the time. Rather than...

Next Meeting

The [LoRA](https://github.com/mlcommons/training/tree/master/llama2_70b_lora) reference implementation has a broken link to an Accelerate config file:

> where the Accelerate config file is [this one](https://github.com/regisss/lora/blob/main/configs/default_config.yaml).