Anton Lokhmotov

273 comments by Anton Lokhmotov

Trouble is also expected with 2020.2:

```
In file included from /home/anton/CK_REPOS/ck-openvino/program/mlperf-inference-v0.5/ov_mlperf_cpu/backend_ov.h:7,
                 from /home/anton/CK_REPOS/ck-openvino/program/mlperf-inference-v0.5/ov_mlperf_cpu/sut_ov.h:12,
                 from /home/anton/CK_REPOS/ck-openvino/program/mlperf-inference-v0.5/ov_mlperf_cpu/main_ov.cc:13:
/home/anton/CK_REPOS/ck-openvino/program/mlperf-inference-v0.5/ov_mlperf_cpu/infer_request_wrap.h: In member function ‘void InferReqWrap::postProcessImagenet(std::vector&, std::vector&, std::vector&)’:
/home/anton/CK_REPOS/ck-openvino/program/mlperf-inference-v0.5/ov_mlperf_cpu/infer_request_wrap.h:129:42: warning: ‘void InferenceEngine::TopResults(unsigned int,...
```

Still waiting for Intel's response on these issues. cc: @christ1ne

@fenz Only a year has passed since I looked into this, wow. Feels more like forever :). I would have looked at the [v1.0](https://github.com/mlcommons/inference_results_v1.0/tree/master/closed/Intel/code) or [v0.7](https://github.com/mlcommons/inference_results_v0.7/tree/master/closed/Intel/code) code but alas Intel...

@yuyang782472282 My suggestion would be to report this elsewhere. This repository is for MLPerf Inference v0.5.

I'm running BERT experiments on an AWS [G4](https://aws.amazon.com/ec2/instance-types/g4) `g4dn.4xlarge` instance using a single T4. The supported clocks are a bit lower than in @vilmara's case:

```bash
$ sudo nvidia-smi -q...
```

@nvpohanh Yes, that's the case. I suspect this VM instance takes a slice of a larger machine. Perhaps the neighbours are maxing out their GPUs :).

@XiaotaoChen Sorry, I'm not familiar with `int4_offline`. I assume it's a build product from NVIDIA's submission? /cc @nvpohanh

@XiaotaoChen You need to use this [script](https://github.com/mlperf/inference/blob/master/v0.5/classification_and_detection/tools/accuracy-imagenet.py). See [here](https://github.com/mlperf/inference/tree/master/v0.5/classification_and_detection#validate-accuracy-for-resnet50-and-mobilenet-benchmarks) for how to run it. You can obtain the labels file (`val.txt`) with Collective Knowledge as follows:

```bash
$ python -m pip...
```
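For intuition, the accuracy check essentially decodes each entry of the MLPerf accuracy log and compares the predicted class against the `val.txt` labels. Here is a minimal sketch, not the actual script: `top1_accuracy` is a hypothetical helper, and I assume each log entry stores a single little-endian float32 class index as a hex string (as the ResNet50 log does).

```python
import struct

def top1_accuracy(log_entries, labels):
    """Fraction of log entries whose decoded prediction matches the ground truth.

    log_entries: list of dicts with 'qsl_idx' (sample index) and 'data'
                 (hex string encoding one little-endian float32 class index).
    labels: mapping from sample index to ground-truth class index (from val.txt).
    """
    good = 0
    for entry in log_entries:
        # Decode the hex payload back into a float32, then truncate to a class index.
        pred = int(struct.unpack("<f", bytes.fromhex(entry["data"]))[0])
        if pred == labels[entry["qsl_idx"]]:
            good += 1
    return good / len(log_entries)

# Toy data: two correct predictions, one wrong.
labels = {0: 65, 1: 970, 2: 230}
entries = [
    {"qsl_idx": 0, "data": struct.pack("<f", 65.0).hex()},
    {"qsl_idx": 1, "data": struct.pack("<f", 970.0).hex()},
    {"qsl_idx": 2, "data": struct.pack("<f", 231.0).hex()},
]
print(top1_accuracy(entries, labels))  # prints 0.6666666666666666
```

The real script additionally reads `mlperf_log_accuracy.json` from disk and supports other output dtypes; the sketch only shows the core comparison.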

Hi @osaman88, thank you for spotting that; you are totally right. We used to show accuracy in the [v0.5 results](https://mlperf.org/inference-results-0-5), but somehow omitted it from the [v0.7 results](https://mlperf.org/inference-results-0-7). There's a...

@1208overlord SSD-MobileNet-v2 is not an MLPerf Edge/Datacenter model. Why are you asking here?