Anton Lokhmotov

273 comments by Anton Lokhmotov

As an added bonus, we already support [ACL package](https://github.com/ctuning/ck-math/tree/master/package/lib-acl-dev-universal) and [crowdbenchmarking across mobile devices](http://cknowledge.org/repo/web.php?template=cknowledge&action=index&module_uoa=wfe&native_action=show&native_module_uoa=program.optimization).

@OAIL How is the effort looking to you? :)

Another idea would be to create a representative subset for performance runs, i.e. one with a sample distribution similar to the original dataset's, but only 1/10th the size.
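For illustration only, here is a minimal sketch of one way to build such a subset via per-class (stratified) sampling; the function name and the `(samples, labels)` representation are hypothetical, not part of any existing workflow:

```python
import random
from collections import defaultdict

def make_representative_subset(samples, labels, fraction=0.1, seed=42):
    """Pick ~`fraction` of the samples per label so that the class
    distribution of the subset matches the original dataset."""
    rng = random.Random(seed)

    # Group sample indices by label.
    by_label = defaultdict(list)
    for idx, label in enumerate(labels):
        by_label[label].append(idx)

    # Sample roughly the same fraction from every class (at least one sample).
    subset = []
    for label, indices in by_label.items():
        k = max(1, round(len(indices) * fraction))
        subset.extend(rng.sample(indices, k))

    rng.shuffle(subset)
    return [samples[i] for i in subset]
```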

**TL;DR:** Pinning fiftyone==0.16.5 with mlperf-inference-source==2.1 gets things back in shape. Rather unhelpfully, [fiftyone](https://pypi.org/project/fiftyone/#history) released 0.19.0 just a few days ago, which seems to break downloads even with the r2.1...
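As a rough sanity check (not part of the MLPerf workflow itself), one could verify the pin before starting downloads; this sketch only assumes Python 3.8+ for `importlib.metadata`:

```python
# Check that the pinned fiftyone version is actually installed
# before kicking off a download.
from importlib.metadata import version, PackageNotFoundError

try:
    fo_version = version("fiftyone")
except PackageNotFoundError:
    raise SystemExit("fiftyone is not installed; try: pip install fiftyone==0.16.5")

if fo_version != "0.16.5":
    raise SystemExit(f"fiftyone=={fo_version} found; the r2.1 workflow expects 0.16.5")
```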

@arjunsuresh That's great! How about memory consumption?

For anyone still interested in this issue, note that the correct explicit window sizes and strides are:

https://github.com/dividiti/ck-tensorrt/tree/master/package/caffemodel-deepscale-squeezenet-1.0-explicit-window-global-pooling
```
> kernel_size: 15
> stride: 15
```
https://github.com/dividiti/ck-tensorrt/tree/master/package/caffemodel-deepscale-squeezenet-1.1-explicit-window-global-pooling
```
> kernel_size: 14...
```

I've just hit the same issue. As you suggest, we may need a `soft:lib.gl.tegra` entry that detects `tegra/LibGL.so`, and a dependency on it in `package:lib-jetson-inference-dusty-nv-master-cuda`.

Of course, CK cannot generally know which CUDA version should be used. In this case, however, `tensorrt-time`'s dependency `jetson-inference` was compiled with CUDA 9.0 specifically to avoid this issue. Still,...

@ballalan Do you actually need to run in a Linux VM? The workflow should work under Windows.

Hi Alan, please try:
```
$ ck pull repo:ck-env
```
We will need to check why this repo has not been pulled in automatically.