Grigori Fursin
When running collaborative program optimization on mobile devices, our Android app should request only small data sets (i.e., < 1 MB); otherwise, traffic becomes very heavy, which can be...
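A minimal sketch of such a size guard on the client side (the helper name and data-set fields are hypothetical, not the actual app API):

```python
# Hypothetical helper: filter out data sets too large for a mobile client.
MAX_MOBILE_DOWNLOAD_BYTES = 1 * 1024 * 1024  # ~1 MB cap from the note above

def select_small_datasets(datasets, max_bytes=MAX_MOBILE_DOWNLOAD_BYTES):
    """Keep only data sets whose download size fits the mobile traffic budget."""
    return [d for d in datasets if d.get("size_bytes", 0) <= max_bytes]
```

The same check could also run server-side, so that oversized data sets are never even offered to mobile clients.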
When the new crowdtuning engine is ready, gradually implement:
- program benchmarking
- automatic program modelling (hierarchical execution time/energy/IPC modelling (a collection of models), feature selection, run-time adaptation)
- optimal algorithm selection...
Following many recent discussions at MLCommons about improving the repeatability and reproducibility of MLPerf inference benchmarks, we suggest looking at similar initiatives at computer systems conferences (artifact evaluation and...
Requested features:
* CM scripts must show licenses and let the user accept or reject them (this should also be used for the private MLPerf inference models)
* We should have variations...
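A minimal sketch of such a license gate (the function name and interface are hypothetical; the real CM script API may differ):

```python
def confirm_license(component, license_text, auto_accept=False, ask=input):
    """Show a component's license and let the user accept or reject it.

    Hypothetical helper: `auto_accept` would support non-interactive runs,
    and `ask` is injectable so the prompt can be tested or replaced.
    """
    if auto_accept:
        return True
    print(f"License for {component}:\n\n{license_text}\n")
    answer = ask("Accept this license? [y/N] ").strip().lower()
    return answer in ("y", "yes")
```

A CM script could then refuse to download or install a component (including private MLPerf inference models) unless this returns `True`.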
* Aggregating all versions of detected/installed components in the CM pipeline state and dumping them at the end; we should record "names" + "tags" + "UID" + version to be able...
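One way this aggregation could look (a sketch with hypothetical field names and state layout; the actual CM state dict may differ):

```python
import json

def record_component(state, name, tags, uid, version):
    """Append a detected/installed component to the shared pipeline state."""
    state.setdefault("versions", []).append(
        {"name": name, "tags": tags, "uid": uid, "version": version}
    )

def dump_versions(state, path="cm-versions.json"):
    """Dump all recorded component versions at the end of the pipeline."""
    with open(path, "w") as f:
        json.dump(state.get("versions", []), f, indent=2)
```

Each script in the pipeline would call `record_component` as it detects or installs something, and the final step would call `dump_versions` once.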
I think we added GCC as a default variation in the latest MLPerf inference CM workflow. We should remove it and keep LLVM as the default. We can still...
Hi @arjunsuresh, Just so we don't forget: as we discussed yesterday, it may be useful to create a script that is called in the post deps of the main CM-MLPerf script to...
Feedback from the MLCommons task force on automation and reproducibility: extend CM workflows to support the following MLC projects:
- [x] check how to add network and multi-node code...
While finalizing the [CM-MLPerf BERT inference benchmark tutorial for SCC'23](https://github.com/mlcommons/ck/blob/master/docs/tutorials/scc23-mlperf-inference-bert.md), here are a few missing things that we can do later if/when we have time and resources:
- [ ] Check that...
Following the feedback from SCC'23 CM-MLPerf users, we should brainstorm how to add a standard logging mechanism to CM, possibly using the Python logging library ...
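A possible starting point using the standard `logging` library (the logger name and format are assumptions, not a settled CM convention):

```python
import logging

def get_cm_logger(name="cm", level=logging.INFO):
    """Return a logger with a consistent format for CM scripts."""
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid stacking duplicate handlers on repeated calls
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s [%(name)s] %(levelname)s: %(message)s")
        )
        logger.addHandler(handler)
    logger.setLevel(level)
    return logger
```

Because `logging.getLogger` returns the same object for the same name, every CM script calling `get_cm_logger("cm")` would share one configured logger.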