christ1ne
Currently there is no easy way for table users to tell whether FP32, INT8, or another numerical format is being used. This essentially discourages any FP32 submission but...
We will discuss after the MLCommons launch.
Let's think about the possibility of serving different models within the same server run for v0.7
The goal for v0.7 is to ensure all models in TF and PyTorch/ONNX are equivalent. Currently only SSD-ResNet-34 has been found not to be equivalent.
The reference pads to a fixed sequence length. Does submitted code need to do that, or can we allow unpadded inputs?
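For context, a minimal sketch of what fixed-length padding looks like; the `max_len=8` and `pad_id=0` values here are placeholders for illustration, not taken from the reference implementation:

```python
import numpy as np

def pad_to_fixed_length(tokens, max_len=8, pad_id=0):
    """Pad (or truncate) a token sequence to a fixed length.

    This mirrors the general idea of padding every input to one
    sequence length, as opposed to submitting variable-length
    ("unpadded") inputs.
    """
    out = np.full(max_len, pad_id, dtype=np.int64)  # start fully padded
    n = min(len(tokens), max_len)
    out[:n] = tokens[:n]                            # copy real tokens in
    return out

padded = pad_to_fixed_length([101, 2023, 102])
# padded always has length max_len, regardless of the input length
```

The rules question is whether submissions must reproduce this fixed-length behavior or may skip the padding step entirely.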
Would you consider the system to be the same if the main training chip is the same? It matters for internal logistical reasons. It is easier to reserve machines with...
Please see proposal here: https://docs.google.com/document/d/13jSiHOLamKR0u8cFdZwut1l9gE2kP8Wmh6Qwt0pq74Q/edit The current rule allows all sorts of optimizations, even if they do not generalize to non-MLPerf benchmark models. Even if we end up deciding...
This is one of the suggested improvements from [this doc](https://docs.google.com/document/d/1y8Yd1-RfmANF2QIg_IYO4ktVNZpfBOLS2xuYj65xS5Y/edit).
@tjablin @petermattson @DilipSequeira