PatricZhao

Results: 7 comments by PatricZhao

**CPU Int8 solution**
- enable more models
- enhance flexibility
- improve performance for small batch sizes
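
As a rough illustration of the entry point this covers: a minimal sketch of CPU INT8 quantization via `mxnet.contrib.quantization.quantize_model`, assuming a pre-trained FP32 checkpoint with the hypothetical prefix `model`, and skipping calibration for brevity (parameter defaults may differ between MXNet versions):

```python
import mxnet as mx
from mxnet.contrib.quantization import quantize_model

# Assumption: an FP32 checkpoint saved as 'model-symbol.json' / 'model-0000.params'.
sym, arg_params, aux_params = mx.model.load_checkpoint('model', 0)

# Quantize to INT8 for CPU; calib_mode='none' skips calibration to keep the sketch short.
qsym, qarg_params, qaux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    ctx=mx.cpu(), calib_mode='none', quantized_dtype='int8')

# Save the quantized symbol/params for later INT8 inference.
mx.model.save_checkpoint('model-int8', 0, qsym, qarg_params, qaux_params)
```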

@arunbuduri sorry, I just found this issue. Does it still occur for you? Could you try building MXNet with the method below? https://github.com/apache/incubator-mxnet/blob/master/MKLDNN_README.md#2
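
Once it builds, a quick way to confirm the resulting package actually has MKL-DNN enabled (assuming an MXNet version that ships the `mx.runtime.Features()` API, roughly 1.5 or later):

```python
import mxnet as mx

# List the compile-time features of this MXNet build and check for MKL-DNN.
features = mx.runtime.Features()
print(features.is_enabled('MKLDNN'))  # expect True for an MKL-DNN build
```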

What is the plan for validation and the user tutorial? Does the example still work?

Thanks for the contribution. It's nice to see a WaveNet example in MXNet. @seujung, would you mind providing some data on training/inference accuracy and CPU/GPU performance for this example?

@ThomasDelteil @safrooze @wkcn @vandanavk please help review again. Adding more examples is really good for the community, and I look forward to this PR being merged soon. @juliusshufan could...

How do you test DNNL performance? Could you share the HW/SW configuration and the DNNL verbose log?
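
For reference, here is one way to capture that verbose log while running a single MXNet op; this is only a sketch, and which environment variable is honored depends on the build (older MKL-DNN builds read `MKLDNN_VERBOSE`, newer DNNL/oneDNN builds read `DNNL_VERBOSE`):

```python
import os

# Enable primitive-level verbose logging; must be set before MXNet is imported.
# Assumption: the variable that takes effect depends on the MKL-DNN/DNNL version in the build.
os.environ['MKLDNN_VERBOSE'] = '1'
os.environ['DNNL_VERBOSE'] = '1'

import mxnet as mx

# Run one convolution so the backend prints which DNNL primitives/implementations it picked.
x = mx.nd.random.uniform(shape=(1, 3, 224, 224))
w = mx.nd.random.uniform(shape=(64, 3, 3, 3))
y = mx.nd.Convolution(data=x, weight=w, kernel=(3, 3), num_filter=64, no_bias=True)
y.wait_to_read()  # verbose lines (primitive, implementation, shapes, time) are printed to stdout
```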

Thanks for the information, @XapaJIaMnu. For DNNL, you can use `benchdnn` from the Intel repo, and the 2nd generation of Intel Xeon is the preferred platform for testing INT8, like...