Wang, Mengni

Results 41 comments of Wang, Mengni

Same question: the README says it just uses CenterCrop, and I followed that statement, but I get only about 33% top-1 accuracy. Preprocessing details are needed to reproduce the result.

> Hi @jcwchen , existing int8 models are all generated with VNNI support.

Hi @paul-ang , we only support U8S8 by default because on x86-64 machines with the AVX2 and AVX512 extensions, ONNX Runtime uses the VPMADDUBSW instruction for U8S8 for performance. I am...
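For readers unfamiliar with the U8S8 scheme mentioned above, here is a hedged NumPy sketch (not the actual ONNX Runtime kernel) of what it means: activations are quantized to unsigned 8-bit, weights to signed 8-bit, and the matmul is done with integer accumulation, which is the data layout the VPMADDUBSW instruction multiplies. The `quantize` helper and all names here are illustrative, not from the library.

```python
import numpy as np

# Illustrative affine quantization: map floats onto [qmin, qmax].
def quantize(x, dtype, qmin, qmax):
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = np.round(qmin - x.min() / scale).astype(np.int32)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return q.astype(dtype), scale, zero_point

rng = np.random.default_rng(0)
act = rng.random((4, 8)).astype(np.float32)        # non-negative activations
w = (rng.random((8, 3)).astype(np.float32) - 0.5)  # signed weights

qa, sa, za = quantize(act, np.uint8, 0, 255)       # U8 activations
qw, sw, zw = quantize(w, np.int8, -128, 127)       # S8 weights

# Integer matmul with int32 accumulation, then dequantize the result.
acc = (qa.astype(np.int32) - za) @ (qw.astype(np.int32) - zw)
approx = acc * (sa * sw)

print(np.abs(approx - act @ w).max())  # small quantization error
```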

Hi @pavelkochnev, thank you for your suggestions. `input = node_info[1][1]` is the weight name of the node. If each node has its own weight, `self.model.input_name_to_nodes[input]` will return a list...
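To make the comment above concrete, here is a hypothetical sketch of how a map like `input_name_to_nodes` can be built; plain dicts stand in for ONNX node protos, and the names are assumptions for illustration. Each tensor name points to the list of nodes consuming it, because one tensor (for example a shared weight) may feed several nodes.

```python
from collections import defaultdict

# Map each input tensor name to the LIST of node names that consume it.
def build_input_name_to_nodes(nodes):
    mapping = defaultdict(list)
    for node in nodes:
        for name in node["inputs"]:
            mapping[name].append(node["name"])
    return mapping

nodes = [
    {"name": "conv1", "inputs": ["x", "w_shared"]},
    {"name": "conv2", "inputs": ["y", "w_shared"]},
]
print(build_input_name_to_nodes(nodes)["w_shared"])  # ['conv1', 'conv2']
```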

Hi @pavelkochnev , replacing the line does make the code more readable. But in more detail, `get_node` iterates over the model's nodes until it finds the target, and its time...
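The trade-off described above can be sketched as follows; this is an illustrative comparison, not the repository's code. A `get_node`-style helper scans the node list on every call (linear time per lookup), whereas a name-to-node dict built once answers each lookup in constant time.

```python
# Linear scan: walks the node list until the name matches.
def get_node(nodes, name):
    for node in nodes:
        if node["name"] == name:
            return node
    return None

nodes = [{"name": f"node_{i}"} for i in range(1000)]

# Precomputed index: built once in O(n), then O(1) per lookup.
by_name = {node["name"]: node for node in nodes}

assert get_node(nodes, "node_999") is by_name["node_999"]
```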

Hi @pavelkochnev , sure, we will add this to our enhancement plan.

https://inteltf-jenk.sh.intel.com/job/intel-lpot-validation-top-mr-extension/3912/

This bug is fixed in https://github.com/intel/neural-compressor/pull/187; closing this PR.

Hi, do you quantize HBONet in ONNX format or Torch format? You provide Torch information, but the log shows the issue is related to ONNX.

@hoshibara Hi, do you quantize this model with the static or dynamic quantization approach? If the program reaches line 46, the op type of parent[0] should be DynamicQuantizeLinear. Could you check...
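The check being asked for can be sketched like this: in dynamic quantization, the node feeding a quantized MatMul's activation input is DynamicQuantizeLinear, so inspecting the parent's op type distinguishes the static and dynamic paths. Plain dicts stand in for ONNX node protos here; the graph contents are hypothetical.

```python
# Toy graph: DynamicQuantizeLinear produces "a_q", consumed by MatMulInteger.
graph = [
    {"name": "dql", "op_type": "DynamicQuantizeLinear", "output": ["a_q"]},
    {"name": "mm", "op_type": "MatMulInteger", "input": ["a_q", "w_q"]},
]

# Index producers by output tensor name so parents can be resolved.
output_to_node = {out: n for n in graph for out in n.get("output", [])}

matmul = graph[1]
parent = [output_to_node[i] for i in matmul["input"] if i in output_to_node]
print(parent[0]["op_type"])  # DynamicQuantizeLinear
```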