
Framework is not detected correctly from model format

Open Charles-258 opened this issue 2 years ago • 7 comments

The input model format is ONNX, but the framework (onnxruntime) is not detected from the model format.
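For context, framework detection of this kind typically maps a model file's extension to a framework name. The sketch below is a hypothetical illustration of that approach (the mapping table and function name are assumptions, not neural-compressor's actual code):

```python
from pathlib import Path

# Hypothetical extension-to-framework table, similar in spirit to
# what an auto-detection routine might use internally.
EXT_TO_FRAMEWORK = {
    ".onnx": "onnxruntime",
    ".pb": "tensorflow",
    ".pt": "pytorch",
}

def detect_framework(model_path: str) -> str:
    """Guess the framework from the model file extension."""
    ext = Path(model_path).suffix.lower()
    try:
        return EXT_TO_FRAMEWORK[ext]
    except KeyError:
        raise ValueError(f"Framework is not detected from model format: {ext!r}")

print(detect_framework("model.onnx"))  # → onnxruntime
```

If detection like this fails for a `.onnx` file, the cause is usually elsewhere (e.g. a missing dependency or a malformed path), which is what the discussion below narrows down.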

Charles-258 avatar Jul 22 '22 06:07 Charles-258

Hi, could you provide your launcher code and installed packages to help us locate the problem?

mengniwang95 avatar Jul 22 '22 07:07 mengniwang95

> Hi, could you provide your launcher code and installed packages to help us locate the problem?

Here is the link to the code and data: https://drive.google.com/file/d/1ep0jKgnodvN8mL1E6Fsr08B2T5qCkX46/view?usp=sharing. Thanks for your help.

Charles-258 avatar Jul 22 '22 09:07 Charles-258

Did you install onnxruntime-extensions?

mengniwang95 avatar Jul 22 '22 09:07 mengniwang95

> Did you install onnxruntime-extensions?

I have just installed this package, but it still can't produce a properly quantized model.

Charles-258 avatar Jul 22 '22 09:07 Charles-258

Sorry for my late reply; could you paste your error info? I tried your script, and it shows that INC can detect the model correctly, but the input format is incorrect: [screenshot]

mengniwang95 avatar Jul 26 '22 02:07 mengniwang95

Here is my error info. Thanks for your help. [screenshot of the error output]

Charles-258 avatar Jul 27 '22 05:07 Charles-258

This bug is caused by inference time collection. Since you don't provide an eval_func or an eval_dataloader plus metric, a fake eval_func is generated automatically, and it appears to enter and exit that eval_func at the same timestamp. You can change line 137 of objective.py to `self.duration >= 0` locally to work around this problem; we will fix it in the next release.
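The failure mode above can be reproduced in isolation: a very fast call measured with a coarse wall clock can report a duration of exactly zero, so a strict `duration > 0` validity check rejects it, while `duration >= 0` accepts it. This is a minimal sketch of that idea, not the actual objective.py code:

```python
import time

def measure_duration(func):
    """Measure wall-clock duration of func; very fast calls may report 0."""
    start = time.time()
    func()
    return time.time() - start

# A near-instant "fake eval" can start and finish at the same timestamp.
duration = measure_duration(lambda: None)

strict_ok = duration > 0    # may be False on fast calls / coarse clocks
relaxed_ok = duration >= 0  # True for any valid (non-negative) measurement
assert relaxed_ok
```

Relaxing the check to `>= 0` therefore keeps zero-duration measurements from being treated as invalid, which matches the suggested local workaround.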

mengniwang95 avatar Jul 28 '22 09:07 mengniwang95