neural-compressor
Framework is not detected correctly from model format
The input model format is ONNX, but the framework onnxruntime cannot be detected from the model format.
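For context, framework detection of this kind is commonly implemented by mapping the model file's extension to a framework name. This is a hypothetical sketch, not INC's actual implementation; the mapping table and function name are assumptions:

```python
import os

# Hypothetical extension-to-framework table (illustrative only).
EXT_TO_FRAMEWORK = {".onnx": "onnxruntime", ".pb": "tensorflow", ".pt": "pytorch"}

def detect_framework(model_path):
    """Guess the framework from the model file extension."""
    ext = os.path.splitext(model_path)[1].lower()
    return EXT_TO_FRAMEWORK.get(ext, "unknown")

print(detect_framework("model.onnx"))  # → onnxruntime
```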
Hi, could you provide your launcher code and a list of installed packages to help us locate the problem?
Here is the link to the code and data: https://drive.google.com/file/d/1ep0jKgnodvN8mL1E6Fsr08B2T5qCkX46/view?usp=sharing. Thanks for your help.
Did you install onnxruntime-extensions?
I have just installed this package, but it still can't produce a proper quantized model.
Sorry for my late reply. Could you paste your error info? I tried your script and it shows that INC can detect the model correctly, but the input format is incorrect.
Here is my error info. Thanks for your help.
This bug is caused by inference-time collection. Since you don't provide an eval_func or an eval_dataloader+metric, a fake eval_func is generated automatically, and it appears to enter and exit this eval_func at the same timestamp, so the measured duration is zero. As a local workaround, you can change line 137 of objective.py to check `self.duration >= 0`; we will fix this in the next release.
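To illustrate why the check fails, here is a minimal sketch of timing-based duration collection (an assumption about the general pattern, not INC's actual objective.py code). A fake eval_func that returns immediately can measure a duration of exactly 0.0 on a coarse clock, so a strict `duration > 0` check rejects the result, while `duration >= 0` accepts it:

```python
import time

def measure(eval_func):
    """Run eval_func and record its wall-clock duration (illustrative)."""
    start = time.time()
    result = eval_func()
    duration = time.time() - start
    return result, duration

# A trivial eval_func can finish within one clock tick, giving duration == 0.0.
# `duration > 0` would wrongly treat this as "no measurement";
# relaxing the check to `duration >= 0` accepts it.
acc, dur = measure(lambda: 1.0)
assert dur >= 0
```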