XinyuYe-Intel
Hi @jcwchen, I have tested this in my local Linux environment with the command `python workflow_scripts/test_models.py --target onnxruntime` and all tests passed, but it failed here. Could you please...
> Hi @XinyuYe-Intel, Thanks for letting me know about this issue. Does your Linux machine have VNNI (avx512) support?

No, I use a Xeon Gold 6248 processor. I checked /proc/cpuinfo, and 'avx512_vnni' is absent.
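For reference, the manual check above can be scripted. A small sketch (not part of the model zoo's tooling) that scans `/proc/cpuinfo` for the flag on Linux:

```python
def has_avx512_vnni(cpuinfo_path="/proc/cpuinfo"):
    """Return True if any CPU 'flags' line lists avx512_vnni (Linux only)."""
    try:
        with open(cpuinfo_path) as f:
            return any(
                line.startswith("flags") and "avx512_vnni" in line
                for line in f
            )
    except OSError:
        # /proc/cpuinfo is unavailable (e.g. non-Linux host).
        return False

if __name__ == "__main__":
    print("VNNI available:", has_avx512_vnni())
```

On a Xeon Gold 6248 (Cascade Lake without VNNI) this prints `False`, matching the observation above.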
> > No, I use a Xeon Gold 6248 processor. Checked /proc/cpuinfo, 'avx512_vnni' is absent.
>
> Probably that's why the current CI fails, because I believe most of the GitHub Actions machines...
> Sorry for getting back to you late. I just merged my PR to improve the CIs: #526. Ideally the CI should be consistent now (skip ORT test if the...
> Thanks for the context! Could you please regenerate the test_data_set on a Linux machine with VNNI support? Then it should pass the CIs.

Sure, I'll try it.
> Thanks for updating the output.pb! But the updated one is still not reproducible on the CI machine, which has avx512 support, and the difference seems quite a little...

I...
Hi @Michael-Fuu, are you looking at this [self distillation example](https://github.com/intel/neural-compressor/blob/master/examples/pytorch/image_recognition/torchvision_models/self_distillation/eager/main.py)? As you can see in the code below, this example does not use layer names such as `layer1.0.conv1`...
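To see which layer names a given model actually exposes (so a distillation config can be adapted to them), one can enumerate them with PyTorch's `named_modules()`. A quick sketch with a toy model standing in for the torchvision one:

```python
import torch.nn as nn

# Toy stand-in for the real torchvision model; the technique is the same:
# named_modules() yields every submodule with its dotted attribute path,
# which is exactly the naming scheme (e.g. layer1.0.conv1) seen in ResNets.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))

layer_names = [name for name, _ in model.named_modules() if name]
print(layer_names)  # for this Sequential: ['0', '1', '2']
```

Running the same loop over `torchvision.models.resnet50()` would print the familiar `layer1.0.conv1`-style names.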
Hi @Yuan0320, thanks for using ITREX. Regarding `combine pretrained weights (WOQ) with LoRA adapter (FP32/16) for inference`, do you mean to add a LoRA adapter (FP32/16) on top of the...
> Hi @XinyuYe-Intel, thanks for the quick reply and insight, it makes sense. I initially meant the former case, as I want to keep the high precision in adapter to...
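The idea of keeping the adapter in full precision while the base weights stay quantized can be illustrated with a toy numeric sketch (this is not ITREX code, just the arithmetic): the base weight is stored as integers plus a scale, the rank-1 LoRA update `b * a` stays in FP32, and the effective weight is `dequant(w_q) + alpha * b * a`.

```python
# Toy single-output linear layer: quantized base weight + FP32 LoRA delta.
def woq_lora_forward(x, w_q, scale, a, b, alpha=1.0):
    # Dequantize the base weight and add the full-precision rank-1 update.
    w_eff = [wq * scale + alpha * b * ai for wq, ai in zip(w_q, a)]
    # Dot product of the input with the effective weight.
    return sum(xi * wi for xi, wi in zip(x, w_eff))

# w = [0.1, 0.2] stored as ints [1, 2] with scale 0.1;
# LoRA contributes b * a = 0.2 * [0.5, -0.5] = [0.1, -0.1].
print(woq_lora_forward([1.0, 2.0], [1, 2], 0.1, [0.5, -0.5], 0.2))
```

The adapter term never passes through the quantizer, which is what preserves its precision in this scheme.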
Hi @cerisara, thanks for catching this. Our API has changed a bit, but we haven't yet been able to update the usage in the README accordingly, sorry...