Ruonan Wang
Hi @ywang30intel , we have reproduced this error and have raised a PR to update our example script (removing `model = ipex.optimize(model.eval(), dtype="float16", inplace=True)`, as we have added most optimizations for...
I have tried with the latest transformers, and I still got the same error @Serizao 
The error is the same for `to(torch.device('xpu'))`. My version is `transformers 4.34.0.dev0`, and this issue only occurs in Jupyter Notebook.
This issue seems to be model-independent, as moving any data `.to('xpu')` will cause kernel death.
> What are printings of Jupyter Notebook backend in the terminal?

Nothing special is output; it seems it just restarts the kernel directly.

```bash
[C 2023-09-11 17:25:39.345 ServerApp] To access...
```
Got the same error when loading a model saved with `torch.jit.optimize_for_inference`:

```bash
File "C:\Users\ruonan\AppData\Local\miniconda3\envs\test\lib\site-packages\bigdl\nano\deps\ipex\ipex_inference_model.py", line 227, in _load
    model = torch.jit.load(checkpoint_path)
File "C:\Users\ruonan\AppData\Local\miniconda3\envs\test\lib\site-packages\torch\jit\_serialization.py", line 162, in load
    cpp_module = torch._C.import_ir_module(cu,...
```
Hi @tsantra , would you mind trying it again after `pip install accelerate==0.23.0`?
Hi @tsantra ,
1. Yes, it's supported on CPU; we will provide an official CPU example later.
2. After you get the merged model (for example `checkpoint-200-merged`), you can use...
Hi @tsantra , the QLoRA CPU example is updated here: https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/QLoRA-FineTuning
> @rnwang04 GPU finetuning suddenly stopped working and gave Seg Fault.

Hi @tsantra , have you ever run GPU finetuning successfully, or do you always meet this error? If you...