Jin, Qiao

13 comments by Jin, Qiao

> Sorry, on our Windows A770 machines the A770 is always the default XPU device, so we cannot reproduce this error. > > You can change `'xpu:1'` back to `'xpu'` and...
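The device-string change suggested above can be sketched like this (a minimal, hypothetical helper; on machines where the A770 is the only XPU, the indexed device `'xpu:1'` does not exist, so the plain `'xpu'` string should be used):

```python
def pick_device(has_second_xpu: bool) -> str:
    # Hypothetical helper: fall back to the default XPU device string
    # when no second Intel GPU is present on the machine.
    return 'xpu:1' if has_second_xpu else 'xpu'

device = pick_device(has_second_xpu=False)
# model = model.to(device)  # commented out: requires an Intel GPU to run
```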

Hi aoke79, we recently added the Phi-3 example for both CPU and GPU. Could you please try it to see if it works? Here's the link to the Phi-3 GPU example: [link](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3)....

Hi @JamieVC! Could you please try to reduce the max length of each text (e.g., to 120) and try again? ![image](https://github.com/intel-analytics/ipex-llm/assets/89779290/e8b38381-cb49-4713-ad94-6af8cd2f3358) Feel free to ask if there's any further problem. :)
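The suggestion above can be sketched as a simple pre-processing step (a hypothetical helper, not part of ipex-llm itself; 120 matches the length suggested in the comment):

```python
def truncate_texts(texts, max_len=120):
    """Clip each input text to max_len characters before embedding.
    Illustrative pre-processing step; the real limit may apply to
    tokens rather than characters depending on the embedding model."""
    return [t[:max_len] for t in texts]

short = truncate_texts(["a" * 300, "short text"])
```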

Hi zeminli! It looks like this is possibly caused by the GPU driver. Please update your GPU driver and try again. If it still crashes, please try to run [this script](https://github.com/intel-analytics/BigDL/blob/main/python/llm/scripts/env-check.bat)...

Hi zeminli, sorry that we didn't test bigdl-llm on this type of iGPU, so we couldn't reproduce this problem or offer a feasible solution. But you can still run bigdl-llm...

Hi @aitss2017! We've updated the [glm-4v example on GPU](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/HuggingFace/Multimodal/glm-4v). Could you please retry with the instructions in the latest example to see if the error still exists? Please feel free to ask...

Hi @tristan-k, we are now trying to reproduce this issue on a device with similar specifications. We will inform you as soon as possible if there is any progress. Please feel...

Hi @tristan-k, could you please try the instructions in the following link to see if it works? https://github.com/intel-analytics/ipex-llm/issues/11568#issuecomment-2227157685 Please feel free to ask if there are any further problems : )

> @JinBridger I already did that. It did not make any difference, as previously mentioned in another [comment](https://github.com/intel-analytics/ipex-llm/issues/11521#issuecomment-2227419727) of mine. Hi @tristan-k, could you please try to skip installing `intel-i915-dkms`...

Hi @js333031, we noticed that you put the embedding model on `xpu` and the LLM on `cpu`. However, Langchain-Chatchat currently does not support putting the embedding model and the LLM on different devices. Could...
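The constraint described above can be sketched with an illustrative config check (the key names here are hypothetical, not Langchain-Chatchat's actual schema):

```python
# Illustrative config: since the embedding model and the LLM cannot be
# split across devices, both device entries must match.
config = {
    "embedding_device": "xpu",
    "llm_device": "xpu",
}

def devices_consistent(cfg: dict) -> bool:
    # Hypothetical check mirroring the limitation described above.
    return cfg["embedding_device"] == cfg["llm_device"]
```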