wluo1007

Results 4 comments of wluo1007

Tried Docker but still got an error; do you have plans for iGPU support? root@user-Meteor-Lake-Client-Platform:/llm# python vllm_offline_inference.py /usr/local/lib/python3.11/dist-packages/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please...

Hi, thanks for the reply. The previous environment is no longer available, so I installed the recent one and tried both chatglm3-6b and qwen2-7b-instruct; I got the same error...

Hi, I tried Qwen2-1.5B-Instruct and chatglm3-6b, and both worked. Qwen2-7B-Instruct got stuck while loading the model. I had tried Qwen2-7B before with ipex-llm (not with vLLM), and it worked fine. Is the size limit thing only...

Thanks for the quick response; after export BIGDL_IMPORT_IPEX=0, it worked.
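The workaround above can be sketched as a short shell snippet: setting the environment variable before launching the script so the BigDL-side IPEX auto-import is skipped. The script name is taken from the earlier comment in this thread; whether 0 fully disables the import depends on your ipex-llm version, so treat this as an assumption to verify against your setup.

```shell
# Disable the BigDL auto-import of IPEX (workaround from this thread).
export BIGDL_IMPORT_IPEX=0

# Then run the inference script as before, e.g.:
# python vllm_offline_inference.py
```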