
vLLM offline_inference.py failed to run on CPU inference

Open eugeooi opened this issue 1 year ago • 1 comments

Failed to run `python offline_inference.py` from the linked example for vLLM offline inference on CPU. It seems that `llm.py` was removed in a previous version.

eugeooi avatar May 16 '24 10:05 eugeooi

Hi, the vLLM CPU backend has been removed for now. Support will be added back later. Sorry for the inconvenience.

gc-fu avatar May 17 '24 06:05 gc-fu