Kai Huang
Hi, we have updated the installation command in our documentation; sorry for the confusion. Check this page: https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#id3 Try `pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/` instead :) Feel...
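As a follow-up, here is a minimal sketch for confirming the package imports cleanly after running the command above; the helper `check_import` is our own illustration, not part of ipex-llm.

```python
import importlib


def check_import(module: str) -> bool:
    """Return True if `module` can be imported, False otherwise."""
    try:
        importlib.import_module(module)
        return True
    except ImportError:
        return False


# After a successful install, check_import("ipex_llm") should return True;
# if it returns False, the installation (or the Python environment) is the
# first thing to re-check before filing an issue.
```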
Sorry, we haven't tried PyInstaller on XPU yet. We will notify you once we have verified it.
Need to modify the README for xpu-smi.exe and the bat file as well.
Please also paste a sample output of the current check result.
Another question: were the evaluation results in Section 4 of the RAFT paper (https://arxiv.org/abs/2403.10131) produced using this script? https://github.com/ShishirPatil/gorilla/blob/main/gorilla/eval/eval-scripts/ast_eval_th.py
Hi, this issue is likely caused by the versions of your dependencies. You may try `trl==0.11.0`, which is the version we recommend and have tested. Similar issue: https://github.com/intel/ipex-llm/issues/13087#issue-3001412276
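To double-check a pin like this locally, here is a small sketch; the helper `check_pin` is hypothetical and not part of ipex-llm or trl, it just queries installed package metadata.

```python
from importlib.metadata import PackageNotFoundError, version


def check_pin(package: str, expected: str) -> str:
    """Compare the installed version of `package` against `expected`."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        return f"{package} is not installed"
    if installed == expected:
        return "ok"
    return f"version mismatch: {installed} != {expected}"


# e.g. check_pin("trl", "0.11.0") returns "ok" when the recommended
# version is installed, and names the installed version otherwise.
```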
Synced offline; the error is encountered when running torch operations on XPU. The user will try running IPEX on this machine to further check the environment.
> Do the docs not auto-build? Seems the link above says something different than the [official docs](https://testbigdldocshane.readthedocs.io/en/docs-demo/doc/LLM/Quickstart/ollama_quickstart.html)...maybe when the project name changed that was left running? Or do I have...
In the run_transformers_int4_gpu implementation, the 1k-input case uses 2048.txt as the input file, and Baichuan2-13B generates 98 tokens for this input, which causes OOM on the second trial [if...
OSError: [WinError 127] after running `from ipex_llm.transformers import AutoModel, AutoModelForCausalLM`
Hi, it seems there is something wrong with your IPEX installation. Could you please provide more information: - The command you used to install `ipex-llm`. You don't need to install IPEX manually; it will...
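When reporting this kind of import error, a short environment report helps. Below is a sketch of one (the `env_report` helper is our own, not an ipex-llm utility); it degrades gracefully if `torch` itself fails to import, which is useful here since the WinError occurs at import time.

```python
import platform
import sys


def env_report() -> dict:
    """Collect basic environment details to attach to the issue."""
    info = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
    }
    try:
        import torch  # may raise the same error on a broken install
        info["torch"] = torch.__version__
    except Exception as exc:
        info["torch"] = f"import failed: {exc!r}"
    return info


if __name__ == "__main__":
    for key, value in env_report().items():
        print(f"{key}: {value}")
```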