llama runner process has terminated exit status 1
Whether I run `ollama run qwen2:7b` or use curl, both return the error `llama runner process has terminated exit status 1`.
Reproduction steps:

conda create -n llm-cpp python=3.11
conda activate llm-cpp
pip install --pre --upgrade ipex-llm[cpp]
mkdir llama-cpp
cd llama-cpp
init-llama-cpp.bat
init-ollama.bat
set OLLAMA_NUM_GPU=999
set no_proxy=localhost,127.0.0.1
set ZES_ENABLE_SYSMAN=1
set SYCL_CACHE_PERSISTENT=1
ollama serve
This was run with administrator privileges. I then opened another window, also as administrator, ran `ollama run qwen2:7b`, and the error appeared. The CPU is shown in the screenshot above; both the i7 and i5 laptops fail.
After reinstalling, the error changed again.
hi @dayskk, could you please run the ENV-Check script in https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/scripts and reply to us with the results?
After upgrading the MTL iGPU driver to 5762 and installing oneAPI 2024.2, it works.
## open miniforge prompt as administrator
conda activate llm2
cd C:\Program Files (x86)\Intel\oneAPI
setvars.bat
cd C:\Users\Admin\Documents\ollama2
init-ollama.bat
set OLLAMA_NUM_GPU=999
set no_proxy=localhost,127.0.0.1
set ZES_ENABLE_SYSMAN=1
ollama serve
## open another miniforge prompt as administrator
conda activate llm2
cd C:\Program Files (x86)\Intel\oneAPI
setvars.bat
cd C:\Users\Admin\Documents\ollama2
ollama pull qwen2:7b
curl http://localhost:11434/api/generate -d "{\"model\": \"qwen2:7b\",\"prompt\": \"Why is the sky blue?\",\"stream\": false}"
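For anyone scripting this instead of using curl, here is a minimal Python equivalent of the request above, using only the standard library and assuming the same local endpoint (`http://localhost:11434/api/generate`):

```python
import json
import urllib.request

# Same payload as the curl command above.
payload = {"model": "qwen2:7b", "prompt": "Why is the sky blue?", "stream": False}

def generate(payload, url="http://localhost:11434/api/generate"):
    """POST the payload to the local ollama server and return the parsed JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # With stream=False the full answer arrives in the "response" field.
    print(generate(payload)["response"])
```

If the runner crashes with `exit status 1`, this call returns an HTTP 500 from the server rather than a model response.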
Installing oneAPI: open a cmd window and install it with pip. Installed this way, the packages take about 1.4 GB, saving 20+ GB compared to the full installer.

mkdir intel/dpct
set PYTHONUSERBASE=c:/intel/dpct
pip install --user mkl-dpcpp==2024.2 onednn==2024.2 dpcpp-cpp-rt==2024.2
Then, in the conda environment window:

conda activate your_env
set PATH=c:\intel\dpct\Library\bin;%PATH%
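The trick above only works if the pip-installed runtime directory actually ends up on PATH in the window that launches ollama. A small sanity check (my own helper, not part of ipex-llm or ollama) for that:

```python
import os

# Directory from the steps above where pip placed the oneAPI runtime DLLs.
RUNTIME_DIR = r"c:\intel\dpct\Library\bin"

def on_path(directory, path, sep=";"):
    """Return True if `directory` is one of the entries in a PATH-style string."""
    wanted = os.path.normcase(directory.rstrip("\\/"))
    entries = [os.path.normcase(e.rstrip("\\/")) for e in path.split(sep) if e]
    return wanted in entries

if __name__ == "__main__":
    if not on_path(RUNTIME_DIR, os.environ.get("PATH", ""), os.pathsep):
        print(f"warning: {RUNTIME_DIR} is not on PATH; set it before running ollama serve")
```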
This method does not work.