Neo Zhang Jianyu
@aahouzi Is your issue still present with the latest code?
@aahouzi Good, the result above confirms it works well. Last week, the bug in GPU selection was fixed. Please use the latest code. To set the GPU, please refer to the script:...
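The exact script referred to above is truncated, but a minimal sketch of typical llama.cpp SYCL device selection looks like the following (the oneAPI install path, binary names, and the `-mg`/`-sm` flags are assumptions based on common llama.cpp usage, not the script itself):

```shell
# Initialize the oneAPI environment (default installer location assumed)
source /opt/intel/oneapi/setvars.sh

# List the SYCL devices llama.cpp can see, to find the GPU's device index
./build/bin/ls-sycl-device

# Run inference pinned to one GPU: -mg selects the main GPU by index,
# -sm none disables splitting across multiple devices (model path is an example)
./build/bin/main -m models/llama-2-7b.Q4_0.gguf -mg 0 -sm none -p "Hello"
```

The key point in the comment is that after the fix, the selected GPU index is actually honored, so pinning to a single device with flags like the above behaves as expected.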
@fakezeta I guess https://github.com/mudler/LocalAI uses a very old ggml for the SYCL library. This issue has been fixed in llama.cpp. Maybe you could try llama.cpp on the iGPU UHD 770; refer to...
@AppleJunJiang Please use English in the title and description. That will help developers understand your issue.
@AlexFierro9 1. Could you check whether oneAPI is enabled correctly? Run: `source /opt/intel/oneapi/setvars.sh` 2. If yes, please check that MKL is present in the log, like: mkl -- latest 3. If...
@DDXDB Currently, the SYCL backend doesn't support iq2_xxs. Please try another model file, like https://huggingface.co/TheBloke/Llama-2-7B-GGUF/blob/main/llama-2-7b.Q4_0.gguf If you try llama.cpp for SYCL, please follow the [guide](https://github.com/ggerganov/llama.cpp/blob/master/README-sycl.md) to verify your hardware/software...
@DDXDB It looks like it can't detect the device correctly. Could you try with the latest code and run `./build/bin/ls-sycl-device.exe`?
@DDXDB It's great to see your result. Yes, it supports multiple GPUs by default, but only GPUs with the same max compute units are used together. If you have an iGPU + dGPU, it...
@raj-ritu17 1. Could you share the whole log from the executed command? 2. Could you run `pip list` and share the output? Thank you!
@raj-ritu17 Could you provide the info requested above? Thank you!