PharahAmari

Results 2 comments of PharahAmari

> I tried this on my Intel integrated GPU. I was able to build and run llama.cpp with Intel GPU support without too many problems by following this tutorial: https://github.com/ggerganov/llama.cpp/blob/master/README-sycl.md ...
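
For reference, a minimal sketch of the SYCL build steps described in that tutorial, assuming a Linux host with the Intel oneAPI Base Toolkit installed at its default path. The exact CMake option and binary name have changed across llama.cpp revisions (older ones used `LLAMA_SYCL` and a `main` binary), so treat this as illustrative rather than exact:

```sh
# Load the oneAPI compilers and runtime (default install location assumed).
source /opt/intel/oneapi/setvars.sh

# Configure llama.cpp with the SYCL backend, using Intel's icx/icpx compilers.
cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx

# Build the release binaries.
cmake --build build --config Release -j

# Run a model with layers offloaded to the Intel GPU (model path is illustrative).
./build/bin/llama-cli -m models/model.gguf -ngl 99 -p "hello"
```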

> See `time=2025-04-24T11:13:00.708+08:00 level=INFO source=routes.go:1298 msg="Listening on 127.0.0.1:11434 (version 0.6.5)"`, I believe you are running the community edition of Ollama.

Exactly. But does it matter?