jpmiller25
I have this same issue, reported here: [https://github.com/LGUG2Z/komorebi/issues/645](https://github.com/LGUG2Z/komorebi/issues/645) As a workaround, I added a link to the autohotkey script in the startup folder so it starts independently. Someone suggested making...
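A minimal sketch of that workaround in PowerShell, assuming a hypothetical script path (`C:\path\to\komorebi.ahk` is a placeholder; the shortcut name is arbitrary):

```shell
# PowerShell sketch: drop a shortcut to the AHK script into the per-user
# Startup folder so it launches at login, independently of komorebi.
$startup = [Environment]::GetFolderPath('Startup')
$shell   = New-Object -ComObject WScript.Shell
$lnk     = $shell.CreateShortcut((Join-Path $startup 'komorebi-ahk.lnk'))
$lnk.TargetPath = 'C:\path\to\komorebi.ahk'   # hypothetical path; adjust
$lnk.Save()
```

Any method that places a shortcut in `shell:startup` does the same job; the script part is just a way to avoid clicking through Explorer.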
@sbonner0 does your processor support AVX2? It's looking to me like my issue is that my Xeon CPUs lack AVX support. There's been discussion and effort on ollama to be able...
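A quick way to check is to look at the CPU flags. Here's a small sketch that parses `/proc/cpuinfo` on Linux (the parsing itself is plain string handling, so it also works on a captured cpuinfo dump from another machine):

```python
# Sketch: check whether the host CPU advertises AVX/AVX2, since prebuilt
# ollama binaries may assume those instruction sets are present.

def cpu_flags(cpuinfo_text: str) -> set[str]:
    """Collect the instruction-set flags listed in /proc/cpuinfo text."""
    flags: set[str] = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # Line looks like: "flags  : fpu vme ... avx avx2 ..."
            flags.update(line.split(":", 1)[1].split())
    return flags

if __name__ == "__main__":
    with open("/proc/cpuinfo") as f:
        flags = cpu_flags(f.read())
    for isa in ("avx", "avx2"):
        print(f"{isa}: {'yes' if isa in flags else 'no'}")
```

On Windows, tools like CPU-Z or `coreinfo` report the same flags.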
I'm using the ipex ollama portable version now, which I think is version 0.5.4. Some people are saying they were able to successfully run with GPU and no AVX support...
https://github.com/ollama/ollama/issues/7622#issuecomment-2524637378
https://github.com/ollama/ollama/issues/2187
Well now I'm kind of doubting it's an AVX issue for me. I'm getting the same SIGBUS error using the ollama-ipex portable build and ollama on ipex-llama-cpp. I've also tried both binaries...
It seems like our issues could be related: the SIGBUS errors look very similar, and they happen at exactly the same moment in the model loading process. I'm not sure...
Looks like you are right. This issue resolved the SIGBUS errors by enabling Resizable BAR (ReBAR): https://github.com/intel/ipex-llm/issues/10955#issuecomment-2100967354
llama.cpp works when built with Vulkan! I didn't try the kovasky blog method; I just cloned llama.cpp, built it with Vulkan support, and it's working, at least in llama-cli
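For reference, a sketch of that build, assuming a Linux box with cmake and the Vulkan SDK (headers and loader) installed; the model path is a placeholder:

```shell
# Clone and build llama.cpp with the Vulkan backend enabled.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON        # enable the Vulkan backend
cmake --build build --config Release -j

# Smoke test: -ngl 99 asks it to offload as many layers as possible to the GPU.
./build/bin/llama-cli -m /path/to/model.gguf -p "hello" -ngl 99
```

If the Vulkan backend was picked up, the startup log should list your GPU as a Vulkan device before loading the model.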
@sbonner0 After some experimentation I'm using https://github.com/kth8/llama-server-vulkan/ with great success, and integrating it with open-webui. There's not nearly as much configurability through the UI, in case that's important to you,...
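A sketch of the integration, assuming the server wraps llama.cpp's `llama-server` and so exposes its OpenAI-compatible API on the default port 8080 (adjust host/port to your setup):

```shell
# Verify the OpenAI-compatible endpoint that Open WebUI will talk to:
curl http://localhost:8080/v1/models

# Then in Open WebUI: Admin Settings -> Connections -> add an
# OpenAI-compatible API connection with base URL http://localhost:8080/v1
# (API key can be any placeholder if the server doesn't enforce one).
```

Since the configuration lives in the server's launch flags rather than the UI, most tuning (context size, GPU layers, etc.) happens where you start the container.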
I just tested this again on the latest 1.94.1 (app build 137). The problem still exists.