Keith Kacsh

3 comments by Keith Kacsh

No, neither through the containers (even after doing updates/upgrades) nor by building locally from source. It ends up in that same "no kernel found" error. llama.cpp built directly from source DOES work, though. Figure...

I'll try the llama.cpp container directly. llama.cpp from source did work on the same machine with the same model and ran fairly well. I stopped trying to get LocalAI to run in...

To follow up: I could never get LocalAI to function correctly with this card, even when compiling from source. llama.cpp I can get working by updating the Intel oneAPI kit/drivers and recompiling....
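
For anyone hitting the same wall, here is a minimal sketch of the build-from-source route described above, assuming a Linux box with the Intel oneAPI Base Toolkit installed at its default /opt/intel/oneapi path (the GGML_SYCL flag matches llama.cpp's SYCL backend docs, but flag names have changed between versions, so check the docs for your checkout):

```bash
# Load the oneAPI environment (icx/icpx compilers, Level Zero runtime, etc.).
source /opt/intel/oneapi/setvars.sh

# Configure llama.cpp with the SYCL (Intel GPU) backend,
# using the oneAPI compilers.
cmake -B build -DGGML_SYCL=ON \
      -DCMAKE_C_COMPILER=icx \
      -DCMAKE_CXX_COMPILER=icpx

# Build in Release mode.
cmake --build build --config Release -j
```

If `sycl-ls` (shipped with oneAPI) shows no Level Zero GPU device before you build, updating the GPU drivers and the oneAPI kit first, as mentioned above, is the likely fix for the "no kernel found" symptom.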