bitterspeed
@beshkenadze I've tried building with `npx node-llama-cpp build --gpu false` and calling it with `const llama = await getLlama('lastBuild');`, but it still crashes upon model load. I'm running arm64 (m1...
Edit: found [this issue in llama.cpp](https://github.com/ggerganov/llama.cpp/issues/7130): Hello, while I confirm this fixes BGE models on macOS, it causes a crash on Windows. Running the test code above with `bge-large-en-v1.5-q4_k_m.gguf` causes...
On Mac (mac-arm64-metal): 3.0 beta 18 + Electron (with Electron Forge + Vite) + BGE models run in Electron development mode (`npm run start`), but there is a failure with no error...
@giladgd Thanks. I have run that command, and while the Vulkan error above no longer shows up, there is now a crash at runtime (with no error message) when...
Amazing. Thank you for the guidance, works perfectly!