
ggml webgpu: add support for emscripten builds


This PR builds on and supersedes https://github.com/ggml-org/llama.cpp/pull/15826 from @ngxson.

  • Adds __EMSCRIPTEN__ preprocessor conditionals where needed for compilation (this included some OS-specific handling in common/); see the first sketch after this list.
  • Adds Emscripten build flags for 64-bit memory and memory growth, which I found to be required for the backend operations to pass when running in the browser (Chrome on an M3 system); see the second sketch after this list.
  • Also adds a GitHub workflow that ensures Emscripten builds compile (at least test-backend-ops), so that future code additions don't accidentally break them; see the third sketch after this list.
  • Disables Dawn/native-only WebGPU features, like experimental support for subgroup matrices, when building for the browser; see the fourth sketch after this list.
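
For context, the guard pattern is roughly the following. This is a minimal sketch, not the PR's actual diff; the helper name and strings are invented for illustration:

```cpp
// Minimal sketch of the __EMSCRIPTEN__ guard pattern; names are illustrative.
#include <string>

static std::string runtime_name() {
#ifdef __EMSCRIPTEN__
    // Browser/wasm builds lack the native OS facilities used elsewhere in
    // common/, so take a portable code path here.
    return "wasm/emscripten";
#elif defined(_WIN32)
    return "windows";
#else
    return "posix";
#endif
}
```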
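As a rough idea of what such a build configuration looks like: -sMEMORY64 and -sALLOW_MEMORY_GROWTH are standard Emscripten settings, but the CMake option names here are assumptions, not necessarily the PR's exact invocation. Note that -sMEMORY64 has to be passed at both compile and link time:

```sh
# Hypothetical configure/build commands; exact options may differ from the PR.
emcmake cmake -B build-web \
  -DGGML_WEBGPU=ON \
  -DCMAKE_C_FLAGS="-sMEMORY64=1" \
  -DCMAKE_CXX_FLAGS="-sMEMORY64=1" \
  -DCMAKE_EXE_LINKER_FLAGS="-sMEMORY64=1 -sALLOW_MEMORY_GROWTH=1"
cmake --build build-web --target test-backend-ops
```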
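A hedged sketch of what such a compile-only CI workflow could look like; the action names and steps are assumptions, and the PR's actual workflow file may differ:

```yaml
# Hypothetical workflow sketch, not the PR's actual file.
name: emscripten-build
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: mymindstorm/setup-emsdk@v14   # installs the Emscripten SDK
      - name: Build test-backend-ops
        run: |
          emcmake cmake -B build-web -DGGML_WEBGPU=ON
          cmake --build build-web --target test-backend-ops
```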
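And the feature gating is conceptually along these lines. This is a sketch under assumptions: the helper name and signature are invented, not the backend's real API:

```cpp
// Hypothetical sketch of gating a Dawn/native-only feature; names invented.
static bool webgpu_use_subgroup_matrix(bool adapter_supports_it) {
#ifdef __EMSCRIPTEN__
    // Browser WebGPU does not expose Dawn's experimental subgroup-matrix
    // feature, so never request it in Emscripten builds.
    (void) adapter_supports_it;
    return false;
#else
    return adapter_supports_it;
#endif
}
```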

reeselevine · Nov 12 '25