transformers.js
Excessive memory consumption
System Info
- M1 Pro, 16 GB
- The Phi-3 demo presumably uses version 2.17.1
Environment/Platform
- [X] Website/web-app
- [ ] Browser extension
- [ ] Server-side (e.g., Node.js, Deno, Bun)
- [ ] Desktop app (e.g., Electron)
- [ ] Other (e.g., VSCode extension)
Description
For the latest Phi-3 demo, Chrome uses 5.31 GB for the Renderer process and 4.16 GB for the GPU process, totaling almost 10 GB while running a ~2 GB model.
After the first inference, memory consumption jumps above 12 GB. That can't be normal.
Reproduction
- Open the demo page https://huggingface.co/spaces/Xenova/experimental-phi3-webgpu
- Click on "Load model"
- Check memory consumption
> 5.31 GB for the Renderer process
Can you confirm you do not have any other tabs open? I can't see how this could be related to the application (not much is being rendered).
> 4.16 GB for the GPU process
This makes sense since it's (most likely) running in fp16 mode.
@xenova I can confirm:
- no other tabs open
- no browser extensions
One empty tab opened:
Navigated to https://huggingface.co/spaces/Xenova/experimental-phi3-webgpu and loaded the model:
> This makes sense since it's (most likely) running in fp16 mode.
~~Can we make it run in lower precision if we use q4 quantization?~~ (We can't; ONNX doesn't support q4 execution yet)
To avoid any confusion, this is the downloaded model:
- /Xenova/Phi-3-mini-4k-instruct_fp16/resolve/main/onnx/model_q4.onnx (838 MB)
- /Xenova/Phi-3-mini-4k-instruct_fp16/resolve/main/onnx/model_q4.onnx_data (1454 MB)
WebGPU currently only supports 16-bit and 32-bit modes.
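This lines up with a quick back-of-the-envelope calculation: the q4 weights are compact on disk, but if the runtime only handles 16/32-bit tensors, they get dequantized to fp16 in memory. A rough sketch, assuming Phi-3-mini's ~3.8B parameters (the parameter count is an assumption, not something stated in this thread):

```javascript
// Rough arithmetic: a q4 download dequantized to fp16 grows ~4x in memory.
const params = 3.8e9;               // approximate Phi-3-mini parameter count (assumed)
const q4GB   = params * 0.5 / 1e9;  // 4 bits = 0.5 bytes per weight on disk
const fp16GB = params * 2 / 1e9;    // 2 bytes per weight once dequantized

console.log(`q4 on disk:     ${q4GB.toFixed(1)} GB`);   // ≈ 1.9 GB
console.log(`fp16 in memory: ${fp16GB.toFixed(1)} GB`); // ≈ 7.6 GB
```

The ~1.9 GB figure roughly matches the 838 MB + 1454 MB download above, and the fp16 figure (plus KV cache and activations) is in the right ballpark for the multi-gigabyte GPU-process footprint being reported.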
Same here.