transformers.js
Does `WebGPU` Truly Accelerate Inference Time?
Question
Recently, I've been using transformers.js extensively to load transformer models, and kudos to the team for this wonderful library! Specifically, I've been experimenting with version 2.15.0 of transformers.js.
Even though the model runs on the WebAssembly backend, I've noticed some slowness in inference. In an attempt to address this, I experimented with WebGPU inference using the v3 branch. However, the inference time did not meet my expectations.
Is it possible for WebGPU to significantly accelerate inference?
- Models used: Xenova/Whisper-tiny.en, Xenova/all-MiniLM-L6-v2
- Quantization: quantized model
- transformers.js version: 3.0.0-alpha.0
- executionProviders: ['webgpu']
- Hardware: MacBook M1 Pro (10-core CPU, 16-core GPU)
- RAM: 16GB
Is there any way to accelerate inference?
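For context, the setup described above corresponds roughly to the following. This is a minimal sketch using the transformers.js v3 alpha API; the `device` option is how the v3 branch selects WebGPU, though exact option names may differ between alpha releases.

```js
import { pipeline } from '@xenova/transformers';

// Load the embedding model on the WebGPU backend (v3 branch).
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', {
  device: 'webgpu',
});

// Compute a mean-pooled, normalized sentence embedding.
const output = await extractor('Hello world', { pooling: 'mean', normalize: true });
console.log(output.data);
```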
Encoder-decoder models are still a work in progress, but the BERT-based embedding models work very well! For example, I get a >100x improvement with all-MiniLM-L6-v2.
Try it out yourself: https://huggingface.co/spaces/Xenova/webgpu-embedding-benchmark
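A rough way to reproduce this comparison locally is sketched below, assuming the v3 `device` option accepts both 'wasm' and 'webgpu'; the linked Space performs the same measurement more rigorously.

```js
import { pipeline } from '@xenova/transformers';

// Average per-run latency for a given backend ('wasm' or 'webgpu').
async function benchmark(device) {
  const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', { device });

  // Warm-up run so session init / shader compilation isn't counted.
  await extractor('warm up', { pooling: 'mean', normalize: true });

  const start = performance.now();
  for (let i = 0; i < 100; ++i) {
    await extractor('The quick brown fox jumps over the lazy dog.', { pooling: 'mean', normalize: true });
  }
  return (performance.now() - start) / 100;
}

console.log('wasm  :', await benchmark('wasm'), 'ms/run');
console.log('webgpu:', await benchmark('webgpu'), 'ms/run');
```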
Can Node.js also benefit from this speedup?
> Can Node.js also benefit from this speedup?

I'm not sure whether Node.js can benefit from this speedup, but it's possible that Deno can.
> Can Node.js also benefit from this speedup?

onnxruntime-node doesn't support WebGPU, but it does support DirectML (Windows) and CUDA (Linux) in the official prebuilt binaries.
> onnxruntime-node doesn't support WebGPU, but it does support DirectML (Windows) and CUDA (Linux).

But there is no device setting (e.g., cuda) for transformers.js.
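For reference, onnxruntime-node itself does let you choose an execution provider when you bypass transformers.js and run an exported ONNX model directly. The sketch below assumes a CUDA-enabled (Linux) or DirectML-enabled (Windows) build and uses placeholder input names and a placeholder model path; real models will have different inputs.

```js
import * as ort from 'onnxruntime-node';

// Prefer CUDA (use 'dml' on Windows); fall back to CPU if unavailable.
const session = await ort.InferenceSession.create('./model.onnx', {
  executionProviders: ['cuda', 'cpu'],
});

// Input names, dtypes, and shapes depend on the exported model; these are placeholders.
const feeds = {
  input_ids: new ort.Tensor('int64', BigInt64Array.from([101n, 7592n, 102n]), [1, 3]),
  attention_mask: new ort.Tensor('int64', BigInt64Array.from([1n, 1n, 1n]), [1, 3]),
};

const results = await session.run(feeds);
console.log(Object.keys(results));
```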
@xenova When will I be able to test an encoder-decoder model with WebGPU? I can't wait; I'm very excited to try it as soon as possible.
Just tried it out, and wow, it's a huge upgrade! When are you thinking of launching it?