Joshua Lochner
> fresh windows 10 home bootcamp install on an intel mac I would assume this is the issue, and might have something to do with how `onnxruntime-node` selects which .node...
Hi there 👋 This is indeed an interesting idea! Although it is definitely out of scope for this library (as its main purpose is to provide a JS equivalent to the Python...
> > Is it within scope to implement a webGPU accelerated version of Whisper? > > > > As I understand, it's simply a matter of changing the Execution provider...
Yes, you are correct. WebGPU would need to be available in your browser, as onnxruntime just uses the API provided by the browser. That said, you might not have to...
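As a rough sketch of what switching the execution provider involves (this is not code from the thread; the model URL is a placeholder, and the exact onnxruntime-web version determines whether WebGPU is available), requesting the WebGPU provider with a WASM fallback might look like:

```javascript
// Sketch: asking onnxruntime-web for the WebGPU execution provider,
// falling back to WASM when the browser does not expose WebGPU.
// `modelUrl` is a hypothetical placeholder for an ONNX model file.
import * as ort from 'onnxruntime-web';

async function createSession(modelUrl) {
  // `navigator.gpu` is only defined in browsers with WebGPU enabled.
  const executionProviders = globalThis.navigator?.gpu ? ['webgpu'] : ['wasm'];
  return ort.InferenceSession.create(modelUrl, { executionProviders });
}
```

The key point from the comment above is that this only selects which backend onnxruntime asks the browser for; the browser itself must support WebGPU for the `'webgpu'` provider to work.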
Hi there 👋 The reason for this is that we haven't yet added this functionality 😅 (so, less of a bug and more of a feature request). The reason for...
Hi there 👋 Can you share the code you used that produces this error? Also, have you possibly uploaded the models to the HF Hub? If so, I can do...
Thanks for the additional context. Unfortunately, I am unable to reproduce the issue. Which version of transformers.js are you using? Running the following code: ```js import {pipeline} from '@xenova/transformers'; const...
Odd indeed, as neither of those should be an issue. Would you mind putting together a minimal repository/application for me to inspect in closer detail?
Hi there 👋 This is because webpack is trying to bundle the `*.node` files, but this is not necessary. Others have had this issue in the past; see previous issues...
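One common way to stop webpack from pulling native `*.node` addons into the bundle is to declare the native package as an external, so it is resolved at runtime via `require` instead of being bundled. A minimal sketch, assuming `onnxruntime-node` is the package involved (adapt to your own config):

```javascript
// webpack.config.js (sketch): keep the native onnxruntime-node addon
// out of the bundle by marking it as a CommonJS external.
module.exports = {
  // ...your existing entry/output/loaders config...
  target: 'node',
  externals: {
    // Resolved with require('onnxruntime-node') at runtime rather than bundled.
    'onnxruntime-node': 'commonjs onnxruntime-node',
  },
};
```

With this in place, webpack no longer tries to parse the binary `.node` files, which is typically what produces the bundling error described in the linked issues.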
Sure, here's example usage (adapted from the model card, and indeed the logits match the python version): ```js import { BertTokenizer, BertForMaskedLM } from "@xenova/transformers"; const tokenizer = await BertTokenizer.from_pretrained("Xenova/macbert4csc-base-chinese");...