Joshua Lochner
Sure, we use [Optimum](https://github.com/huggingface/optimum?tab=readme-ov-file#onnx--onnx-runtime), and you can find additional information there. You can also use our helper [conversion script](https://github.com/xenova/transformers.js/blob/main/scripts/convert.py), which handles conversion and quantization.
Thanks! 🤗 Would you mind benchmarking/comparing your code with https://www.npmjs.com/package/audiobuffer-to-wav, which I used in a [demo](https://github.com/xenova/transformers.js/blob/8804c36591d11d8456788d1bb4b16489121b3be2/examples/text-to-speech-client/src/utils.js) a few months ago? Also, at the moment, we only support single-channel audio, but...
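For reference, here is a minimal sketch of what such an encoder does: packing a single-channel `Float32Array` of PCM samples into a 16-bit WAV buffer. This is an illustrative standalone function, not the API of audiobuffer-to-wav or of the PR under discussion.

```javascript
// Sketch: encode single-channel float samples in [-1, 1] as a 16-bit PCM WAV file.
// `samples` (Float32Array) and `sampleRate` (Hz) are assumed inputs.
function encodeWav(samples, sampleRate) {
  const numChannels = 1;
  const bytesPerSample = 2; // 16-bit PCM
  const dataSize = samples.length * bytesPerSample;
  const buffer = new ArrayBuffer(44 + dataSize); // 44-byte RIFF/WAVE header
  const view = new DataView(buffer);

  const writeString = (offset, str) => {
    for (let i = 0; i < str.length; ++i) view.setUint8(offset + i, str.charCodeAt(i));
  };

  writeString(0, 'RIFF');
  view.setUint32(4, 36 + dataSize, true);  // remaining chunk size
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);            // fmt sub-chunk size
  view.setUint16(20, 1, true);             // audio format: PCM
  view.setUint16(22, numChannels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * numChannels * bytesPerSample, true); // byte rate
  view.setUint16(32, numChannels * bytesPerSample, true);              // block align
  view.setUint16(34, 16, true);            // bits per sample
  writeString(36, 'data');
  view.setUint32(40, dataSize, true);

  // Clamp each float to [-1, 1] and convert to a signed 16-bit integer.
  for (let i = 0; i < samples.length; ++i) {
    const s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(44 + i * bytesPerSample, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
  }
  return buffer;
}
```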
Thanks again! Just letting you know this PR is marked for the next release :)
Hi there 👋 I definitely think an equivalent `TextStreamer` class would be a great addition to the library! If someone in the community would like to contribute this, it...
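For anyone interested in picking this up, here is a rough sketch of what such a class could look like, modeled on the Python transformers `TextStreamer`. The constructor shape, `put`/`end` method names, and `decode` callback are assumptions for illustration, not the library's final interface.

```javascript
// Sketch of a TextStreamer-like class: accumulate token ids and emit only the
// newly decoded portion of text after each token (so partial multi-token
// characters are not printed twice).
class TextStreamer {
  constructor(decode, { onText = (t) => process.stdout.write(t) } = {}) {
    this.decode = decode;     // assumed: maps an array of token ids to a string
    this.onText = onText;     // called with each newly decoded text fragment
    this.tokens = [];
    this.printedLength = 0;
  }

  // Called once per generated token id.
  put(tokenId) {
    this.tokens.push(tokenId);
    const text = this.decode(this.tokens);
    if (text.length > this.printedLength) {
      this.onText(text.slice(this.printedLength)); // emit only the new suffix
      this.printedLength = text.length;
    }
  }

  // Called when generation finishes; reset internal state.
  end() {
    this.tokens = [];
    this.printedLength = 0;
  }
}
```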
I think I should be able to just remove that import (since we don't support Node.js < 18 anymore).
https://github.com/xenova/transformers.js/pull/752 should fix this.
@customautosys The default Vite settings should be able to ignore these by looking at the `browser` field in `package.json`. Could you provide more information about your environment? See here for an...
Done in https://github.com/xenova/transformers.js/pull/772
Hi there 👋 Thanks for the report! Luckily, we already support the [ByteLevel](https://github.com/xenova/transformers.js/blob/8bb8c5a33c39aaf33eca286c0c271ed60a94e0da/src/tokenizers.js#L1726) and [TemplateProcessing](https://github.com/xenova/transformers.js/blob/8bb8c5a33c39aaf33eca286c0c271ed60a94e0da/src/tokenizers.js#L1674) post-processors, so the only thing needed is to implement the Sequence post-processor. Similarly, we already...
No worries! It's super simple, so I'll add it soon. Thanks again for reporting!