flatsiedatsie
> Using the HTML provided in the README results in an error (see #87)
>
> ```
> import { ink } from 'https://esm.sh/[email protected]';
> ink(document.getElementById('editor')!);
> ```
>
> ...
Wonderful, thank you! Would it perhaps be possible to provide a single-file version of the library? For privacy reasons I can't use a CDN; the library files must be...
@davidmyersdev Thanks!

> It'd be better to install it via NPM and bundle it with something like Vite if that's possible.

Unfortunately that's not possible. I'm trying to integrate it...
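For anyone else needing a self-hosted build: a single-file bundle can usually be produced yourself with Vite's library mode. A minimal sketch below; the entry path, global name, and output filename are assumptions for illustration, not ink-mde's actual build setup:

```javascript
// vite.config.js — a hedged sketch of bundling a library into one self-hosted file.
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    lib: {
      entry: 'src/index.ts',           // library entry point (assumed)
      name: 'ink',                     // global name used for non-ESM formats
      formats: ['es'],                 // emit a single ESM file
      fileName: () => 'ink.bundle.js', // output filename (assumed)
    },
    rollupOptions: {
      // No externals: all dependencies are bundled in,
      // so the output file is fully self-contained.
      external: [],
    },
  },
});
```

Running `npx vite build` with a config like this emits one `dist/ink.bundle.js` that can be served from your own origin instead of a CDN.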
Llama 3.2 1B seems to be supported, as there seems to be a [demo available](https://webml-community-llama-3-2-webgpu.static.hf.space/index.html). The source code is [missing though](https://github.com/huggingface/transformers.js-examples/issues/7). A good starting point might be the example code...
> perhaps the @transformers pipeline isn't supported yet but https://github.com/huggingface is?

Do you mean version 2 and version 3 of Transformers.js? If so, you need to use Transformers.js V3.
V3 is much MUCH faster because it adds support for WebGPU. Read through the documentation: https://huggingface.co/docs/transformers.js/index And there are tons of examples (linked above). Always search in these issues too,...
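For context, the V3 WebGPU path looks roughly like this in the browser. This is a hedged sketch: the CDN URL and the model id are assumptions, so check the documentation linked above for current ones:

```html
<script type="module">
  // Transformers.js V3 ships as @huggingface/transformers (V2 was @xenova/transformers).
  import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers';

  // device: 'webgpu' is what makes V3 fast; without it, inference runs on WASM.
  const generator = await pipeline(
    'text-generation',
    'onnx-community/Llama-3.2-1B-Instruct', // model id is an assumption
    { device: 'webgpu' },
  );

  const out = await generator('Write a haiku about text editors.', {
    max_new_tokens: 64,
  });
  console.log(out[0].generated_text);
</script>
```

Note that WebGPU requires a recent browser (and, in some, a feature flag), so a WASM fallback path is worth keeping.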
The tokenizer is supported, but from a quick search in the source code it seems the model itself is not. You could try WebLLM or Wllama to run that model instead. That's...
I'd really suggest Phi 3, which Transformers.js can run with GPU acceleration.

> I'm looking for a model like (GPT or Claude)

Curb your expectations. You might also want to...
You might want to take a look at this: https://huggingface.co/spaces/webml-community/llama-3.2-webgpu
It's not different.