SB

Results: 10 comments by SB

@Pavel185 I have the same problem with Italian characters è, é, à, ì, etc. I think you can change the character set by adding a custom cleaner, i.e. a cleaner...
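As an illustration of what such a custom cleaner could look like (the exact registration hook depends on the TTS library in use, so this is just a plain, hypothetical function), one option is to transliterate the accented vowels before they reach the model:

```python
# Sketch of a custom cleaner: map Italian accented vowels to plain ASCII.
# The name `italian_cleaner` is hypothetical; real libraries expect you to
# register the function under their own cleaner mechanism.
ACCENT_MAP = str.maketrans({
    "à": "a", "è": "e", "é": "e", "ì": "i", "ò": "o", "ù": "u",
    "À": "A", "È": "E", "É": "E", "Ì": "I", "Ò": "O", "Ù": "U",
})

def italian_cleaner(text: str) -> str:
    """Replace Italian accented vowels with unaccented equivalents."""
    return text.translate(ACCENT_MAP)

print(italian_cleaner("perché è là"))  # perche e la
```

Alternatively, if the goal is to keep the accents rather than strip them, the same hook is the place to extend the character set instead of translating it.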

@hlomzik you mentioned that using [assignTask + other functions](https://github.com/HumanSignal/label-studio-frontend/issues/1260#issuecomment-1484290488) should make it possible to solve the issue. However, there is no evidence or example of how to do it. Could someone provide...

Hello @xmcp, Before jumping to any hypotheses about this behavior, could you share the specs of your testing machine? A few months ago, I noticed some issues related to fp16...
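The kind of fp16 issue being hinted at here can be reproduced in a few lines: fp16 tops out at 65504 and carries only about 3 decimal digits of precision, so values that are harmless in fp32 can overflow to infinity or be rounded away entirely. A minimal NumPy demonstration:

```python
import numpy as np

# fp16 overflow: anything above ~65504 becomes infinity.
overflowed = np.float16(70000.0)
print(overflowed)  # inf

# fp16 precision loss: eps is ~0.001, so a 1e-4 increment to 1.0 is lost.
lossy_sum = np.float16(1.0) + np.float16(1e-4)
print(lossy_sum)  # 1.0

# The largest finite fp16 value, for reference.
print(np.finfo(np.float16).max)
```

Whether this is the actual cause on a given machine also depends on the backend (GPU driver, WebGPU implementation), which is why the hardware specs matter.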

So it seems it is in fact using fp16 internally. I am curious to have a look at the ONNX graph (the last nodes); are you able to export it after having imported...

Thanks @xenova. > I'm open to hearing if users would benefit from new model exports with fp16 inputs though! No suggestion on this topic since I don't work too much...

> I tried `embed_tokens: "fp16"` without success (no support from the target device), then switched to `embed_tokens: "fp32"`. Just a note on this topic: I saw that webgpureport on Android...

Update: I made it work by skipping the overlay provided by transformers.js (apart from preprocessing). I directly used onnxruntime-web, even though the generated tokens make no sense (support to understand...

@xenova I suppose you are talking about this: https://github.com/huggingface/transformers.js/blob/c2ab81af062d32ad46e892e7ea5c554ca14117de/src/models.js#L297 I tried to globally set `gpu-buffer` in this way:

```javascript
// Load all three models in parallel
[this.visionSession, this.embedSession, this.decoderSession] = await...
```

FYI a working example of SmolVLM, transformers.js and onnx can be found here https://www.gradients.zone/blog/multimodal-llms-at-the-edge/

I'm working on a pull request for nanoVLM to convert the various components to ONNX. After that, I plan to run some compatibility tests with HF Transformers.