Alexander Visheratin
@qjr1997 You likely used the decoder version that returns multiple masks.
Web AI supports image and text models: https://github.com/visheratin/web-ai. Plus, it already integrates Hugging Face tokenizers for text models.
Hi. You need to install jimp to use image models:

```
npm install jimp
```
Hi! ONNX Runtime for Web doesn't support model training, only inference. If you are interested in training the model in the browser, I recommend looking at [TensorFlow.js](https://www.tensorflow.org/js/guide/train_models). And [here](https://medium.com/tensorflow/text-classification-using-tensorflow-js-an-example-of-detecting-offensive-language-in-browser-e2b94e3565ce) is...
Thank you for the suggestion! I will add this table soon.
Hi! Do you want to use the BERT model for feature extraction? If you share what model you want to run, I can help with the metadata config.
Hi! You came at the right time! I just finished overhauling the project structure to support sub-packages and a unified structure for running in the browser and Node.js. But I'm...
I added the updated code snippets to the [wiki](https://github.com/visheratin/web-ai/wiki).
Hi! Here is an [example](https://colab.research.google.com/drive/1x0_rvMNd3tIunELKuPQDDhLczEBOoy9q?usp=sharing) of exporting an image model from the HF hub. You can adapt it to other types of models. I use PyTorch 1.13.1, ONNX 1.13.0,...
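For reference, here is a minimal export sketch along the same lines; the checkpoint name, input shape, and opset version are placeholders I picked for illustration, so match them to your model:

```python
import torch
from transformers import AutoModelForImageClassification

# Placeholder checkpoint -- swap in the HF hub model you want to export.
# return_dict=False makes the model return plain tensors, which the ONNX
# tracer handles more cleanly than ModelOutput objects.
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224", return_dict=False
)
model.eval()

# Dummy input matching the model's expected shape: (batch, channels, H, W).
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["pixel_values"],
    output_names=["logits"],
    dynamic_axes={"pixel_values": {0: "batch"}},  # allow variable batch size
    opset_version=16,
)
```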
For T5 models, I use [fastT5](https://github.com/Ki6an/fastT5).

```python
from fastT5 import generate_onnx_representation

# Pass either a model name/path via pretrained_version or an already
# loaded model via model, and choose where to write the ONNX files.
generate_onnx_representation(
    pretrained_version="t5-small",  # placeholder: your model name or path
    output_path="/path/to/output",
)
```

Then you can quantize the model:

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    "/path/to/init-decoder.onnx",
    "/path/to/result/decoder-quant.onnx",
    weight_type=QuantType.QInt8,
)
```
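fastT5 splits the model into three graphs (encoder, init-decoder, decoder), so in practice I run the same call over each file. A quick sketch; the file names here are assumptions, so match them to what generate_onnx_representation actually wrote out:

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Assumed file names -- fastT5 exports encoder, init-decoder, and decoder
# graphs; adjust these paths to the files it actually produced.
for part in ["encoder", "init-decoder", "decoder"]:
    quantize_dynamic(
        f"/path/to/{part}.onnx",               # float32 export
        f"/path/to/result/{part}-quant.onnx",  # int8-weight output
        weight_type=QuantType.QInt8,           # dynamic weight-only int8
    )
```

Dynamic quantization converts only the weights to int8, so it shrinks the files roughly 4x without needing a calibration dataset.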