
Unable to convert my model to ONNX

jason-trinidad opened this issue · 11 comments

Describe the bug

On my M1 Mac, the conversion script throws an unexpected error when I run it on my model.

How to reproduce

I fine-tuned an instance of t5-small, then saved the model and tokenizer to a directory via [model/tokenizer].save_pretrained(). (The model was saved as a .h5 file.) I then ran:

python -m [copy of convert script] --quantize --task 'text2text-generation' --model_id [local directory containing saved files]
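For reference, the save step looked roughly like this (a sketch; the directory name is illustrative, and the TF class is implied by the .h5 output):

# Sketch of the save step described above (directory name illustrative).
# Fine-tuning with the TF class writes TensorFlow weights (tf_model.h5),
# which turns out to matter for the ONNX export below.
from transformers import AutoTokenizer, TFT5ForConditionalGeneration

model = TFT5ForConditionalGeneration.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")
# ... fine-tuning ...
model.save_pretrained("./t5-small-ft")      # writes tf_model.h5
tokenizer.save_pretrained("./t5-small-ft")  # writes tokenizer files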

Expected behavior

I expected the script to create a directory containing a corrected config file and a quantized, Transformers.js-compatible model.

Logs/screenshots

Here is the full error and traceback:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/Jason/Documents/Projects/ai_scheduler/convert_to_onnx.py", line 396, in <module>
    main()
  File "/Users/Jason/Documents/Projects/ai_scheduler/convert_to_onnx.py", line 357, in main
    main_export(**export_kwargs)
  File "/Users/Jason/Documents/Projects/ai_scheduler/lib/python3.11/site-packages/optimum/exporters/onnx/main.py", line 540, in main_export
    _, onnx_outputs = export_models(
                      ^^^^^^^^^^^^^^
  File "/Users/Jason/Documents/Projects/ai_scheduler/lib/python3.11/site-packages/optimum/exporters/onnx/convert.py", line 753, in export_models
    export(
  File "/Users/Jason/Documents/Projects/ai_scheduler/lib/python3.11/site-packages/optimum/exporters/onnx/convert.py", line 879, in export
    raise RuntimeError(
RuntimeError: You either provided a PyTorch model with only TensorFlow installed, or a TensorFlow model with only PyTorch installed.

Environment

  • Transformers.js version: 2.8.0
  • tensorflow-macos version: 2.15.0
  • torch version: 2.1.1
  • python version: 3.11.4
  • Browser (if applicable):
  • Operating system (if applicable): macOS 13.4.1

jason-trinidad avatar Nov 27 '23 21:11 jason-trinidad

Could you try converting the model to be compatible with PyTorch first, before running the conversion script? Admittedly, I haven't tried converting directly to ONNX from a TensorFlow model.
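Something like this sketch should do it (paths are illustrative; note that from_tf=True needs both TensorFlow and PyTorch installed, which you already have):

from transformers import T5ForConditionalGeneration

# Load the TensorFlow weights (tf_model.h5) into the PyTorch model class,
# then re-save so the directory contains PyTorch weights instead.
pt_model = T5ForConditionalGeneration.from_pretrained("./t5-small-ft", from_tf=True)
pt_model.save_pretrained("./t5-small-ft-torch")

Then point the script's --model_id at the new directory.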

xenova avatar Nov 29 '23 00:11 xenova

Got a chance to try this - I re-trained the model using PyTorch and re-ran the script. Looks like the ONNX export succeeded this time! Thanks for the tip.

I did run into an error with quantization:

""" Quantizing: 0%| | 0/2 [00:00<?, ?it/s] Traceback (most recent call last): File "", line 198, in _run_module_as_main File "", line 88, in _run_code File "/Users/Jason/Documents/Projects/ai_scheduler/convert_to_onnx.py", line 396, in main() File "/Users/Jason/Documents/Projects/ai_scheduler/convert_to_onnx.py", line 372, in main quantize([ File "/Users/Jason/Documents/Projects/ai_scheduler/convert_to_onnx.py", line 230, in quantize quantize_dynamic( TypeError: quantize_dynamic() got an unexpected keyword argument 'optimize_model' """

And I hit another error when I tried to load my model from the Hugging Face Hub using Transformers.js.

Via AutoModel.from_pretrained('[HF location]', { quantized: false }):

""" Error: Could not locate file: "https://huggingface.co/jason-trinidad/t5-small-ft-120123-torch/resolve/main/onnx/decoder_model_merged.onnx". at handleError (hub.js:240:11) at getModelFile (hub.js:473:24) at async constructSession (models.js:119:18) at async Promise.all (:5173/index 2) at async T5Model.from_pretrained (models.js:750:20) at async AutoModel.from_pretrained (models.js:3960:20) at async loadModel (List.jsx:41:21) """

Quantization is not crucial for me at the moment, but I still get the issue even if I run the script without the quantization flag. Do you know how I can resolve this issue with the merged file?

jason-trinidad avatar Dec 01 '23 19:12 jason-trinidad

This looks like a dependency version issue. Can you try with this list of requirements and our conversion script? The combination should work for t5 models.
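For context on the TypeError: newer onnxruntime releases removed the optimize_model keyword from quantize_dynamic, so a script written against an older version breaks; the pinned requirements avoid that. A minimal call without the removed keyword looks like this (a sketch with illustrative paths; a real script would loop over each exported .onnx file):

from onnxruntime.quantization import QuantType, quantize_dynamic

# Dynamic quantization without the removed `optimize_model` kwarg.
quantize_dynamic(
    model_input="onnx/decoder_model_merged.onnx",
    model_output="onnx/decoder_model_merged_quantized.onnx",
    weight_type=QuantType.QUInt8,  # quantize weights to unsigned 8-bit
)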

xenova avatar Dec 01 '23 20:12 xenova

Hi, sorry for the delay. I tried this in a new env with the requirements you linked to, but I got the same error. It's still looking for decoder_model_merged.onnx. Any ideas?

jason-trinidad avatar Dec 11 '23 04:12 jason-trinidad

If you could provide the ID of the model you are looking to convert, I could try it on my side. Are you converting the model with optimum directly or with our conversion script?

xenova avatar Jan 04 '24 23:01 xenova

Thanks, I uploaded it to "jason-trinidad/t5-ft-to-convert" on HF. I was using your conversion script.

jason-trinidad avatar Jan 06 '24 16:01 jason-trinidad

Thanks! I did the conversion myself and I didn't run into any issues. I opened a PR to your repo and ran it with the following code:

import { pipeline } from '@xenova/transformers';

const pipe = await pipeline('text2text-generation', 'jason-trinidad/t5-ft-to-convert', {
    // quantized: false, // uncomment this line to use the unquantized version
    revision: 'refs%2Fpr%2F1',
});
const result = await pipe('Translate to German: Hello', {
    max_new_tokens: 50,
});
console.log(result);
// [{ generated_text: 'Hallo' }]
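(The revision 'refs%2Fpr%2F1' is just the URL-encoded form of refs/pr/1, the branch the Hub creates for pull request #1; once you merge the PR you can drop the revision option.)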

xenova avatar Jan 07 '24 00:01 xenova

Hi! I had a similar issue and used the conversion script + the requirements file. Even so, I always got an "unsupported model IR version: 9, max supported IR version: 8" error. My model ID is shi-zheng-qxhs/gpt2_oasst2_curated.

shizheng-rlfresh avatar Feb 08 '24 03:02 shizheng-rlfresh

@shizheng-rlfresh Were you able to fix the "unsupported model IR version: 9, max supported IR version: 8" error?

SmarMykel avatar Apr 30 '24 23:04 SmarMykel

@SmarMykel Have you tried the recommended script? Try using a venv and running the script... see if that works 😄

python -m venv .venv
# activate the venv and install the required packages
source .venv/bin/activate
pip install -r requirements.txt
# run the conversion script with your <modelid>
python -m scripts.convert --quantize --model_id <modelid>
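If the IR-version error still shows up after that, it usually means the onnx package that exported the model is newer than the onnxruntime that loads it. One workaround I've seen (a sketch with an illustrative path; upgrading onnxruntime, or pinning the onnx version used for export, is the cleaner fix) is to lower the model's declared IR version:

import onnx

# Lower the declared IR version so an older onnxruntime will accept the file.
# This is a workaround, not a guaranteed fix: ops from newer opsets can still fail.
model = onnx.load("onnx/decoder_model_merged.onnx")
model.ir_version = 8
onnx.save(model, "onnx/decoder_model_merged.onnx")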

shizheng-rlfresh avatar Apr 30 '24 23:04 shizheng-rlfresh

It's working, thank you!!! @shizheng-rlfresh

SmarMykel avatar May 01 '24 00:05 SmarMykel