
Is there a way to convert OWL-VIT model to ONNX?

Open kopyl opened this issue 1 year ago • 10 comments

Want to run it fast on CPU...

kopyl avatar Jun 14 '23 05:06 kopyl

Hi,

Yes, there is: you can export the HuggingFace version of OWL-ViT to ONNX.

See here: https://huggingface.co/docs/transformers/serialization

NielsRogge avatar Jun 22 '23 17:06 NielsRogge

@NielsRogge thank you very much :)

kopyl avatar Jun 23 '23 08:06 kopyl

FYI I was able to convert it using:

```shell
optimum-cli export onnx --model google/owlvit-base-patch32 --task zero-shot-object-detection output_dir/
```

Though I had to install a PyTorch 2.1 nightly and manually modify the config to allow using it. Support for OWL-ViT export is very new in HF.

adam-harwood avatar Jul 12 '23 06:07 adam-harwood

@adam-harwood thank you very much ❤

kopyl avatar Jul 12 '23 06:07 kopyl

Hello, I used your command and got this error: `optimum.exporters.error_utils.MinimumVersionError: Unsupported PyTorch version for this model. Minimum required is 2.1, got: 2.1.0.dev20230807+cu121`. How can I fix this? Thanks.

wuqingzhou828 avatar Aug 08 '23 09:08 wuqingzhou828

> Hello, I used your command and got this error: `optimum.exporters.error_utils.MinimumVersionError: Unsupported PyTorch version for this model. Minimum required is 2.1, got: 2.1.0.dev20230807+cu121`. How can I fix this? Thanks.

Given that PyTorch 2.1 hasn't officially been released yet, I don't have a clean solution for it. I manually edited the file that does the version check and changed it to allow building against the dev version.
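For what it's worth, here is a minimal sketch of why a nightly build trips the check, assuming the comparison follows standard PEP 440 ordering (e.g. via the `packaging` library, which is an assumption about optimum's internals): a `.dev` build sorts *before* its final release.

```python
# Sketch: PEP 440 ordering puts dev builds *before* the final release,
# so a ">= 2.1" check rejects a 2.1 nightly. Using `packaging` here is
# an assumption about how the version check is implemented.
from packaging import version

nightly = version.parse("2.1.0.dev20230807+cu121")
required = version.parse("2.1")
print(nightly < required)  # → True: the nightly fails a ">= 2.1" check
```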

adam-harwood avatar Aug 08 '23 10:08 adam-harwood

@adam-harwood can you also provide the script to use the converted onnx model in transformers pipeline?

Aaryanverma avatar Sep 21 '23 18:09 Aaryanverma

> @adam-harwood can you also provide the script to use the converted onnx model in transformers pipeline?

I don't use the transformers pipeline, I load and use it using the onnxruntime libraries.
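Roughly, that looks like the sketch below. The post-processing follows OWL-ViT's documented output format (sigmoid class logits plus boxes in normalized center format); the ONNX input/output names and file layout in the commented usage are assumptions about what optimum-cli emits, so verify them with `session.get_inputs()` / `session.get_outputs()` on your own export.

```python
# Sketch: post-processing raw OWL-ViT outputs into pixel-space detections.
import numpy as np

def postprocess(logits, pred_boxes, image_size, threshold=0.1):
    """Convert raw OWL-ViT outputs to pixel-space boxes, scores, labels.

    logits:     (num_queries, num_texts) raw class logits
    pred_boxes: (num_queries, 4) normalized (cx, cy, w, h) boxes
    image_size: (width, height) of the original image
    """
    probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid over class logits
    scores = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    keep = scores > threshold
    cx, cy, w, h = pred_boxes[keep].T
    width, height = image_size
    boxes = np.stack([
        (cx - w / 2) * width,   # x_min
        (cy - h / 2) * height,  # y_min
        (cx + w / 2) * width,   # x_max
        (cy + h / 2) * height,  # y_max
    ], axis=-1)
    return boxes, scores[keep], labels[keep]

# Usage with onnxruntime (untested sketch; model path and output order
# are assumptions -- check session.get_inputs()/get_outputs()):
# import onnxruntime as ort
# from transformers import OwlViTProcessor
# processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
# session = ort.InferenceSession("output_dir/model.onnx")
# inputs = processor(text=[["a cat"]], images=image, return_tensors="np")
# logits, boxes = session.run(None, dict(inputs))
# boxes_px, scores, labels = postprocess(logits[0], boxes[0], image.size)
```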

adam-harwood avatar Sep 21 '23 23:09 adam-harwood

Hi Team,

I need your help converting the OWL-ViT model (OwlViTForObjectDetection) into an ONNX file.

```python
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

model_id = "google/owlvit-base-patch16"
owlbit8_model = OwlViTForObjectDetection.from_pretrained(
    model_id, device_map="auto", load_in_8bit=True
)
owlbit8_model.save_pretrained(
    "local file system - dir path", save_config=True, safe_serialization=True
)
```

Output in the local file system - dir path:

- config.json
- model.safetensors

Then I run:

```shell
optimum-cli export onnx --model 'local file system - dir path' --task 'zero-shot-object-detection' --framework 'pt' output_dir
```

I am receiving the following error messages:

```
Using the export variant default. Available variants are:
    - default: The default ONNX variant.
Using framework PyTorch: 2.1.0
Traceback (most recent call last):
  File "/home/..../miniconda3/envs/testenv/bin/optimum-cli", line 8, in <module>
    sys.exit(main())
  File "/home/..../miniconda3/envs/testenv/lib/python3.10/site-packages/optimum/commands/optimum_cli.py", line 163, in main
    service.run()
  File "/home/..../miniconda3/envs/testenv/lib/python3.10/site-packages/optimum/commands/export/onnx.py", line 246, in run
    main_export(
  File "/home/..../miniconda3/envs/testenv/lib/python3.10/site-packages/optimum/exporters/onnx/main.py", line 551, in main_export
    _, onnx_outputs = export_models(
  File "/home/..../miniconda3/envs/testenv/lib/python3.10/site-packages/optimum/exporters/onnx/convert.py", line 753, in export_models
    export(
  File "/home/..../miniconda3/envs/testenv/lib/python3.10/site-packages/optimum/exporters/onnx/convert.py", line 856, in export
    export_output = export_pytorch(
  File "/home/..../miniconda3/envs/testenv/lib/python3.10/site-packages/optimum/exporters/onnx/convert.py", line 573, in export_pytorch
    onnx_export(
  File "/home/..../miniconda3/envs/testenv/lib/python3.10/site-packages/torch/onnx/utils.py", line 516, in export
    _export(
  File "/home/..../miniconda3/envs/testenv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1596, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/home/..../miniconda3/envs/testenv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1135, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/home..../miniconda3/envs/testenv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1011, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/home/..../miniconda3/envs/testenv/lib/python3.10/site-packages/torch/onnx/utils.py", line 907, in _trace_and_get_graph_from_model
    orig_state_dict_keys = torch.jit._unique_state_dict(model).keys()
  File "/home/..../miniconda3/envs/testenv/lib/python3.10/site-packages/torch/jit/_trace.py", line 76, in _unique_state_dict
    filtered_dict[k] = v.detach()
AttributeError: 'str' object has no attribute 'detach'
```

solomonmanuelraj avatar Jan 09 '24 08:01 solomonmanuelraj

Hi,

Could you open an issue on the Optimum library regarding this? They will be happy to help you there.

NielsRogge avatar Jan 09 '24 09:01 NielsRogge