
Added ONNX configuration for Deformable DETR

Open ashim-mahara opened this issue 2 years ago • 14 comments

Adds Missing Configuration

  • Missing Deformable DETR support

Before submitting

  • [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • [ ] Did you make sure to update the documentation with your changes?
  • [ ] Did you write any new necessary tests?

ashim-mahara avatar Mar 29 '23 02:03 ashim-mahara

Hi @fxmarty, any updates on this or in the original issue?

ashim-mahara avatar Apr 21 '23 07:04 ashim-mahara

Let me check now

fxmarty avatar Apr 21 '23 11:04 fxmarty

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

The culprit is https://github.com/huggingface/transformers/blob/9fdf158aa0987f6073d2816ad004dc09226350e2/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L695-L707, which uses a custom CUDA kernel. I believe the ONNX export is not able to trace through the try/except correctly. We will need to change the transformers code base to allow disabling the CUDA kernel in order for the ONNX export to work (it works nicely when using the pure PyTorch implementation multi_scale_deformable_attention).
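(Editor's note: a minimal, self-contained sketch of the pattern at issue. The function body and values below are invented stand-ins, not the actual transformers code; the point is that a Python-level try/except which swaps in a compiled kernel when available is opaque to the ONNX tracer, which is why forcing the pure PyTorch fallback is needed.)

```python
# Hypothetical illustration of the load-time fallback pattern in
# modeling_deformable_detr.py; names and bodies are stand-ins.
def multi_scale_deformable_attention(x):
    # pure-PyTorch-style fallback: plain ops that a tracer can follow
    return x * 2

try:
    # in transformers, this is roughly where the custom CUDA kernel
    # would be imported/compiled; here we simulate it being unavailable
    raise ImportError("custom CUDA kernel not built")
except ImportError:
    attention_impl = multi_scale_deformable_attention

print(attention_impl(21))  # -> 42, via the traceable fallback
```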

fxmarty avatar Apr 21 '23 12:04 fxmarty

Can you add an entry with deformable-detr here? https://github.com/huggingface/optimum/blob/ccf4b4dbb6d5f4421551ed0d83e0eb07b0261257/tests/exporters/exporters_utils.py#L52 For example with https://huggingface.co/hf-internal-testing/tiny-random-DeformableDetrModel/ . This is for the ONNX export tests.
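(Editor's note: the requested test entry might look like the fragment below; the exact dict name and key format are assumptions based on the linked file's existing pattern.)

```python
# Hypothetical addition to the tiny-model mapping in
# tests/exporters/exporters_utils.py:
"deformable-detr": "hf-internal-testing/tiny-random-DeformableDetrModel",
```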

fxmarty avatar Apr 21 '23 12:04 fxmarty

@fxmarty I made the changes you suggested.

ashim-mahara avatar Apr 24 '23 03:04 ashim-mahara

Thanks @ashim-mahara. If https://github.com/huggingface/optimum/pull/992 lands in the next transformers release, we'll be able to merge, I believe.

fxmarty avatar Apr 24 '23 06:04 fxmarty

@fxmarty sure, thanks for your work. Please do ping me if there's any update on this.

ashim-mahara avatar Apr 24 '23 08:04 ashim-mahara

@ashim-mahara @fxmarty Is there any update on this? Still running into the issue with disable_custom_kernels = True. Thanks for the help

HarperGrieve avatar May 25 '23 17:05 HarperGrieve

> @ashim-mahara @fxmarty Is there any update on this? Still running into the issue with disable_custom_kernels = True. Thanks for the help

I wouldn't know. I face issues when loading pretrained weights from the Hugging Face Hub with the new code. Try defining the model first and then loading the weights using a state_dict.
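(Editor's note: the suggested workaround, sketched with a toy stand-in. TinyModel is hypothetical and only mirrors the shape of PyTorch's load_state_dict convention: build the architecture first, then load weights in a separate, explicit step.)

```python
# Toy stand-in for "define the model, then load weights via state_dict";
# nothing here touches transformers or the Hub.
class TinyModel:
    def __init__(self):
        # architecture is defined with placeholder weights
        self.weights = {"w": 0.0}

    def load_state_dict(self, state):
        # weights are loaded in a second, explicit step
        self.weights.update(state)

model = TinyModel()                # 1. define the model
model.load_state_dict({"w": 1.5})  # 2. load the (pretrained) weights
print(model.weights["w"])  # -> 1.5
```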

ashim-mahara avatar May 27 '23 16:05 ashim-mahara

In my case, the export succeeds if you disable custom kernels in the config (before training) and then use this PR's ONNX config update (using transformers 4.41.2 and optimum 1.20.0):

Config:

from transformers import DeformableDetrConfig, DeformableDetrForObjectDetection

config = DeformableDetrConfig.from_pretrained("SenseTime/deformable-detr", disable_custom_kernels=True)
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr", config=config)

ONNX export:

$ optimum-cli export onnx --model path/to/model --framework pt --task object-detection path/to/onnx

Any chance we could proceed with this PR @fxmarty @ashim-mahara?

jhabr avatar Jun 12 '24 12:06 jhabr

I used transformers 4.41.2 and was able to export:

import torch
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr", disable_custom_kernels=True)

inputs = image_processor(images=image, return_tensors="pt")

onnx_path = "deformable_detr.onnx"

print(inputs["pixel_values"].shape)

torch.onnx.export(
    model,
    (inputs["pixel_values"],),  # provide the model inputs here
    onnx_path,
    input_names=["pixel_values"],
    output_names=["logits", "bbox_coordinates"],
    opset_version=16,
    # dynamic_axes={"pixel_values": {0: "batch_size"}, "logits": {0: "batch_size"}, "bbox_coordinates": {0: "batch_size"}},
)
print(f"Model exported to {onnx_path}")

aleeshajaganath avatar Oct 27 '24 10:10 aleeshajaganath

Hi @echarlaix, let me circle back to you on this later.

ashim-mahara avatar Jun 05 '25 15:06 ashim-mahara

Thanks a lot @ashim-mahara! If you don't have time to update it, I can open a PR tomorrow or next week (all of the ONNX / ONNX Runtime optimum integration will likely soon be moved to a new repo, making this PR deprecated, so I would like to merge it beforehand). Let me know what works best for you!

echarlaix avatar Jun 05 '25 15:06 echarlaix

Hi! Thanks for the contribution. Please move it to optimum-onnx, as we are moving the entire ONNX/ORT integrations there.

IlyasMoutawwakil avatar Sep 04 '25 15:09 IlyasMoutawwakil