Can't export Deformable Detr to ONNX
System Info
- transformers version: 4.27.2
- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Tried both; it fails either way
- Using distributed or parallel set-up in script?: No
Who can help?
@amyeroberts @sgugger
Information
- [X] The official example scripts
- [X] My own modified scripts
Tasks
- [X] An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
Reproduction
Code to reproduce:
from transformers import DeformableDetrForObjectDetection
import torch

model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")
example = torch.Tensor(1, 3, 600, 600)

torch.onnx.export(
    model,
    (example, None),
    f="./test-ddetr.onnx",
    input_names=['pixel_values'],
    output_names=['logits', 'pred_boxes'],
    dynamic_axes={"pixel_values": {0: "batch_size", 1: "image_channel", 2: "image_height", 3: "image_width"}},
    do_constant_folding=True,
    opset_version=16
)
Error:
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/torch/onnx/utils.py:581, in _optimize_graph(graph, operator_export_type, _disable_torch_constant_prop, fixed_batch_size, params_dict, dynamic_axes, input_names, module)
579 _C._jit_pass_inline_fork_wait(graph)
580 _C._jit_pass_lint(graph)
--> 581 _C._jit_pass_onnx_autograd_function_process(graph)
582 _C._jit_pass_lower_all_tuples(graph)
584 # we now record some ops like ones/zeros
585 # into a trace where we previously recorded constants.
586 # use constant prop to maintain our current level of onnx support
587 # without implementing symbolics for all of them
RuntimeError: required keyword attribute 'Subgraph' is undefined
Expected behavior
It should export an ONNX model. I can export the DETR model but not Deformable DETR. I have tried it on PyTorch 2.0 too.
I don't know if I should post this here or in a separate issue, but Deformable DETR is also not supported by optimum's ORTModelForObjectDetection.
I also tried to create an OnnxConfig (copied from the DETR source) and export the model with transformers.onnx.export, but that resulted in the same error as above.
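For reference, this is roughly what that attempt looked like. It is only a sketch: DeformableDetrOnnxConfig is my own adaptation of the DETR config (it does not ship with transformers), and the axes and the "object-detection" task key are assumptions on my side.

```python
from collections import OrderedDict
from pathlib import Path

from transformers import AutoConfig, AutoImageProcessor, DeformableDetrForObjectDetection
from transformers.onnx import OnnxConfig, export


# Adapted from the DETR ONNX config; this class does not exist in transformers.
class DeformableDetrOnnxConfig(OnnxConfig):
    @property
    def inputs(self):
        return OrderedDict(
            [
                ("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"}),
                ("pixel_mask", {0: "batch"}),
            ]
        )

    @property
    def default_onnx_opset(self):
        return 16


model_id = "SenseTime/deformable-detr"
onnx_config = DeformableDetrOnnxConfig(AutoConfig.from_pretrained(model_id), task="object-detection")

preprocessor = AutoImageProcessor.from_pretrained(model_id)
model = DeformableDetrForObjectDetection.from_pretrained(model_id)

# Fails with the same "required keyword attribute 'Subgraph' is undefined" error as torch.onnx.export.
export(preprocessor, model, onnx_config, onnx_config.default_onnx_opset, Path("deformable-detr.onnx"))
```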
Detr export should be supported: https://github.com/huggingface/optimum/blob/8252f4b0c48183198f4bed54bd6e0822213ef78b/optimum/exporters/tasks.py#L344-L349
Can you try `pip install -U optimum transformers` and then `optimum-cli export onnx --model SenseTime/deformable-detr --task object-segmentation detr_onnx/`?
`object-segmentation` is not available as a task. I assumed you meant `object-detection` instead and tried the command.
Command:
optimum-cli export onnx --model SenseTime/deformable-detr --task object-detection detr_onnx/
Error:
KeyError: "deformable-detr is not supported yet. Only {'speech-to-text', 'hubert', 'mobilenet-v1', 'xlm', 'blenderbot', 'camembert', 'mobilenet-v2', 'wav2vec2-conformer', 'donut-swin', 'xlm-roberta', 'marian', 'electra', 'm2m-100', 'mbart', 'perceiver', 'whisper', 'swin', 'bert', 'poolformer', 'audio-spectrogram-transformer', 'unispeech', 'gpt-neo', 'levit', 'layoutlmv3', 'segformer', 'codegen', 'deit', 'mpnet', 'vit', 'roberta', 'deberta-v2', 'mt5', 'wavlm', 'data2vec-vision', 'data2vec-text', 'flaubert', 'blenderbot-small', 'vision-encoder-decoder', 'nystromformer', 'sew-d', 'yolos', 'gpt-neox', 'detr', 'gpt2', 'layoutlm', 'mobilevit', 't5', 'splinter', 'roformer', 'bloom', 'convnext', 'resnet', 'convbert', 'mobilebert', 'distilbert', 'squeezebert', 'unispeech-sat', 'gptj', 'clip', 'wav2vec2', 'groupvit', 'sew', 'deberta', 'beit', 'pegasus', 'longt5', 'ibert', 'albert', 'bart', 'data2vec-audio'} are supported. If you want to support deformable-detr please propose a PR or open up an issue."
Thank you @ashim-mahara, apologies, indeed this is not supported currently - I was confused by deformable_detr / detr.
Would you like to submit a PR to add support for this in the export?
This would entail (among other things; a rough sketch of both changes is below):
- Adding a relevant config in https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/model_configs.py (with defined inputs/outputs and input generators)
- Adding deformable_detr in tasks.py: https://github.com/huggingface/optimum/blob/4bbcc1b1d077e9258649f39b752370ff70163c00/optimum/exporters/tasks.py#L388
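As a very rough sketch, assuming Deformable DETR can reuse the existing DetrOnnxConfig inputs/outputs and dummy input generators (the class name, base class and opset below are guesses, not the final implementation):

```python
# optimum/exporters/onnx/model_configs.py (sketch; assumes DetrOnnxConfig is already defined there)
class DeformableDetrOnnxConfig(DetrOnnxConfig):
    # The pure-PyTorch deformable attention path relies on grid_sample,
    # which is only exportable from ONNX opset 16 onwards.
    DEFAULT_ONNX_OPSET = 16


# optimum/exporters/tasks.py, inside TasksManager._SUPPORTED_MODEL_TYPE
# (sketch, mirroring the existing "detr" entry)
"deformable-detr": supported_tasks_mapping(
    "default",
    "object-detection",
    onnx="DeformableDetrOnnxConfig",
),
```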
Okay, I'll try and post a status update here in ~3 days.
@fxmarty I still had the same error when I added the configs and then checked whether it would load the model with `ORTModel.from_pretrained("../savedModels/deformable-detr/", from_transformers=True)`.
Error:
581 _C._jit_pass_inline_fork_wait(graph)
582 _C._jit_pass_lint(graph)
--> 583 _C._jit_pass_onnx_autograd_function_process(graph)
584 _C._jit_pass_lower_all_tuples(graph)
586 # we now record some ops like ones/zeros
587 # into a trace where we previously recorded constants.
588 # use constant prop to maintain our current level of onnx support
589 # without implementing symbolics for all of them
RuntimeError: required keyword attribute 'Subgraph' is undefined
I am not an expert on this, but the failure happens in _jit_pass_onnx_autograd_function_process, so I think tracing breaks on the custom autograd Function used by the deformable attention; it will probably need the model author to take a look.
@ashim-mahara Could you open a PR in optimum so that I can have a look?
@fxmarty here is the PR: https://github.com/huggingface/optimum/pull/931
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
@fxmarty is there any way to make the pretrained deformable-detr models compatible with the new code? I tried exporting SenseTime/deformable-detr after changing disable_custom_kernels to True, but it still throws an error.
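For reference, this is roughly how I loaded the model for that attempt (a sketch; as I understand it, disable_custom_kernels=True switches the model to the pure-PyTorch multi-scale deformable attention path instead of the custom-kernel autograd Function):

```python
from transformers import DeformableDetrConfig, DeformableDetrForObjectDetection

# disable_custom_kernels=True makes the model fall back to the pure-PyTorch
# multi-scale deformable attention instead of the custom CUDA kernel wrapped
# in a torch.autograd.Function, which the ONNX exporter cannot handle.
config = DeformableDetrConfig.from_pretrained(
    "SenseTime/deformable-detr", disable_custom_kernels=True
)
model = DeformableDetrForObjectDetection.from_pretrained(
    "SenseTime/deformable-detr", config=config
)
model.eval()
```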
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.