ONNXConfig: Add a configuration for all available models

Open chainyo opened this issue 2 years ago • 58 comments

This issue is about the working group specially created for this task. If you are interested in helping out, take a look at this organization, or add me on Discord: ChainYo#3610

We want to contribute to HuggingFace's ONNX implementation for all available models on HF's hub. There are already a lot of architectures implemented for converting PyTorch models to ONNX, but we need more! We need them all!

Feel free to join us in this adventure! Join the org by clicking here

Here is a (non-exhaustive) list of the available models:

  • [x] Albert
  • [x] BART
  • [x] BeiT
  • [x] BERT
  • [x] BigBird
  • [x] BigBirdPegasus
  • [x] Blenderbot
  • [x] BlenderbotSmall
  • [x] BLOOM
  • [x] CamemBERT
  • [ ] CANINE
  • [ ] CLIP
  • [ ] CodeGen
  • [x] ConvNext
  • [x] ConvBert
  • [ ] CTRL
  • [ ] CvT
  • [x] Data2VecText
  • [x] Data2VecVision
  • [x] Deberta
  • [x] DebertaV2
  • [x] DeiT
  • [ ] DecisionTransformer
  • [x] DETR
  • [x] Distilbert
  • [ ] DPR
  • [ ] DPT
  • [x] ELECTRA
  • [ ] FNet
  • [ ] FSMT
  • [x] Flaubert
  • [ ] FLAVA
  • [ ] Funnel Transformer
  • [ ] GLPN
  • [x] GPT2
  • [x] GPTJ
  • [x] GPT-Neo
  • [ ] GPT-NeoX
  • [ ] Hubert
  • [x] I-Bert
  • [ ] ImageGPT
  • [ ] LED
  • [x] LayoutLM
  • [ ] 🛠️ LayoutLMv2
  • [x] LayoutLMv3
  • [ ] LayoutXLM
  • [x] LeViT
  • [ ] Longformer
  • [x] LongT5
  • [ ] 🛠️ Luke
  • [ ] Lxmert
  • [x] M2M100
  • [ ] MaskFormer
  • [x] mBart
  • [ ] MCTCT
  • [ ] MPNet
  • [x] MT5
  • [x] MarianMT
  • [ ] MegatronBert
  • [x] MobileBert
  • [x] MobileViT
  • [ ] Nyströmformer
  • [x] OpenAIGPT-2
  • [ ] 🛠️ OPT
  • [x] PLBart
  • [ ] Pegasus
  • [x] Perceiver
  • [ ] PoolFormer
  • [ ] ProphetNet
  • [ ] QDQBERT
  • [ ] RAG
  • [ ] REALM
  • [ ] 🛠️ Reformer
  • [ ] RemBert
  • [x] ResNet
  • [ ] RegNet
  • [ ] RetriBert
  • [x] RoFormer
  • [x] RoBERTa
  • [ ] SEW
  • [ ] SEW-D
  • [ ] SegFormer
  • [ ] Speech2Text
  • [ ] Speech2Text2
  • [ ] Splinter
  • [x] SqueezeBERT
  • [ ] Swin Transformer
  • [x] T5
  • [ ] TAPAS
  • [ ] TAPEX
  • [ ] Transformer XL
  • [ ] TrOCR
  • [ ] UniSpeech
  • [ ] UniSpeech-SAT
  • [ ] VAN
  • [x] ViT
  • [ ] Vilt
  • [ ] VisualBERT
  • [ ] Wav2Vec2
  • [ ] WavLM
  • [ ] XGLM
  • [x] XLM
  • [ ] XLMProphetNet
  • [x] XLM-RoBERTa
  • [x] XLM-RoBERTa-XL
  • [ ] 🛠️ XLNet
  • [x] YOLOS
  • [ ] Yoso

🛠️ next to a model indicates that a PR is in progress. If there is nothing next to a model, it means that ONNX does not support it yet, and thus we need to add support for it.

If you need help implementing an unsupported model, here is a guide from HuggingFace's documentation.

If you want an example of an implementation, I did one for CamemBERT a few months ago.
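
To give a rough idea of what an implementation involves, here is a minimal sketch of a custom ONNX config (the class name and inputs are illustrative, assuming a BERT-like text model; see the CamemBERT PR for a real one):

from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig

class MyModelOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Map each model input to its dynamic axes (batch size and sequence length)
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )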

chainyo avatar Mar 21 '22 18:03 chainyo

  • GPT-J: #16274
  • FlauBERT: #16279

chainyo avatar Mar 21 '22 18:03 chainyo

  • LayoutLMv2: #16309

chainyo avatar Mar 21 '22 19:03 chainyo

Let me try with BigBird

vumichien avatar Mar 22 '22 06:03 vumichien

  • Bigbird #16427

vumichien avatar Mar 26 '22 16:03 vumichien

Love the initiative here, thanks for opening an issue! Added the Good First Issue label so that it's more visible :)

LysandreJik avatar Mar 28 '22 09:03 LysandreJik

Love the initiative here, thanks for opening an issue! Added the Good First Issue label so that it's more visible :)

Thanks for the label. I don't know if it's easy to begin, but it's cool if more people see this and can contribute!

chainyo avatar Mar 29 '22 12:03 chainyo

I would like to try with Luke. However, Luke doesn't support any features apart from the default AutoModel. Its main feature is LukeForEntityPairClassification for relation extraction. Should I convert luke-base to ONNX, or LukeForEntityPairClassification, which has a classifier head?

aakashb95 avatar Mar 31 '22 07:03 aakashb95

Data2VecAudio doesn't have an ONNXConfig yet. I wrote one for it, modeled on Data2VecTextOnnxConfig, but it throws an error. Can anyone help me?

from collections import OrderedDict
from pathlib import Path

from transformers import AutoConfig, AutoModel, AutoTokenizer
from transformers.onnx import OnnxConfig, export

class Data2VecAudioOnnxConfig(OnnxConfig):
    @property
    def inputs(self):
        # Dynamic axes for the raw audio inputs
        return OrderedDict(
            [
                ("input_values", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )

model_ckpt = "facebook/data2vec-audio-base-960h"
config = AutoConfig.from_pretrained(model_ckpt)
onnx_config = Data2VecAudioOnnxConfig(config)

onnx_path = Path(model_ckpt)
base_model = AutoModel.from_pretrained(model_ckpt)
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)

onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)

Error:

ValueError                                Traceback (most recent call last)
/var/folders/2t/0w65vdjs2m32w5mmzzgtqrhw0000gn/T/ipykernel_59977/667985886.py in <module>
     27 tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
     28 
---> 29 onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)

~/miniconda3/lib/python3.9/site-packages/transformers/onnx/convert.py in export(tokenizer, model, config, opset, output)
    255 
    256     if is_torch_available() and issubclass(type(model), PreTrainedModel):
--> 257         return export_pytorch(tokenizer, model, config, opset, output)
    258     elif is_tf_available() and issubclass(type(model), TFPreTrainedModel):
    259         return export_tensorflow(tokenizer, model, config, opset, output)

~/miniconda3/lib/python3.9/site-packages/transformers/onnx/convert.py in export_pytorch(tokenizer, model, config, opset, output)
    112 
    113             if not inputs_match:
--> 114                 raise ValueError("Model and config inputs doesn't match")
    115 
    116             config.patch_ops()

ValueError: Model and config inputs doesn't match

xiadingZ avatar Mar 31 '22 12:03 xiadingZ

I would like to try with Luke. However, Luke doesn't support any features apart from the default AutoModel. Its main feature is LukeForEntityPairClassification for relation extraction. Should I convert luke-base to ONNX, or LukeForEntityPairClassification, which has a classifier head?

When you implement the ONNX config for a model, it works for all kinds of tasks, because the base model and the ones pre-packaged for fine-tuning have the same inputs.

So you can base your implementation on the base model, and the other tasks will work too.
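
For instance, once an architecture is registered, you can pick the task at export time with the CLI's --feature flag, and the same config class is reused underneath (the checkpoint and feature here are just examples):

python -m transformers.onnx --model=camembert-base --feature=sequence-classification onnx/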

chainyo avatar Apr 01 '22 17:04 chainyo

  • LUKE #16562 from @aakashb95 :+1:

chainyo avatar Apr 02 '22 17:04 chainyo

Still learning

jorabara avatar Apr 11 '22 16:04 jorabara

Issue description

Hello, thank you for supporting GPTJ with ONNX. But when I exported an ONNX checkpoint using transformers-4.18.0, I got the error below.

(venv) root@V100:~# python -m transformers.onnx --model=gpt-j-6B/ onnx/
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/data/lvenv/lib/python3.8/site-packages/transformers/onnx/__main__.py", line 99, in <module>
    main()
  File "/data/venv/lib/python3.8/site-packages/transformers/onnx/__main__.py", line 62, in main
    raise ValueError(f"Unsupported model type: {config.model_type}")
ValueError: Unsupported model type: gptj

GPTJ with ONNX seemed to be supported when I checked your transformers-4.18.0 documentation [https://huggingface.co/docs/transformers/serialization#exporting-a-model-for-an-unsupported-architecture] and code [src/transformers/onnx/features.py etc.], but I still got this issue. I then checked config.model_type in "/data/venv/lib/python3.8/site-packages/transformers/onnx/__main__.py", which depends on two mappings [FEATURE_EXTRACTOR_MAPPING_NAMES from ..models.auto.feature_extraction_auto, and TOKENIZER_MAPPING_NAMES from ..models.auto.tokenization_auto], and I did not find GPTJ in either of them. That does not seem right.

Environment info

  • Platform: Ubuntu 20.04.2
  • python: 3.8.10
  • PyTorch: 1.10.0+cu113
  • transformers: 4.18.0
  • GPU: V100

pikaqqqqqq avatar Apr 14 '22 08:04 pikaqqqqqq

Hello, thank you for supporting GPTJ with ONNX. But when I exported an ONNX checkpoint using transformers-4.18.0, I got the error below.

Hello @pikaqqqqqq, thanks for reporting the problem. I opened a PR with a quick fix to avoid it; check #16780

chainyo avatar Apr 14 '22 15:04 chainyo

  • ConvBERT: #16859

chainyo avatar Apr 20 '22 17:04 chainyo

Hello 👋🏽, I added the RoFormer ONNX config here: #16861. I'm not 100% sure who to ask for a review, so I'm posting this here. Thanks 🙏🏽

skrsna avatar Apr 20 '22 18:04 skrsna

Hi! I would like to try building the ONNX config for Reformer.

Tanmay06 avatar Apr 21 '22 13:04 Tanmay06

Hi! I would like to try building the ONNX config for Reformer.

Hi @Tanmay06, that would be awesome. Don't hesitate to open a PR with your work when you feel it's ready. You can ping me anytime if you need help!

chainyo avatar Apr 21 '22 17:04 chainyo

Hello! I would like to work on ONNX config for ResNet.

chamidullinr avatar Apr 26 '22 09:04 chamidullinr

Hello! I would like to work on ONNX config for ResNet.

Nice, don't hesitate to ping me if help is needed :hugs:

chainyo avatar Apr 26 '22 11:04 chainyo

Hi! I would like to work on ONNX config for BigBirdPegasus.

nandwalritik avatar Apr 28 '22 10:04 nandwalritik

Hi! I would like to work on ONNX config for BigBirdPegasus.

Hi, nice! If you need help you can tag me.

chainyo avatar Apr 28 '22 11:04 chainyo

#17027 Here is one for XLNet!

sijunhe avatar Apr 30 '22 14:04 sijunhe

#17029 PR for MobileBert.

manandey avatar May 01 '22 12:05 manandey

#17030 Here is the PR for XLM

nandwalritik avatar May 01 '22 16:05 nandwalritik

#17078 PR for BigBirdPegasus

nandwalritik avatar May 04 '22 05:05 nandwalritik

#17213 for Perceiver, #17176 for Longformer (work in progress 🚧, help appreciated)

deutschmn avatar May 12 '22 13:05 deutschmn

Hi @ChainYo, I would like to work on getting the ONNX config for SqueezeBert. Thanks!

artemisep avatar May 18 '22 01:05 artemisep

@ChainYo I would like to get started on the ONNX Config for DeBERTaV2!

sam-h-bean avatar Jun 07 '22 15:06 sam-h-bean

@ChainYo, I would like to get started on the ONNX Config for DeBERTaV2!

Ok noted! Ping me on the PR you open if you need help. 🤗

chainyo avatar Jun 07 '22 16:06 chainyo

@ChainYo ResNet and ConvNeXT are now supported, see #17585 and #17627

regisss avatar Jun 09 '22 13:06 regisss

@ChainYo ResNet and ConvNeXT are now supported, see #17585 and #17627

Super cool! I'll update the list then.

chainyo avatar Jun 09 '22 14:06 chainyo

Updated the list to leverage GitHub tasks instead. You can now see that 33 of the 93 models are supported. However, some new models that were added in v4.20 still need to be added to the list.

NielsRogge avatar Jun 18 '22 08:06 NielsRogge

Updated the list to leverage GitHub tasks instead. You can now see that 33 of the 93 models are supported. However, some new models that were added in v4.20 still need to be added to the list.

Thanks! I will update the list very soon then.

EDIT: I added 13 new models to the list! 🎉

chainyo avatar Jun 18 '22 08:06 chainyo

@ChainYo OPT-30B here https://github.com/huggingface/transformers/pull/17771

0xrushi avatar Jun 19 '22 05:06 0xrushi

@ChainYo Support for DETR has been added in #17904

regisss avatar Jun 28 '22 17:06 regisss

@ChainYo Support for LayoutLMv3 has been added in #17953

regisss avatar Jul 01 '22 07:07 regisss

@ChainYo Support for LayoutLMv3 has been added in #17953

So cool, thanks a lot! @regisss

chainyo avatar Jul 01 '22 08:07 chainyo

@ChainYo I have added LeViT support here https://github.com/huggingface/transformers/pull/18154

gcheron avatar Jul 15 '22 19:07 gcheron

I've been using Hugging Face for a while and wanted to contribute (#16308). I was trying the Swin Transformer one, replicating the ViT implementation. Running the tests, I got:

FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_120_swin_default - ModuleNotFoundError: No module named 'onnxruntime'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_121_swin_image_classification - ModuleNotFoundError: No module named 'onnxruntime'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_122_swin_masked_im - ModuleNotFoundError: No module named 'onnxruntime'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_120_swin_default - ModuleNotFoundError: No module named 'onnxruntime'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_121_swin_image_classification - ModuleNotFoundError: No module named 'onnxruntime'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_122_swin_masked_im - ModuleNotFoundError: No module named 'onnxruntime'

The same was the case for the ViT implementation: the tests gave the same errors. There was also a DETR issue present for both ViT and Swin Transformer (related to from _lzma import *).

bibhabasumohapatra avatar Jul 17 '22 05:07 bibhabasumohapatra

I've been using Hugging Face for a while and wanted to contribute (#16308). I was trying the Swin Transformer one, replicating the ViT implementation. Running the tests, I got several failures like:

FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_120_swin_default - ModuleNotFoundError: No module named 'onnxruntime'

The same was the case for the ViT implementation: the tests gave the same errors. There was also a DETR issue present for both ViT and Swin Transformer (related to from _lzma import *).

@bibhabasumohapatra you have to install ONNX runtime:

pip install '.[onnxruntime]'

gcheron avatar Jul 17 '22 12:07 gcheron

Hello @ChainYo, I would like to contribute the ONNX config for the Decision Transformer. I'd love some guidance on how to go about this, as this is my first contribution; I'd really appreciate your help getting me off the ground.

skanjila avatar Jul 17 '22 15:07 skanjila

Hello @ChainYo, I would like to contribute the ONNX config for the Decision Transformer. I'd love some guidance on how to go about this, as this is my first contribution; I'd really appreciate your help getting me off the ground.

@skanjila You can check how to do it here; it is well described: https://huggingface.co/docs/transformers/v4.20.1/en/serialization#exporting-a-model-for-an-unsupported-architecture Do not hesitate to open a PR, and we will support you if needed :)

regisss avatar Jul 18 '22 10:07 regisss

Hello @ChainYo, I would like to contribute the ONNX config for the Decision Transformer. I'd love some guidance on how to go about this, as this is my first contribution; I'd really appreciate your help getting me off the ground.

@skanjila You can check how to do it here; it is well described: https://huggingface.co/docs/transformers/v4.20.1/en/serialization#exporting-a-model-for-an-unsupported-architecture Do not hesitate to open a PR, and we will support you if needed :)

Thanks @regisss for the appropriate link!

There are also multiple merged PRs that add an ONNX config and can give you an idea of all the files you need to update to make it work. Look at #16274, #16279, #16427, #17213, or #18154.

Sometimes there are edge cases where you have to implement more things (like for LayoutLMv3), but we can help you through the PR once you start implementing it, as @regisss said!
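
For reference, the registration step that all those PRs share lives in src/transformers/onnx/features.py. It looks roughly like this sketch ("my-model" and the config path are placeholders, and the exact form of onnx_config_cls can differ between versions):

# inside FeaturesManager._SUPPORTED_MODEL_TYPE (sketch only, placeholder names)
"my-model": supported_features_mapping(
    "default",
    "sequence-classification",
    onnx_config_cls="models.my_model.MyModelOnnxConfig",
),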

chainyo avatar Jul 18 '22 10:07 chainyo

@ChainYo I'll get started on this. Thank you so much for your help; expect a PR coming soon!

skanjila avatar Jul 18 '22 22:07 skanjila

Hi :hugs:

I am facing this error while trying to export my DeBERTaV3 model to ONNX format.

2022-07-21 10:21:22.414296: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
Some weights of the model checkpoint at deberta-v3-large-conll-doccano/ were not used when initializing DebertaV2Model: ['classifier.bias', 'classifier.weight']
- This IS expected if you are initializing DebertaV2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DebertaV2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Traceback (most recent call last):
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 107, in <module>
    main()
  File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 76, in main
    model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=args.feature)
  File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/features.py", line 519, in check_supported_model_or_raise
    model_features = FeaturesManager.get_supported_features_for_model_type(model_type, model_name=model_name)
  File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/features.py", line 422, in get_supported_features_for_model_type
    f"{model_type_and_model_name} is not supported yet. "
KeyError: "deberta-v2 is not supported yet. Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'camembert', 'convbert', 'convnext', 'data2vec-text', 'deit', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'ibert', 'layoutlm', 'longt5', 'marian', 'mbart', 'mobilebert', 'm2m-100', 'perceiver', 'resnet', 'roberta', 'roformer', 'squeezebert', 't5', 'vit', 'xlm', 'xlm-roberta'] are supported. If you want to support deberta-v2 please propose a PR or open up an issue."

Though, @ChainYo, in your original post you mention that DeBERTaV2 should be available. Based on this error, I feel like there might be a bug here.

Could someone help me sort this out or tell me how to get past this?

(if this is the wrong place to ask this, I have a post on the HF Community ref link)

thanks

Bhavnick-Yali avatar Jul 21 '22 10:07 Bhavnick-Yali

Hi :hugs:

I am facing this error while trying to export my DeBERTaV3 model to ONNX format.

Though, @ChainYo, in your original post you mention that DeBERTaV2 should be available. Based on this error, I feel like there might be a bug here.

Could someone help me sort this out or tell me how to get past this?

(if this is the wrong place to ask this, I have a post on the HF Community ref link)

thanks

Hey @Bhavnick-Yali, can you tell me which version of Transformers you are using? I see Deberta in the available models in the source code; check it here: https://github.com/huggingface/transformers/blob/main/src/transformers/onnx/features.py#L240

chainyo avatar Jul 21 '22 11:07 chainyo

Hi @ChainYo! :hugs:

I am working on Colab and using the !pip install transformers[onnx] command, so I believe it's the latest version.

Yes, that's the issue: I see it as well, but it gives me the error when I try to run it. So I tried something else after this (ref: link), where I copied the OnnxConfig class of DeBERTaV2, but even that resulted in an unreadable error.

Bhavnick-Yali avatar Jul 22 '22 04:07 Bhavnick-Yali

I am working on Colab and using the !pip install transformers[onnx] command, so I believe it's the latest version.

Could you verify it precisely, please, to be sure it's not a Colab problem with an older version?

import transformers
print(transformers.__version__)

Could you also try to install transformers from the main GitHub branch?

$ pip install git+https://github.com/huggingface/transformers.git@main

chainyo avatar Jul 22 '22 08:07 chainyo

@ChainYo I'd like to work on LongT5.

Edit: Taking up Pegasus instead, since there already seems to be an implementation for LongT5 :-D

pramodith avatar Jul 24 '22 21:07 pramodith

Hi @ChainYo! :hugs:

The Colab version says 4.20.1, which was the June 22 release and should have the DeBERTaV2 config. It isn't working in this version, as I tried earlier.

Using the main GitHub branch, it installs the 4.21.0.dev0 version, from which the ONNX conversion works. Not sure what the issue is.

Thanks!

Bhavnick-Yali avatar Jul 25 '22 05:07 Bhavnick-Yali

The Colab version says 4.20.1, which was the June 22 release and should have the DeBERTaV2 config!

Are you sure about this?

Using the main GitHub branch, it installs 4.21.0.dev0 version, from which the ONNX conversion works. Not sure what the issue is.

I'm glad it solved your problem! :fireworks:

chainyo avatar Jul 25 '22 08:07 chainyo

@ChainYo I would love to take up CLIP if no one is working on it yet!

unography avatar Aug 04 '22 16:08 unography

@ChainYo I'd like to take up VisualBERT if no one is working on it yet?

shivalikasingh95 avatar Aug 05 '22 22:08 shivalikasingh95

Hi @ChainYo, while converting the CLIP model to ONNX, I'm getting this error while it's validating the ONNX model:

Validating ONNX model...
Traceback (most recent call last):
  File "/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py", line 107, in <module>
    main()
  File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py", line 100, in main
    validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol)
  File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/convert.py", line 375, in validate_model_outputs
    session = InferenceSession(onnx_model.as_posix(), options, providers=["CPUExecutionProvider"])
  File "/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 347, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 395, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for ArgMax(13) node with name 'ArgMax_3468'

This is supposedly solved in the original repo by https://github.com/openai/CLIP/pull/219. Does that change need to be included inside transformers as well?

unography avatar Aug 06 '22 16:08 unography

Does that change need to be included inside transformers as well?

Yes, modeling files are often updated to work with ONNX or torch.fx for instance (as long as the changes are minimal).
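
For the curious, the workaround in the linked CLIP PR boils down to casting the token ids before the argmax used for text pooling, since ONNX Runtime's CPU ArgMax kernel does not cover int64 inputs. A standalone sketch of the idea (tensor names and shapes are made up):

import torch

# stand-ins for CLIP's text pooling inputs (hypothetical shapes)
last_hidden_state = torch.randn(2, 7, 512)
input_ids = torch.randint(0, 1000, (2, 7))

# cast to int32 so the exported ArgMax node has a supported kernel
eos_positions = input_ids.to(torch.int).argmax(dim=-1)
pooled_output = last_hidden_state[torch.arange(last_hidden_state.shape[0]), eos_positions]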

NielsRogge avatar Aug 07 '22 09:08 NielsRogge

Hi @ChainYo, while converting the CLIP model to ONNX, I'm getting this error while it's validating the ONNX model:

This is supposedly solved in the original repo by openai/CLIP#219. Does that change need to be included inside transformers as well?

Do you want to work on this PR? If so, open it and ping the CLIP maintainer from Hugging Face; it should be cool. If not, just tell me and I can try to open the PR.

chainyo avatar Aug 08 '22 07:08 chainyo

Do you want to work on this PR? If so, open it and ping the CLIP maintainer from Hugging Face; it should be cool. If not, just tell me and I can try to open the PR.

Sure, I"ll open the PR, happy to work on it

unography avatar Aug 08 '22 08:08 unography

Do you want to work on this PR? If so, open it and ping the CLIP maintainer from Hugging Face; it should be cool. If not, just tell me and I can try to open the PR.

Added the PR here: https://github.com/huggingface/transformers/pull/18515

unography avatar Aug 08 '22 08:08 unography

Added PR for OWLViT: https://github.com/huggingface/transformers/pull/18588

unography avatar Aug 11 '22 19:08 unography

Hi! Just wondering, when are all these new configs going to be included? Which release? Great work, I will try to add one or two myself.

irg1008 avatar Aug 31 '22 22:08 irg1008

Hi! Just wondering, when are all these new configs going to be included? Which release? Great work, I will try to add one or two myself.

Hey @irg1008, it's integrated continuously with each transformers release. If you are looking for a model that is not available in the latest version, you can still install the package from the main branch:

pip install git+https://github.com/huggingface/transformers.git

chainyo avatar Sep 02 '22 07:09 chainyo