ONNXConfig: Add a configuration for all available models
This issue is about the working group specially created for this task. If you are interested in helping out, take a look at this organization, or add me on Discord: ChainYo#3610
We want to contribute to HuggingFace's ONNX implementation for all available models on HF's hub. There are already a lot of architectures implemented for converting PyTorch models to ONNX, but we need more! We need them all!
Feel free to join us in this adventure! Join the org by clicking here
Here is a non-exhaustive list of all available models:
- [x] Albert
- [x] BART
- [x] BeiT
- [x] BERT
- [x] BigBird
- [x] BigBirdPegasus
- [x] Blenderbot
- [x] BlenderbotSmall
- [x] BLOOM
- [x] CamemBERT
- [ ] CANINE
- [ ] CLIP
- [ ] CodeGen
- [x] ConvNext
- [x] ConvBert
- [ ] CTRL
- [ ] CvT
- [x] Data2VecText
- [x] Data2VecVision
- [x] Deberta
- [x] DebertaV2
- [x] DeiT
- [ ] DecisionTransformer
- [x] DETR
- [x] Distilbert
- [ ] DPR
- [ ] DPT
- [x] ELECTRA
- [ ] FNet
- [ ] FSMT
- [x] Flaubert
- [ ] FLAVA
- [ ] Funnel Transformer
- [ ] GLPN
- [x] GPT2
- [x] GPTJ
- [x] GPT-Neo
- [ ] GPT-NeoX
- [ ] Hubert
- [x] I-Bert
- [ ] ImageGPT
- [ ] LED
- [x] LayoutLM
- [ ] 🛠️ LayoutLMv2
- [x] LayoutLMv3
- [ ] LayoutXLM
- [x] LeViT
- [ ] Longformer
- [x] LongT5
- [ ] 🛠️ Luke
- [ ] Lxmert
- [x] M2M100
- [ ] MaskFormer
- [x] mBart
- [ ] MCTCT
- [ ] MPNet
- [x] MT5
- [x] MarianMT
- [ ] MegatronBert
- [x] MobileBert
- [x] MobileViT
- [ ] Nyströmformer
- [x] OpenAIGPT-2
- [ ] 🛠️ OPT
- [x] PLBart
- [ ] Pegasus
- [x] Perceiver
- [ ] PoolFormer
- [ ] ProphetNet
- [ ] QDQBERT
- [ ] RAG
- [ ] REALM
- [ ] 🛠️ Reformer
- [ ] RemBert
- [x] ResNet
- [ ] RegNet
- [ ] RetriBert
- [x] RoFormer
- [x] RoBERTa
- [ ] SEW
- [ ] SEW-D
- [ ] SegFormer
- [ ] Speech2Text
- [ ] Speech2Text2
- [ ] Splinter
- [x] SqueezeBERT
- [ ] Swin Transformer
- [x] T5
- [ ] TAPAS
- [ ] TAPEX
- [ ] Transformer XL
- [ ] TrOCR
- [ ] UniSpeech
- [ ] UniSpeech-SAT
- [ ] VAN
- [x] ViT
- [ ] Vilt
- [ ] VisualBERT
- [ ] Wav2Vec2
- [ ] WavLM
- [ ] XGLM
- [x] XLM
- [ ] XLMProphetNet
- [x] XLM-RoBERTa
- [x] XLM-RoBERTa-XL
- [ ] 🛠️ XLNet
- [x] YOLOS
- [ ] Yoso
🛠️ next to a model means that a PR is in progress. If there is nothing next to a model, it means that ONNX does not yet support the model, and we need to add support for it.
If you need help implementing an unsupported model, here is a guide from HuggingFace's documentation.
If you want an example implementation, I did one for CamemBERT a few months ago.
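For reference, a minimal ONNX config usually boils down to declaring the model inputs and their dynamic axes. Here is a rough sketch in the spirit of the guide above (the class name and input names are only illustrative for a text encoder, not a drop-in implementation):

from typing import Mapping, OrderedDict
from transformers.onnx import OnnxConfig

class MyModelOnnxConfig(OnnxConfig):
    # Declare the model inputs and which axes are dynamic (batch size and sequence length).
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )

Decoder, seq2seq, vision and audio models need extra pieces (past key values, pixel or audio inputs), which is where the linked guide and the merged PRs below help.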
- GPT-J: #16274
- FlauBERT: #16279
- LayoutLMv2: #16309
Let me try with BigBird
- BigBird: #16427
Love the initiative here, thanks for opening an issue! Added the Good First Issue label so that it's more visible :)
Thanks for the label. I don't know if it's easy to begin, but it's cool if more people see this and can contribute!
I would like to try Luke. However, Luke doesn't support any features apart from the default AutoModel. Its main feature is LukeForEntityPairClassification for relation extraction. Should I convert luke-base to ONNX, or LukeForEntityPairClassification, which has a classifier head?
Data2VecAudio doesn't have an ONNXConfig yet. I wrote its ONNXConfig based on Data2VecTextOnnxConfig, but it throws an error. Can anyone help me?
from typing import Mapping, OrderedDict
from transformers.onnx import OnnxConfig
from transformers import AutoConfig
from pathlib import Path
from transformers.onnx import export
from transformers import AutoTokenizer, AutoModel
class Data2VecAudioOnnxConfig(OnnxConfig):
@property
def inputs(self):
return OrderedDict(
[
("input_values", {0: "batch", 1: "sequence"}),
("attention_mask", {0: "batch", 1: "sequence"}),
]
)
config = AutoConfig.from_pretrained("facebook/data2vec-audio-base-960h")
onnx_config = Data2VecAudioOnnxConfig(config)
onnx_path = Path("facebook/data2vec-audio-base-960h")
model_ckpt = "facebook/data2vec-audio-base-960h"
base_model = AutoModel.from_pretrained(model_ckpt)
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
Error:
ValueError Traceback (most recent call last)
/var/folders/2t/0w65vdjs2m32w5mmzzgtqrhw0000gn/T/ipykernel_59977/667985886.py in <module>
27 tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
28
---> 29 onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
~/miniconda3/lib/python3.9/site-packages/transformers/onnx/convert.py in export(tokenizer, model, config, opset, output)
255
256 if is_torch_available() and issubclass(type(model), PreTrainedModel):
--> 257 return export_pytorch(tokenizer, model, config, opset, output)
258 elif is_tf_available() and issubclass(type(model), TFPreTrainedModel):
259 return export_tensorflow(tokenizer, model, config, opset, output)
~/miniconda3/lib/python3.9/site-packages/transformers/onnx/convert.py in export_pytorch(tokenizer, model, config, opset, output)
112
113 if not inputs_match:
--> 114 raise ValueError("Model and config inputs doesn't match")
115
116 config.patch_ops()
ValueError: Model and config inputs doesn't match
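One way to narrow down a mismatch like this (a hypothetical debugging snippet, not part of the transformers API) is to compare the model's forward signature with the input names the config declares; note that the exporter also builds dummy inputs from the tokenizer, which may be another source of the mismatch for audio models:

import inspect

# Argument names expected by the model's forward pass (e.g. 'input_values' for audio models)
forward_parameters = list(inspect.signature(base_model.forward).parameters)
print(forward_parameters)

# Input names declared in the ONNX config above
print(list(onnx_config.inputs))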
I would like to try Luke. However, Luke doesn't support any features apart from the default AutoModel. Its main feature is LukeForEntityPairClassification for relation extraction. Should I convert luke-base to ONNX, or LukeForEntityPairClassification, which has a classifier head?
When you implement the ONNX config for a model, it works for all kinds of tasks, because the base model and the ones pre-packaged for fine-tuning have the same inputs.
So you can base your implementation on the base model, and other tasks will work too.
- LUKE: #16562 from @aakashb95 :+1:
Still learning
Issue description
Hello, thank you for supporting GPTJ with ONNX. But when I exported an ONNX checkpoint using transformers 4.18.0, I got the error below.
(venv) root@V100:~# python -m transformers.onnx --model=gpt-j-6B/ onnx/
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/data/lvenv/lib/python3.8/site-packages/transformers/onnx/__main__.py", line 99, in <module>
main()
File "/data/venv/lib/python3.8/site-packages/transformers/onnx/__main__.py", line 62, in main
raise ValueError(f"Unsupported model type: {config.model_type}")
ValueError: Unsupported model type: gptj
I found that GPTJ seems to be supported for ONNX export according to your documentation for transformers 4.18.0 [https://huggingface.co/docs/transformers/serialization#exporting-a-model-for-an-unsupported-architecture] and the code [src/transformers/onnx/features.py etc.], but I still got this error. I then checked how config.model_type is resolved in /data/venv/lib/python3.8/site-packages/transformers/onnx/__main__.py, which relies on two mappings [from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING_NAMES and from ..models.auto.tokenization_auto import TOKENIZER_MAPPING_NAMES]. I did not find GPTJ's config in these mappings, which does not seem right.
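For what it's worth, one way to check which ONNX features the installed transformers version registers for a model type (a small sketch; the exact API may shift between versions) is to query FeaturesManager directly:

from transformers.onnx.features import FeaturesManager

# Lists the ONNX export features registered for the "gptj" model type.
# A KeyError here means the installed release does not register GPT-J for ONNX export yet.
print(FeaturesManager.get_supported_features_for_model_type("gptj"))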
Environment info
- Platform: Ubuntu 20.04.2
- python: 3.8.10
- PyTorch: 1.10.0+cu113
- transformers: 4.18.0
- GPU: V100
Hello, thank you for supporting GPTJ with ONNX. But when I exported an ONNX checkpoint using transformers 4.18.0, I got the error below.
Hello @pikaqqqqqq, thanks for reporting the problem. I opened a PR with a quick fix to avoid this problem, check #16780
- ConvBERT: #16859
Hello! I added the RoFormer ONNX config here: #16861. I'm not 100% sure who to ask for review, so I'm posting this here. Thanks!
Hi! I would like to try building the ONNX config for Reformer.
Hi @Tanmay06 that would be awesome. Don't hesitate to open a PR with your work when you feel it's quite good. You can ping me anytime if you need help!
Hello! I would like to work on the ONNX config for ResNet.
Nice, don't hesitate to ping me if help is needed :hugs:
Hi! I would like to work on the ONNX config for BigBirdPegasus.
Hi, nice! If you need help you can tag me.
#17027 Here is one for XLNet!
#17029 PR for MobileBert.
#17030 Here is the PR for XLM
#17078 PR for BigBirdPegasus
#17213 for Perceiver
#17176 for Longformer (work in progress, help appreciated)
Hi @ChainYo, I would like to work on the ONNX config for SqueezeBert. Thanks!
@ChainYo, I would like to get started on the ONNX Config for DeBERTaV2!
Ok noted! Ping me on the PR you open if you need help. 🤗
@ChainYo ResNet and ConvNeXT are now supported, see #17585 and #17627
Super cool! I'll update the list then.
Updated the list to leverage GitHub tasks instead. You can now see that 33 of the 93 models are supported. However, there are some new models to be added to the list, which were added in v4.20.
Thanks! I will update the list very soon then.
EDIT: I added 13 new models to the list!
@ChainYo OPT-30B here https://github.com/huggingface/transformers/pull/17771
@ChainYo Support for DETR has been added in #17904
@ChainYo Support for LayoutLMv3 has been added in #17953
So cool, thanks a lot! @regisss
@ChainYo I have added LeViT support here https://github.com/huggingface/transformers/pull/18154
I have been using Hugging Face for a while and wanted to contribute (#16308). I was trying the Swin Transformer one, replicating the ViT one here. Running the tests, I got:
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_120_swin_default - ModuleNotFoundError: No module named 'onnxruntime'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_121_swin_image_classification - ModuleNotFoundError: No module named 'onnxruntime'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_122_swin_masked_im - ModuleNotFoundError: No module named 'onnxruntime'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_120_swin_default - ModuleNotFoundError: No module named 'onnxruntime'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_121_swin_image_classification - ModuleNotFoundError: No module named 'onnxruntime'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_122_swin_masked_im - ModuleNotFoundError: No module named 'onnxruntime'
The same was the case for the ViT implementation; the tests gave the same errors. Some DETR issue was also present for both ViT and Swin Transformer (related to from _lzma import *).
@bibhabasumohapatra you have to install ONNX runtime:
pip install '.[onnxruntime]'
Hello @ChainYo, I would like to contribute the ONNX config for the Decision Transformer. I'd love some guidance on how to go about this, as this is my first contribution; I really appreciate your help in getting me off the ground.
@skanjila You can check how to do it here, it is well described: https://huggingface.co/docs/transformers/v4.20.1/en/serialization#exporting-a-model-for-an-unsupported-architecture Do not hesitate to open a PR and we will support you if needed :)
Thanks @regisss for the appropriate link!
There are also multiple merged PRs adding ONNX configs that can give you an idea of all the files you need to update to make it work. Look at #16274, #16279, #16427, #17213 or #18154.
Sometimes there are edge cases where you have to implement more things (like LayoutLMv3), but we can help you through the PR once you start implementing it, as @regisss said!
@ChainYo will get started on this, thank you so much for your help, expect a PR coming soon
Hi :hugs:
I am facing this error while trying to export my DeBERTaV3 model to ONNX format.
2022-07-21 10:21:22.414296: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
Some weights of the model checkpoint at deberta-v3-large-conll-doccano/ were not used when initializing DebertaV2Model: ['classifier.bias', 'classifier.weight']
- This IS expected if you are initializing DebertaV2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DebertaV2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 107, in <module>
main()
File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 76, in main
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=args.feature)
File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/features.py", line 519, in check_supported_model_or_raise
model_features = FeaturesManager.get_supported_features_for_model_type(model_type, model_name=model_name)
File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/features.py", line 422, in get_supported_features_for_model_type
f"{model_type_and_model_name} is not supported yet. "
KeyError: "deberta-v2 is not supported yet. Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'camembert', 'convbert', 'convnext', 'data2vec-text', 'deit', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'ibert', 'layoutlm', 'longt5', 'marian', 'mbart', 'mobilebert', 'm2m-100', 'perceiver', 'resnet', 'roberta', 'roformer', 'squeezebert', 't5', 'vit', 'xlm', 'xlm-roberta'] are supported. If you want to support deberta-v2 please propose a PR or open up an issue."
Though, @ChainYo, in your original post you mention DeBERTaV2 should be available. Based on this error, I feel like there might be a bug here.
Could someone help me sort this out or tell me how to get past this?
(if this is the wrong place to ask this, I have a post on the HF Community ref link)
thanks
Hey @Bhavnick-Yali, can you tell me which version of transformers you are using? I see DeBERTa among the available models in the source code.
Check it here: https://github.com/huggingface/transformers/blob/main/src/transformers/onnx/features.py#L240
Hi @ChainYo! :hugs:
I am working on Colab and using the !pip install transformers[onnx] command, so I believe it's the latest version.
Yes, that's the issue: I see it as well, but it gives me the error when trying to run it. So I tried something else after this (ref: link), where I copied the ONNX config class of DeBERTaV2, but even that resulted in an unreadable error.
Could you verify it precisely, please, to be sure it's not a Colab problem with an older version?
import transformers
print(transformers.__version__)
Could you also try to install transformers from the main GitHub branch?
$ pip install git+https://github.com/huggingface/transformers.git@main
@ChainYo I'd like to work on LongT5.
Edit: Taking up Pegasus instead, since there already seems to be an implementation for LongT5 :-D
Hi @ChainYo! :hugs:
The Colab version says 4.20.1, which was the 22 June release and should have the DeBERTaV2 config. It isn't working in this version, as I found earlier.
Using the main GitHub branch, it installs the 4.21.0.dev0 version, from which the ONNX conversion works. Not sure what the issue is.
Thanks!
Colab version says 4.20.1, which was the 22 June release and should have the DeBERTaV2 config!
Are you sure about this?
Using the main GitHub branch, it installs 4.21.0.dev0 version, from which the ONNX conversion works. Not sure what the issue is.
I'm glad it solved your problem! :fireworks:
@ChainYo would love to take up CLIP if there's no one working on it yet?
@ChainYo I'd like to take up VisualBERT if no one is working on it yet?
Hi @ChainYo, while converting the CLIP model to ONNX, I'm getting this error while it's validating the ONNX model:
Validating ONNX model...
Traceback (most recent call last):
File "/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py", line 107, in <module>
main()
File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py", line 100, in main
validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol)
File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/convert.py", line 375, in validate_model_outputs
session = InferenceSession(onnx_model.as_posix(), options, providers=["CPUExecutionProvider"])
File "/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 347, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 395, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for ArgMax(13) node with name 'ArgMax_3468'
This is supposedly solved in the original repo by: https://github.com/openai/CLIP/pull/219 Does that change need to be included inside transformers as well?
Does that change need to be included inside transformers as well?
Yes, modeling files are often updated to work with ONNX or torch.fx for instance (as long as the changes are minimal).
Do you want to work on this PR? If so, open it and ping the CLIP maintainer from Hugging Face; that should be cool. If not, just tell me and I could try to open the PR.
Sure, I'll open the PR, happy to work on it.
Added the PR here: https://github.com/huggingface/transformers/pull/18515
Added a PR for OWLViT: https://github.com/huggingface/transformers/pull/18588
Hi! Just wondering, when are all these new configs going to be included? In which release? Great work, I will try to add one or two myself.
Hey @irg1008, it's integrated continuously with each transformers release. If you are looking for a model that is not available in the latest version, you can still install the package from the main branch:
pip install git+https://github.com/huggingface/transformers.git