optimum
How can I export an ONNX model for Qwen/Qwen-7B?
Feature request
I need to export the Qwen model to ONNX to accelerate inference.
optimum-cli export onnx --model Qwen/Qwen-7B qwen_optimum_onnx/ --trust-remote-code
Motivation
I want to export the Qwen model so that I can run it with ONNX Runtime.
Your contribution
I can provide example inputs and outputs.
@smile2game Thank you. Qwen is not natively supported in Transformers (but Qwen2 is: https://github.com/huggingface/transformers/pull/28436). I tried running the export for Qwen-7B and got:
Traceback (most recent call last):
  File "/home/felix/miniconda3/envs/fx/bin/optimum-cli", line 8, in <module>
    sys.exit(main())
  File "/home/felix/optimum/optimum/commands/optimum_cli.py", line 163, in main
    service.run()
  File "/home/felix/optimum/optimum/commands/export/onnx.py", line 261, in run
    main_export(
  File "/home/felix/optimum/optimum/exporters/onnx/__main__.py", line 351, in main_export
    onnx_export_from_model(
  File "/home/felix/optimum/optimum/exporters/onnx/convert.py", line 1035, in onnx_export_from_model
    raise ValueError(
ValueError: Trying to export a qwen model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type qwen to be supported natively in the ONNX export.
This is expected. Have you checked https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#customize-the-export-of-transformers-models-with-custom-modeling?