fxmarty
Hi @Harini-Vemula-2382, thank you for the report. I cannot reproduce the issue with `torch==2.2.2`, `onnx==1.16.0`, `onnxruntime==1.17.3` on Optimum main (installed from source). I'll do a release 1.19 today, let...
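For reference, a minimal snippet to print the relevant versions in your environment when reporting such issues:

```python
import onnx
import onnxruntime
import torch

# Versions used when attempting to reproduce the issue.
print("torch:", torch.__version__)
print("onnx:", onnx.__version__)
print("onnxruntime:", onnxruntime.__version__)
```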
Yes, for context: https://github.com/huggingface/optimum/pull/570#discussion_r1045939305

My understanding is that this choice was made because, in the `_TASKS_TO_AUTOMODELS` mapping, we want to easily map `stable-diffusion` to `StableDiffusionPipeline`. If we were to use...
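For illustration only, a simplified sketch of the kind of task-to-class mapping being discussed; the actual mapping in Optimum is larger, and the entries here are abridged placeholders:

```python
# Hypothetical, abridged version of the mapping; see the Optimum source for
# the real one.
_TASKS_TO_AUTOMODELS = {
    "text-classification": "AutoModelForSequenceClassification",
    "image-classification": "AutoModelForImageClassification",
    # Keying on "stable-diffusion" makes the mapping to the pipeline class direct:
    "stable-diffusion": "StableDiffusionPipeline",
}
```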
@mht-sharma This is a similar issue to the one with timm/transformers, as discussed in the PR.
Hi @pradeepdev-1995, how is this related to Optimum? You can try using an absolute path instead of a relative path for your `model.onnx` file.
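For example, assuming the file is loaded with an ONNX Runtime `InferenceSession` (a sketch, since the original snippet is not shown):

```python
import os

import onnxruntime as ort

# Resolve the relative path to an absolute one before creating the session,
# so the file lookup does not depend on the current working directory.
model_path = os.path.abspath("model.onnx")
session = ort.InferenceSession(model_path)
```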
Hi @BowenBao, thank you for the PR! Yes, I think moving to a dynamo-based export is a good thing. One thing that is currently missing in the Transformers, Diffusers, and TIMM CI is...
@thiagocrepaldi In the current state, I think simply adding (slow) tests for dynamo, duplicating https://github.com/huggingface/optimum/blob/7e08a820b65a359a61444abe51df4eb96b26b2e3/tests/exporters/onnx/test_exporters_onnx_cli.py#L384 with a `@slow` decorator and `dynamo=True`, and running `RUN_SLOW=1 pytest tests/exporters/onnx -k "test_exporters_cli_pytorch_cpu" -s...
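A minimal sketch of what such a duplicated slow test could look like; note the `--dynamo` CLI flag used below is hypothetical and would need to be added to the exporter CLI first:

```python
import subprocess

from transformers.testing_utils import slow


@slow
def test_exporters_cli_pytorch_cpu_dynamo(tmp_path):
    # Mirrors the existing CLI test, with a hypothetical switch enabling the
    # dynamo-based export path.
    subprocess.run(
        f"optimum-cli export onnx --model hf-internal-testing/tiny-random-bert --dynamo {tmp_path}",
        shell=True,
        check=True,
    )
```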
Thank you @BowenBao, feel free to fix in this PR.
@mmingo848 You can use:
```bash
optimum-cli export onnx --help
optimum-cli export onnx --model openai/whisper-large-v3 whisper_onnx
```
and then use [ORTModelForSpeechSeq2Seq](https://huggingface.co/docs/optimum/main/en/onnxruntime/package_reference/modeling_ort#optimum.onnxruntime.ORTModelForSpeechSeq2Seq). Although `decoder_model.onnx` and `decoder_with_past_model.onnx` are saved in the output folder,...
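After the export, loading and running the model could look like this (the output folder name matches the command above; the audio file is a placeholder):

```python
from transformers import AutoProcessor, pipeline

from optimum.onnxruntime import ORTModelForSpeechSeq2Seq

# Load the exported model from the output folder of the CLI command above.
model = ORTModelForSpeechSeq2Seq.from_pretrained("whisper_onnx")
processor = AutoProcessor.from_pretrained("whisper_onnx")

asr = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)
print(asr("sample.wav"))  # "sample.wav" is a placeholder audio file
```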
@MrRace Yes, it can happen; I would not worry about it. We should improve the warning.
@MrRace You need `--task automatic-speech-recognition-with-past`. There should be a log during the export noting that specifying `--task automatic-speech-recognition` disables the KV cache.
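The same export can also be done programmatically; a minimal sketch using `main_export` (the model name and output folder are placeholders):

```python
from optimum.exporters.onnx import main_export

# Export with the KV cache enabled, equivalent to passing
# `--task automatic-speech-recognition-with-past` to the CLI.
main_export(
    model_name_or_path="openai/whisper-large-v3",
    output="whisper_onnx_with_past",
    task="automatic-speech-recognition-with-past",
)
```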