Ella Charlaix
To fix the code style test, you can run the following:
```
pip install .[quality]
make style
```
Hi @tsmith023, apologies for the late reply; yes, [`MarianMT`](https://huggingface.co/docs/transformers/model_doc/marian) models are supported. Concerning the slow inference you're reporting, are you comparing the resulting OpenVINO model with the original PyTorch model...
Hi @rajeevsrao, could you share the script you used for the export?
> It should reside in optimum. Cc: @echarlaix You mean patching the model in `optimum`? Depending on the modifications needed, it could make sense to have it in `diffusers`...
Hi @saikrishna2893, conversion to fp16 is enabled in `optimum` through the [ORTOptimizer](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/optimization#optimization), but stable diffusion models are not yet supported. This could additionally be integrated in the [ONNX export CLI](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli)...
> Conversion to fp16 is enabled in `optimum` through the [ORTOptimizer](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/optimization#optimization), but stable diffusion models are not yet supported. This could additionally be integrated in the [ONNX export CLI](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) in...
I think we're close to merging; just waiting for a couple of the points above to be addressed. Let me know if you need any help from my side @dtrawins (fixing conflicts...
Closing it to keep discussion in https://github.com/huggingface/optimum-intel/issues/561
Hi @theoctopusride, it looks like you're running out of memory during conversion (a memory-intensive step that can require several times your model's size in memory). If...
> neural_compressor is going to become a `framework` right? (`library_tag` in the README) Makes sense to me! Another option could be to have `optimum` as the `library_tag`,...