Results: 4 issues by Owen Zhang

### What happened? Hi, I'm trying to compile llama.cpp with the hipBLAS backend. CPU: 12th Gen Intel(R) Core(TM) i9-12900K, GPU: AMD Radeon PRO W7800 (gfx1100), OS: Windows 11 23H2...

bug-unconfirmed
low severity

### Feature request Hi, it would be great if optimum supported EXAONE models, e.g. https://huggingface.co/LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct, for ONNX export and optimization. ### Motivation To be able to run inference with DirectML...

feature-request
onnx
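
The request above boils down to the usual optimum export-then-run flow. Here is a minimal sketch of what that could look like once EXAONE support lands, assuming the standard `ORTModelForCausalLM` export path and ONNX Runtime's `DmlExecutionProvider` for DirectML; with current optimum releases this call is expected to fail precisely because the architecture is not yet supported:

```python
# Sketch only: assumes optimum has gained an EXAONE ONNX export config,
# which is exactly what this feature request asks for.
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

model_id = "LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct"

# export=True converts the PyTorch checkpoint to ONNX on the fly;
# trust_remote_code is needed because EXAONE ships custom modeling code.
model = ORTModelForCausalLM.from_pretrained(
    model_id,
    export=True,
    trust_remote_code=True,
    provider="DmlExecutionProvider",  # DirectML EP from onnxruntime-directml
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hello, EXAONE!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```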

### System Info ```shell optimum==1.24.0 Python 3.12.4 ``` ### Who can help? Hi, when trying to convert this model to ONNX format: https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct with this code: ``` from transformers import...

bug
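
The code in the excerpt is cut off, so as a stand-in here is a hedged sketch of how such a conversion is typically attempted programmatically with optimum's exporter API; the call below is an assumption about the usual `main_export` entry point, not the reporter's actual snippet, and with optimum 1.24.0 it is expected to error out because EXAONE is not a supported architecture:

```python
# Hypothetical reproduction sketch, not the code from the issue.
from optimum.exporters.onnx import main_export

main_export(
    model_name_or_path="LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct",
    output="exaone-3.0-7.8b-onnx",      # destination directory
    task="text-generation-with-past",   # decoder-only LM with KV cache
    trust_remote_code=True,             # EXAONE uses custom modeling code
)
```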

## Motivation To be able to use selected kernels from the amdmlss library. ## Technical Details Adds an mlss_mha operator, a fuse_mlss pass, and mlss JIT compiler files as an initial proposal for instruction replacement...
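
The instruction-replacement idea behind the proposed fuse_mlss pass can be sketched with a toy rewrite over a flat instruction list; every name below (the `Node` class, `fuse_mha`, the op strings) is hypothetical and only illustrates the kind of pattern match and single-node replacement such a pass would perform on the project's real IR:

```python
# Purely illustrative toy IR and pass; not the project's actual data structures.
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str
    inputs: list = field(default_factory=list)

def fuse_mha(graph: list) -> list:
    """Replace a matmul -> softmax -> matmul chain with one fused node,
    mirroring what a fuse_mlss-style instruction-replacement pass does."""
    out, i = [], 0
    while i < len(graph):
        window = graph[i:i + 3]
        if [n.op for n in window] == ["matmul", "softmax", "matmul"]:
            # Collapse the matched attention subgraph into a single call
            # to a fused vendor kernel (placeholder name).
            fused_inputs = window[0].inputs + window[2].inputs[1:]
            out.append(Node("fused_mha", fused_inputs))
            i += 3
        else:
            out.append(graph[i])
            i += 1
    return out

if __name__ == "__main__":
    g = [Node("matmul", ["Q", "K_T"]), Node("softmax", ["scores"]),
         Node("matmul", ["probs", "V"]), Node("add", ["attn", "residual"])]
    print([n.op for n in fuse_mha(g)])  # ['fused_mha', 'add']
```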