optimum-intel
🤗 Optimum Intel: Accelerate inference with Intel optimization tools
Hello! I am interested in using [Omniparser V2](https://huggingface.co/microsoft/OmniParser-v2.0/blob/main/icon_caption/config.json) with Optimum-Intel. Upon inspection, it uses the same transformers class as Florence2, Florence2ForConditionalGeneration. Does this mean the code in the...
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks...
I bought an Intel Ultra 9 machine with 32 GB of CPU memory, 18 GB of NPU memory, and 18 GB of GPU memory. How do I run DeepSeek-R1 on the NPU and GPU? Don't let me down, I paid a lot for this computer. I hope you publish a tutorial for Intel users. Please assume the user is a complete beginner: foolproof, step-by-step instructions that install successfully.
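One possible path, sketched below under assumptions the maintainers have not confirmed: the full DeepSeek-R1 is far too large for 18 GB, so a distilled checkpoint (here `deepseek-ai/DeepSeek-R1-Distill-Qwen-7B`, an assumption) is exported to OpenVINO IR with int4 weight compression via optimum-intel, then placed on the GPU or NPU.

```python
# Hedged sketch: requires `pip install optimum[openvino]`. The model id below is
# an assumption -- a distilled DeepSeek-R1 checkpoint small enough for 18 GB.
MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"


def load_deepseek(device: str = "GPU"):
    """Export the checkpoint to OpenVINO IR with int4 weights and move it to `device`."""
    from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

    model = OVModelForCausalLM.from_pretrained(
        MODEL_ID,
        export=True,  # convert from the PyTorch checkpoint on the fly
        quantization_config=OVWeightQuantizationConfig(bits=4),
    )
    model.to(device)  # "GPU", or "NPU" where the runtime and driver support it
    return model
```

Whether the NPU plugin can run this particular model depends on the installed driver and OpenVINO version, so treat the `device="NPU"` path as something to verify, not a guarantee.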
Need support for Molmo models: https://huggingface.co/allenai/Molmo-7B-D-0924 Currently it gives the error below:
Exception has occurred: ValueError: could not broadcast input array from shape (84934656,) into shape (9216,) File "C:\Users\admin\Desktop\convert_model.py", line 15, in model = OVModelForCausalLM.from_pretrained(model_id, export=True, quantization_config=quantization_config) ValueError: could not broadcast input...
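For context on the error itself: NumPy raises this ValueError whenever an array is assigned into a destination whose shape it cannot be broadcast to, which is why a size mismatch between checkpoint weights and the expected tensor surfaces this way during export. A minimal reproduction, with small shapes standing in for `(84934656,)` and `(9216,)`:

```python
import numpy as np

# Destination buffer is smaller than the source array, mirroring the mismatch
# between the checkpoint weights and the tensor shape the exporter expects.
dst = np.empty(4)
src = np.zeros(9)

try:
    dst[:] = src  # assignment requires src to broadcast to dst's shape
    raised = None
except ValueError as exc:
    raised = exc

print(raised)  # could not broadcast input array from shape (9,) into shape (4,)
```

The fix therefore lies in the export path producing the right tensor shapes for this architecture, not in the user's script.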
# What does this PR do? Hello, when I try assisted generation with optimum-intel and openvino, it complains about ``` AttributeError: 'OVModelForCausalLM' object has no attribute '_is_stateful'. Did you mean:...
Hello, I want to run an on-device SLM on the NPU that ships with the "Intel(R) Core(TM) Ultra 5". However, although I have confirmed operation on the CPU and iGPU in the...
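Before trying to place a model on the NPU, it can help to check which devices the installed OpenVINO runtime actually exposes; "NPU" only appears when the NPU driver is installed and recognized. A small sketch (it returns an empty list when the openvino package is absent, so it is safe to run anywhere):

```python
import importlib.util


def list_ov_devices():
    """Return the device names OpenVINO can see (e.g. ['CPU', 'GPU', 'NPU']),
    or [] when the openvino package is not installed."""
    if importlib.util.find_spec("openvino") is None:
        return []
    import openvino as ov

    return ov.Core().available_devices


print(list_ov_devices())
```

If "NPU" is missing from the list, the problem is at the driver/runtime level rather than in optimum-intel itself.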
Getting an error while trying to convert the model zhengpeng7/BiRefNet to an OpenVINO IR file. Command used: ------------------ optimum-cli export openvino --model "zhengpeng7/BiRefNet" BiRefNet --trust-remote-code