Sreerag

Results: 5 comments of Sreerag

I was able to solve this issue by installing the huggingface-hub package: `pip install huggingface-hub==0.24.0`
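In case it helps anyone else confirm the fix, here is a quick check that the pinned release is the one actually being picked up (a minimal sketch, nothing specific to this repo):

```python
# Print the huggingface_hub version visible to the current environment,
# to confirm the pinned 0.24.0 release is the one in use.
import huggingface_hub

print(huggingface_hub.__version__)  # expected: 0.24.0
```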

Do we have an estimated timeline for when support for the model will be available?

I noticed that support for Qwen2.5 VL has been added. Could you please guide me on how to install the package that includes this feature?
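From what I can tell, installing optimum-intel directly from the main branch might already pick up the change (just my guess): `pip install git+https://github.com/huggingface/optimum-intel.git` — please correct me if a released version is needed instead.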

I have tried the inference with the nightly build and it is working on CPU. However, when running the inference on GPU, the output appears as !!!!!!!!!! and not in...
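For reference, this is roughly how I am switching devices (a minimal sketch; the model path, prompt handling, and generation settings are simplified from my actual script):

```python
# Load the exported model and switch between CPU and GPU inference.
# model_dir is a placeholder for the directory produced by optimum-cli export.
from transformers import AutoProcessor
from optimum.intel import OVModelForVisualCausalLM

model_dir = "Qwen2.5-VL-7B-Instruct"
processor = AutoProcessor.from_pretrained(model_dir, trust_remote_code=True)

# device="CPU" produces sensible text; device="GPU" returns only "!" tokens.
model = OVModelForVisualCausalLM.from_pretrained(
    model_dir, device="GPU", trust_remote_code=True
)

inputs = processor(text="Describe this image.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```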

I tried running inference on the model with OpenVINO 2025.2.0rc2 after converting it with the command below: `optimum-cli export openvino --model Qwen/Qwen2.5-VL-7B-Instruct Qwen2.5-VL-7B-Instruct --weight-format int4 --trust-remote-code --sym --group-size 128 --backup-precision...