MiniCPM-V
Using v2.6 model with video on Mac
First of all, thank you for the amazing work! I'm wondering if there's a way to use video input on a Mac. I can load the v2.6 model in llama.cpp, but it only accepts image input. When I try to load the 2.6 model with transformers.AutoModel, it requires flash attention, which unfortunately is not available on Mac. Is there a workaround for this? Thanks!
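One possible workaround on the transformers side: pass `attn_implementation` to `from_pretrained` and fall back to PyTorch's built-in SDPA when the `flash_attn` package isn't installed (as on Mac). This is a minimal sketch, assuming the model's remote code honors the `attn_implementation` argument; the `openbmb/MiniCPM-V-2_6` model id and the `pick_attn_implementation` helper are illustrative assumptions, not confirmed by this thread.

```python
# Sketch: choose an attention backend that actually exists on this machine,
# falling back to SDPA where flash-attn cannot be installed (e.g. macOS).
import importlib.util


def pick_attn_implementation() -> str:
    # Use flash_attention_2 only if the flash_attn package is importable;
    # otherwise use PyTorch's scaled-dot-product attention ("sdpa").
    if importlib.util.find_spec("flash_attn") is not None:
        return "flash_attention_2"
    return "sdpa"


def load_minicpm(model_id: str = "openbmb/MiniCPM-V-2_6"):
    # Not executed here: downloads a multi-GB checkpoint.
    # trust_remote_code=True is required because MiniCPM-V ships custom code.
    from transformers import AutoModel

    return AutoModel.from_pretrained(
        model_id,
        trust_remote_code=True,
        attn_implementation=pick_attn_implementation(),
    )


attn_impl = pick_attn_implementation()
```

Whether the custom modeling code in the checkpoint respects the SDPA fallback may vary between model revisions, so this is worth testing before relying on it.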
@chigkim You need the video-understanding patch from https://github.com/ggerganov/llama.cpp/pull/9165.patch. It ran fine on the Mac.
Thanks so much! The PR worked!
How about streaming?