llama.cpp
Support for InternVL
The new InternVL-Chat-V1.5 just came out; the quality is really great and the benchmark scores are high too. Possibly the best open-source vision language model yet?
Can llama.cpp support it? @cmp-nct, @cjpais, @danbev, @monatis, have any of you tried it?
Demo: https://internvl.opengvlab.com/
Would be great
I am working on a few projects right now, but if I get a chance I will try to get support in (assuming it doesn't already work). I would also like to get moondream support in
+1
fwiw moondream support was merged in #6899, haven't had a chance to look at/try internvl
I would really like to get InternVL support in llama.cpp.
I have tested the demo extensively and it is really good, so much so that I feel like it is a game changer in many ways. But running it on consumer hardware is not possible right now.
As noted here: https://github.com/InternLM/lmdeploy/issues/1501#issuecomment-2078558853
architecture: InternViT-6B-448px-V1-5 + MLP + InternLM2-Chat-20B I am afraid it cannot fit into A10 (24G) even though LLM weights are quantized into 4 bits.
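The "cannot fit into 24 GB" claim above can be checked with back-of-the-envelope arithmetic. A rough sketch, assuming ~20B LLM parameters at 4 bits/weight and the ~6B vision tower kept in fp16 (approximate parameter counts, not measured values):

```python
# Rough memory estimate for InternVL-Chat-V1.5 (InternViT-6B + MLP + InternLM2-20B).
# Assumptions: 4-bit quantized LLM weights, fp16 vision tower; KV cache,
# activations, and runtime overhead are NOT included.

def weight_gib(params, bits_per_weight):
    """Approximate weight memory in GiB for a given parameter count."""
    return params * bits_per_weight / 8 / 1024**3

llm_gib = weight_gib(20e9, 4)   # InternLM2-20B at ~4 bits/weight
vit_gib = weight_gib(6e9, 16)   # InternViT-6B at fp16

total = llm_gib + vit_gib
print(f"LLM ~{llm_gib:.1f} GiB, ViT ~{vit_gib:.1f} GiB, weights total ~{total:.1f} GiB")
```

Weights alone land around 20 GiB; once you add KV cache, image-encoder activations, and CUDA overhead, a 24 GB A10 runs out, which matches the comment above.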
Is it possible to GGUF the weights to allow for multi GPU splitting or splitting layers between CPU RAM and VRAM? Adding support for InternVL 1.5 would also (probably) make it easier to support future versions when they come out.
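For context on the question above: this is exactly what GGUF plus llama.cpp's layer offloading is for. A hypothetical workflow, assuming llama.cpp gained an InternVL conversion path (it has not, which is the point of this issue); binary and script names vary between llama.cpp versions, so adjust to your build:

```shell
# 1. Convert the HF checkpoint to GGUF (would require InternVL support
#    being added to the conversion script first):
python convert_hf_to_gguf.py path/to/InternVL-Chat-V1-5 --outfile internvl-f16.gguf

# 2. Quantize the LLM weights to 4-bit:
./llama-quantize internvl-f16.gguf internvl-q4_k_m.gguf Q4_K_M

# 3. Run with partial GPU offload: -ngl sets how many layers go to VRAM,
#    the rest stay in CPU RAM, so a single 24 GB card is no longer a hard limit.
./llama-cli -m internvl-q4_k_m.gguf -ngl 30 -p "Describe this image."
```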
@cjpais Hello, may I ask what the progress of InternVL support is now? We are looking forward to using it on llama.cpp.
Hey I am quite busy with a few projects, it's on my list but just not very high priority at the moment. It's really only something I can do in my spare/free time
Thank you for your reply. Thank you for your hard work. Looking forward to your future work.
Which one would be better to focus: CogVLM or InternVL?
I wish there were more resources/interest in vision language models in the llama.cpp community. llama.cpp is the only hope for running newer vision language models on Apple Silicon. Especially since the flash-attention Python library is not available for Apple Silicon, you can't even run inference using Torch with MPS support. :(
Please, InternVL. In my tests it works better than CogVLM, especially for stuff like receipts and documents.
InternVL is quite good. Benchmarks, HF, Demo.
how about now? any update?
upvote for this
InternLM-XComposer-2.5-7b is out now and, having only tested the image capabilities, it seems great. HF, Demo.
This would be great!
Any status on this? This is currently the highest-performing vision LLM from users' tests on the LocalLLaMA subreddit.
Any updates?
Hey, I am quite busy with a few projects; it's on my list but not very high priority at the moment. It's really only something I can do in my spare time.
I tested the now-available InternVL2 model and it is indeed a great choice. I hope you can give it a higher priority; thank you for your hard work.
InternVL2 would be great to have! Seems to be SOTA in open source vision LLMs
Any thoughts on this? Since vision models vary a lot compared to LLMs, do the maintainers think llama.cpp should focus on supporting them? There are already a lot of LLM models coming out, and the core team is doing tremendous work on those already. Does the core team feel VLMs should be supported outside of the llama.cpp project? Maybe an addon/extension architecture is viable?
This would be a gamechanger! @cjpais
I'm sorry, I don't know when I can do this; I have a huge backlog of projects I'm currently working on! I am very curious to try it, but unfortunately it's not very high priority for me right now.
+1
I think model builders should contribute their vision model work here.
In an ideal situation, it's the model builders' work! But sadly, their work may not focus on on-device inference, or they have their own self-hosted serving framework, such as LMDeploy.
So I really hope a llama.cpp contributor can support this model; it is really good!
I think the devs could add their own branches to the llama.cpp repo or huggingface.co? Version 2.5 of InternVL also got released; I can give the conversion a try as a helper if needed.
I think model builders should contribute their vision model work here.
In an ideal situation, it's the model builders' work! But sadly, their work may not focus on on-device inference, or they have their own self-hosted serving framework, such as LMDeploy.
If they want to be popular and used by many, that would be the case.
LMDeploy is full of buffer-overflow crashes; not recommended for any secure deployment.