InternVL
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o-level performance.
Great work building InternVL! I'm looking to deploy its inference as an endpoint and wonder if anyone could help me with that. vLLM and TGI don't support it. What's your...
The finetune data for InternVL-Chat-V1.2 consisted of 1.2M open-source samples. Could you please specify what the 12M finetune samples for V1.2 Plus consist of?
If I have a machine with 8 V100s, is there a way to load [InternVL-Chat-Chinese-V1-2-Plus](https://huggingface.co/OpenGVLab/InternVL-Chat-Chinese-V1-2-Plus) for inference? It seems FlashAttention can't be installed correctly on V100.
Can the adapter be retrained to work with Llama-3-8B, which should perform much better than Vicuna-13B?
Would it be possible to enhance the detection capability of InternVL by incorporating more data combined with grounding instructions during the fine-tuning stage?
Is there a recommended deployment framework?
I've looked into several LLM deployment/inference frameworks, and none of them seem to support InternVL yet. Does the project team have a recommended framework?
Multi-GPU inference error
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:7!
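A common cause of this error when a checkpoint is sharded across GPUs is that the input tensors are left on a different device from the layer that consumes them. A minimal sketch of the usual fix, using a toy PyTorch model as a stand-in for InternVL (the CUDA device IDs in the error message are illustrative; the same pattern works on CPU):

```python
import torch
import torch.nn as nn

# Toy model standing in for a sharded InternVL checkpoint.
model = nn.Linear(8, 4)

# Move the inputs to the device of the model's first parameter before the
# forward pass, so every tensor in the computation lives on the same device.
inputs = torch.randn(2, 8)
inputs = inputs.to(next(model.parameters()).device)

out = model(inputs)
print(out.shape)  # torch.Size([2, 4])
```

With multi-GPU sharding, the embedding layer's device is the one the inputs must match; querying `next(model.parameters()).device` avoids hard-coding a device index.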
InternVL-Chat-V1.2-Plus: Based on the demo you provided, this is the best model of all the open-source ones I've tested (even those offering only a demo)! Could you provide quantized models? The model files total 80 GB, which ordinary GPUs can't run; if quantized to 4-bit or 2-bit GGUF files, the model could run on ordinary consumer GPUs. Moreover, the larger the model, the smaller the precision loss from quantization. I hope you can convert it for everyone's benefit.
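A back-of-the-envelope check on the sizes discussed above, assuming roughly 40B parameters (an assumption inferred from the ~80 GB fp16 checkpoint; real quantized files also carry some overhead for scales and any layers kept at higher precision):

```python
# Rough checkpoint-size estimate: parameters * bits per weight / 8.
PARAMS = 40e9  # assumed parameter count, inferred from the ~80 GB fp16 size

def size_gb(bits_per_weight: float) -> float:
    """Approximate checkpoint size in GB, ignoring quantization overhead."""
    return PARAMS * bits_per_weight / 8 / 1e9

print(f"fp16:  {size_gb(16):.0f} GB")  # 80 GB
print(f"4-bit: {size_gb(4):.0f} GB")   # 20 GB
print(f"2-bit: {size_gb(2):.0f} GB")   # 10 GB
```

So a 4-bit GGUF conversion would plausibly bring the weights down to around 20 GB, within reach of a single high-memory consumer GPU plus CPU offload.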
Thanks for your awesome work. [swift](https://github.com/modelscope/swift) now supports inference and training of the InternVL-Chat-V1.5 model. For more information, please refer to our documents: - [English](https://github.com/modelscope/swift/tree/main/docs/source_en/Multi-Modal/internvl-best-practice.md) - [中文](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/internvl%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5.md) For more questions, please raise...
