Zhe Chen
This isn't supported yet, because my Mac doesn't have enough memory for me to debug it 😂
Hello, for V100 you can refer to this answer: https://github.com/OpenGVLab/InternVL/issues/144
This is a good solution, thanks for your answer!
> If you go out for a meal and see the run has crashed, can you even finish eating? Hilarious haha, I'd head straight back and get back to work.
Hello, I expect that replacing the system message will not work well for this model, because a fixed system message was used during training, rather than training with...
Hi, see this guide for streaming output: https://internvl.readthedocs.io/en/latest/internvl2.0/quick_start.html#streaming-output
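The guide above uses a streamer object from transformers together with a background generation thread. As a minimal illustration of the same producer/consumer pattern using only the standard library (the `TokenStreamer` class and `fake_generate` function below are hypothetical stand-ins, not the actual transformers API):

```python
import queue
import threading

class TokenStreamer:
    """Queue-backed iterator: the generation thread puts tokens, the caller consumes them."""
    _END = object()  # sentinel marking end of generation

    def __init__(self):
        self._queue = queue.Queue()

    def put(self, token):
        self._queue.put(token)

    def end(self):
        self._queue.put(self._END)

    def __iter__(self):
        while True:
            item = self._queue.get()
            if item is self._END:
                return
            yield item

def fake_generate(prompt, streamer):
    # Hypothetical stand-in for model.generate(..., streamer=streamer):
    # emits decoded tokens one at a time, then signals completion.
    for token in ["Hello", ", ", "world", "!"]:
        streamer.put(token)
    streamer.end()

streamer = TokenStreamer()
thread = threading.Thread(target=fake_generate, args=("hi", streamer))
thread.start()

chunks = []
for token in streamer:  # tokens arrive as they are produced
    chunks.append(token)
thread.join()

response = "".join(chunks)
print(response)  # Hello, world!
```

The real implementation follows the same shape: `model.generate` runs in a thread and pushes tokens into the streamer, while the main thread iterates over it and prints each chunk as it arrives.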
You can now do multi-image inference with transformers by following this document: https://internvl.readthedocs.io/en/latest/internvl2.0/quick_start.html#inference-with-transformers
This model has 26B parameters in total, so it may be difficult to run on a CPU. As for flash_attn, you can turn it off by modifying the config.json in the model. Set L20...
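One way to apply that change programmatically is a small script that rewrites the field in config.json. The key name `use_flash_attn` is an assumption based on InternVL's released configs; check the actual file for the exact field (the temp-file setup below just stands in for a downloaded model directory):

```python
import json
import os
import tempfile

# Stand-in for the model directory you downloaded; replace with the real path.
model_dir = tempfile.mkdtemp()
config_path = os.path.join(model_dir, "config.json")

# Example config standing in for the model's real config.json.
with open(config_path, "w") as f:
    json.dump({"model_type": "internvl_chat", "use_flash_attn": True}, f)

# Load, flip the flag, and write the file back.
with open(config_path) as f:
    config = json.load(f)

config["use_flash_attn"] = False  # assumed key name; disable flash_attn for CPU inference

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)

with open(config_path) as f:
    print(json.load(f)["use_flash_attn"])  # False
```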
Hello, at the moment the code does not make the original save_best behavior compatible with deepspeed once deepspeed is enabled, so no best model will be saved.
This issue has been inactive for two months, so I am closing it. If you have any further questions or encounter any problems, please feel free to reopen it. Thank...