Yucheng Han
Hi Kaiyang, I think that with a slight modification the model could run on ImageNet when using more than one graphics card. For CoOp, just change the code on line 257...
I have to say that wrapping CustomCLIP in DataParallel does not save any GPU memory on each card...
> No. DataParallel won't help.
>
> The problem for ImageNet is that the 1,000 classes would create a huge memory consumption for the text encoder. So for smaller datasets...
Hmm, I am not sure whether you noticed, but I have indeed run experiments and verified the method's effectiveness... I tested it on 1-shot ImageNet with 4 graphics cards and found...
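The memory problem described above comes from pushing all 1,000 class prompts through the text encoder at once. One workaround (a sketch only, not proposed anywhere in this thread) is to encode the class prompts in chunks, so peak memory scales with the chunk size rather than the full class count. The encoder below is a dummy stand-in, not CLIP's actual text encoder:

```python
def encode_in_chunks(encoder, prompts, chunk_size=100):
    """Encode prompts chunk by chunk so that only `chunk_size` prompts
    are held by the encoder at once, instead of all 1,000 classes."""
    features = []
    for i in range(0, len(prompts), chunk_size):
        features.extend(encoder(prompts[i:i + chunk_size]))
    return features

# Dummy encoder standing in for CLIP's text encoder (assumption:
# a real encoder would return one feature vector per prompt).
dummy_encoder = lambda batch: [len(p) for p in batch]

prompts = [f"a photo of class {i}" for i in range(1000)]
feats = encode_in_chunks(dummy_encoder, prompts)
```

Chunking helps most at inference; during prompt tuning the activations for each chunk still need to be kept (or recomputed with gradient checkpointing) for the backward pass.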
> No. DataParallel won't help.
>
> The problem for ImageNet is that the 1,000 classes would create a huge memory consumption for the text encoder. So for smaller datasets...
Appalled to see this problem still not solved... Any plan on this? @SlongLiu
> hello? have you found the reasons?

I think the author tried to solve the problem with this commit: https://github.com/haotian-liu/LLaVA/commit/871bbd4a5248510156629e00ee5042b74f74764c In this commit, I found the previous solution to keep...
> I installed via Docker. After hitting this problem, I went into the GrouningDino directory and manually ran the install once: `python setup.py install`, and it succeeded.

It seems that `pip install -e .` does not work; only `python setup.py install` works.
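The workaround reported above can be summarized as the following command fragment (the directory name is whatever your GroundingDINO checkout is called; this is a sketch of the reported fix, not an officially documented install path):

```shell
# Editable install reportedly fails, so run the legacy install
# from inside the source directory instead of `pip install -e .`:
cd GroundingDINO
python setup.py install
```

Note that `setup.py install` builds the CUDA extension eagerly, which is likely why it succeeds where the editable install silently skips or defers the build.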
> https://github.com/haotian-liu/LLaVA/blob/main/docs/LoRA.md#launch-a-model-worker

Hi haotian, I have a similar question. 1. Is mm_proj tuned during the fine-tuning stage of LLaVA when using LoRA? (I guess the answer should be yes) 2....
> > https://github.com/haotian-liu/LLaVA/blob/main/docs/LoRA.md#launch-a-model-worker
>
> Hi haotian, I have a similar question.
>
> 1. Is mm_proj tuned during the fine-tuning stage of LLaVA when using LoRA? (I guess the...