Have you solved the problem? I have the same problem.
I ran into the same problem too.
> Have you found it? Same question here.
> I had this issue where finetune_task_lora.sh doesn't create mm_projector.bin, which also limited my use of the finetuned_lora model (I cannot merge it or use it for inference). I changed extract_mm_projector...
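
For anyone else missing mm_projector.bin after LoRA finetuning, here is a minimal sketch of what an extraction step could look like. The checkpoint path and the `"mm_projector"` key filter are assumptions, not the quoted author's exact change:

```python
# Sketch (assumed paths/keys): pull mm_projector.* weights out of a saved
# checkpoint and write them to mm_projector.bin so the LoRA-finetuned model
# can later be merged or used for inference.
import torch

ckpt_path = "checkpoints/finetuned_lora/non_lora_trainables.bin"  # hypothetical path
state = torch.load(ckpt_path, map_location="cpu")

# Keep only the multimodal projector weights.
projector = {k: v for k, v in state.items() if "mm_projector" in k}
if not projector:
    raise RuntimeError("no mm_projector.* keys found in the checkpoint")

torch.save(projector, "checkpoints/finetuned_lora/mm_projector.bin")
print(f"saved {len(projector)} tensors to mm_projector.bin")
```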
How did you download the dataset coco/coco_dataset/val2014?
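
In case it helps, a minimal sketch for fetching val2014 into that directory. The URL below is the standard COCO images download and the target directory is taken from the question; adjust both if your setup differs:

```python
# Sketch: download and unpack COCO val2014 images into coco/coco_dataset/.
import os
import urllib.request
import zipfile

url = "http://images.cocodataset.org/zips/val2014.zip"  # standard COCO mirror
target_dir = "coco/coco_dataset"
os.makedirs(target_dir, exist_ok=True)

zip_path = os.path.join(target_dir, "val2014.zip")
if not os.path.exists(zip_path):
    urllib.request.urlretrieve(url, zip_path)  # large download (several GB)

with zipfile.ZipFile(zip_path) as zf:
    zf.extractall(target_dir)  # creates coco/coco_dataset/val2014/
```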
Have you solved it?
> set "CUDA_DEVICE_MAX_CONNECTIONS" to 32 maybe you need in environment. pls have a try @yguo33 @gonggaohan @tginart RuntimeError: Using sequence parallelism requires setting the environment variable CUDA_DEVICE_MAX_CONNECTIONS to 1, when...
I have the same question. Have you resolved it?