1SingleFeng
> I watched the livestream replay; the content still covers the pure technical implementation, i.e. principles similar to LoRA. Could you put a fine-tuning application example in the README, or provide a reference?

Is there a replay of the livestream?
> anyone has any comments, i think this should be a torch & AimetCommon version compatibility issue

Could you please tell me how to check the corresponding torch version of...
Hello, from my understanding this project requires a GPU compute capability of 8.0 or higher, but the RTX 2080 Ti is only 7.5 (see https://developer.nvidia.com/cuda-gpus). You can use AutoAWQ (https://github.com/casper-hansen/AutoAWQ), which...
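A quick way to verify whether a local GPU meets the 8.0 threshold mentioned above (a minimal sketch; `torch.cuda.get_device_capability()` is the standard PyTorch call for reading the numbers, while the helper name here is my own):

```python
def meets_min_capability(major, minor, required=(8, 0)):
    """Return True if a device's compute capability (major, minor) is at least `required`."""
    return (major, minor) >= required

# On a machine with PyTorch and a CUDA GPU, obtain the two numbers via:
#   import torch
#   major, minor = torch.cuda.get_device_capability(0)
print(meets_min_capability(7, 5))  # RTX 2080 Ti -> False
print(meets_min_capability(8, 0))  # e.g. A100  -> True
```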
> According to model.safetensors.index.json, the model weights are split across model-00001-of-00004.safetensors, model-00002-of-00004.safetensors, model-00003-of-00004.safetensors, and model-00004-of-00004.safetensors, but in fact there are only three weight files; model-00001-of-00004.safetensors is missing.

Hello, can ZeRO-3 run multi-node multi-GPU?
> Another question, can you guys (i mean authors) share the quantize scripts? we need the script after sft this model.
> https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune#lora-finetuning This doesn't run on multiple GPUs; it fails with: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:6! (when checking argument for argument src in method wrapper_CUDA_scatter__src)...
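A common first step for the device-mismatch error quoted above is to pin the job to a single GPU so every tensor lands on the same device. This is a sketch, not a confirmed fix for this issue, and the commented-out launch command is illustrative:

```shell
# The scatter error means some tensors sit on cuda:0 and others on cuda:6.
# Restricting visibility makes only one GPU available, and PyTorch renumbers
# it to cuda:0 inside the process.
export CUDA_VISIBLE_DEVICES=0
# bash finetune/finetune_lora.sh   # substitute your real launch command
echo "visible GPUs: $CUDA_VISIBLE_DEVICES"
```

If single-GPU runs succeed, the error points at the multi-GPU launch configuration (device mapping) rather than the training code itself.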
@todaydeath Got it. In that case I'm not sure; I haven't worked with that side of things yet.