!torchrun train.py --config configs/demo.yaml may perform faster
Hello authors,
Thanks so much for sharing this code. It is very useful for fine-tuning SAM on downstream tasks : )
I reduced the dataset size, adapted the code, and ran it in Google Colab on a single A100 GPU. I found that using torchrun is faster and more convenient than the original command, as shown below. https://pytorch.org/docs/stable/elastic/run.html
!torchrun train.py --config configs/demo.yaml
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nnodes 1 --nproc_per_node 4 loadddptrain.py --config configs/demo.yaml
Besides, loadddptrain.py seems to be missing from the GitHub repository.
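In case it helps others, here is a minimal sketch of the DDP boilerplate a torchrun-launched script needs. The toy Linear model and random TensorDataset are placeholders for illustration, not the repository's actual model or data loading:

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # torchrun exports LOCAL_RANK / RANK / WORLD_SIZE for every worker process.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    # Toy model and data; in the real script these would be SAM-Adapter
    # and the dataset built from the config file.
    model = torch.nn.Linear(16, 1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    dataset = TensorDataset(torch.randn(64, 16), torch.randn(64, 1))
    sampler = DistributedSampler(dataset)  # shards the data across workers
    loader = DataLoader(dataset, batch_size=8, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(2):
        sampler.set_epoch(epoch)  # get a different shuffle each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched with torchrun, each GPU gets its own worker process, and the DistributedSampler makes sure the workers see disjoint shards of the data.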
Thanks so much again. Kind regards.
Thanks for your comments!
Hello YunyaGaoTree,
Thank you very much for your contribution. Could you share the contents of your loadddptrain.py file? I am currently unable to get single-machine multi-GPU training working.
Thank you again!
Where is loadddptrain.py?
Thank you very much for your reply about the solution. We have also reproduced the relevant code ourselves and completed distributed training. Thanks again.
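For anyone else stuck on single-machine multi-GPU training: once train.py contains standard DDP setup (as sketched above), torchrun replaces the deprecated python -m torch.distributed.launch entry point. The launch looks something like this (illustrative; adjust the GPU list and process count to your machine):

CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nnodes 1 --nproc_per_node 4 train.py --config configs/demo.yaml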
Thanks, but it seems that you haven't updated train.py?
Sorry for the late reply; so glad to hear that you solved the problem!
May I ask if you have a copy of loadddptrain.py?
Hello, thank you for sharing this code! I noticed that the loadddptrain.py file seems to be missing, and I'm unable to run SAM2Adapter without it. Could you kindly provide this file?