Thanks for sharing
Hey @yoctta, thanks for the pretrained models. I'd like to know: what is ckpt_35.pth?
Please check the update: you should pretrain the backbone first, then initialize the multi-attention model with that pretrained checkpoint.
I've updated main.py to clarify it. ckpt_35.pth should be a pretrained backbone checkpoint.
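For anyone confused about the "init from a pretrained backbone" step, here is a minimal sketch of the idea. The key layout (a `"state_dict"` field, a `"backbone."` prefix in the full model) is an assumption for illustration, not the repo's actual checkpoint format — check main.py for the real loading code.

```python
# Hypothetical sketch: copy a pretrained backbone's weights into the entries of
# the full multi-attention model that belong to the backbone. The "state_dict"
# field and the "backbone." prefix are assumed names, not the repo's real ones.

def init_from_backbone(model_state, backbone_ckpt):
    """Return a copy of model_state with matching backbone entries overwritten
    by the pretrained values; non-backbone entries are left untouched."""
    pretrained = backbone_ckpt.get("state_dict", backbone_ckpt)
    updated = dict(model_state)
    for key, value in pretrained.items():
        full_key = "backbone." + key  # assumed naming convention
        if full_key in updated:
            updated[full_key] = value
    return updated

# Toy example with plain dicts standing in for tensors:
model = {"backbone.conv1": 0, "attention.head": 0}
ckpt = {"state_dict": {"conv1": 42, "fc": 7}}
print(init_from_backbone(model, ckpt))
# -> {'backbone.conv1': 42, 'attention.head': 0}
```

In real PyTorch code the same effect is usually achieved by filtering the checkpoint's state dict and calling `model.load_state_dict(filtered, strict=False)`.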
Thanks for your reply. I only have one GPU, but this is a distributed version. May I set gpu_ids to '0'? Is that enough?
You can try running it. I think the code should also work with a single GPU, but the limited batch size may be an issue.
OK, I'll give it a try. Thank you!