torchfold
Can it be used with torch's DataParallel to train a model on multiple GPUs?
I want to know how to use it with DataParallel. What should I change in my code when I build my model across several GPUs?
Have you solved this problem? I'm facing the same issue... I'm wondering if you could tell me how you dealt with it?
Hey, looks like we're compatriots? I did solve this problem at the time. Not sure whether you've solved it yet; if you need it, I can dig up my solution for you when I have time.
@guobaisong Haha, compatriots indeed. What I'm doing for now is wrapping all the networks I use in nn.DataParallel(). It runs without errors, but I'm not sure whether it's actually correct. If it's convenient, I may have to trouble you to dig up your solution when you have time. Thanks a lot.
Yes, you can use it directly like that, no problem~ I believe that's how I did it back then too.
@guobaisong Okay, great~ Thanks!
Hahaha, no problem. Back then it took me a long time to figure out how to train trees of different shapes.