CUDA out of memory.
Is it very demanding on the GPU? I used an RTX 3090 to run the code, but it reports a memory overflow.
The default parameters were designed for a V100 GPU. Try reducing the batch size and it should work.
Thank you for your reply. I tried reducing the batch size from 64 to 16, but the problem still exists. What parameters can be modified to make it work? The problem is shown in the figure above.
It seems that you failed to upload your figure. I tried the code on a GPU with 12 GB of memory by reducing the batch size to 16, and the model trained successfully. Probably something else is going wrong.
Thank you for your letter. Could you please send me the source code and the training steps? I want to try again.
Well, so it seems to be impossible for a 6GB GPU to run this BertGCN model. :(
This implementation sends the full graph into the GCN during training, since its memory cost is negligible on a 32GB GPU. On a smaller GPU, graph sampling methods might be needed. Ideally, the memory cost should then be close to that of training the corresponding BERT model alone.
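To illustrate the graph-sampling idea (this is a hypothetical sketch, not the repository's actual code): instead of feeding the full graph to the GCN each step, sample a subset of nodes and train on the induced subgraph, so only that subgraph's activations need to fit in GPU memory. The function name and adjacency representation below are assumptions for illustration.

```python
import random


def sample_subgraph(adj, num_nodes, sample_size, seed=None):
    """Sample `sample_size` nodes uniformly and return the induced subgraph.

    adj: dict mapping node id -> set of neighbour ids (full graph).
    Returns (nodes, sub_adj), where sub_adj keeps only edges whose
    endpoints were both sampled. Hypothetical sketch, not BertGCN code.
    """
    rng = random.Random(seed)
    nodes = sorted(rng.sample(range(num_nodes), sample_size))
    keep = set(nodes)
    # Restrict each sampled node's neighbour set to other sampled nodes.
    sub_adj = {n: adj.get(n, set()) & keep for n in nodes}
    return nodes, sub_adj


# Toy graph: a 4-cycle 0-1-2-3-0; sample a 2-node induced subgraph.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
nodes, sub = sample_subgraph(adj, num_nodes=4, sample_size=2, seed=0)
```

In practice a GNN library's neighbour sampler would do this per mini-batch; the point is only that GPU memory then scales with the sampled subgraph rather than the full document graph.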