IGNN
Question about NaN loss during training
Hello, I used your graph construction and graph aggregation modules to train on my own data. During training, I found that increasing the size of the input data causes the loss to become NaN. If I then set the batch size to 1, the loss returns to normal. I can guarantee that my input data is correct and contains no NaN values. What could cause this? Could backpropagation through the imtopatch operation lead to exploding gradients? Is there a solution?
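Not part of the original report, but a minimal sketch of how one might narrow this down, assuming a standard PyTorch training loop (the two-layer model below is a placeholder, not the actual IGNN network): enable autograd anomaly detection to locate the operation that first produces a NaN in the backward pass, check parameter gradients for non-finite values after backward(), and clip the gradient norm before stepping in case the suspicion about exploding gradients is correct.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for IGNN; substitute the real network.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.L1Loss()

# Anomaly detection reports which op first produced NaN in backward.
# It is slow, so enable it only while debugging.
torch.autograd.set_detect_anomaly(True)

for step in range(10):
    x = torch.randn(4, 3, 64, 64)       # dummy batch; replace with real data
    target = torch.randn(4, 3, 64, 64)

    optimizer.zero_grad()
    loss = criterion(model(x), target)
    loss.backward()

    # Flag any parameter whose gradient went NaN/Inf before stepping.
    for name, p in model.named_parameters():
        if p.grad is not None and not torch.isfinite(p.grad).all():
            print(f"step {step}: non-finite gradient in {name}")

    # Bound the gradient norm in case backprop through the patch ops explodes.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```

If clipping makes the NaN disappear at larger batch sizes, that would support the exploding-gradient hypothesis; if anomaly detection points at a specific op instead, that op is where to look next.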