vnet.pytorch

dice loss, backward problem

Open jasw1001 opened this issue 7 years ago • 5 comments

hi there, we ran the program and successfully trained a model. but when we took a closer look at the code, we found that the program runs loss.backward using PyTorch autograd instead of the backward written in the dice_loss class.

so we are wondering why a backward function was written at all, and how to call the manually written backward function?
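For reference, a hand-written backward is only invoked if the forward pass goes through a `torch.autograd.Function`; otherwise autograd differentiates the forward ops itself. A minimal hypothetical sketch (using the current static-method API, not the project's actual `dice_loss` class):

```python
import torch

# Hypothetical example: a custom autograd Function whose backward IS
# called by loss.backward() -- but only because the forward pass goes
# through MyLoss.apply(...).
class MyLoss(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, target):
        ctx.save_for_backward(input, target)
        return ((input - target) ** 2).sum()

    @staticmethod
    def backward(ctx, grad_output):
        input, target = ctx.saved_tensors
        grad_input = 2 * (input - target) * grad_output
        return grad_input, None  # no gradient w.r.t. target

x = torch.randn(4, requires_grad=True)
t = torch.zeros(4)
loss = MyLoss.apply(x, t)  # apply() registers the custom backward
loss.backward()            # this invokes MyLoss.backward
```

If the training script builds the loss from ordinary tensor ops instead of calling the Function, the custom backward is simply never reached.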

jasw1001 avatar Aug 23 '17 07:08 jasw1001

Hi @jasw1001. Did you find any solution for that? It seems it goes into the customized backward function, but I can't debug it. My main problem is that I can't use multi-GPU with this backward function (it exits without any specific error). Do you have any idea how to use torch.autograd and skip this backward function?

hfarhidzadeh avatar Jan 10 '18 16:01 hfarhidzadeh

It seems that torch.autograd cannot be used for this Dice loss function, because some operations in the Dice loss function are not supported by autograd. As for the debugging problem, it's a bug in PyTorch, so you may use 'ipdb' or 'pdb' to debug it.
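One way around this is to rewrite the Dice loss using only differentiable tensor ops, so plain autograd handles the backward pass and no hand-written backward is needed. A hedged sketch (names `dice_loss`, `logits`, `target`, and `eps` are illustrative, not from the repo):

```python
import torch

# Sketch: Dice loss built only from autograd-supported tensor ops.
# Assumes `probs` are sigmoid/softmax outputs and `target` is a
# same-shaped binary mask; `eps` guards against division by zero.
def dice_loss(probs, target, eps=1e-6):
    probs = probs.reshape(-1)
    target = target.reshape(-1)
    intersection = (probs * target).sum()
    union = probs.sum() + target.sum()
    return 1.0 - (2.0 * intersection + eps) / (union + eps)

logits = torch.randn(2, 1, 8, 8, requires_grad=True)
target = (torch.rand(2, 1, 8, 8) > 0.5).float()
loss = dice_loss(torch.sigmoid(logits), target)
loss.backward()  # works: every op above is differentiable
```

A loss written this way also plays well with `DataParallel`, since there is no custom backward to replicate across GPUs.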

jasw1001 avatar Jan 11 '18 01:01 jasw1001

Thanks @jasw1001. Have you tried implementing your own version of the loss function? I used 'pdb', but it got stuck on the line after `import pdb; pdb.set_trace()` and never stepped through the following lines. It is very strange to me.

hfarhidzadeh avatar Jan 11 '18 14:01 hfarhidzadeh

Hi @jasw1001 @CSMEDEEP, could you give the steps for how to start the training? The code uses some preprocessed files from the original Luna16 dataset. How do I run the preprocessing?

abhiML avatar Mar 26 '18 08:03 abhiML

@abhiML hello~ I really want to know how to preprocess the LUNA16 dataset to get the files "normalized_brightened_CT_2_5", etc. Have you figured it out? thanks a lot~

491506870 avatar Oct 08 '18 13:10 491506870