
NaN errors during canal_pretrain process

Open puppy2000 opened this issue 2 years ago • 2 comments

Sorry for bothering you. I get NaN errors during the canal_pretrain process (screenshot attached). From another issue, https://github.com/AImageLab-zip/alveolar_canal/issues/7, I found that my predictions become NaN during training. Could you help me find the problem?

puppy2000 avatar Jun 27 '23 08:06 puppy2000

Hi @puppy2000, I'm sorry to hear that you have run into trouble during network training.

The issue you linked may have a different cause, as it uses the DiceLoss instead of the JaccardLoss.

When NaNs appear, it can be challenging to identify the specific operation that produced them. One approach is to debug all the operations performed before the NaNs first occur.
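
One way to do this kind of debugging, sketched here as a suggestion rather than something from the original reply, is PyTorch's built-in anomaly detection:

```python
import torch

# Enable anomaly detection for the training run: when the backward pass hits a
# NaN/Inf gradient, PyTorch raises an error whose traceback points at the
# forward operation that produced it. It slows training down noticeably,
# so keep it only for debugging sessions.
torch.autograd.set_detect_anomaly(True)
```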

Even if an epoch has been executed successfully, we cannot rule out the possibility that the NaNs stem from the generated data, because random patches are extracted from the original volume. Please double-check that both preds and gt do not contain any NaNs before the self.loss() call.
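
A minimal sketch of such a check, assuming the names preds, gt, and self.loss from the thread and that it sits inside the existing training method:

```python
import torch

# Fail fast if the network output or the generated labels already contain
# NaN/Inf values before the loss is computed.
if torch.isnan(preds).any() or torch.isinf(preds).any():
    raise RuntimeError("NaN/Inf detected in preds before self.loss()")
if torch.isnan(gt).any() or torch.isinf(gt).any():
    raise RuntimeError("NaN/Inf detected in gt before self.loss()")

loss = self.loss(preds, gt)
```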

Upon examining the JaccardLoss code, I noticed that I'm using eps = 1e-6 to prevent NaNs in the division. While this works fine in float32, it may cause issues in float16, where 1 - 1e-6 == 1.
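
A quick demonstration of the precision point (not from the original reply): the machine epsilon of float16 is roughly 1e-3, so subtracting 1e-6 from 1 rounds back to 1, while float32 still keeps the difference.

```python
import torch

eps = 1e-6
one_fp16 = torch.tensor(1.0, dtype=torch.float16)
one_fp32 = torch.tensor(1.0, dtype=torch.float32)

print(one_fp16 - eps == 1.0)  # tensor(True): eps is below float16 resolution
print(one_fp32 - eps == 1.0)  # tensor(False): float32 still resolves 1 - 1e-6
```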

I will try to re-run the entire pipeline myself as soon as possible. If you come across any new developments or findings, please let me know.

LucaLumetti avatar Jun 27 '23 09:06 LucaLumetti

Hi, I double-checked the code and set batch_size = 1 to debug. This time no NaN errors occurred, and the network seems to train correctly, as you can see in the attached screenshot. So I wonder whether it happens because random patches are extracted from the original volume, and under DataParallel some bad samples cause NaN errors. Maybe it is caused by badly generated labels, since I noticed that some of the generated labels are not very good. I will check the code further.
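
A hedged sketch of how the generated labels could be scanned for bad values before training; the helper scan_for_bad_samples and the (image, label) loader layout are hypothetical, not taken from the repository:

```python
import torch

def scan_for_bad_samples(loader):
    """Iterate over a loader once and report samples containing NaN/Inf values."""
    bad = []
    for idx, (image, label) in enumerate(loader):
        if torch.isnan(image).any() or torch.isinf(image).any():
            bad.append((idx, "image"))
        if torch.isnan(label).any() or torch.isinf(label).any():
            bad.append((idx, "label"))
    return bad

# Example usage, assuming train_loader yields (image, label) pairs:
# print(scan_for_bad_samples(train_loader))
```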

puppy2000 avatar Jun 28 '23 02:06 puppy2000