
Fails to train on mini-ImageNet

Open Germany321 opened this issue 2 years ago • 5 comments

I use the EDL loss to train on the mini-ImageNet dataset with 64 classes, but the loss doesn't converge and the accuracy is very low.

Germany321 avatar Feb 09 '22 12:02 Germany321
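
For context on the loss snippet discussed below: in evidential deep learning the classifier's raw outputs are usually mapped to non-negative evidence, and the Dirichlet parameters are alpha = evidence + 1, which is what the EDL loss consumes. A minimal sketch of that step for a 64-class setup (the resnet18 backbone, input size, and relu_evidence name here are illustrative assumptions, not code from this repo):

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Hypothetical 64-class classifier; any backbone can play this role.
model = resnet18(num_classes=64)

def relu_evidence(logits):
    # Non-negative evidence per class; softplus or clamped exp are common alternatives.
    return F.relu(logits)

images = torch.randn(8, 3, 224, 224)   # dummy batch
logits = model(images)                  # shape (8, 64)
evidence = relu_evidence(logits)
alpha = evidence + 1                    # Dirichlet parameters fed to the EDL loss
S = alpha.sum(dim=1, keepdim=True)
uncertainty = 64.0 / S                  # u = K / S in the subjective-logic formulation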

I think you can modify the annealing coefficient:

def edl_loss(self, func, y, alpha, annealing_step, device="cuda"):
    # func is torch.log or torch.digamma; y holds integer class labels.
    y = self.one_hot_embedding(y)
    y = y.to(device)
    alpha = alpha.to(device)
    S = torch.sum(alpha, dim=1, keepdim=True)

    # Bayes-risk term: expected negative log-likelihood under Dir(alpha).
    A = torch.sum(y * (func(S) - func(alpha)), dim=1, keepdim=True)

    # Original annealing schedule, which ramps the KL weight up to 1.0:
    # annealing_coef = torch.min(
    #     torch.tensor(1.0, dtype=torch.float32),
    #     torch.tensor(self.epoch / annealing_step, dtype=torch.float32),
    # )
    # Replaced with a small fixed weight so the KL term cannot dominate.
    annealing_coef = 0.1

    # KL regularizer on the evidence assigned to the wrong classes.
    kl_alpha = (alpha - 1) * (1 - y) + 1
    kl_div = annealing_coef * self.kl_divergence(kl_alpha, device=device)

    return A + kl_div

Setting annealing_coef to 0.1 or lower should work; do not set it to 1, which is too large.

RuoyuChen10 avatar Apr 01 '22 06:04 RuoyuChen10
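
The snippet above calls self.kl_divergence, which is not shown in this thread. In EDL implementations this is normally the KL divergence between Dir(alpha~) and the uniform Dirichlet Dir(1, ..., 1); a standalone sketch of that term (the explicit num_classes argument is an assumption, since the method version presumably reads it from the class):

import torch

def kl_divergence(alpha, num_classes, device="cuda"):
    # KL( Dir(alpha) || Dir(1, ..., 1) ), computed per sample.
    ones = torch.ones((1, num_classes), dtype=torch.float32, device=device)
    sum_alpha = torch.sum(alpha, dim=1, keepdim=True)
    # Log-ratio of the Dirichlet normalizing constants.
    first_term = (
        torch.lgamma(sum_alpha)
        - torch.lgamma(alpha).sum(dim=1, keepdim=True)
        + torch.lgamma(ones).sum(dim=1, keepdim=True)
        - torch.lgamma(ones.sum(dim=1, keepdim=True))
    )
    # Expectation of the log-density difference under Dir(alpha).
    second_term = (
        (alpha - ones)
        .mul(torch.digamma(alpha) - torch.digamma(sum_alpha))
        .sum(dim=1, keepdim=True)
    )
    return first_term + second_term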

Hi, thank you for your answer. I tried setting annealing_coef to 0.1 and 0.05, but it still doesn't work. Did you manage to get it working successfully?

xuhuali-mxj avatar Apr 23 '22 14:04 xuhuali-mxj

I haven't tried this repo, but I have tried to train a face recognition network with 8631 identities. With ResNet-100 it works and reaches the same accuracy on the in-distribution dataset as the softmax training method, but ResNet-50 can't converge. Another thing: we consistently find that the KL loss damages accuracy, so try decreasing its coefficient or simply removing it.

RuoyuChen10 avatar Apr 23 '22 16:04 RuoyuChen10
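
Following that suggestion, one way to experiment is to make the KL contribution an explicit, optional weight instead of an annealed schedule. A minimal standalone sketch (function and argument names such as edl_loss_no_kl and kl_weight are assumptions; it reuses the kl_divergence sketch above):

import torch

def edl_loss_no_kl(func, y_one_hot, alpha, kl_weight=0.0, device="cuda"):
    # func is torch.log (log variant) or torch.digamma (digamma variant);
    # y_one_hot has the same shape as alpha.
    y_one_hot = y_one_hot.to(device)
    alpha = alpha.to(device)
    S = torch.sum(alpha, dim=1, keepdim=True)

    # Bayes-risk term only (the "A" term in the snippet above).
    risk = torch.sum(y_one_hot * (func(S) - func(alpha)), dim=1, keepdim=True)
    if kl_weight == 0.0:
        return risk.mean()  # KL regularizer removed entirely

    # Otherwise apply a small, fixed weight to the wrong-class evidence penalty.
    kl_alpha = (alpha - 1) * (1 - y_one_hot) + 1
    kl = kl_divergence(kl_alpha, num_classes=alpha.shape[1], device=device)
    return (risk + kl_weight * kl).mean()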

Thank you so much. It still doesn't work. I think I may need to fine-tune other hyperparameters.

xuhuali-mxj avatar May 01 '22 07:05 xuhuali-mxj

Maybe you can refer to https://github.com/RuoyuChen10/FaceTechnologyTool/blob/master/FaceRecognition/evidential_learning.py; I have tried this on face recognition. I also failed at first. I conclude it mainly comes down to:

  1. Remove the KL loss.
  2. The learning rate is important.
  3. The depth of the network.

The learning rate and network depth seem to have much less influence when training with softmax and cross-entropy loss (a rough sketch combining these adjustments follows below).

RuoyuChen10 avatar May 01 '22 07:05 RuoyuChen10
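
A minimal sketch of how those three points might translate into a training setup: the KL term dropped, a conservative learning rate with step decay, and a deeper backbone. All concrete values (backbone choice, learning rate, milestones) are illustrative assumptions, not settings taken from the linked file:

import torch
import torch.nn.functional as F
from torchvision.models import resnet50

num_classes = 64
model = resnet50(num_classes=num_classes).cuda()   # deeper backbone than a small ResNet
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
# Step the scheduler once per epoch in the outer training loop.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60], gamma=0.1)

def train_step(images, labels):
    logits = model(images.cuda())
    evidence = F.relu(logits)
    alpha = evidence + 1
    y = F.one_hot(labels.cuda(), num_classes).float()
    S = alpha.sum(dim=1, keepdim=True)
    # Bayes-risk (log) term only; the KL regularizer is omitted, per point 1.
    loss = torch.sum(y * (torch.log(S) - torch.log(alpha)), dim=1, keepdim=True).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()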