learning-to-rank
                        loss function compute problem
Hi, when I read your code, I got confused here, in the ListNet loss function:
def get_loss(self, x_t, y_t):
        # ---- start loss calculation ----
        ...
        # target top-one probabilities from the relevance labels
        p_true = F.softmax(F.reshape(y_t, (y_t.shape[0], y_t.shape[1])))
        xm = F.max(pred, axis=1, keepdims=True)
        logsumexp = F.logsumexp(pred, axis=1)
        logsumexp = F.broadcast_to(logsumexp, (xm.shape[0], pred.shape[1]))
        loss = -1 * F.sum(p_true * (pred - logsumexp))
        ...
Here you use p_true * (pred - logsumexp). Why do you compute it like this? You are not exactly following the paper's loss formula, right?
But why do you calculate the loss function in this way? Does it work well?
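For what it's worth, a minimal NumPy check (with made-up scores standing in for `pred` and labels for `y_t`) suggests the two forms agree: `pred - logsumexp(pred)` is just log-softmax of `pred`, so the code's loss should equal the paper's cross entropy `-sum(p_true * log softmax(pred))`, only computed in a numerically stabler way:

```python
import numpy as np

# Hypothetical scores and labels: 2 queries x 4 documents each
pred = np.array([[2.0, 1.0, 0.5, -1.0],
                 [0.3, 0.3, 2.2, 0.1]])
y_t = np.array([[3.0, 1.0, 0.0, 0.0],
                [0.0, 1.0, 2.0, 0.0]])

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# target top-one probabilities, as in the repo
p_true = softmax(y_t)

# repo's version: logsumexp via the max trick, then pred - logsumexp
xm = pred.max(axis=1, keepdims=True)
logsumexp = xm + np.log(np.exp(pred - xm).sum(axis=1, keepdims=True))
loss_code = -np.sum(p_true * (pred - logsumexp))

# literal paper formula: cross entropy against log of softmax(pred)
loss_paper = -np.sum(p_true * np.log(softmax(pred)))

assert np.allclose(loss_code, loss_paper)
```

So if this is right, the code is the same loss, just avoiding a separate `log(softmax(...))` that can underflow for large score gaps.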