NLNL-Negative-Learning-for-Noisy-Labels
How to select the samples after NL?
After NL, the next step is SelNL, but I have a problem with the condition "py > 1/c". As I understand it, the ideal outcome of NL is that the network outputs a low probability for the complementary label. If we then select samples whose output probability exceeds 1/c, wouldn't we be selecting exactly the data that NL failed to separate? So what exactly does 'py' mean here? What confidence does 'py' represent?
I guess y is the noisy label from the training set, not the complementary label.
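Under that reading (y = the given noisy training label, c = number of classes), the SelNL selection step can be sketched as follows. The function name `selnl_filter` and the toy arrays are illustrative assumptions, not taken from the paper's code:

```python
import numpy as np

def selnl_filter(probs, noisy_labels, num_classes):
    """Keep samples whose predicted probability for their own
    (possibly noisy) training label y exceeds the uniform
    threshold 1/c, i.e. samples the network is at least
    better-than-chance confident about."""
    # gather p_y for each sample: probs[i, y_i]
    p_y = probs[np.arange(len(noisy_labels)), noisy_labels]
    return p_y > 1.0 / num_classes

# toy example: 3 samples, 4 classes (so 1/c = 0.25)
probs = np.array([
    [0.7, 0.1, 0.1, 0.1],       # confident on its label 0 -> kept
    [0.25, 0.25, 0.25, 0.25],   # uniform, p_y == 1/c     -> dropped
    [0.1, 0.6, 0.2, 0.1],       # confident on its label 1 -> kept
])
noisy_labels = np.array([0, 1, 1])
mask = selnl_filter(probs, noisy_labels, num_classes=4)
print(mask.tolist())  # [True, False, True]
```

With this interpretation the condition makes sense: samples whose p_y stays at or below chance level are likely mislabeled, so SelNL trains further only on the confident subset.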