
About OHEM

IssacCyj opened this issue 7 years ago • 4 comments

loss_c = log_sum_exp(batch_conf) - batch_conf.gather(1, conf_t.view(-1, 1))

I'm confused by this line of code. Why not use conf_data to select hard examples? What does this loss_c stand for?

IssacCyj avatar Feb 05 '18 12:02 IssacCyj

@IssacCyj I'm confused by this line of code too. Have you figured out why loss_c is used here instead of conf_data?

YangQun1 avatar Nov 24 '18 12:11 YangQun1

I'm confused about this process, too. Maybe someone can explain this.

JingyuanHu avatar Apr 18 '19 08:04 JingyuanHu

I'm also confused.

I don't understand why the author subtracts the term below: - batch_conf.gather(1, conf_t.view(-1, 1))

DonghoonPark12 avatar Jun 17 '19 10:06 DonghoonPark12

loss_c = log_sum_exp(batch_conf) - batch_conf.gather(1, conf_t.view(-1, 1))

I guess the author treats the log_sum_exp part as an approximation to the maximum function, which represents the predicted confidence, while batch_conf.gather(1, conf_t.view(-1, 1)) just picks out the confidence of the target class.
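If that reading is right, there is also an exact way to see it: for one prior with score vector x and matched label c,

log_sum_exp(x) - x[c] = log(sum_j exp(x_j)) - x[c] = -log( exp(x[c]) / sum_j exp(x_j) )

which is the negative log of the softmax probability of the target class, i.e. the per-prior cross-entropy loss. Hard negative mining then ranks the negative priors by this loss and keeps only the highest-loss ones.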

It's just my guess, but you could replace it with torch.nn.functional.cross_entropy(batch_conf, conf_t.view(-1), reduction='none') or the like, although the resulting ranking might be slightly different.
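Here is a minimal sketch to check the equivalence (toy tensors, not the repo's actual code; I use torch.logsumexp in place of the repo's log_sum_exp helper, which computes the same quantity via the max-subtraction trick):

import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_priors, num_classes = 8, 21  # toy sizes; 21 = 20 VOC classes + background
batch_conf = torch.randn(num_priors, num_classes)
conf_t = torch.randint(0, num_classes, (num_priors,))

# The hand-written form quoted above
loss_c = torch.logsumexp(batch_conf, dim=1, keepdim=True) - batch_conf.gather(1, conf_t.view(-1, 1))

# The built-in per-element cross-entropy
loss_ce = F.cross_entropy(batch_conf, conf_t, reduction='none')

print(torch.allclose(loss_c.view(-1), loss_ce))  # True (up to floating point)

So the two agree up to floating-point error, and any difference in the final ranking of hard negatives should be negligible.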

lujiazho avatar Jul 16 '22 21:07 lujiazho