
Loss function

cw314 opened this issue 1 year ago • 3 comments

Why does the loss function increase during training? [image attached]

cw314 avatar Sep 03 '24 08:09 cw314

Hello, I've run into a similar problem: the loss never comes down, and the evaluation metrics are poor. Could we discuss it?

zw-92 avatar Sep 27 '24 02:09 zw-92

Hello, in my experiments the loss did go down after I added some data augmentation, but the matching quality got worse. By the way, which evaluation metric code are you using? Could you share it so I can try it out?
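
For reference, here is a minimal sketch of one common way to measure matching quality (mean matching accuracy under a known ground-truth homography). This is not the thread authors' evaluation code; the helper name `mean_matching_accuracy` and the pixel threshold are illustrative assumptions.

```python
# Hypothetical helper, not the evaluation code used in this thread.
# Mutual-nearest-neighbour match the descriptors, warp the keypoints of
# image A into image B with the ground-truth homography H, and count the
# matches that land within `thr` pixels.
import numpy as np

def mean_matching_accuracy(kpts_a, desc_a, kpts_b, desc_b, H, thr=3.0):
    # Pairwise L2 distances between descriptors of A and B.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    nn_ab = dists.argmin(axis=1)                     # best match in B for each A
    nn_ba = dists.argmin(axis=0)                     # best match in A for each B
    mutual = nn_ba[nn_ab] == np.arange(len(kpts_a))  # keep only mutual matches
    idx_a = np.where(mutual)[0]
    idx_b = nn_ab[idx_a]

    # Warp the matched keypoints of A into B with the ground-truth homography.
    pts = np.concatenate([kpts_a[idx_a], np.ones((len(idx_a), 1))], axis=1)
    warped = (H @ pts.T).T
    warped = warped[:, :2] / warped[:, 2:3]

    # A match is correct if the warped point lands within `thr` pixels.
    errors = np.linalg.norm(warped - kpts_b[idx_b], axis=1)
    return float((errors < thr).mean()) if len(errors) else 0.0
```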


cw314 avatar Sep 30 '24 06:09 cw314

Hello @zw-92, @cw314, thank you for bringing up this issue! This is indeed strange at first sight, but it is the expected behavior for this specific loss. Please see #35

Quoting my answer from the other issue: I experienced this during my training as well. After several empirical tests, I concluded that the reliability loss is easier to optimize when the descriptors are random (i.e., when the network is initialized with random weights). However, as training progresses and the descriptors become non-random, the network must learn to identify 'reliable' descriptors in the embedding space.

Basically, the network quickly minimizes the loss at the beginning because the descriptors are random, but as they converge, it becomes harder to infer whether they are reliable.
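
To make the intuition concrete, here is a toy sketch of a reliability-style loss term (not XFeat's actual implementation; the function name and tensor shapes are assumptions). The reliability head is trained with BCE against "did this descriptor match correctly?". Early on, descriptors are random, so almost every target is 0 and the head drives this term down fast by predicting "unreliable" everywhere; once descriptors converge, the targets become a harder mix of 0s and 1s, and the term can rise even though matching is improving.

```python
# Toy illustration only, not the repository's loss function.
import torch
import torch.nn.functional as F

def reliability_loss(rel_logits, desc_a, desc_b, gt_assignment):
    # rel_logits:    (N,)  predicted reliability logit per keypoint in A
    # desc_a, desc_b: (N, D), (M, D) L2-normalised descriptors
    # gt_assignment: (N,)  ground-truth index in desc_b for each row of desc_a
    sim = desc_a @ desc_b.t()                  # cosine similarity matrix
    pred = sim.argmax(dim=1)                   # nearest neighbour in B
    target = (pred == gt_assignment).float()   # 1 = descriptor matched correctly
    # Gradients flow only through rel_logits; the target is a (moving) label
    # that gets harder to predict as the descriptors themselves improve.
    return F.binary_cross_entropy_with_logits(rel_logits, target)
```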

guipotje avatar Oct 01 '24 12:10 guipotje