
keypoint prediction error

Open sankexin opened this issue 2 years ago • 5 comments

Detection with this weight: [image]  Result: [image]

I started to think it was a post-processing error, but when I print `pred` in detect.py, the y coordinates of the person's keypoints are almost all the same:

```
tensor([[219.79369, 225.33395, 226.03517, 225.67001, 220.89984, 223.56926, 219.32716, 228.19868, 227.60158, 224.10791, 220.28212, 227.59915, 225.78040, 223.75421, 224.11528, 226.45985, 228.28322],
        [411.84937, 418.37204, 417.93945, 417.42044, 412.42886, 414.72122, 411.45743, 419.45306, 419.69476, 416.37424, 412.08868, 420.05914, 417.24695, 415.01300, 415.85199, 418.92923, 420.43335]])
640x640 2 persons, Done. (0.047s)
```

So the newest pre-trained weights are also not correct! Could you find the reason and upload a correct one? And will you update to yolov5 (6.0)? It has no Focus layer.

I then tested Yolov5s6_pose_640_ti_lite and hit a bug:

```
yolov5-pose/utils/plots.py, line 118, in plot_skeleton_kpts
    pos1 = (int(kpts[(sk[0]-1)*steps]), int(kpts[(sk[0]-1)*steps+1]))
IndexError: index 45 is out of bounds for dimension 0 with size 0
```
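The IndexError happens because the flattened keypoint vector handed to plot_skeleton_kpts is empty, so any skeleton index (45 in the traceback) is out of bounds for a size-0 tensor. Below is a minimal guard sketch; the wrapper name and signature are illustrative assumptions, not code from this repo:

```python
# Sketch only: skip skeleton drawing when a detection carries no keypoints.
# Assumes kpts is a flat vector of length num_kpts*steps (x, y, conf per keypoint),
# matching the indexing in the traceback; the actual drawing stays in utils/plots.py.
import numpy as np

def draw_skeleton_safely(im, kpts, steps=3, expected_kpts=17):
    kpts = np.asarray(kpts).reshape(-1)
    if kpts.size < expected_kpts * steps:   # size 0 is exactly what raises the IndexError
        print(f"skipping skeleton: got {kpts.size} keypoint values, "
              f"expected {expected_kpts * steps}")
        return im
    # plot_skeleton_kpts(im, kpts, steps)   # call the original routine once kpts is non-empty
    return im
```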

sankexin avatar Jan 05 '23 01:01 sankexin

I have solved it this way:

```python
# not correct equation:
lkpt += kpt_loss_factor*((1 - torch.exp(-d/(s*(4*sigmas**2)+1e-9)))*kpt_mask).mean()

# replaced with:
lkpt += kpt_loss_factor*((1 - torch.exp(-d/(2*(s*sigmas)**2+1e-9)))*kpt_mask).mean()
lkpt += (self.kptloss(tkpt[i][:, 0::2], pkpt_x, kpt_mask) + self.kptloss(tkpt[i][:, 1::2], pkpt_y, kpt_mask)) / 2
```

Reference: https://github.com/qinggangwu/yolov7-pose_Npoint_Ncla/blob/db5ebf25060f723cce084bdaafd277847ff38d27/utils/loss_Ncla.py

Then I trained it myself.
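For context: the change swaps the denominator of the OKS-style term from s*(4*sigmas**2) to 2*(s*sigmas)**2 and adds a direct per-coordinate keypoint loss. In the COCO OKS definition the exponent is -d_i^2/(2*s^2*k_i^2) with k_i = 2*sigma_i, so which form matches it depends on whether s holds the object area or its linear scale. The snippet below is a self-contained sketch with dummy tensors (the shapes, COCO sigmas, and masking are assumptions, not the repo's training code) that just evaluates both terms side by side:

```python
# Sketch only: compare the two keypoint-loss denominators on made-up data.
import torch

# 17 COCO keypoint sigmas (nose, eyes, ears, shoulders, ..., ankles)
sigmas = torch.tensor([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62, .62,
                       1.07, 1.07, .87, .87, .89, .89]) / 10.0
n = 4                                         # pretend 4 matched targets
d = torch.rand(n, 17) * 10                    # squared keypoint distances (dummy)
s = torch.rand(n, 1) * 100                    # per-target scale term (dummy)
kpt_mask = (torch.rand(n, 17) > 0.3).float()  # 1 where the keypoint is labelled
kpt_loss_factor = kpt_mask.numel() / kpt_mask.sum().clamp(min=1)

# denominator from the snippet marked "not correct equation" above
loss_a = kpt_loss_factor * ((1 - torch.exp(-d / (s * (4 * sigmas**2) + 1e-9))) * kpt_mask).mean()
# denominator proposed in this thread
loss_b = kpt_loss_factor * ((1 - torch.exp(-d / (2 * (s * sigmas)**2 + 1e-9))) * kpt_mask).mean()
print(f"original term: {loss_a.item():.4f}  proposed term: {loss_b.item():.4f}")
```

This only probes how the two terms behave numerically; whether the proposed form actually trains better is what the retraining mentioned above would have to show.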

sankexin avatar Jan 19 '23 07:01 sankexin


Hi! @sankexin

Are you using both of these equations?

```python
lkpt += kpt_loss_factor*((1 - torch.exp(-d/(2*(s*sigmas)**2+1e-9)))*kpt_mask).mean()
lkpt += (self.kptloss(tkpt[i][:, 0::2], pkpt_x, kpt_mask) + self.kptloss(tkpt[i][:, 1::2], pkpt_y, kpt_mask)) / 2
```

Or just the one at the bottom?

nomaad42 avatar Mar 14 '23 07:03 nomaad42

This is an automatic vacation reply from QQ Mail. Hello, I am currently on vacation and cannot reply to your email in person. I will get back to you as soon as possible after my vacation ends.

sankexin avatar Mar 14 '23 07:03 sankexin


Hello! Did you work out this problem?

Song-Jie-ing avatar Apr 16 '24 07:04 Song-Jie-ing


This is an automatic vacation reply from QQ Mail. Hello, I am currently on vacation and cannot reply to your email in person. I will get back to you as soon as possible after my vacation ends.

Hello, have you solved this problem yet?

Song-Jie-ing avatar Apr 16 '24 07:04 Song-Jie-ing