
It would be appreciated if the training code were completed!

Glorainow opened this issue 1 year ago • 12 comments

The `evaluate_crowd_no_overlap` function is not implemented. When will it be completed? Looking forward to trying APGCC!

Glorainow avatar Jul 31 '24 13:07 Glorainow

Hello, I have also been doing research related to crowd counting recently. Here is the `evaluate_crowd_no_overlap` function I wrote, for reference:

```python
import numpy as np
import torch


@torch.no_grad()
def evaluate_crowd_no_overlap(model, val_dl, device):
    model.to(device)
    model.eval()
    # Run inference on all validation images to compute MAE / MSE
    maes = []
    mses = []
    counts_pred, counts_true = [], []  # predicted and ground-truth counts per image
    img_id = []                        # image ids, in iteration order
    # Iterate through the validation dataset
    for samples, targets in val_dl:
        samples = samples.to(device)
        outputs = model(samples)

        # Assuming the model output contains the predicted points and corresponding scores
        outputs_points = outputs['pred_points'][0]
        outputs_scores = torch.nn.functional.softmax(outputs['pred_logits'], -1)[:, :, 1][0]

        # Number of ground-truth targets
        gt_cnt = targets[0]['point'].shape[0]

        # Filter predicted points by score threshold
        threshold = 0.5
        # Predicted point coordinates (unused below; kept for possible visualization)
        points = outputs_points[outputs_scores > threshold].detach().cpu().numpy().tolist()
        predict_cnt = int((outputs_scores > threshold).sum())

        # Accumulate per-image absolute and squared count errors
        mae = abs(predict_cnt - gt_cnt)
        mse = (predict_cnt - gt_cnt) * (predict_cnt - gt_cnt)
        maes.append(float(mae))
        mses.append(float(mse))

        # Store predicted and ground-truth counts
        counts_pred.append(predict_cnt)
        counts_true.append(gt_cnt)
        img_id.append(int(targets[0]['image_id']))

    # Aggregate over the whole validation set
    mae = np.mean(maes)
    mse = np.sqrt(np.mean(mses))
    # save_counts_to_file_sorted(f"{mse_dir}/counting_person_{mae:.2f}.txt", img_id, counts_pred)
    return mae, mse
```
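
For reference, the final aggregation above follows the usual crowd-counting convention: per-image absolute count errors are averaged into MAE, and the reported "MSE" is actually the root of the mean squared count error. A self-contained toy example (the counts are made up purely for illustration):

```python
import numpy as np

# Toy per-image predicted and ground-truth counts, made up for illustration only
counts_pred = [102, 87, 310]
counts_true = [98, 90, 300]

abs_errors = [abs(p - t) for p, t in zip(counts_pred, counts_true)]
sq_errors = [(p - t) ** 2 for p, t in zip(counts_pred, counts_true)]

mae = np.mean(abs_errors)          # mean absolute count error
mse = np.sqrt(np.mean(sq_errors))  # root mean squared count error, as returned above
print(f"MAE: {mae:.2f}, MSE: {mse:.2f}")  # -> MAE: 5.67, MSE: 6.45
```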

Yu-Zhouz avatar Aug 27 '24 07:08 Yu-Zhouz

> Hello, I have also been doing research related to crowd counting recently. Here is the `evaluate_crowd_no_overlap` function I wrote, for reference: *[code quoted above]*

May I ask, does this part of your code actually run? How are the results?

nice98k avatar Aug 29 '24 02:08 nice98k

Has anyone managed to get the training code running? Could we discuss it?

little-seasalt avatar Sep 06 '24 07:09 little-seasalt

> Has anyone managed to get the training code running? Could we discuss it?

I have gotten it running, but with the author's parameters the results are not good; they do not reach the numbers reported in the paper.

Yu-Zhouz avatar Sep 18 '24 08:09 Yu-Zhouz

> I have gotten it running, but with the author's parameters the results are not good; they do not reach the numbers reported in the paper.

Hello, could you share the code you got running? Thanks.

nice98k avatar Sep 20 '24 02:09 nice98k

> Hello, could you share the code you got running? Thanks.

In my reply above I have already provided the part missing from the author's code, so you can try running it. If you run into other problems, feel free to contact me again.

Yu-Zhouz avatar Sep 20 '24 02:09 Yu-Zhouz

> In my reply above I have already provided the part missing from the author's code, so you can try running it. If you run into other problems, feel free to contact me again.

How far are your results from the author's reported numbers?

nice98k avatar Sep 20 '24 02:09 nice98k

Hello! What modifications are needed to get the code running?

susu-source avatar Oct 22 '24 10:10 susu-source

> I have gotten it running, but with the author's parameters the results are not good; they do not reach the numbers reported in the paper.

Hello, I can also get it running, but it seems the author did not fully implement the auxiliary point part. This isn't really the code described in the paper, is it?

UntrainedButLoveCode avatar Nov 01 '24 06:11 UntrainedButLoveCode

> Hello, I can also get it running, but it seems the author did not fully implement the auxiliary point part. This isn't really the code described in the paper, is it?

Without the auxiliary point code, the auxiliary loss (`loss_auxiliary`) cannot be calculated, unless we ignore that loss the way the testing code does. But if we ignore it during training, will the results still be good? Has anyone tried training the model?
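
A minimal sketch of the workaround described here, assuming the criterion returns a dict of per-term losses with a matching `weight_dict` (the key names, including `loss_auxiliary`, are assumptions in the style of P2PNet-like code, not APGCC's confirmed interface):

```python
# Hypothetical sketch: skip the unimplemented auxiliary term when summing losses.
# The dict layout and key names ('loss_auxiliary', weight_dict) are assumptions,
# not the repository's confirmed API.
def total_loss_without_aux(loss_dict, weight_dict):
    return sum(
        loss_dict[k] * weight_dict[k]
        for k in loss_dict
        if k in weight_dict and k != 'loss_auxiliary'
    )
```

Whether training with the auxiliary term dropped can still reach the paper's numbers is exactly the open question in this thread; the results reported below suggest it falls short.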

Indraa145 avatar Nov 07 '24 08:11 Indraa145

I am interested in the training code as well.

GioFic95 avatar Jun 04 '25 16:06 GioFic95

Without the auxiliary point loss part, the results on SHHA and SHHB are not as good as those of the PET model. These numbers are for reference only; I only trained for about 800 epochs, after which the model stopped converging.

*[attached image: result screenshot]*

log.txt

Yu-Zhouz avatar Jun 05 '25 02:06 Yu-Zhouz