APGCC
It would be appreciated if the training code were completed!
The evaluate_crowd_no_overlap function is not implemented. When will this be completed?
Looking forward to trying APGCC!
Hello, I have also been doing research related to crowd counting recently. Here is my version of the evaluate_crowd_no_overlap function, provided for reference:
```python
import numpy as np
import torch

@torch.no_grad()
def evaluate_crowd_no_overlap(model, val_dl, device):
    model.to(device)
    model.eval()
    # Run inference on all images to compute MAE / MSE
    maes = []
    mses = []
    counts_pred, counts_true = [], []  # predicted and ground-truth counts
    img_id = []  # list of image ids
    # Iterate through the validation dataset
    for samples, targets in val_dl:
        samples = samples.to(device)
        outputs = model(samples)
        # Assuming the model output contains the predicted points and corresponding scores
        outputs_points = outputs['pred_points'][0]
        outputs_scores = torch.nn.functional.softmax(outputs['pred_logits'], -1)[:, :, 1][0]
        # Number of ground-truth targets
        gt_cnt = targets[0]['point'].shape[0]
        # Filter predicted points by score threshold
        threshold = 0.5
        points = outputs_points[outputs_scores > threshold].detach().cpu().numpy().tolist()
        predict_cnt = int((outputs_scores > threshold).sum())
        # Accumulate per-image MAE, MSE
        mae = abs(predict_cnt - gt_cnt)
        mse = (predict_cnt - gt_cnt) * (predict_cnt - gt_cnt)
        maes.append(float(mae))
        mses.append(float(mse))
        # Store predicted and ground-truth counts
        counts_pred.append(predict_cnt)
        counts_true.append(gt_cnt)
        img_id.append(int(targets[0]['image_id']))
    # Aggregate: MAE is the mean absolute error; MSE here is the root mean squared error
    mae = np.mean(maes)
    mse = np.sqrt(np.mean(mses))
    # save_counts_to_file_sorted(f"{mse_dir}/counting_person_{mae:.2f}.txt", img_id, counts_pred)
    return mae, mse
```
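For context, here is a minimal sketch of how this function might be invoked. The build_model and build_val_loader helpers are hypothetical placeholders, not functions from the APGCC repository; the only assumption is that val_dl yields (samples, targets) batches with targets[0]['point'] and targets[0]['image_id'] populated:

```python
import torch

# Hypothetical usage sketch -- build_model() and build_val_loader() are
# placeholder names, not part of the APGCC codebase.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = build_model()        # assumed: returns the point-prediction model
val_dl = build_val_loader()  # assumed: yields (samples, targets) batches

mae, mse = evaluate_crowd_no_overlap(model, val_dl, device)
print(f"MAE: {mae:.2f}, MSE: {mse:.2f}")
```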
May I ask, does this part of your code actually run? What results did you get?
Has anyone been able to run the training code successfully? Could we discuss it?
I managed to run it successfully, but with the author's parameters the results are not good and do not reach the numbers reported in the paper.
Hello, could you share the code you got running? Thanks.
In my answer above, I already provided the part missing from the author's code; you can try running it. If you have other questions, feel free to contact me again.
May I ask how far your results were from the author's?
Hello! What modifications are needed to get the code running?
Hello, I can run it too, but it seems the author did not fully implement the auxiliary point part. This isn't the code described in the paper, is it?
Without the auxiliary-point code, the auxiliary loss (loss_auxiliary) cannot be computed unless we ignore it, as is done in the testing part. But if we ignore it during training, will the results still be good? Has anyone tried training the model?
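To make the question concrete, here is a hedged sketch of what "ignoring" the auxiliary loss during training could look like, assuming a DETR/P2PNet-style criterion that returns a dict of named loss tensors with a matching weight_dict. The key name 'loss_auxiliary' and this structure are assumptions, not the author's actual code:

```python
import torch

def total_loss_without_auxiliary(loss_dict: dict, weight_dict: dict) -> torch.Tensor:
    """Weighted sum of the criterion's losses with the auxiliary term zeroed.

    Hypothetical sketch: assumes the criterion returns named loss tensors
    (e.g. {'loss_ce': ..., 'loss_points': ..., 'loss_auxiliary': ...}) plus
    a matching weight_dict, as in DETR/P2PNet-style training loops.
    """
    weights = dict(weight_dict)
    weights['loss_auxiliary'] = 0.0  # drop the unimplemented auxiliary-point term
    return sum(loss_dict[k] * weights[k] for k in loss_dict if k in weights)
```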
I am interested in the training code as well