Results: 13 comments of R. H.

Currently I directly set the computed density of the GT as a clone of the input in the compute_density() method, and set the distance to 999, just as in the other situation that you...
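(For reference, a minimal sketch of the kind of fallback described above, assuming compute_density() returns the mean nearest-neighbour distance between GT points and falls back to 999 when no such distance exists; the exact function body is illustrative, not the repository's code.)

```python
import torch

def compute_density(points):
    """Mean nearest-neighbour distance between GT points, used as a density label.
    Falls back to 999 when there are fewer than two points (no valid distance)."""
    points_tensor = torch.from_numpy(points.copy()).float()
    if points_tensor.shape[0] > 1:
        dist = torch.cdist(points_tensor, points_tensor, p=2)   # pairwise distances
        density = dist.sort(dim=1)[0][:, 1].mean().reshape(-1)  # skip the zero self-distance
    else:
        density = torch.tensor(999.0).reshape(-1)                # 0 or 1 point: fixed fallback
    return density
```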

Here is the training setting I used:

CUDA_VISIBLE_DEVICES='0' \
python -m torch.distributed.launch \
    --nproc_per_node=1 \
    --master_port=10001 \
    --use_env main.py \
    --lr=0.00001 \
    --backbone="vgg16_bn" \
    --ce_loss_coef=1.0 \
    --point_loss_coef=5.0 \
    --eos_coef=0.5 \
    ...
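(For readers unfamiliar with these flags: the three coefficients usually enter a DETR-style criterion as weights on the classification and point-regression terms, with eos_coef down-weighting the background class. The sketch below is a hypothetical illustration of that weighting; the function name, argument shapes, and the assumption that the last logit is the "no object" class are not taken from the repository.)

```python
import torch
import torch.nn.functional as F

def weighted_loss(pred_logits, pred_points, tgt_labels, tgt_points,
                  ce_loss_coef=1.0, point_loss_coef=5.0, eos_coef=0.5):
    # down-weight the background ("no object") class in the classification loss;
    # assuming the last class index is background
    num_classes = pred_logits.shape[-1]
    class_weights = torch.ones(num_classes)
    class_weights[-1] = eos_coef
    loss_ce = F.cross_entropy(pred_logits, tgt_labels, weight=class_weights)

    # regression loss on matched point coordinates
    loss_points = F.l1_loss(pred_points, tgt_points)

    return ce_loss_coef * loss_ce + point_loss_coef * loss_points
```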

I processed the dataset using the code you provided in preprocess_dataset.py, and customized SHA.py to adapt it to the NWPU dataset. Specifically, in the compute_density() method, I added some code to deal with...
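(NWPU-Crowd differs from ShanghaiTech in that it contains negative samples, i.e. images with zero annotated heads, so the loader and compute_density() have to tolerate empty point arrays. Below is a hypothetical sketch of loading one NWPU annotation; the .mat key 'annPoints' and the (x, y) to (y, x) swap are assumptions about the raw format, not taken from the repository.)

```python
import numpy as np
import scipy.io as sio

def load_nwpu_points(mat_path):
    """Load one NWPU annotation as an (N, 2) float array in (y, x) order."""
    ann = sio.loadmat(mat_path)
    points = ann.get('annPoints', np.zeros((0, 2)))   # negative samples have no points
    points = np.asarray(points, dtype=np.float32).reshape(-1, 2)
    # swap from (x, y) to (y, x) if the model expects row-major coordinates (assumption)
    return points[:, ::-1].copy()
```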

> The training setting seems fine. Could you confirm that the format of the point annotations is (y, x) rather than (x, y)? A wrong annotation format will lead to erroneous model...
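(A quick way to verify the annotation order is to overlay the points on the image: if the dots land on heads with the indexing below, the stored order is (y, x); if they look mirrored across the diagonal, it is (x, y). File paths here are placeholders.)

```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

img = np.array(Image.open('img_0001.jpg'))          # placeholder path
points = np.load('img_0001_points.npy')             # shape (N, 2), assumed (y, x)

plt.imshow(img)
# scatter takes x first, then y; column 1 = x, column 0 = y under the (y, x) assumption
plt.scatter(points[:, 1], points[:, 0], s=4, c='red')
plt.show()
```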

> Did you adjust any parameters? The results of my recent runs have stayed around 52-53, and I'm not sure why.

I used the same training settings as the author's. Maybe you can increase the total number of training epochs from 1500 to 3000.