C-3-Framework

The epoch for every result

Open · Ethanwl opened this issue Mar 25, 2019 · 11 comments

How many epochs are needed to reproduce the reported results for each dataset and model? The default setting in config.py is 100, and there is no explanation. Also, will you release the trained models?

Ethanwl avatar Mar 25 '19 10:03 Ethanwl

The current config.py does not represent the default setting; it is only the latest setting for a specific experiment. If you want to reproduce the results reported in the readme, you should adopt the parameter settings provided in results_reports. Each experiment's settings are saved in setting.py and config.py.

gjy3035 avatar Mar 26 '19 03:03 gjy3035

Do different mean and std values have a significant impact? Many projects use mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225], such as the official CSRNet: https://github.com/leeyeehoo/CSRNet-pytorch/blob/ed29d895989c188cb913a9503721271c6cf1ab1f/train.py#L118
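For context, a minimal sketch (assuming torchvision, not this repo's exact data pipeline) of how those ImageNet mean/std values are typically applied; C-3-Framework instead uses per-dataset statistics:

```python
# Minimal sketch (assumption: torchvision-style preprocessing, not the
# repo's own code). Normalizes each image with the ImageNet mean/std
# values quoted above.
from torchvision import transforms

imagenet_norm = transforms.Compose([
    transforms.ToTensor(),  # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```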

Ethanwl avatar Mar 26 '19 08:03 Ethanwl

I haven't run experiments using CSRNet's setting, but I think the performance is only slightly affected. In other experiments of mine (not shown in this repo), I mistakenly used another dataset's mean and std to train the model, and the result was close to that of the correct setting.

gjy3035 avatar Mar 27 '19 01:03 gjy3035

[TensorBoard screenshots] I used your CSRNet model pre-trained on GCC to train on UCF-QNRF for 180 epochs. The best MAE is 112.4 and MSE is 188.4, at epoch 161.

Ethanwl avatar Apr 02 '19 11:04 Ethanwl

[prediction visualization 141_170_150] When I use CSRNet to predict, I always get images with red dots, such as the right picture. Did you meet the same problem?

Ethanwl avatar Apr 02 '19 11:04 Ethanwl

@gjy3035 Could you release the pre-trained SANet model on GCC?

Ethanwl avatar Apr 03 '19 01:04 Ethanwl

  1. The standard CSRNet may output negative density values for unseen scenes. If you want to avoid this problem, you can add a ReLU layer on top of CSRNet (see the sketch after this list).
  2. I will provide download links for all models pre-trained on GCC using the random-splitting evaluation ASAP.
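A minimal sketch of that idea (assuming PyTorch; `CSRNet` below stands in for any CSRNet-style module whose forward() returns a raw density map, not this repo's exact class):

```python
# Minimal sketch (assumption: PyTorch; the backbone is any CSRNet-style
# model returning a raw density map). Appending a ReLU clamps predictions
# at zero, avoiding negative densities on unseen scenes.
import torch.nn as nn

class NonNegativeDensity(nn.Module):
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Clamp the predicted density map to be non-negative.
        return self.relu(self.backbone(x))

# usage (hypothetical): model = NonNegativeDensity(CSRNet())
```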

gjy3035 avatar Apr 03 '19 03:04 gjy3035

I have updated the readme with a download link for each pretrained model. Please visit the page to download them.

gjy3035 avatar Apr 03 '19 08:04 gjy3035

Thank you for the pretrained models. I trained the pretrained vgg_decoder model on QNRF and got the best MAE of 109.4 and MSE of 181.9 at epoch 317. I find that the MAE and MSE in https://github.com/gjy3035/C-3-Framework/blob/master/README.md are not the same as those in https://github.com/gjy3035/C-3-Framework/blob/master/results_reports/VGG_decoder/UCFQNRF/readme.md. I have two more questions:

  1. For generating density maps, a Gaussian kernel with window size 15 and spread parameter σ = 4 is used for all datasets. The paper W-Net: Reinforced U-Net for Density Map Estimation says this method is much better than others. Is there any internal mechanism that explains this? (See the sketch after this list for the fixed-kernel generation being referred to.)
  2. Some papers use a patch-based training and testing scheme. I think that if there is no Instance Normalization in the model, this method won't improve results. Is that right?
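For reference, a minimal sketch of fixed-kernel density map generation (assuming NumPy/SciPy, not this repo's preprocessing code), using the window size 15 and σ = 4 mentioned in question 1:

```python
# Minimal sketch (assumptions: NumPy/SciPy; not the repo's own script).
# Place a unit impulse at each annotated head and convolve with a fixed
# Gaussian kernel of window size 15 and sigma 4.
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(points, height, width, sigma=4.0, window=15):
    """points: iterable of (x, y) head annotations in pixel coordinates."""
    dm = np.zeros((height, width), dtype=np.float32)
    for x, y in points:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < height and 0 <= xi < width:
            dm[yi, xi] += 1.0
    # window = 2 * truncate * sigma + 1  ->  truncate = 1.75 for window 15, sigma 4
    truncate = (window - 1) / (2.0 * sigma)
    # The map integrates (up to edge effects) to the head count, which is
    # why a fixed kernel keeps counts comparable across datasets.
    return gaussian_filter(dm, sigma=sigma, truncate=truncate)
```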

Ethanwl avatar Apr 04 '19 07:04 Ethanwl

@Ethanwl Also, I noticed that in GCC_cl/config.py, line 23 reads __C.NET = 'Res50', which is mismatched with the txt result; __C.NET should be 'VGG_DECODER'. I suspect the result and setting of VGG_Decoder may not be correct here.

Hi @gjy3035, could you please check whether the VGG_Decoder results are all correct?

erichhhhho avatar Feb 12 '20 13:02 erichhhhho

@erichhhhho Sorry, that config.py may have been mistakenly uploaded. See the log file: the net is "VGG_DECODER". We will upload the correct file. Until then, you can use the settings from the log file.
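For reference, the presumed correction to GCC_cl/config.py, line 23 (an assumption based on the log file mentioned above and the usual EasyDict-style config pattern; the official fix may differ):

```python
# Presumed correction (assumption, not a confirmed commit): select the
# VGG decoder net instead of the mistakenly uploaded 'Res50'.
from easydict import EasyDict as edict

__C = edict()
__C.NET = 'VGG_DECODER'  # was 'Res50', which mismatched the txt result
```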

gjy3035 avatar Feb 12 '20 13:02 gjy3035