
Result reproduction problem

gold-mango opened this issue 1 year ago • 6 comments

Hi team,

I trained with the config hrmapnet_maptrv2_nusc_r50_24ep.py and got a val mAP of 0.6242; when using a single GPU for validation, the value is 0.652. I can't get the 67.2 reported in your paper. Could you provide your training log for comparison?

Thanks.

gold-mango avatar Oct 28 '24 02:10 gold-mango

Hi! Unfortunately, I didn't keep those logs. I can only provide a recent training log for 'hrmapnet_mapqr_nusc_r50_24ep.py': hrmapnet_mapqr.log

Besides, I have also noticed some randomness in the reproduced results; I commonly get around 66.0~67.0. I am checking the code further and trying to refine the training strategy.

fishmarch avatar Oct 28 '24 03:10 fishmarch

> I trained with the config hrmapnet_maptrv2_nusc_r50_24ep.py and got a val mAP of 0.6242; when using a single GPU for validation, the value is 0.652. I can't get the 67.2 reported in your paper. Could you provide your training log for comparison?

I have encountered the same problem. Have you solved it yet?

Baiwenjing avatar Nov 13 '24 03:11 Baiwenjing

Same problem here: I trained hrmapnet_mapqr_nusc_r50_24ep_new and only got 66.68, compared to the reported 72.6.

JunrQ avatar Nov 18 '24 15:11 JunrQ

> Hi! Unfortunately, I didn't keep those logs. I can only provide a recent training log for 'hrmapnet_mapqr_nusc_r50_24ep.py': hrmapnet_mapqr.log
>
> Besides, I have also noticed some randomness in the reproduced results; I commonly get around 66.0~67.0.

Dear author, I noticed that the log shows a result of 0.7079, but you mentioned that results are around 66.0~67.0. Am I using the wrong metric?

JunrQ avatar Nov 18 '24 15:11 JunrQ

> Dear author, I noticed that the log shows a result of 0.7079, but you mentioned that results are around 66.0~67.0. Am I using the wrong metric?

I meant that the maptrv2-based version gets around 66~67. The mapqr-based version gets ~72.6, as in the provided log. It is very strange to get just 66.68 for the mapqr-based version; maybe you can also upload your log.

fishmarch avatar Nov 18 '24 15:11 fishmarch

I haven't found any problem in the released code. Under the current training strategy, the global map is regenerated each epoch, so within each epoch the model is trained with empty maps in the early stage and with populated maps in the late stage. This seems suboptimal and may introduce more randomness. I'm therefore trying to change the training strategy to start from pre-loaded maps. I don't have enough GPUs at the moment; once the new strategy is well tested, I will update the code.

fishmarch avatar Nov 18 '24 15:11 fishmarch
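For readers following this thread, the per-epoch schedule described in the last comment can be sketched roughly as follows. This is an illustrative toy, not code from the HRMapNet repository: the function name, the dict-based "global map", and the fixed warmup fraction are all assumptions made for clarity.

```python
def train_one_epoch(model_step, num_iters, warmup_frac=0.5):
    """Toy sketch of the per-epoch strategy: the global map is reset
    (empty) at the start of each epoch, so early iterations train
    without a map prior and later iterations train with the map
    accumulated so far. `warmup_frac` is a hypothetical knob, not a
    real HRMapNet parameter."""
    global_map = {}   # accumulated rasterized map (illustrative structure)
    used_map = []
    for it in range(num_iters):
        # Early stage of the epoch: feed an empty prior; late stage:
        # feed the map built up during this epoch.
        use_map = it >= int(num_iters * warmup_frac)
        prior = dict(global_map) if use_map else {}
        pred = model_step(prior)   # one forward/backward step with the prior
        global_map[it] = pred      # merge the prediction back into the map
        used_map.append(use_map)
    return used_map

# Toy usage with a dummy model step: the first half of the epoch sees
# an empty prior, the second half sees the accumulated map.
flags = train_one_epoch(lambda prior: len(prior), num_iters=10)
```

This also makes the randomness concern concrete: because the map prior a sample sees depends on where it falls in the epoch, two training runs can expose the model to quite different priors, which is what loading pre-built maps would avoid.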