15-1 Pascal-VOC Reproduction
Hi, I couldn't reproduce the results for 15-1 Pascal-VOC.
I'm running the script voc/plop_15-1-overlap.sh.
Since I have two GPUs with 24GB each, I set the batch size to 12 per GPU and trained on 2 GPUs, so the total batch size is 24, matching your settings.
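As a sanity check on the batch-size arithmetic, the per-GPU batch size follows from the target total batch size divided by the number of GPUs. A minimal sketch (the helper name is hypothetical, not part of the repository):

```python
def per_gpu_batch_size(total_batch: int, num_gpus: int) -> int:
    """Split a target total batch size evenly across GPUs."""
    if total_batch % num_gpus != 0:
        raise ValueError("total batch size must divide evenly across GPUs")
    return total_batch // num_gpus

# With 2 GPUs and a target total batch of 24, each GPU gets 12 samples.
print(per_gpu_batch_size(24, 2))  # 12
```

Note that even with the same total batch size, splitting it across a different number of GPUs can subtly change training (e.g. per-GPU BatchNorm statistics), which is one possible source of small reproduction gaps.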
Here are the results:
| | 0-15 | 16-20 | all |
|---|---|---|---|
| Reproduce | 63.41 | 19.25 | 52.90 |
| Reported | 70.60 | 23.70 | 59.40 |
The results are far lower than those reported in the paper. Could you please advise?
Are you using the script from the current commit, and the results in the link?
I have just added the --overlap option (step 0) to the scripts for the overlapped setting.
I just added --overlap at step 0 and got a final mIoU of 54%, which is still lower than reported.
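For reference, the step-0 change amounts to passing --overlap on the first step's command. A hedged sketch of what that invocation might look like; everything except --overlap, the GPU count, and the batch size is an assumption based on the script names mentioned above, not verified against the repository:

```shell
# Sketch only: flag names other than --overlap are assumptions.
# Step 0 (classes 0-15): train the base model with the overlapped setting.
python -m torch.distributed.launch --nproc_per_node=2 train.py \
    --dataset voc --task 15-1 --step 0 --overlap \
    --batch_size 12  # per GPU; 2 GPUs -> total batch size 24
```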
| | 0-15 | 16-20 | all |
|---|---|---|---|
| Reproduce | 64.93 | 18.72 | 53.92 |
| Reported | 70.60 | 23.70 | 59.40 |
Here is the log csv: 2022-10-16_voc_15-5s_RCIL_overlap.csv
Here is the bash script: RCIL_15-1-overlap.txt
Could you suggest some improvements?
Hi, @HieuPhan33. I trained the model in the 15-1 overlapped setting and got slightly higher performance (59.9) than our reported result (59.4). I have uploaded the detailed results in the RCIL-15-1-overlap-results; you can also refer to the tensorboard logs in the RCIL-15-1-overlap-tensorboard-logs. For this run I used pytorch=1.12.1 with cuda11.3, whereas the results reported in our paper were obtained with pytorch=1.3.1. So you'd better clone our repository again and re-train it. By the way, if you get any results, please let me know.
Hi, @HieuPhan33
I also cloned this repository and re-trained the 15-1 disjoint setting. The experimental results are in the results and logs, respectively; you can refer to them.
Hi, @HieuPhan33. Let me close this issue if you don't have any other questions. Feel free to reopen it if needed.