HAT
SR replication results do not match reported values on certain datasets
Thank you very much for your marvelous SR work and for open-sourcing it! When training HAT for the classic SISR results, I found a slight mismatch:
| SRx2 (PSNR/SSIM) | Set5 | Set14 | B100 | Urban100 | Manga109 |
|---|---|---|---|---|---|
| Reported | 38.63/0.9630 | 34.86/0.9274 | 32.62/0.9053 | 34.45/0.9466 | 40.26/0.9809 |
| Replication | 38.61/0.9630 | 34.77/0.9266 | 32.61/0.9053 | 34.45/0.9465 | 40.23/0.9806 |
| SRx4 (PSNR/SSIM) | Set5 | Set14 | B100 | Urban100 | Manga109 |
|---|---|---|---|---|---|
| Reported | 33.04/0.9056 | 29.23/0.7973 | 28.00/0.7517 | 27.97/0.8368 | 32.48/0.9292 |
| Replication | 33.00/0.9053 | 29.18/0.7967 | 27.99/0.7515 | 27.97/0.8368 | 32.44/0.9292 |
The replication was done with the standard training-from-scratch setting, i.e. using the configs train_HAT_SRx2_from_scratch.yml and train_HAT_SRx4_from_scratch.yml.
While replications on the other datasets yield results similar to the values reported in the paper, I am a bit puzzled by the mismatch on Set14 and Manga109. I am wondering about the cause: is it normal fluctuation, or could there be an inconsistency in the test dataset versions? Thanks again!
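For reference, here is a minimal sketch of the evaluation convention these classic-SISR numbers typically follow (PSNR on the Y channel of YCbCr, with `scale` border pixels cropped; SSIM uses the same Y-channel/crop convention). The function names are illustrative and the exact pipeline in the repo may differ:

```python
import cv2
import numpy as np

def to_y_channel(img_uint8):
    """Convert a BGR uint8 image to the BT.601 Y channel (float, range [0, 255])."""
    img = img_uint8.astype(np.float32) / 255.0
    # BT.601 luma from (R, G, B); cv2 loads images as BGR, hence the index order.
    return 16.0 + 65.481 * img[..., 2] + 128.553 * img[..., 1] + 24.966 * img[..., 0]

def psnr_y(sr_path, gt_path, scale):
    """PSNR on the Y channel with a `scale`-pixel border crop.

    Assumes sr and gt have the same size (i.e. the GT was mod-cropped to the scale).
    """
    sr_y = to_y_channel(cv2.imread(sr_path, cv2.IMREAD_COLOR))
    gt_y = to_y_channel(cv2.imread(gt_path, cv2.IMREAD_COLOR))
    sr_y = sr_y[scale:-scale, scale:-scale]
    gt_y = gt_y[scale:-scale, scale:-scale]
    mse = np.mean((sr_y - gt_y) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

Small differences in this step (Y channel vs RGB, crop size, GT version) can shift PSNR by amounts comparable to the gaps in the tables above.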
@sairights Hi, I also ran into reproduction problems. What are your training settings?
#49
@yumath I used the official HAT code and the official train-from-scratch setting on DF2K. I also mod2/3/4-cropped the test images, as #49 suggests. I think this might just be fluctuation.
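For clarity, the mod-crop mentioned above just trims the ground-truth image so its height and width are divisible by the scale factor before metrics are computed. A minimal sketch (the helper name is illustrative, not taken from the HAT codebase):

```python
import numpy as np

def mod_crop(img, scale):
    """Crop img (H x W x C) so that H and W are multiples of `scale` (e.g. 2, 3 or 4)."""
    h, w = img.shape[:2]
    return img[: h - h % scale, : w - w % scale, ...]
```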
@sairights Maybe fluctuation related to the hyper-parameters iter_num and batch_size? https://github.com/XPixelGroup/HAT/issues/26#issuecomment-1288154862
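If it helps, one quick sanity check is to print those two values straight from the configs. A small sketch, assuming PyYAML, the usual options/train/ layout, and the typical BasicSR-style field names `total_iter` and `batch_size_per_gpu` (verify against the actual HAT config files):

```python
import yaml

# Paths and field names are assumptions based on the usual BasicSR/HAT layout.
for path in ("options/train/train_HAT_SRx2_from_scratch.yml",
             "options/train/train_HAT_SRx4_from_scratch.yml"):
    with open(path) as f:
        opt = yaml.safe_load(f)
    print(path,
          "total_iter =", opt["train"]["total_iter"],
          "batch_size_per_gpu =", opt["datasets"]["train"]["batch_size_per_gpu"])
```

Note that the effective batch size is batch_size_per_gpu times the number of GPUs, so runs with different GPU counts but the same config are not equivalent.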
@yumath Well, I used 8 GPUs and kept iter_num and batch_size identical to the official setting. It is really weird that there is such a large gap on Set14 between the replication and the reported values...