Question about the benchmark results of the denoising methods
Hi, thank you for sharing this great work!
I'd like to know: if the denoising methods were evaluated with a model trained on denoised IMAGESET2 (rather than IMAGESET1), would the performance stay the same?
From the benchmark results of the denoising methods, the model does not perform very well on denoised images. I suspect there is a domain gap between the denoised images it is tested on (unseen during training) and the clear images it was trained on.
Hi,
Thanks for your interest in this research work.
I have not checked object detection inference using a model trained on IMAGESET2, but I don't think the overall result/conclusion would change.
From the various object detection experiments I performed, I observed that the model does not perform well on denoised images. As you mentioned, the domain gap between the test and train sets could be the reason. That said, I have also run object detection inference on denoised BDD100k adverse-weather images, using a denoising GAN trained on the BDD100k dataset itself, and even then the model performed better on the noisy images than on the denoised ones.
I am still trying to determine why inference on denoised images is worse than on noisy images.
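To make that comparison concrete, here is a minimal sketch of the kind of side-by-side check I mean. The image paths are hypothetical, and I use a stock YOLOv5 model from torch.hub as a stand-in; custom weights load the same way via `torch.hub.load('ultralytics/yolov5', 'custom', path=...)`:

```python
# Rough sketch: run the same detector on a noisy/denoised image pair and
# compare detection counts and confidences. Paths are hypothetical.
import torch

# Stock model as a stand-in; swap in custom weights for the real experiment.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

pairs = [
    ("bdd100k/noisy/rain_0001.jpg", "bdd100k/denoised/rain_0001.jpg"),  # hypothetical
]

for noisy_path, denoised_path in pairs:
    results = model([noisy_path, denoised_path])
    noisy_det, denoised_det = results.pandas().xyxy  # one DataFrame per image
    print(f"{noisy_path}: {len(noisy_det)} detections, "
          f"mean confidence {noisy_det['confidence'].mean():.3f}")
    print(f"{denoised_path}: {len(denoised_det)} detections, "
          f"mean confidence {denoised_det['confidence'].mean():.3f}")
```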
Hi, thanks for your quick reply.
I understand the model performs better on noisy images than on denoised images.
> I have not checked object detection inference using a model trained on IMAGESET2, but I don't think the overall result/conclusion would change.
I mean, maybe you could try training the object detection model on denoised IMAGESET2 (rather than the raw IMAGESET2) and evaluating it on denoised real-world datasets. That may give you some clues about why it performs so much worse.
If the overall performance improves, that would confirm a domain gap between the test set (denoised all-weather images) and the training set (clear images).
denoised IMAGESET2: clear images + denoised all-weather images (fog, rain, snow)
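For concreteness, here is a rough sketch of how I imagine that mixed training set being assembled; all directory names are hypothetical placeholders:

```python
# Sketch: build "denoised IMAGESET2" = clear images + denoised adverse-weather
# images. Directory names are hypothetical; adjust to the actual layout.
from pathlib import Path
import shutil

CLEAR_DIR = Path("datasets/imageset2/clear")          # clear-weather images
DENOISED_DIR = Path("datasets/imageset2_denoised")    # fog/rain/snow after denoising
OUT_DIR = Path("datasets/imageset2_denoised_train/images")

OUT_DIR.mkdir(parents=True, exist_ok=True)

# Copy the clear images unchanged.
for img in CLEAR_DIR.glob("*.jpg"):
    shutil.copy(img, OUT_DIR / img.name)

# Copy the denoised adverse-weather images; prefix each filename with the
# weather condition so the two sources stay distinguishable.
for condition in ("fog", "rain", "snow"):
    for img in (DENOISED_DIR / condition).glob("*.jpg"):
        shutil.copy(img, OUT_DIR / f"{condition}_{img.name}")
```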
However, these are just my thoughts; please correct me if I'm wrong.
I would appreciate it if you could share the code for the benchmark (especially the denoising part), along with instructions for preparing the datasets and running it. I'm currently focusing on this research and would like to get to the bottom of the problem.
Hi,
Benchmarking of Denoising Methods: I used a YOLO model trained on clear-weather images to benchmark the denoising algorithms. I denoised the images using each method's original implementation and then ran object detection validation using the YOLO-v5 library.
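For reference, the validation step is essentially a call into the YOLO-v5 repo's val.py; a minimal sketch, with the weights file and dataset YAML names as placeholders:

```python
# Sketch of the validation step, assuming the standard YOLOv5 repo layout
# (https://github.com/ultralytics/yolov5 cloned into ./yolov5).
import subprocess

subprocess.run(
    [
        "python", "val.py",
        "--weights", "runs/train/clear_weather/weights/best.pt",  # placeholder weights
        "--data", "denoised_imageset.yaml",  # YAML pointing at the denoised images
        "--img", "640",
        "--task", "val",
    ],
    cwd="yolov5",
    check=True,
)
```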
Testing the domain gap: I will run the validation on denoised IMAGESET2 and post the results later this week.
Hopefully, you can find something useful in my work.