image-restoration-sde
need dehazing pre-trained model
Hi,
Can you please share the pre-trained dehazing model? The existing latent-dehazing.pth is not generating the results shown in the document.
Thanks
Great, thanks, let me try it. Your help is much appreciated.
Hi,
When I run `python test.py -opt=options/dehazing/test/nasde.yml` with codes\config\latent-dehazing\test.py and latent-reffusion-dehazing.pth, I get a bluish picture, and when I use latent-dehazing.pth I get a totally different output, not the Non-Homogeneous dehazing results you presented; you can find the results here. I am new to this, so it would be great to get your assistance. The help is much appreciated.
Thanks
Hi,
The provided U-Net latent model is only trained on the NTIRE HR Non-Homogeneous dehazing dataset. If you want to use it on other haze datasets (such as SOTS indoor or NH-HAZE), you need to retrain the U-Net model for latent-Refusion in this directory.
Or you can just download the images from the NTIRE challenge.
Thanks, but when I tried the U-Net I got multiple errors; when I fixed one, another appeared. Somehow I managed to run it, but I couldn't get the expected results. The deraining algorithm works very well, but I can't manage to run the dehazing algorithm (unet-latent). Is the code up to date?
Thanks again, and sorry for being a pain.
Hi, are you running into problems training the U-Net or testing the latent-Refusion model? I could write a separate paragraph showing how to train and test latent-Refusion.
I have now updated the code for latent-Refusion, hope it works!
Yes, the code works fine now, thanks, but it is not dehazing the image. Can you please share any image that you have tested, so that I can run it on my system? For unet-latent I am using the pretrained weights latent-dehazing.pth, but the input and output images look identical. Moreover, I noticed that the deraining model works great on many random images downloaded from Google as long as the image has a high enough resolution; if I use the same image at a lower resolution, it is not derained.
My main goal is to get the dehazing model running.
Thanks, really appreciate your help
Great! But unet-latent is only used to compress the image, so its input and output are almost the same. If you want to test the dehazing results, you should go into the "latent-dehazing" directory and change the dataset path and pre-trained model paths.
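For reference, a rough sketch of the fields you would typically change in the test options file; the exact key names are assumptions based on typical configs in this repo and may differ in your version, and the paths are placeholders:

```yaml
# Sketch of a latent-dehazing test config -- key names and paths are
# placeholders, not necessarily the exact layout of your repo version.
datasets:
  test1:
    name: NTIRE-Val
    mode: LQGT
    dataroot_GT: /path/to/val/clean   # haze-free ground truth
    dataroot_LQ: /path/to/val/hazy    # hazy inputs

path:
  pretrain_model_G: /path/to/pretrained/dehazing.pth
```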
Here are some validation images you could try.
Best.
Moreover, note that the performance of our model depends heavily on the training dataset. We only trained the model on the Rain100H dataset, so it makes sense that it didn't work very well on lower-resolution rainy images. But you can easily retrain the model on your own dataset to get better performance.
Thanks. When I used latent-dehazing, the output is a purple-colored image that does not look like the input image; please check here.
Hi, you can test images from the HR dehazing dataset: https://codalab.lisn.upsaclay.fr/my/datasets/download/14df5793-f1c2-4f32-aaa7-d60b7d6dd6be
And if you want to test the indoor haze images, I can also provide another IR-SDE code and pretrained model for indoor dehazing.
Wow, it works great on these images. Thanks!
Just a suggestion: since your code is written for Linux, I had to make some minor adjustments to get it working on Windows. If you update lines 10, 11, and 67 of options.py and lines 40 and 41 of test.py, it will work on both Linux and Windows; you can add a condition that checks the OS and uses the appropriate commands, as in the sketch below.
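I don't know exactly what those lines of options.py and test.py contain, but here is a generic sketch of the OS check I mean (the commands and paths are placeholders, not the repo's actual code):

```python
import os
import platform

# Sketch: branch on the OS instead of hard-coding Linux-only commands.
IS_WINDOWS = platform.system() == "Windows"

# Shell commands differ between the two systems.
copy_cmd = "copy" if IS_WINDOWS else "cp"

# Building paths with os.path.join avoids hard-coded "/" or "\\" separators.
config_path = os.path.join("options", "dehazing", "test", "nasde.yml")

print(copy_cmd, config_path)
```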
That's just a suggestion, I hope you won't mind.
Thanks, you have been great help.
Yes please, the indoor dehazing code would be great, much appreciated.
Thank you for your suggestions! Since I don't have a Windows computer to test the code, I would be happy to add your comments to the readme file.
I will also provide the code for indoor dehazing later.
Hi, did you train the dehazing task using only 40 hazy/haze-free pairs? That is quite incredible. I would like to reproduce this experiment; can you provide a link to download the training data? Thanks!
Sure. Here is the challenge website, from which you can download the dehazing dataset (but you need to register for the challenge first): https://codalab.lisn.upsaclay.fr/competitions/10216.
I have applied to participate in the challenge but have not received permission. Maybe the challenge is over and no one is in charge of it anymore. Can you provide the training set, if possible?
OK, I guess the dataset will be released later, but I can send you the training and testing data by email.
If you can provide the data, that would be great. Here's my email: [email protected]
Another question: in the validation set of NonHomogeneous Dehazing there is no haze-free data, so how did you set up the validation dataset?
Hi, we just split off 5 image pairs from the training data as the validation dataset, so we actually use 35 and 5 pairs as the training and validation sets, respectively.
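In case it helps anyone reproducing this, a minimal sketch of one way to do such a split (the folder names are placeholders, and it assumes hazy and ground-truth pairs share the same file names):

```python
import random
import shutil
from pathlib import Path

# Sketch: move 5 of the 40 NTIRE pairs into a validation folder, keeping 35
# for training. Assumes each hazy/GT pair shares the same file name.
root = Path("NH-HAZE")  # placeholder dataset root
train_hazy, train_gt = root / "train" / "hazy", root / "train" / "GT"
val_hazy, val_gt = root / "val" / "hazy", root / "val" / "GT"
val_hazy.mkdir(parents=True, exist_ok=True)
val_gt.mkdir(parents=True, exist_ok=True)

names = sorted(p.name for p in train_hazy.iterdir())
random.seed(0)  # reproducible split
for name in random.sample(names, 5):
    shutil.move(str(train_hazy / name), str(val_hazy / name))
    shutil.move(str(train_gt / name), str(val_gt / name))
```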
When I trained the unet-latent model on the dehazing dataset, I found that training was very slow and GPU utilization was very low because a lot of time was spent loading data. How long did it take you to reach about 300,000 iterations during training? How did you solve this problem?
Hi, maybe you can pre-crop the images into a training dataset with smaller image sizes. Example code can be found in this script.
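For example, a minimal pre-cropping sketch (the paths and patch size are placeholders, not the repo's actual script):

```python
from pathlib import Path
from PIL import Image

# Sketch: tile each large training image into fixed-size patches so the
# data loader decodes small files instead of huge ones. Run it on the hazy
# and GT folders with the same grid so the pairs stay aligned.
SRC = Path("train/GT")        # placeholder input folder
DST = Path("train/GT_crops")  # placeholder output folder
SIZE = 512                    # patch size in pixels

DST.mkdir(parents=True, exist_ok=True)
for img_path in sorted(SRC.glob("*.png")):
    img = Image.open(img_path)
    w, h = img.size
    for top in range(0, h - SIZE + 1, SIZE):
        for left in range(0, w - SIZE + 1, SIZE):
            patch = img.crop((left, top, left + SIZE, top + SIZE))
            patch.save(DST / f"{img_path.stem}_{top}_{left}.png")
```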
Hello, did the author pre-crop the images during training?
Yes.
@azlanqazi2012 Sorry to bother you, I would like to ask what changes need to be made to the code on Windows. Thanks a lot.
Sorry to bother you, but may I ask how this problem should be solved?