DRENet model training on degraded images
Hi there,
I am not able to train the DRENet model using the degraded images. I have provided the degraded-image paths in the `ship.yaml` file as follows:

```yaml
train: /content/gdrive/MyDrive/DRENet_new/DRENet/LEVIR_ship_dataset_full/train/degrade/
val: /content/gdrive/MyDrive/DRENet_new/DRENet/LEVIR_ship_dataset_full/val/degrade/
```
I get the following error message:

```
AssertionError: train: No labels in /content/gdrive/MyDrive/DRENet_new/DRENet/LEVIR_ship_dataset_full/train/degrade.cache. Can not train without labels.
```
Please advise if I am providing the incorrect file path. Looking forward to hearing from you. Thanks.
Hi @namita-agarwal ,
- Is your data structure consistent with here?
- The dataset path in the `ship.yaml` may need to target the `images` subfolder, not `degrade`.

You can recheck the above steps, delete all the `.cache` files, and rerun.
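Following that suggestion, the `ship.yaml` entries would presumably become something like the following (adapted from the paths in the question by swapping `degrade` for `images`; verify against your own directory layout):

```yaml
train: /content/gdrive/MyDrive/DRENet_new/DRENet/LEVIR_ship_dataset_full/train/images/
val: /content/gdrive/MyDrive/DRENet_new/DRENet/LEVIR_ship_dataset_full/val/images/
```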
Thanks @WindVChen for your reply.
I have the following two queries now.
- I am wondering if I also need to rename the degraded image files (e.g. `degraded_img_1.png`) to match the following structure:

```
── train/val/test
   ├── images
   │   ├── img_1.png
   │   ├── img_2.png
   │   ├── ...
   ├── degrade   # images processed by Selective Degradation (refer to our paper for details)
   │   ├── degraded_img_1.png
   │   ├── degraded_img_2.png
   │   ├── ...
   ├── labels
   │   ├── label_1.txt
   │   ├── label_2.txt
   │   ├── ...
```

- If I point the dataset path in the `ship.yaml` to the `images` subfolder instead of the `degrade` folder, as in the structure above, wouldn't the model be trained on the normal (non-degraded) images, since the `images` subfolder contains only non-degraded images?
Thanks in advance.
- The files in different subfolders need to share the same names.
- The program will search for the degraded images in the degrade subfolder. You can refer to the code here
You can take a look at our LEVIR-Ship dataset, which may help to understand the dataset format.
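As a quick sanity check before training, the filename consistency across subfolders can be verified with a short script. This is a sketch, not part of the repo: it assumes identical basenames across `images`, `degrade`, and `labels` (per the reply above), so adjust the naming rule if your degradation script prefixes its output files differently.

```python
from pathlib import Path

def check_dataset(split_dir):
    """Report images that lack a matching degraded image or label file.

    Assumes identical basenames across subfolders; the .png extension
    for images and .txt for labels are assumptions from the thread.
    """
    split = Path(split_dir)
    missing = []
    for img in sorted((split / "images").glob("*.png")):
        degraded = split / "degrade" / img.name
        label = split / "labels" / (img.stem + ".txt")
        if not degraded.exists():
            missing.append(f"no degraded image for {img.name}")
        if not label.exists():
            missing.append(f"no label for {img.name}")
    return missing
```

Running it on each of `train`, `val`, and `test` before deleting the `.cache` files should surface any mismatch that would otherwise trigger the "No labels" assertion.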
@WindVChen many thanks for this! I tried your solution to train the DRENet model by keeping the degraded images in the `degrade` folder and the non-degraded images in the `images` folder. But I am a bit confused about this: in this way, are we training the DRENet model on degraded or non-degraded images?
Can we train the DRENet model using non-degraded images only? If yes, how so?
Thanks in advance.
Hi there,
The DRENet will leverage both degraded images and non-degraded images for training. It is recommended to have a look at the design details in our paper.
@WindVChen many thanks for your reply. I got it now completely.
I am wondering whether we can change the severity of the blurriness applied to images using `DegradeGenerate.py`. If yes, could you please advise how?
Thanks in advance.
Yes, the degradation function and its severity can be changed according to your needs. You can see how we determined the degradation function and its parameters in https://github.com/WindVChen/DRENet/issues/3 and https://github.com/WindVChen/DRENet/issues/4#issuecomment-1381286046. I think these may help a lot.
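To illustrate how a single parameter can control blur severity, here is a minimal, self-contained sketch of a Gaussian blur whose strength grows with `sigma`. Note this is not the repo's `DegradeGenerate.py` implementation, just an illustration of the general idea; the function names and the separable-convolution approach are assumptions of this sketch.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1D Gaussian kernel, normalized to sum to 1; larger sigma = stronger blur."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def degrade(image, sigma=2.0):
    """Blur a 2D grayscale array with a separable Gaussian.

    `sigma` is the severity knob: ~1.0 gives a mild blur, ~5.0 a heavy one.
    """
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    # Edge-pad, then convolve rows and columns separately (separable filter).
    padded = np.pad(image.astype(float), pad, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
```

In the actual script, the analogous step is to locate the blur call and raise or lower its severity parameter, guided by the two linked issues.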
@WindVChen thanks a million. It helped me a lot!