
DualLearning

Open yamyCD opened this issue 2 years ago • 11 comments

Hi, thanks for sharing, it's a very good study. But when I tried to replicate the code, I found that there was no DualLearning file in the project.

yamyCD avatar Dec 19 '23 12:12 yamyCD

Thank you for your issue. You only need to set super_reso=True, sr_seg_fusion=True, and before=True to use D2SL as the framework. Dual learning is a separate work still under study; we will release it in the future.
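
For reference, a minimal sketch of what those flags might look like in a training config, written as a Python dict dumped to JSON (only the three flags come from this reply; the file name and everything else are placeholder assumptions, not the repository's actual defaults):

```python
import json

# Sketch only: super_reso, sr_seg_fusion and before are the flags named above;
# the file name is a placeholder assumption, not the project's real config.
config = {
    "super_reso": True,      # enable the super-resolution branch
    "sr_seg_fusion": True,   # enable the SR/segmentation shared-feature fusion
    "before": True,          # shared feature extraction before the task heads
}

with open("d2sl_config.json", "w") as f:  # hypothetical file name
    json.dump(config, f, indent=2)
```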

Qsingle avatar Dec 19 '23 13:12 Qsingle

Hi, thanks for sharing this very good work.

In the experimental results of lesion segmentation tasks in your paper, DeepLabV3+ performed the best.

But in the corresponding vessel_segmentation.json, should I set model_name=DeepLabV3+, super_reso=true, sr_seg_fusion=true?

Are there any other settings that need to be adjusted?

Lin-dashuai avatar Mar 01 '24 03:03 Lin-dashuai

Thank you. If you want to use DeepLabV3+, I suggest you set model_name=deeplabv3plus, super_reso=true, sr_seg_fusion=true, block_name="resnet50", pretrain=true. Then you will get DeepLabV3+ with a ResNet-50 backbone.
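
A sketch of those settings as they might sit in the JSON config (only the keys named in this reply; the file name mirrors the one mentioned above, and any other keys your config needs are omitted here):

```python
import json

# Sketch: only the keys mentioned in the reply above; paths, image_size,
# dataset and other required keys are intentionally left out.
config = {
    "model_name": "deeplabv3plus",
    "super_reso": True,
    "sr_seg_fusion": True,
    "block_name": "resnet50",  # backbone selection
    "pretrain": True,          # load pretrained backbone weights
}

with open("vessel_segmentation.json", "w") as f:
    json.dump(config, f, indent=2)
```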

Qsingle avatar Mar 05 '24 08:03 Qsingle

What do 'before', 'divide', and 'fusion' mean in the code? I couldn't find any corresponding description in your paper.

In the JSON file, does 'image_size' refer to before or after super-resolution? If 'image_size = (512, 512) ' and 'upscale_rate = 4', does it mean that no matter what the resolution of the input image is, it's resized to 128x128 before entering the encoder, and the outputs from both decoder-seg and decoder-sr are 512x512?

Looking forward to your reply.

Lin-dashuai avatar Jul 23 '24 15:07 Lin-dashuai

Thank you for your attention. super_reso allows the model to use super-resolution. before means the shared feature extraction module is applied before the task head, and sr_seg_fusion controls whether the shared feature extraction module is used or not. What image_size and upscale_rate mean depends on the code you use; if you use our code, your understanding is correct.
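
To make the resolution bookkeeping concrete, a tiny sketch of the relationship confirmed above (assuming the repository's own pipeline, where the encoder sees image_size divided by upscale_rate and both heads output at image_size):

```python
# image_size is the target (label / super-resolved) resolution from the JSON config;
# the encoder input is image_size divided by upscale_rate.
image_size = (512, 512)
upscale_rate = 4

encoder_input = tuple(s // upscale_rate for s in image_size)
print("encoder input size:", encoder_input)                 # (128, 128)
print("decoder-seg / decoder-sr output size:", image_size)  # (512, 512)
```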

Qsingle avatar Aug 06 '24 07:08 Qsingle

Hi, thanks for sharing this good study. But when I tried to replicate the code on the Cityscapes dataset, I got the error "IndexError: Target 21 is out of bounds" when calculating the segmentation loss. Is there any preprocessing required for the downloaded data? Looking forward to your reply.

123abcgit avatar Aug 14 '24 03:08 123abcgit

Thank you for your attention. You need to convert the labels to the 19 training classes following the script; the trainId field will give you the right class id.
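
In case it is useful, a minimal conversion sketch (this is not the repository's own script; it assumes the official cityscapesscripts package, whose labels table exposes an id and a trainId per class; convert_mask and the file paths are hypothetical):

```python
import numpy as np
from PIL import Image
from cityscapesscripts.helpers.labels import labels  # official Cityscapes label table

# Build a lookup table from the raw ids in *_labelIds.png to the 19 trainIds;
# every class ignored in evaluation maps to 255.
lut = np.full(256, 255, dtype=np.uint8)
for label in labels:
    if 0 <= label.id < 256:
        lut[label.id] = label.trainId if 0 <= label.trainId < 19 else 255

def convert_mask(src_path, dst_path):
    """Remap one ground-truth mask to 19-class trainIds (hypothetical helper)."""
    mask = np.asarray(Image.open(src_path), dtype=np.uint8)
    Image.fromarray(lut[mask]).save(dst_path)

convert_mask("aachen_000000_000019_gtFine_labelIds.png",
             "aachen_000000_000019_gtFine_trainIds.png")
```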

Qsingle avatar Aug 16 '24 06:08 Qsingle

Thanks very much for your reply. I have converted the data to 19 classes and run the code with the settings below:

"init_lr": 0.01, "momentum": 0.9, "weight_decay": 5e-4, "epochs": 108, "lr_sche": "poly", "image_size": [512, 1024], "super_reso": true, "fusion": true, "num_classes": 19, "gpu_index": "0", "model_name": "deeplabv3plus", "backbone": "resnet101", "channel": 3, "upscale_rate": 2, "num_workers": 4, "ckpt_dir": "./ckpt", "dataset": "cityscape", "batch_size": 2

It has now been trained for 60 epochs (out of 108 in total), but the best IoU is only 0.5790. I wonder if this is normal or whether something is wrong with my settings. Sorry to bother you again. Looking forward to your reply.

123abcgit avatar Aug 16 '24 07:08 123abcgit

Please make sure the pretrained weights are loaded.
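
A quick way to check (a sketch only; the config path is hypothetical and the key name follows the pretrain=true setting recommended earlier in this thread):

```python
import json

# Verify that pretrained backbone weights are actually requested in the config.
with open("cityscape_config.json") as f:  # hypothetical path
    cfg = json.load(f)

if not cfg.get("pretrain", False):
    print("pretrain is not enabled; the backbone will train from scratch")
```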

Qsingle avatar Aug 17 '24 01:08 Qsingle

I ran into a problem: 'ImportError: cannot import name 'DualLearning' from 'imed_vision.models.segmentation' (/workspace/workspace/imed_vision-main/scripts/imed_vision/models/segmentation/__init__.py)'

ucasyjz avatar Jan 02 '25 08:01 ucasyjz

That file was removed from the project; follow the instructions above in this issue to use the DS2F.

Qsingle avatar Jan 02 '25 14:01 Qsingle