MICCAI-LITS2017

Strange dice coefficient on volume-54.nii

Open mitiandi opened this issue 6 years ago • 23 comments

I randomly split the training dataset (131 cases) into two non-overlapping subsets: a training set (105 cases) and a test set (26 cases). After training the network and testing it on the test set (26 cases), I obtained a per-case Dice of 0.932, which is lower than your result (0.957). Most importantly, I found that the Dice coefficient on volume-54.nii is very poor (0.18). I then visualized the segmentation result of volume-54.nii, compared it to its ground truth, and found a misalignment of about 10 slices between them. For example, the segmentation result starts at the 62nd slice, while the ground truth starts at the 52nd slice.
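For anyone who wants to reproduce the check, here is a minimal sketch (assuming SimpleITK and NumPy; the file names are placeholders, not the repository's actual output paths) of how to find where the liver mask first appears in the prediction versus the ground truth:

    import numpy as np
    import SimpleITK as sitk

    def first_liver_slice(nii_path):
        """Index of the first z-slice that contains any foreground label."""
        arr = sitk.GetArrayFromImage(sitk.ReadImage(nii_path))  # shape (z, y, x)
        nonzero = np.where(arr.sum(axis=(1, 2)) > 0)[0]
        return int(nonzero[0]) if len(nonzero) else None

    print(first_liver_slice('segmentation-54.nii'))   # ground truth, e.g. 52
    print(first_liver_slice('pred-volume-54.nii'))    # prediction, e.g. 62

A gap of around 10 between the two printed indices would confirm the misalignment described above.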

mitiandi avatar Oct 29 '18 12:10 mitiandi

And there is another question that confuses me: 'epoch=3000' is used to train the network, but I found that the network tends to converge quite early (perhaps before epoch 1000).

mitiandi avatar Oct 29 '18 13:10 mitiandi

Please tell me whether you are feeding the whole volume for training or splitting it into 3D patches? Please confirm. Thanks.

ahmadmubashir avatar Oct 29 '18 14:10 ahmadmubashir

And which code file are you using for testing?

ahmadmubashir avatar Oct 29 '18 14:10 ahmadmubashir

Please tell me whether you are feeding the whole volume for training or splitting it into 3D patches? Please confirm. Thanks.

As the author did, I used 256×256×48 3D patches as the input to the network. The patches were obtained with 'data_prepare/get_random_data.py' and 'dataset/data_random.py'. The former pre-processes the training data, including down-sampling in the xy plane and keeping the slices that contain liver (expanded by 20 slices in both the positive and negative directions along the z axis), while the latter randomly extracts 48 continuous slices from the results of the former. The latter's output is fed directly into the network.
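To make that concrete, here is a minimal sketch of the pre-processing just described (assuming SimpleITK, NumPy, and scipy.ndimage; this is an illustration of the idea, not the actual 'get_random_data.py'):

    import numpy as np
    import scipy.ndimage as ndimage
    import SimpleITK as sitk

    expand_slice = 20   # margin along z around the liver, as described above
    down_scale = 0.5    # 512x512 -> 256x256 in the xy plane

    ct_array = sitk.GetArrayFromImage(sitk.ReadImage('volume-0.nii'))         # (z, y, x)
    seg_array = sitk.GetArrayFromImage(sitk.ReadImage('segmentation-0.nii'))

    # Down-sample only in the xy plane (assumption: the mask is down-sampled
    # the same way; adjust if your labels should stay at 512x512)
    ct_array = ndimage.zoom(ct_array, (1, down_scale, down_scale), order=3)
    seg_array = ndimage.zoom(seg_array, (1, down_scale, down_scale), order=0)

    # Keep the slices containing liver, expanded by 20 slices in both z directions
    z = np.any(seg_array > 0, axis=(1, 2))
    start, end = np.where(z)[0][[0, -1]]
    start = max(0, start - expand_slice)
    end = min(seg_array.shape[0] - 1, end + expand_slice)
    ct_array = ct_array[start:end + 1]
    seg_array = seg_array[start:end + 1]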

mitiandi avatar Oct 30 '18 03:10 mitiandi

And which code file are you using for testing?

val.py

mitiandi avatar Oct 30 '18 03:10 mitiandi

Please tell me whether you are feeding the whole volume for training or splitting it into 3D patches? Please confirm. Thanks.

As the author did, I used 256×256×48 3D patches as the input to the network. The patches were obtained with 'data_prepare/get_random_data.py' and 'dataset/data_random.py'. The former pre-processes the training data, including down-sampling in the xy plane and keeping the slices that contain liver (expanded by 20 slices in both the positive and negative directions along the z axis), while the latter randomly extracts 48 continuous slices from the results of the former. The latter's output is fed directly into the network.

I did the above steps. The first script gave me a 256×256×n image, but it did not give me 256×256×48 3D patches. Should I create them manually? I understand the output of 'data_prepare/get_random_data.py', but the second script, 'dataset/data_random.py', does not give me exactly 256×256×48 3D patches. It is used in train_ds.py as `from dataset.dataset_random import train_ds`. Does this make the 256×256×48 3D patches automatically, or do we have to make these samples manually? One other issue I found is that the volume I obtain after 'data_prepare/get_random_data.py' has size 256×256×n, while its ground truth has size 512×512×n. Why? Please help me. Thanks.

ahmadmubashir avatar Oct 30 '18 10:10 ahmadmubashir

Please tell me whether you are feeding the whole volume for training or splitting it into 3D patches? Please confirm. Thanks.

As the author did, I used 256×256×48 3D patches as the input to the network. The patches were obtained with 'data_prepare/get_random_data.py' and 'dataset/data_random.py'. The former pre-processes the training data, including down-sampling in the xy plane and keeping the slices that contain liver (expanded by 20 slices in both the positive and negative directions along the z axis), while the latter randomly extracts 48 continuous slices from the results of the former. The latter's output is fed directly into the network.

I did the above steps. The first script gave me a 256×256×n image, but it did not give me 256×256×48 3D patches. Should I create them manually? I understand the output of 'data_prepare/get_random_data.py', but the second script, 'dataset/data_random.py', does not give me exactly 256×256×48 3D patches. It is used in train_ds.py as `from dataset.dataset_random import train_ds`. Does this make the 256×256×48 3D patches automatically, or do we have to make these samples manually? Thanks. Please help me.

The former is correct. The 3D patches were not saved; the data is automatically organized and loaded in the form of 256×256×48 patches. Its implementation, in 'dataset/data_random.py', is as follows:


    # Randomly pick 48 consecutive slices along the z axis (size = 48 here)
    start_slice = random.randint(0, ct_array.shape[0] - size)
    end_slice = start_slice + size - 1

    # Crop the CT volume and the segmentation to the same window
    ct_array = ct_array[start_slice:end_slice + 1, :, :]
    seg_array = seg_array[start_slice:end_slice + 1, :, :]
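
For context, a crop like this usually sits inside the dataset's __getitem__, roughly like the following sketch (assuming PyTorch and SimpleITK; the class name and paths are illustrative, not the exact repository code):

    import random
    import numpy as np
    import SimpleITK as sitk
    import torch
    from torch.utils.data import Dataset

    class RandomSliceDataset(Dataset):
        """Returns a random 48-slice window from each pre-processed volume."""

        def __init__(self, ct_paths, seg_paths, size=48):
            self.ct_paths = ct_paths
            self.seg_paths = seg_paths
            self.size = size

        def __len__(self):
            return len(self.ct_paths)

        def __getitem__(self, index):
            ct_array = sitk.GetArrayFromImage(sitk.ReadImage(self.ct_paths[index]))
            seg_array = sitk.GetArrayFromImage(sitk.ReadImage(self.seg_paths[index]))

            # Random 48-slice window along z, exactly as in the snippet above
            start_slice = random.randint(0, ct_array.shape[0] - self.size)
            end_slice = start_slice + self.size - 1
            ct_array = ct_array[start_slice:end_slice + 1, :, :]
            seg_array = seg_array[start_slice:end_slice + 1, :, :]

            # (1, 48, H, W) input tensor and (48, H, W) label tensor
            ct_tensor = torch.from_numpy(ct_array.astype(np.float32)).unsqueeze(0)
            seg_tensor = torch.from_numpy(seg_array.astype(np.int64))
            return ct_tensor, seg_tensor

A DataLoader built on such a dataset then feeds 256×256×48 patches to the network automatically; nothing needs to be saved to disk.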

mitiandi avatar Oct 30 '18 11:10 mitiandi

Hi @mitiandi, @ahmadmubashir, does this project start by using data_prepare/get_random_data.py to pre-process the training data and dataset/data_random.py to extract 48 continuous slices, after which I can run `python train_ds.py` directly to start training? Looking forward to your reply! Best, Ming

zz10001 avatar Jul 24 '19 08:07 zz10001

To tell the truth, I have forgotten the details since a long time has passed, but that seems right. All you need to do is change the data paths to yours. Good luck~

mitiandi avatar Jul 24 '19 08:07 mitiandi

To tell the truth, I have forgotten the details since a long time has passed, but that seems right. All you need to do is change the data paths to yours. Good luck~

Thanks for your kind help, have a good day!

zz10001 avatar Jul 24 '19 08:07 zz10001

I randomly split the training dataset (131 cases) into two non-overlapping subsets: a training set (105 cases) and a test set (26 cases). After training the network and testing it on the test set (26 cases), I obtained a per-case Dice of 0.932, which is lower than your result (0.957). Most importantly, I found that the Dice coefficient on volume-54.nii is very poor (0.18). I then visualized the segmentation result of volume-54.nii, compared it to its ground truth, and found a misalignment of about 10 slices between them. For example, the segmentation result starts at the 62nd slice, while the ground truth starts at the 52nd slice.

Hi, have you found the reason, or how did you solve it?

Oct6ber avatar Aug 01 '20 12:08 Oct6ber

Hi, have you found the reason, or how did you solve it?

You can solve it by https://github.com/assassint2017/MICCAI-LITS2017/issues/6#issue-375503582.

zz10001 avatar Aug 01 '20 12:08 zz10001

Hi, have you found the reason, or how did you solve it?

You can solve it by #6 (comment).

Thank you very much

Oct6ber avatar Aug 01 '20 12:08 Oct6ber

Hi, have you found the reason, or how did you solve it?

You can solve it by #6 (comment).

Hi, I used this method, but the Dice coefficient on volume-43 is still 0.67. I want to know whether this is a normal value.

Oct6ber avatar Aug 02 '20 08:08 Oct6ber

Hi, I used this method, but the Dice coefficient on volume-43 is still 0.67. I want to know whether this is a normal value.

Sorry, I haven't met this problem. I just use volumes 101-130 for validation and 1-100 for training, like this (see screenshot).
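If it helps, that split can be expressed in a few lines (a sketch only; the directory layout and file names are assumptions):

    import os

    # Illustrative split following the comment above: volumes 1-100 for training,
    # 101-130 for validation (file naming assumed to be volume-<i>.nii).
    data_dir = 'LITS/train'          # placeholder path
    train_ids = list(range(1, 101))
    val_ids = list(range(101, 131))

    train_ct = [os.path.join(data_dir, f'volume-{i}.nii') for i in train_ids]
    train_seg = [os.path.join(data_dir, f'segmentation-{i}.nii') for i in train_ids]
    val_ct = [os.path.join(data_dir, f'volume-{i}.nii') for i in val_ids]
    val_seg = [os.path.join(data_dir, f'segmentation-{i}.nii') for i in val_ids]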

zz10001 avatar Aug 02 '20 10:08 zz10001

Hi, I used this method, but the Dice coefficient on volume-43 is still 0.67. I want to know whether this is a normal value.

Sorry, I haven't met this problem. I just use volumes 101-130 for validation and 1-100 for training, like this (see screenshot).

Thank you very much

Oct6ber avatar Aug 02 '20 14:08 Oct6ber

The input of DialResUnet is 512×512, but the output is 1024×1024. Shouldn't the input and output be the same size? Do you know why this is?

lcl180 avatar May 11 '21 14:05 lcl180

What is the procedure for running the code of this project? Can you share it? Thank you.

lcl180 avatar May 12 '21 12:05 lcl180

Hi, I used this method, but the Dice coefficient on volume-43 is still 0.67. I want to know whether this is a normal value.

Sorry, I haven't met this problem. I just use volumes 101-130 for validation and 1-100 for training, like this (see screenshot).

(screenshot of results attached) Hi, why are my results like this? The Dice and Jaccard values seem to be wrong.

life-8079 avatar Mar 24 '22 02:03 life-8079

Hi, why are my results like this? The Dice and Jaccard values seem to be wrong.

Have you visualized the prediction with ITK-SNAP or another viewer? Maybe you should look at the prediction first.

zz10001 avatar Mar 24 '22 16:03 zz10001

Hi, why are my results like this? The Dice and Jaccard values seem to be wrong.

Have you visualized the prediction with ITK-SNAP or another viewer? Maybe you should look at the prediction first.

(screenshot attached) Hi, the result is like this: the image's background is 1 and the liver is 0. Could you help me?

life-8079 avatar Mar 25 '22 07:03 life-8079

Hi, the result is like this: the image's background is 1 and the liver is 0. Could you help me?

It seems the colors for liver and background have simply been swapped; you just need to negate the .nii file you predicted.
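A minimal sketch of that negation (assuming SimpleITK and a binary 0/1 prediction; file names are placeholders):

    import SimpleITK as sitk

    pred = sitk.ReadImage('pred-volume.nii')    # background = 1, liver = 0
    arr = sitk.GetArrayFromImage(pred)

    arr = 1 - arr                               # swap the two labels

    fixed = sitk.GetImageFromArray(arr)
    fixed.CopyInformation(pred)                 # keep spacing/origin/direction
    sitk.WriteImage(fixed, 'pred-volume-fixed.nii')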

zz10001 avatar Mar 25 '22 18:03 zz10001

Hi @zz10001, @life-8079, I'm wondering why para.size is 48 (see screenshot). After I changed para.size to 32 and para.slice_thickness from 1 to 4, I found that kiunet_org no longer works. I'd appreciate any help! (see screenshot)

zhouyizhuo avatar Dec 03 '23 09:12 zhouyizhuo