
Testing the accuracy of segmentation

Open build2create opened this issue 7 years ago • 14 comments

As defined in SegmentationModels.py, the function def get_dice_coef(self, test_img, label): requires a labelled ground-truth image label. I created one of size 240 x 240 using save_labels in brain_pipeline.py. When the call to get_dice_coef(self, test_img, label) is made, it gives a ValueError on reshape at this line: imgs = io.imread(test_img).astype('float').reshape(5,240,240).

This is because the first line of get_dice_coef() calls predict_image (reference: segmentation = self.predict_image(test_img)), but test_img is of size 240 x 240. def show_segmented_image(self, test_img, modality='t1c', show=False): produces the 240 x 240 slice of the test image. Isn't that what is used for calculating the dice coefficient?
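
For reference, a minimal check that shows why the reshape fails (the path is a placeholder; skimage is the only dependency). get_dice_coef apparently expects an image whose pixels can be reshaped to (5, 240, 240), e.g. a 1200 x 240 strip, not the single 240 x 240 slice that show_segmented_image works with:

    from skimage import io

    img = io.imread('/path/to/test_strip.png').astype('float')  # placeholder path
    print(img.shape)
    if img.size == 5 * 240 * 240:
        imgs = img.reshape(5, 240, 240)  # the reshape made inside get_dice_coef
    else:
        # a single 240 x 240 slice has only 57600 pixels, so this reshape raises the ValueError
        print('cannot reshape {} pixels into (5, 240, 240)'.format(img.size))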

build2create avatar Mar 24 '17 05:03 build2create

Yes, that is used. By the way, the code has too many errors to work as-is, so don't copy it blindly. You may want to look at my repo if you need some pointers; it is similar to Nikki's but fully functional.

lazypoet avatar Mar 27 '17 06:03 lazypoet

@lazypoet I saw your Metrics.py. Did you convert the label images into PNGs first? I am guessing this from your accuracy method. Actually, I am pretty much stuck at this point. I have created test images using Nikki's code, and they have 172800 pixels (3 x 240 x 240). I am calling your accuracy function as accuracy(io.imread(test_image+prefix+str(i)+suffix).astype('float').reshape(3,240,240), io.imread(ground_truth+"0_"+str(i)+"L"+suffix)), and it gives answers in the range of 1-3, mostly 2.something or 3. Is that correct? I am also interested in using the DSC method you wrote. I have the ground-truth images, but most of them are black (generated from the code of this repository), so do I need to convert them again?

Please let me know what steps I need to take for the test images and ground-truth images before calling your function. It already took a lot of time to generate the test images using Nikki's code; will those images not work?

build2create avatar Mar 28 '17 08:03 build2create

@lazypoet I have tried something like this:

    test_image="/home/adminsters/Documents/SegmentedImages_1/"
    ground_truth="/home/adminsters/Documents/Labels/"
    prefix="seg_0_"
    suffix=".png"
    #load the model first
    model = SegmentationModel(loaded_model=True)
    sum=0
    for i in range(0,155):
	n=model.DSC(io.imread(test_image+prefix+str(i)+suffix).astype('float').reshape(3,240,240),io.imread(ground_truth+"0_"+str(i)+"L"+suffix))
	if(n>0):
	  sum=sum+n
    print sum

where DSC is

    def DSC(self, pred, orig_label):
        '''Calculates the Dice Score Coefficient.
        INPUT: predicted labels, original labels (single 240 x 240 slices)
        OUTPUT: float in [0, 1], or -1 if neither slice contains tumor
        '''
        # true positives: tumor pixels (labels 1-4) predicted with the correct class
        TP = len(pred[((pred == 1) | (pred == 2) | (pred == 3) | (pred == 4)) & (pred == orig_label)])
        # denominator: tumor pixels in the prediction plus tumor pixels in the ground truth
        denom = len(pred[(pred == 1) | (pred == 2) | (pred == 3) | (pred == 4)]) + \
                len(orig_label[(orig_label == 1) | (orig_label == 2) | (orig_label == 3) | (orig_label == 4)])
        if denom == 0:
            return -1
        print 2. * TP / float(denom)
        return 2. * TP / float(denom)

build2create avatar Mar 28 '17 08:03 build2create

@build2create In any of the methods in Metrics.py, you have to pass a single slice of the test image and a single slice of the respective ground truth as parameters; that way you get results between 0 and 1.
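
To make that concrete, here is a rough numpy-only sketch of scoring one 240 x 240 predicted slice against one 240 x 240 ground-truth slice and averaging over the non-empty slices (the filenames are placeholders, and it treats every non-zero label as tumor, so it is a whole-tumor Dice rather than a per-class one):

    import numpy as np
    from skimage import io

    scores = []
    for i in range(155):                              # one axial slice at a time
        pred = io.imread('seg_0_{}.png'.format(i))    # placeholder: predicted 240 x 240 slice
        gt = io.imread('0_{}L.png'.format(i))         # placeholder: ground-truth 240 x 240 slice
        pred_tumor = pred > 0                         # any of labels 1-4
        gt_tumor = gt > 0
        denom = pred_tumor.sum() + gt_tumor.sum()
        if denom == 0:                                # neither slice contains tumor; skip it
            continue
        scores.append(2.0 * np.logical_and(pred_tumor, gt_tumor).sum() / denom)
    print(np.mean(scores))                            # mean whole-tumor Dice, between 0 and 1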

lazypoet avatar Mar 28 '17 10:03 lazypoet

@lazypoet I am using the BRATS dataset. Suppose I convert the test images in the folder brats_tcia_pat123_0083 into slices; I get 155 slices of test images. Now I guess the ground-truth labels are in the training folders, the ones with the substring 3more. Am I correct? Where do I find the corresponding ground truth?

Also, according to Nikki's code, those 155 slices are 3 x 240 x 240 and the labels are 240 x 240 PNGs. Please elaborate on this. Sorry in advance for the trouble!

build2create avatar Mar 28 '17 11:03 build2create

So, the images are 3D with an axial view, hence the 155 slices. E.g., slice 74 represents the same anatomical slice across the different modalities of a single patient's 3D brain images, and the 74th slice of the ground truth is that slice's segmentation. Use your predicted image slice (i.e., the 74th slice of the predicted 3D brain image) and the corresponding ground truth. Each slice is 240 x 240, and there are 4 modalities plus a ground truth for each patient, hence 5 x 240 x 240.
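
For reference, a small sketch of pulling a matching ground-truth slice out of a BRATS .mha volume with SimpleITK (the filename is only an example; use the OT file from the patient's training folder):

    import SimpleITK as sitk

    # the label volume comes out with shape (155, 240, 240): 155 axial slices of 240 x 240
    gt_volume = sitk.GetArrayFromImage(sitk.ReadImage('VSD.Brain_3more.XX.O.OT.mha'))
    gt_slice = gt_volume[74]  # 240 x 240 ground-truth slice, values in {0, 1, 2, 3, 4}
    # compare gt_slice against the 74th slice of your predicted segmentation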


lazypoet avatar Mar 28 '17 11:03 lazypoet

@lazypoet Yes, correct. What I am asking is: where do I look for the corresponding ground truth? Each folder in the test dataset contains the t1, t1c, flair and t2 modalities. The ground truth is in the training folder (the one with the substring 3more), right?

build2create avatar Mar 28 '17 11:03 build2create

Yup, true.
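
For instance, one way to locate it (the directory layout and pattern are guesses; adapt them to where you unpacked BRATS):

    import glob

    # the ground-truth volume sits in the '...3more...' sub-folder of each training patient
    gt_files = glob.glob('/path/to/BRATS/Training/*/*/*3more*/*.mha')  # placeholder pattern
    print(gt_files)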


lazypoet avatar Mar 28 '17 11:03 lazypoet

@lazypoet So that means I need to test the quality on the images that are in the training folder (I mean, use the images in the training folder for the dice coefficient calculation), right? I was using the ones in the testing folder so far...

build2create avatar Mar 28 '17 11:03 build2create

Assuming you are trying to predict images in your "testing folder", yes you need to use those images.


lazypoet avatar Mar 28 '17 11:03 lazypoet

@lazypoet Sir, please be clear... What I am not getting is this: when it comes to prediction, use the images in the testing folder, but when it comes to testing the accuracy, use the images in the training folder. Is this right?

build2create avatar Mar 28 '17 11:03 build2create

Learn more about training, validation and testing data. Brush up on your basic machine learning concepts; it will help you a lot.
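
For example, a common way to do this with BRATS-style data is to split by patient, so no slices from a held-out patient leak into training (the directory is a placeholder):

    import os
    import random

    patients = sorted(os.listdir('/path/to/BRATS/Training'))  # placeholder path; one folder per patient
    random.seed(0)                                            # reproducible split
    random.shuffle(patients)
    n_held_out = int(0.2 * len(patients))
    test_patients = patients[:n_held_out]                     # report Dice only on these
    train_patients = patients[n_held_out:]                    # train the model only on these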

lazypoet avatar Mar 28 '17 11:03 lazypoet

Is there anyone who can help me run this code successfully? I am unable to find the segmented images. I don't know what is supposed to happen after running the first file. I am not getting any errors, but I don't know where to find the resulting images. Can anyone please tell me which folder I have to create in advance?
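
For example, is creating a folder like this in advance the right idea? (The name 'results/' is only my guess; I don't know which path the save call in the code actually uses.)

    import os

    out_dir = 'results/'  # my guess at a name; replace with whatever path the code writes to
    if not os.path.exists(out_dir):
        os.makedirs(out_dir)  # create the output folder before running the script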

Jiyya avatar Jul 25 '17 13:07 Jiyya

I'm running the training phase of the code in brain_tumor_segmentation_models.py, but I find the accuracy is very low, at about 0.2. I run the code with Python 2.7, and I have increased the number of patches to 100000 with augmentation. I guess the problem might be caused by the labels, because the labels from the "more" file I got are all black, but I don't know the reason. What does your label look like? What should I do to increase the accuracy? Could you please help me solve this problem? Thanks
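
For reference, this is how I am checking one label slice (the path is a placeholder). I understand that labels 1-4 look essentially black in an ordinary image viewer, so I print the raw values instead of trusting my eyes:

    import numpy as np
    from skimage import io

    label = io.imread('/path/to/label_slice.png')  # placeholder path to one ground-truth slice
    print(np.unique(label))                        # e.g. [0 1 2 4] means the labels are there, just very dark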

tiantian-li avatar Apr 13 '18 09:04 tiantian-li