open-solution-mapping-challenge

How to visualize the result? Is there a tool that can do it?

Open sanersbug opened this issue 6 years ago • 33 comments

The result is a submission.json file; how do I visualize it?

sanersbug avatar Jul 25 '18 00:07 sanersbug

I hope this helps:

from pycocotools import mask as cocomask
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import json
import os
from skimage import measure

def delete_zero_bfstr(ss):
    # strip leading zeros from the filename stem, e.g. '000123' -> '123'
    # (an all-zero string is returned unchanged)
    return ss.lstrip('0') or ss

def find_id_ann(ann, imgid):
    # collect all annotations whose image_id matches the given id
    return [anni for anni in ann if str(anni['image_id']) == imgid]

with open('E:/torch/open-solution-mapping-challenge/tmp/prediction.json', 'r') as f:
    prediction_json = json.load(f)

testimages_dir = 'E:/torch/open-solution-mapping-challenge/data/test_images'
testimages_list = os.listdir(testimages_dir)

for image_name in testimages_list:
    img_filepath = os.path.join(testimages_dir, image_name)
    img = mpimg.imread(img_filepath)
    img_real = mpimg.imread(img_filepath)
    mask = np.zeros(img.shape[:2])
    img_id = delete_zero_bfstr(image_name.split('.')[0])
    img_annlist = find_id_ann(prediction_json, img_id)
    for ann in img_annlist:
        # decode the RLE segmentation into a binary mask and accumulate it
        m = cocomask.decode(ann['segmentation'])
        mask += m
    mask = mask > 0
    contours = measure.find_contours(mask, 0.5)
    img.flags.writeable = True  # may raise ValueError if the array does not own its data
    img[:, :, 0][mask] = 255  # paint the red channel wherever a building was predicted
    plt.figure()
    plt.subplot(1, 2, 1)
    plt.title('original image')
    plt.imshow(img_real)
    plt.axis('off')
    plt.subplot(1, 2, 2)
    plt.title('masked image')
    plt.imshow(img)
    for contour in contours:
        plt.plot(contour[:, 1], contour[:, 0], color='red', linewidth=1)
    plt.axis('off')
    plt.show()

It is best to run this under a Python interpreter. If you want to display the images one by one, set a breakpoint inside the for loop. Remember to rewrite the paths: prediction.json is the same as submission.json, and the test_images path should point to your own test images.
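
If you would rather not set a breakpoint, a minimal alternative is to replace the final plt.show() inside the loop with a paused, non-blocking variant:

    plt.show(block=False)
    input('Press Enter for the next image...')  # pause until you confirm
    plt.close()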

Now that you have a submission.json, can you tell me how big your training and validation sets are, and how many epochs you trained for?

Is there a big difference between the cross-validation results during training and the results in the final evaluation? I am running into this problem now and would like to hear about your experience.

newcoder0531 avatar Jul 25 '18 01:07 newcoder0531

@newcoder0531 Wow, thank you very much! Once I finish running it, I'll answer your questions.

sanersbug avatar Jul 25 '18 01:07 sanersbug

@newcoder0531 I only started this project a few days ago, so I still have many problems. I trained with the default parameters:

  1. I used the data included in 'annotation-small.json', with the default number of epochs (100).
  2. My result is quite bad and I don't know how to improve it; the validation accuracy is only about 0.7.

sanersbug avatar Jul 25 '18 04:07 sanersbug

@sanersbug If this is the result from submission.json, then you have not run into the same problem as me; I think your result is correct. The reason for the inaccuracy is the learning rate and the training set. You should fit the network on all of the training images and gradually reduce the learning rate in neptune.yaml. To do that, load your 'best.torch' from the checkpoint/unet path under your experiment_dir; issue #160 explains how to load the model. Also, what operating system is your machine running, Linux or Windows? I want to collect some information to help solve my current problem.
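
For example, a sketch of what I mean in neptune.yaml (the parameter names here are illustrative; check the exact names in your own copy):

parameters:
  lr: 0.0001      # a lower learning rate than the one you trained with so far
  epochs_nr: 30   # a few more epochs at the reduced rate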

newcoder0531 avatar Jul 25 '18 05:07 newcoder0531

@newcoder0531 OK, thank you. My system is Linux (Ubuntu 16.04, CUDA 8.0).

sanersbug avatar Jul 25 '18 05:07 sanersbug

@newcoder0531 Another question: do you know how to train with your own data? I have made labels as 'tif' files, but I don't know how to produce the 'annotation.json' file. Is there any tool to do this? Thanks a lot!

sanersbug avatar Jul 26 '18 02:07 sanersbug

@sanersbug Sorry, I don't have experience in this area; I think it's hard to complete the annotation entirely by yourself. Maybe you can find help in these links: http://cocodataset.org/#format-data https://www.crowdai.org/challenges/mapping-challenge Good luck!
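
I haven't tried it myself, but based on the COCO format documentation a rough, untested sketch could look like this (the paths are placeholders, each connected component of a single-channel binary label is assumed to be one building instance, and I believe this challenge uses category id 100 for buildings):

import json
import os
import numpy as np
from PIL import Image
from pycocotools import mask as cocomask
from skimage import measure

label_dir = 'path/to/tif_labels'  # placeholder path
images, annotations = [], []
ann_id = 1
for img_id, fname in enumerate(sorted(os.listdir(label_dir)), start=1):
    lab = np.array(Image.open(os.path.join(label_dir, fname))) > 0  # assumes single-channel labels
    h, w = lab.shape[:2]
    images.append({'id': img_id, 'file_name': fname, 'height': h, 'width': w})
    labeled = measure.label(lab)  # one instance per connected component
    for inst in range(1, labeled.max() + 1):
        rle = cocomask.encode(np.asfortranarray((labeled == inst).astype(np.uint8)))
        area = float(cocomask.area(rle))
        bbox = [float(x) for x in cocomask.toBbox(rle)]
        rle['counts'] = rle['counts'].decode('ascii')  # bytes -> str so json can serialize it
        annotations.append({'id': ann_id, 'image_id': img_id, 'category_id': 100,
                            'segmentation': rle, 'area': area, 'bbox': bbox, 'iscrowd': 0})
        ann_id += 1

with open('annotation.json', 'w') as f:
    json.dump({'images': images, 'annotations': annotations,
               'categories': [{'id': 100, 'name': 'building'}]}, f)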

newcoder0531 avatar Jul 26 '18 03:07 newcoder0531

@newcoder0531 thank you very much!

sanersbug avatar Jul 26 '18 03:07 sanersbug

@sanersbug This project supports image (png) labels but you are welcome to change the loaders to suit your needs.

It should be modified here:

https://github.com/neptune-ml/open-solution-mapping-challenge/blob/master/src/loaders.py#L40-L42 I believe
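
The rough idea, only as a sketch (the real loader interface there is different; this just shows the substitution of COCO decoding with reading your own label files):

import numpy as np
from PIL import Image

def load_mask(path):
    # read your own PNG/TIF label file and binarize it to a {0, 1} array
    return (np.array(Image.open(path)) > 0).astype(np.uint8)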

jakubczakon avatar Jul 26 '18 10:07 jakubczakon

@jakubczakon OK, thank you, I will try it!

sanersbug avatar Jul 26 '18 14:07 sanersbug

@jakubczakon Sorry to disturb you again. I have tried to train with my own data, but it's too hard for me. I don't even know where or how to modify the code you pointed to; I have looked at it for a long time, but I have no idea. Could you give me some hints?

sanersbug avatar Jul 30 '18 14:07 sanersbug

[screenshot of error] I am getting this error, what should I do?

animeshsahu80 avatar Jul 25 '19 00:07 animeshsahu80

@animeshsahu80 Looking at this issue https://github.com/pandas-dev/pandas/issues/24839 it may be due to the numpy/pandas version. Which versions of those libraries are you using?
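
You can check with:

import numpy, pandas
print(numpy.__version__, pandas.__version__)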

jakubczakon avatar Jul 25 '19 12:07 jakubczakon

[screenshots showing the installed numpy and pandas versions]

animeshsahu80 avatar Jul 25 '19 13:07 animeshsahu80

Also, when I try to print(img.flags), I get this: [screenshot]

animeshsahu80 avatar Jul 25 '19 13:07 animeshsahu80

Also, just to check whether the code works on my machine, I only ran 1 epoch. Will there be any kind of segmentation after running prediction?

animeshsahu80 avatar Jul 25 '19 13:07 animeshsahu80

Which operating system are you using?

And I don't think there will be any reasonable predictions after 1 epoch.

jakubczakon avatar Jul 25 '19 13:07 jakubczakon

Ubuntu 16.04

animeshsahu80 avatar Jul 25 '19 13:07 animeshsahu80

That is unexpected, as I am working on this system and it runs just fine. It seems the fix could be to change

img = img.DO_SOMETHING()

to

img = img.copy().DO_SOMETHING()

Can you point to the exact place in the code where it fails? Is it https://github.com/neptune-ml/open-solution-mapping-challenge/blob/master/src/utils.py#L277 ?

jakubczakon avatar Jul 25 '19 14:07 jakubczakon


I am using the code mentioned above; the line img.flags.writeable=True is where I get the error.

animeshsahu80 avatar Jul 25 '19 14:07 animeshsahu80

Instead of

img.flags.writeable=True

can you try this

img = img.copy() 
img[:,:,0][mask]=255

or this

img_copy = img.copy() 
img_copy[:,:,0][mask]=255

jakubczakon avatar Jul 25 '19 14:07 jakubczakon

Also, what is mpimg in img=mpimg.imread(img_filepath)? I usually load with PIL.Image.open(img_filepath, 'r'); can you try loading with PIL instead?
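
For example (the resulting array owns its data, so it is writeable):

import numpy as np
from PIL import Image

img = np.array(Image.open(img_filepath).convert('RGB'))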

jakubczakon avatar Jul 25 '19 14:07 jakubczakon

Thanks!! Sure, I'll try with PIL.

animeshsahu80 avatar Jul 25 '19 16:07 animeshsahu80

Thanks, both PIL and the .copy() method are working, but I am getting an empty output: [screenshot]. Is this because of only 1 epoch?

animeshsahu80 avatar Jul 25 '19 18:07 animeshsahu80

I think so. Basically, if your mAP is still at zero during training, I wouldn't expect anything other than empty predictions.

jakubczakon avatar Jul 26 '19 08:07 jakubczakon

Hi, I see that all evaluation steps in the code run on the validation image set. Is there a reason why test_images are not used, and how can we add them to the evaluation? Thanks

lunaalvarez avatar Apr 29 '20 11:04 lunaalvarez

Hi @lunaalvarez,

For test images (and any other folder), it works in the following way:

python main.py predict_on_dir \
--pipeline_name unet_tta_scoring_model \
--chunk_size 1000 \
--dir_path path/to/inference_directory \
--prediction_path path/to/predictions.json

We have this predict_on_dir command. Once it finishes, you'll get a predictions.json on which you can calculate metrics. You can also use the exploration notebook to inspect the results.
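
To calculate the metrics on predictions.json, a minimal sketch with pycocotools (this assumes you have ground-truth annotations for the images in that directory; the paths are illustrative):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('path/to/ground_truth_annotation.json')
coco_dt = coco_gt.loadRes('path/to/predictions.json')
coco_eval = COCOeval(coco_gt, coco_dt, iouType='segm')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()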

I hope this helps!

jakubczakon avatar Apr 29 '20 11:04 jakubczakon

Oops, quite obvious really. Don't know how I missed it. Thanks for the quick reply :)

lunaalvarez avatar Apr 29 '20 14:04 lunaalvarez

Hi @jakubczakon, I got the predictions on the default test_images (all training parameters at their defaults, with annotation-small.json).

 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.729
 Average Precision  (AP) @[ IoU=0.50      | area= small | maxDets=100 ] = 0.102
 Average Precision  (AP) @[ IoU=0.50      | area= large | maxDets=100 ] = 0.825
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.796
 Average Recall     (AR) @[ IoU=0.50      | area= small | maxDets=100 ] = 0.353
 Average Recall     (AR) @[ IoU=0.50      | area= large | maxDets=100 ] = 0.882
2020-08-04 11-21-15 mapping-challenge >>> Mean precision on validation is 0.7294194291809457
2020-08-04 11-21-15 mapping-challenge >>> Mean recall on validation is 0.7956491552881277

And I followed the code above to visualize them, but I got weird mask images: [screenshot]

It seems the model learned something strange; these masks don't belong to the original picture. Did the visualization procedure go wrong? I mean, the precision is near 0.73, so the result shouldn't look like this. What do you think?

willhunger avatar Aug 04 '20 07:08 willhunger

It seems like some image indexing in the prediction visualization is wrong -> the model clearly learned to find the buildings.
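
One quick check, a minimal sketch assuming the paths from the visualization script above: print a few image ids from the prediction file next to the ids derived from the filenames and make sure they line up.

import json
import os

with open('E:/torch/open-solution-mapping-challenge/tmp/prediction.json') as f:
    preds = json.load(f)
print(sorted({str(p['image_id']) for p in preds})[:10])
print(sorted(fn.split('.')[0].lstrip('0')
             for fn in os.listdir('E:/torch/open-solution-mapping-challenge/data/test_images'))[:10])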

Did you run the exploration notebook from the repo to create the visualization? -> Go to notebook

I just want to make sure that we are on the same page with what is not working :)

jakubczakon avatar Aug 04 '20 07:08 jakubczakon