
Evaluate original squeezeDetPlus model on KITTI Benchmark

cksl opened this issue 8 years ago · 12 comments

Your work impressed me with its high speed and strong performance. Before training on my own data, I ran your squeezeDetPlus model (model.ckpt-95000) on the KITTI test set (7,518 images). However, the result is not good.

The pedestrian AP reported in the paper (Table 2) is:

  • 81.4% on Easy, 68.5% on Hard

The result I get is:

(not evaluated against the official KITTI server ground truth; see my reply to dojoscan below)

  • 45.69% on Easy, 38.39% on Hard.

To be honest, I trust your results, so there must be something wrong in the code. I only modified demo.py; here is my demo.py. All other code is the same as yours. I have checked my demo.py over and over again, and I suspect there are bugs in the original demo.py. Could you look into it? Hope you can help with that!

import cv2
import numpy as np
import tensorflow as tf

# imports and flags as in the original demo.py
from config import kitti_squeezeDetPlus_config
from nets import SqueezeDetPlus

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('gpu', '0', """gpu id.""")
tf.app.flags.DEFINE_string('checkpoint', '', """Path to the model checkpoint.""")


def image_demo():
  """Detect pedestrians on the KITTI test set and write KITTI-format results."""

  with tf.Graph().as_default():
    # Load model
    mc = kitti_squeezeDetPlus_config()
    mc.BATCH_SIZE = 1
    # model parameters will be restored from the checkpoint
    mc.LOAD_PRETRAINED_MODEL = False
    model = SqueezeDetPlus(mc, FLAGS.gpu)
    saver = tf.train.Saver(model.model_params)

    # set model path
    FLAGS.checkpoint = '/home/project/HumanDetection/squeezeDet_github/models/squeezeDetPlus/model.ckpt-95000'
    # set test data paths
    basic_image_path = '/home/dataSet/kitti/ori_data/left_image/testing/image_2/'
    list_path = '/home/dataSet/kitti/ori_data/left_image/testing/test_list.txt'
    write_result_path = '/home/dataSet/kitti/ori_data/left_image/testing/run_out/'

    with open(list_path, 'rt') as F_read_list:
      image_list_name = [x.strip() for x in F_read_list.readlines()]

    print('image numbers: ', len(image_list_name))

    count_num = 0
    pedestrian_index = 1  # class index of 'pedestrian' in the KITTI config
    keep_score = 0.05     # minimum probability for a detection to be kept

    # KITTI label-format placeholders for the fields we do not predict:
    # truncation, occlusion, alpha ...
    default_str_1 = 'Pedestrian -1 -1 -10'
    # ... and the 3D dimensions, location, and rotation
    default_str_2 = '-1 -1 -1 -1000 -1000 -1000 -10'

    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
      saver.restore(sess, FLAGS.checkpoint)

      for file_name in image_list_name:
        read_full_name = basic_image_path + file_name
        im = cv2.imread(read_full_name)
        if im is None:
          print(file_name, ' is empty!')
          continue
        im = im.astype(np.float32, copy=False)
        im = cv2.resize(im, (mc.IMAGE_WIDTH, mc.IMAGE_HEIGHT))
        input_image = im - mc.BGR_MEANS

        # Detect
        det_boxes, det_probs, det_class = sess.run(
            [model.det_boxes, model.det_probs, model.det_class],
            feed_dict={model.image_input: [input_image], model.keep_prob: 1.0})

        # NMS filter
        final_boxes, final_probs, final_class = model.filter_prediction(
            det_boxes[0], det_probs[0], det_class[0])

        # only keep detections above the probability threshold
        keep_idx = [idx for idx in range(len(final_probs))
                    if final_probs[idx] > keep_score]
        final_boxes = [final_boxes[idx] for idx in keep_idx]
        final_probs = [final_probs[idx] for idx in keep_idx]
        final_class = [final_class[idx] for idx in keep_idx]

        # -------------- write files -----------------------
        F_w_one_by_one = open(
            write_result_path + file_name.replace('png', 'txt'), 'wt')
        rect_num = final_class.count(pedestrian_index)

        print('count: ', count_num)
        count_num += 1

        if rect_num == 0:
          F_w_one_by_one.close()
          continue

        goal_index = [idx for idx, value in enumerate(final_class)
                      if value == pedestrian_index]

        for kk in goal_index:
          box = final_boxes[kk]
          # convert (cx, cy, w, h) to (xmin, ymin, xmax, ymax)
          xmin = box[0] - box[2] / 2.0
          ymin = box[1] - box[3] / 2.0
          xmax = box[0] + box[2] / 2.0
          ymax = box[1] + box[3] / 2.0

          line_2 = default_str_1 + ' ' + str(xmin) + ' ' + str(ymin) + ' ' + \
              str(xmax) + ' ' + str(ymax) + ' ' + default_str_2 + ' ' + \
              str(final_probs[kk]) + '\n'
          F_w_one_by_one.write(line_2)

        F_w_one_by_one.close()


def main(argv=None):
  image_demo()


if __name__ == '__main__':
  tf.app.run()
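For reference, each line written above follows the 16-field KITTI 2D detection submission format (type, truncation, occlusion, alpha, 2D bbox, 3D dimensions, 3D location, rotation_y, score), with placeholder values for the fields this detector does not predict. The box coordinates and score below are illustrative, not real output:

Pedestrian -1 -1 -10 710.4 145.0 820.3 307.5 -1 -1 -1 -1000 -1000 -1000 -10 0.87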

cksl · Mar 21 '17

Thanks for your question. I'll look into it. @cksl

BichenWuUCB · Mar 21 '17

@BichenWuUCB I found one possible bug in demo.py. I modified my demo.py, inserting seven lines of code (marked with 【new codes】 comments); the full modified file is below. The new AP on KITTI I get is:

  • 60.11% on Easy, 51.41% on Hard

The result improves, but there is still a gap with your paper (Table 2):

  • 81.4% on Easy, 68.5% on Hard

I think the reason for the change is that some input images are 1224x370 and are resized to 1242x375, so the predicted boxes should be restored to the original scale.
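For example, for a 1224x370 image, x_scale = 1224/1242 ≈ 0.986 and y_scale = 370/375 ≈ 0.987, so a predicted xmax of 1000 in network coordinates maps back to roughly 986 in the original image.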

There must be other problems; I hope you can look into it carefully. Thank you, Bichen!

# (imports and flag definitions are identical to the version above)

def image_demo():
  """Detect pedestrians on the KITTI test set and write KITTI-format results."""

  with tf.Graph().as_default():
    # Load model
    mc = kitti_squeezeDetPlus_config()
    mc.BATCH_SIZE = 1
    # model parameters will be restored from the checkpoint
    mc.LOAD_PRETRAINED_MODEL = False
    model = SqueezeDetPlus(mc, FLAGS.gpu)
    saver = tf.train.Saver(model.model_params)

    # set model path
    FLAGS.checkpoint = '/home/project/HumanDetection/squeezeDet_github/models/squeezeDetPlus/model.ckpt-95000'
    # set test data paths
    basic_image_path = '/home/dataSet/kitti/ori_data/left_image/testing/image_2/'
    list_path = '/home/dataSet/kitti/ori_data/left_image/testing/test_list.txt'
    write_result_path = '/home/dataSet/kitti/ori_data/left_image/testing/run_out/'

    with open(list_path, 'rt') as F_read_list:
      image_list_name = [x.strip() for x in F_read_list.readlines()]

    print('image numbers: ', len(image_list_name))

    count_num = 0
    pedestrian_index = 1  # class index of 'pedestrian' in the KITTI config
    keep_score = 0.05     # minimum probability for a detection to be kept

    # KITTI label-format placeholders for the fields we do not predict
    default_str_1 = 'Pedestrian -1 -1 -10'
    default_str_2 = '-1 -1 -1 -1000 -1000 -1000 -10'

    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
      saver.restore(sess, FLAGS.checkpoint)

      for file_name in image_list_name:
        read_full_name = basic_image_path + file_name
        im = cv2.imread(read_full_name)
        if im is None:
          print(file_name, ' is empty!')
          continue

        # 【new codes】: remember the original size so predicted boxes can
        # be mapped back from network coordinates to image coordinates
        ori_height, ori_width, _ = im.shape
        x_scale = float(ori_width) / mc.IMAGE_WIDTH
        y_scale = float(ori_height) / mc.IMAGE_HEIGHT

        im = im.astype(np.float32, copy=False)
        im = cv2.resize(im, (mc.IMAGE_WIDTH, mc.IMAGE_HEIGHT))
        input_image = im - mc.BGR_MEANS

        # Detect
        det_boxes, det_probs, det_class = sess.run(
            [model.det_boxes, model.det_probs, model.det_class],
            feed_dict={model.image_input: [input_image], model.keep_prob: 1.0})

        # NMS filter
        final_boxes, final_probs, final_class = model.filter_prediction(
            det_boxes[0], det_probs[0], det_class[0])

        # only keep detections above the probability threshold
        keep_idx = [idx for idx in range(len(final_probs))
                    if final_probs[idx] > keep_score]
        final_boxes = [final_boxes[idx] for idx in keep_idx]
        final_probs = [final_probs[idx] for idx in keep_idx]
        final_class = [final_class[idx] for idx in keep_idx]

        # -------------- write files -----------------------
        F_w_one_by_one = open(
            write_result_path + file_name.replace('png', 'txt'), 'wt')
        rect_num = final_class.count(pedestrian_index)

        print('count: ', count_num)
        count_num += 1

        if rect_num == 0:
          F_w_one_by_one.close()
          continue

        goal_index = [idx for idx, value in enumerate(final_class)
                      if value == pedestrian_index]

        for kk in goal_index:
          box = final_boxes[kk]
          # convert (cx, cy, w, h) to (xmin, ymin, xmax, ymax)
          xmin = box[0] - box[2] / 2.0
          ymin = box[1] - box[3] / 2.0
          xmax = box[0] + box[2] / 2.0
          ymax = box[1] + box[3] / 2.0

          # 【new codes】: restore the box to the original image scale
          xmin *= x_scale
          ymin *= y_scale
          xmax *= x_scale
          ymax *= y_scale

          line_2 = default_str_1 + ' ' + str(xmin) + ' ' + str(ymin) + ' ' + \
              str(xmax) + ' ' + str(ymax) + ' ' + default_str_2 + ' ' + \
              str(final_probs[kk]) + '\n'
          F_w_one_by_one.write(line_2)

        F_w_one_by_one.close()


def main(argv=None):
  image_demo()


if __name__ == '__main__':
  tf.app.run()

cksl · Mar 22 '17

In both training and the demo, the resize call distorts the image's aspect ratio, which cannot be good for learning. It would probably be better to rescale while keeping the aspect ratio and add padding (see the sketch below); this is already what happens when the image is smaller than the target size.
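A minimal sketch of such an aspect-ratio-preserving resize; letterbox_resize is a hypothetical helper, not part of the repo, and the top-left padding convention is just one possible choice:

import cv2
import numpy as np

def letterbox_resize(im, target_w, target_h, pad_value=0):
  """Resize keeping the aspect ratio, then pad to (target_h, target_w)."""
  h, w = im.shape[:2]
  scale = min(float(target_w) / w, float(target_h) / h)
  new_w, new_h = int(round(w * scale)), int(round(h * scale))
  resized = cv2.resize(im, (new_w, new_h))
  padded = np.full((target_h, target_w, im.shape[2]), pad_value,
                   dtype=im.dtype)
  padded[:new_h, :new_w] = resized  # paste in the top-left corner
  # boxes predicted on the padded image map back via division by `scale`,
  # with no per-axis distortion
  return padded, scale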

andreapiso · Mar 23 '17

In my opinion (from experience), the KITTI dataset contains sequential images (so consecutive frames are strongly correlated), and for the tests in the paper, randomly sampled training and validation splits are used. This is why the results differ. I recommend you use the split method from the 3DOP paper; they considered this problem, so no sequence appears on both sides of the split (see the sketch below).
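For illustration, a minimal sketch of a sequence-aware split, assuming you already have a mapping from image index to drive/sequence id (not shown here); image_to_seq and split_by_sequence are hypothetical names:

import random
from collections import defaultdict

def split_by_sequence(image_to_seq, val_fraction=0.5, seed=0):
  """Split images so no drive/sequence appears in both train and val."""
  seqs = defaultdict(list)
  for img, seq in image_to_seq.items():
    seqs[seq].append(img)
  seq_ids = sorted(seqs)
  random.Random(seed).shuffle(seq_ids)  # shuffle whole sequences, not frames
  n_val = int(len(seq_ids) * val_fraction)
  val = [img for s in seq_ids[:n_val] for img in seqs[s]]
  train = [img for s in seq_ids[n_val:] for img in seqs[s]]
  return train, val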

ByeonghakYim · Mar 24 '17

@ByeonghakYim Yes, the KITTI training set contains sequential images. However, all the results in Table 2 of the paper should be from the KITTI test set; otherwise, the mAP comparison is meaningless.

cksl · Mar 25 '17

@cksl I was under the impression that you can only submit to the KITTI evaluation server once per paper. I would definitely like clarification on this issue.

Edit: Seems to be a validation set actually https://github.com/BichenWuUCB/squeezeDet/issues/18

dojoscan · Apr 14 '17

@dojoscan Yes, an algorithm should run only once on the evaluation server.
Actually, I forgot to post further explanation of the evaluation above. The result was not obtained from the KITTI evaluation server. A classmate of mine participated in the KITTI pedestrian detection competition last year and labeled the KITTI test set himself, so the result is measured against our own labels. These labels are not identical to the official KITTI server ground truth (in his experience, there is about a 5-point AP gap from the official result). Before applying this algorithm to my own detection tasks, I ran it on my classmate's test labels to confirm its effectiveness, and I found the problem above.

cksl · Apr 14 '17

@cksl Did you figure out this problem? I think keep_score is too low, leading to too many false positives.

@BichenWuUCB I did not find results on the KITTI test set in your paper. Did you run SqueezeDet on the KITTI test set? What score did you get?

avavavsf · May 18 '17

@BichenWuUCB Do your results in Table 2 (SqueezeDet & SqueezeDet+) refer to the KITTI test set or the validation set?

yossibiton · Jan 04 '18

@BichenWuUCB It seems the results on the validation set and on the official test benchmark differ greatly. I randomly split the training set and ran the model (SqueezeDet); after some tuning I got close to the paper's result, which is a validation result: validation-result. But the same model with the same hyperparameters, evaluated on the official test benchmark, only got: kitti-eval-result. Yet the table compares against methods evaluated on the official test set. Can you explain what is missing here? Your feedback is really appreciated, thank you!

twangnh · May 06 '18

@MrWanter Can you please tell me what hyperparameter changes (tuning) you made to get the same result as the paper? My training has been futile: even when the loss is low (0.3-0.5), the mAP on the validation set is only around 61. Any help would be appreciated.

aditya1709 · Jun 18 '18

In my experience, a low loss on the training set does not guarantee a correspondingly high mAP on the test set. You may get a higher mAP on the test set at a higher training loss (from some earlier checkpoint, I mean); see the sketch below.
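A minimal sketch of that checkpoint-selection idea; evaluate_map is a hypothetical callable that wraps the repo's eval script and returns the validation mAP for one checkpoint, and the checkpoint path pattern is illustrative:

def pick_best_checkpoint(ckpt_steps, evaluate_map):
  """Keep the checkpoint with the best validation mAP, not the lowest loss."""
  best_ckpt, best_map = None, -1.0
  for step in ckpt_steps:
    ckpt = 'train_dir/model.ckpt-%d' % step  # hypothetical checkpoint path
    val_map = evaluate_map(ckpt)             # hypothetical: runs eval, returns mAP
    if val_map > best_map:
      best_ckpt, best_map = ckpt, val_map
  return best_ckpt, best_map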

eypros · May 21 '19