
error in running demo.py

Open kaishijeng opened this issue 7 years ago • 9 comments

When I ran python ./src/demo.py, i got the following error:

Traceback (most recent call last): File "./src/demo.py", line 217, in tf.app.run() File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run _sys.exit(main(_sys.argv[:1] + flags_passthrough)) File "./src/demo.py", line 212, in main image_demo() File "./src/demo.py", line 175, in image_demo feed_dict={model.image_input:[input_image], model.keep_prob: 1.0}) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 767, in run run_metadata_ptr) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 922, in _run + e.args[0]) TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a float into a Tensor.

Any idea why this happens?

Thanks,

kaishijeng avatar May 25 '17 06:05 kaishijeng

Thanks for your question. That's a bug due to a recent update. Now it should be fixed. Could you please pull the update and try again?
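
For context, this TypeError means one of the feed_dict keys could not be interpreted as a tf.Tensor; given the traceback, it suggests model.keep_prob was a plain Python float rather than a placeholder in that version of the code. A minimal, self-contained TF 1.x illustration of the rule (the placeholder names below are just for the example, not squeezeDet's):

    import numpy as np
    import tensorflow as tf  # TensorFlow 1.x, as used in this thread

    # feed_dict keys must be Tensors (e.g. placeholders), not plain Python values.
    image_input = tf.placeholder(tf.float32, shape=[None, 384, 1248, 3], name='image_input')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')  # a Tensor -> valid feed_dict key
    # keep_prob = 1.0  # a float -> "Cannot interpret feed_dict key as Tensor"

    out = tf.reduce_mean(image_input) * keep_prob
    with tf.Session() as sess:
        print(sess.run(out, feed_dict={image_input: np.zeros((1, 384, 1248, 3), np.float32),
                                       keep_prob: 1.0}))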

BichenWuUCB avatar May 25 '17 07:05 BichenWuUCB

It works now. On speed, I found SqueezeDet is slower than the tiny-yolo model of Darkflow on a Firefly-3399 platform: SqueezeDet 0.9138 s/image vs Tiny-YOLO 0.6 s/image. This was a surprise to me, as I expected SqueezeDet to run faster.

Thanks,

kaishijeng avatar May 27 '17 06:05 kaishijeng

Seems slow. What resolution are you using as input?

andreapiso avatar May 27 '17 06:05 andreapiso

This measurement is from demo.py. The time is measured below:

    t_start = time.time()
    det_boxes, det_probs, det_class = sess.run(
        [model.det_boxes, model.det_probs, model.det_class],
        feed_dict={model.image_input:[input_image]})
    t_end = time.time()

    times['detect'] = t_end - t_start
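
For what it's worth, single-run timings on ARM can be noisy; below is a sketch of the same measurement with a warm-up pass and averaging (sess, model, input_image and times are the ones from demo.py's image_demo(); the run counts are arbitrary):

    # Run the graph a couple of times first so one-off initialization cost is
    # excluded, then report the mean detection time over several runs.
    n_warmup, n_runs = 2, 10
    for _ in range(n_warmup):
        sess.run([model.det_boxes, model.det_probs, model.det_class],
                 feed_dict={model.image_input: [input_image]})

    t_start = time.time()
    for _ in range(n_runs):
        sess.run([model.det_boxes, model.det_probs, model.det_class],
                 feed_dict={model.image_input: [input_image]})
    times['detect'] = (time.time() - t_start) / n_runs  # average seconds per image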

The Firefly-3399 has 2 A72 cores running at 2 GHz and 4 A53 cores (?? GHz). TensorFlow version: 1.0.1.
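
One thing that may be worth checking on a big.LITTLE SoC like this: how many CPU threads TensorFlow is using. A small sketch of pinning the session's thread pools in TF 1.x (the thread counts are arbitrary, and where demo.py actually creates its session may differ):

    import tensorflow as tf

    # Pin TensorFlow's CPU thread pools; on a 2x A72 + 4x A53 board the choice
    # of thread count can change timings noticeably.
    config = tf.ConfigProto(intra_op_parallelism_threads=2,
                            inter_op_parallelism_threads=1)
    sess = tf.Session(config=config)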

Thanks,

kaishijeng avatar May 27 '17 16:05 kaishijeng

I have converted the VOC2012 dataset to the KITTI format required by squeezeDet. Training runs OK, but convergence is very slow if I keep the original image size of 1248x384 in kitti_squeezeDet_config.py; the issue is that the aspect ratio is quite distorted. If I change the image size to 480x384, I get the following error during training:

Traceback (most recent call last): File "./src/train.py", line 345, in tf.app.run() File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run _sys.exit(main(_sys.argv[:1] + flags_passthrough)) File "./src/train.py", line 341, in main train() File "./src/train.py", line 128, in train model = SqueezeDet(mc) File "/home/spin/2TB/src/squeezeDet-voc/src/nets/squeezeDet.py", line 25, in init self._add_interpretation_graph() File "/home/spin/2TB/src/squeezeDet-voc/src/nn_skeleton.py", line 159, in _add_interpretation_graph name='pred_class_probs' File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 2630, in reshape name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2329, in create_op set_shapes_for_outputs(ret) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1717, in set_shapes_for_outputs shapes = shape_func(op) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1667, in call_with_requiring return call_cpp_shape_fn(op, require_shape_fn=True) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn debug_python_shape_fn, require_shape_fn) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 676, in _call_cpp_shape_fn_impl raise ValueError(err.message) ValueError: Cannot reshape a tensor with 2592000 elements to shape [20,16848,20] (6739200 elements) for 'interpret_output/pred_class_probs' (op: 'Reshape') with input shapes: [129600,20], [3].

Are there any other files I should change in order to use this new image size?

Thanks,

kaishijeng avatar Jun 02 '17 14:06 kaishijeng

@kaishijeng Could you share your script to convert VOC to the KITTI format?

ck196 avatar Jun 05 '17 02:06 ck196

Here is what I did:

  1. Follow "Training YOLO on VOC" at https://pjreddie.com/darknet/yolo to download the Pascal VOC dataset.
  2. Use my modified voc_pascal_new.py (below) to generate labels in the squeezeDet (KITTI) format.

    import xml.etree.ElementTree as ET
    import pickle
    import os
    from os import listdir, getcwd
    from os.path import join

    sets = [('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test')]

    classes = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

    def convert(size, box):
        # Keep absolute pixel coordinates (xmin, ymin, xmax, ymax) for the
        # KITTI-style labels squeezeDet expects. (The original darknet script
        # normalized the box to YOLO's center/size format here; that code is
        # no longer used.)
        return (box[0], box[2], box[1], box[3])

    def convert_annotation(year, image_id):
        in_file = open('VOCdevkit/VOC%s/Annotations/%s.xml'%(year, image_id))
        out_file = open('VOCdevkit/VOC%s/labels/%s.txt'%(year, image_id), 'w')
        tree = ET.parse(in_file)
        root = tree.getroot()
        size = root.find('size')
        w = int(size.find('width').text)
        h = int(size.find('height').text)

        for obj in root.iter('object'):
            difficult = obj.find('difficult').text
            cls = obj.find('name').text
            if cls not in classes or int(difficult) == 1:
                continue
            cls_id = classes.index(cls)
            xmlbox = obj.find('bndbox')
            b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
            bb = convert((w, h), b)
            # KITTI-style line: class, truncated, occluded, alpha, bbox, dimensions, location, rotation/score
            out_file.write(classes[cls_id] + " " + "0.0" + " " + "0" + " " + "0.0" + " " + " ".join([str(a) for a in bb]) + " " + "0.0 0.0 0.0" + " " + "0.0 0.0 0.0" + " " + "0.0 0.0" + '\n')
            # Original darknet (YOLO) label line, kept for reference but unused:
            # out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')

    wd = getcwd()

    for year, image_set in sets:
        if not os.path.exists('VOCdevkit/VOC%s/labels/'%(year)):
            os.makedirs('VOCdevkit/VOC%s/labels/'%(year))
        image_ids = open('VOCdevkit/VOC%s/ImageSets/Main/%s.txt'%(year, image_set)).read().strip().split()
        list_file = open('%s_%s.txt'%(year, image_set), 'w')
        for image_id in image_ids:
            list_file.write('%s/VOCdevkit/VOC%s/JPEGImages/%s.jpg\n'%(wd, year, image_id))
            convert_annotation(year, image_id)
        list_file.close()
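
As a quick sanity check on the output: each line written by convert_annotation above starts with the class name, three dummy KITTI fields, and then the pixel-space box. A small sketch that parses one generated label file back (the path is only an example):

    # Parse a generated label file back into (class, xmin, ymin, xmax, ymax).
    # Fields: 0 = class name, 1-3 = dummy truncated/occluded/alpha, 4-7 = bbox.
    label_path = 'VOCdevkit/VOC2012/labels/2008_000008.txt'  # example path
    with open(label_path) as f:
        for line in f:
            fields = line.split()
            cls = fields[0]
            xmin, ymin, xmax, ymax = map(float, fields[4:8])
            print(cls, xmin, ymin, xmax, ymax)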

kaishijeng avatar Jun 06 '17 03:06 kaishijeng

See the attached voc_label_new.zip for a zip of voc_pascal_new.py.

kaishijeng avatar Jun 06 '17 04:06 kaishijeng

I wonder how you fixed the problem in _add_interpretation_graph, @kaishijeng.

Baby47 avatar Oct 30 '17 01:10 Baby47