image annotation
Hello, author, what tool do you use to annotate the images? Thank you very much!
A labeling tool in my lab.
I can only find tools that mark the top-left and bottom-right corners of the target. Could you share your labeling tool? Thank you!
Sorry, I cannot share this annotation tool because of my lab's confidentiality policy.
OK, thanks for your reply. I will try something else.
Caused by op u'get_batch/batch', defined at:
  File "train.py", line 279, in <module>
OutOfRangeError (see above for traceback): PaddingFIFOQueue '_1_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
  [[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](get_batch/batch/padding_fifo_queue, gradients/range/delta)]]
Hello, author, I have a question and have never been able to figure out what causes this. Looking forward to your reply.
It is possible that your tfrecord was not generated properly, or that the path to the tfrecord is incorrect. For various reasons, the complete R2CNN code has not been uploaded yet. Since you are running train.py, you are running FPN; the FPN code is available at https://github.com/yangxue0827/FPN_Tensorflow.
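One way to sanity-check a generated tfrecord, without loading the whole training graph, is to walk the TFRecord container format directly: each record is an 8-byte little-endian payload length, a 4-byte length CRC, the payload, and a 4-byte payload CRC. The sketch below is a minimal stdlib-only check (the file path is a placeholder, and the CRCs are skipped rather than verified):

```python
import struct

def count_tfrecords(path):
    """Count records in a TFRecord file by walking its framing.

    Each record is laid out as:
      uint64 length | uint32 masked_crc(length) | data | uint32 masked_crc(data)
    The CRCs are skipped here, not verified.
    """
    count = 0
    with open(path, "rb") as f:
        while True:
            header = f.read(8)            # uint64 payload length
            if len(header) < 8:
                break                     # clean end of file
            (length,) = struct.unpack("<Q", header)
            f.seek(4, 1)                  # skip the length CRC
            data = f.read(length)         # the record payload
            if len(data) < length:
                raise IOError("truncated record: file is corrupt")
            f.seek(4, 1)                  # skip the data CRC
            count += 1
    return count
```

If the conversion script reports 629 images, the count returned here should match; a file truncated mid-record (for example if the writer was killed by a segmentation fault before closing) raises instead.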
Hi, good morning~ Here are my XML file format and the result of running 'convert_data_to_tfrecord.py':
wyy@hsu-asus:~/R2CNN_FPN_Tensorflow$ python convert_data_to_tfrecord.py
Conversion progress:[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>]100% 629/629
Conversion is complete!
Segmentation fault (core dumped)
So I think my tfrecord was generated properly and the path to it is correct.
And this is the output of running 'train.py'; but when step 601 arrives, it shows the following error:
2017-12-11 10:49:04: step81 image_name:1-(43)_10 | rpn_loc_loss:0.582690596581 | rpn_cla_loss:0.470375984907 | rpn_total_loss:1.05306661129 | fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:0.0460353717208 | fast_rcnn_loc_rotate_loss:0.0 | fast_rcnn_cla_rotate_loss:0.0549118816853 | fast_rcnn_total_loss:0.100947253406 | total_loss:1.99020338058 | pre_cost_time:7.59759879112s
Caused by op u'get_batch/batch', defined at:
  File "train.py", line 279, in <module>
    train()
  File "train.py", line 37, in train
    is_training=True)
  File "/home/wyy/R2CNN_FPN_Tensorflow/data/io/read_tfrecord.py", line 90, in next_batch
    dynamic_pad=True)
  File "/home/wyy/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 927, in batch
    name=name)
  File "/home/wyy/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 722, in _batch
    dequeued = queue.dequeue_many(batch_size, name=name)
  File "/home/wyy/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/data_flow_ops.py", line 464, in dequeue_many
    self._queue_ref, n=n, component_types=self._dtypes, name=name)
  File "/home/wyy/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 2418, in _queue_dequeue_many_v2
    component_types=component_types, timeout_ms=timeout_ms, name=name)
  File "/home/wyy/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/wyy/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
    op_def=op_def)
  File "/home/wyy/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
OutOfRangeError (see above for traceback): PaddingFIFOQueue '_1_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
  [[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](get_batch/batch/padding_fifo_queue, gradients/range/delta)]]
I really appreciate your reply.
xml file form:
A "Segmentation fault (core dumped)" results in a bad tfrecord. Please check your environment.
Flattened XML object entry (the tags were stripped when quoted): name 'boat', pose 'Unspecified', followed by the run-together flag and bndbox digits '0029568505684972433724'.
@wyy1106 I ran into the same problem as you. Have you solved it? If so, could you share the solution with me? Thank you!
@LiangSiyuan21 Sorry, I haven't solved the problem yet. And you?
@wyy1106 @LiangSiyuan21 If you are training on VOC-format data, the problem is in the bndbox of the data. Look at sample.xml: the difference from standard VOC data is that the bndbox has 8 coordinate values instead of 4. So the problem arises when the tfrecord file is read.
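To illustrate the difference, the 8-value bndbox can be read with the standard library. The tag names x0…y3 below follow a common rotated-box convention but are an assumption, as is the sample snippet itself; check them against the repository's sample.xml:

```python
import xml.etree.ElementTree as ET

# Hypothetical rotated-box annotation in the 8-coordinate style
# described above; the tag names x0..y3 are an assumption.
SAMPLE = """
<annotation>
  <object>
    <name>boat</name>
    <pose>Unspecified</pose>
    <bndbox>
      <x0>10</x0><y0>20</y0>
      <x1>110</x1><y1>25</y1>
      <x2>105</x2><y2>85</y2>
      <x3>5</x3><y3>80</y3>
    </bndbox>
  </object>
</annotation>
"""

def parse_objects(xml_text):
    """Return (name, coordinate list) for each object element."""
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        box = obj.find("bndbox")
        # Collect the box children in document order: 8 values, not 4.
        coords = [int(c.text) for c in box]
        objects.append((name, coords))
    return objects
```

A standard VOC reader that expects exactly xmin/ymin/xmax/ymax will mis-parse such a file, which is consistent with the tfrecord read failure discussed above.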