FPN_Tensorflow
Errors occur when I execute train.py
2018-07-22 19:19:07.851388: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at matching_files_op.cc:49 : Not found: ../data/tfrecords; No such file or directory
2018-07-22 19:19:07.851869: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at matching_files_op.cc:49 : Not found: ../data/tfrecords; No such file or directory
Traceback (most recent call last):
  File "/vol/venvs/tf1.7/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1327, in _do_call
    return fn(*args)
  File "/vol/venvs/tf1.7/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1312, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/vol/venvs/tf1.7/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1420, in _call_tf_sessionrun
    status, run_metadata)
  File "/vol/venvs/tf1.7/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: ../data/tfrecords; No such file or directory
  [[Node: get_batch/matching_filenames/MatchingFiles = MatchingFiles_device="/job:localhost/replica:0/task:0/device:CPU:0"]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./tools/train.py", line 229, in
Caused by op 'get_batch/matching_filenames/MatchingFiles', defined at:
File "./tools/train.py", line 229, in
NotFoundError (see above for traceback): ../data/tfrecords; No such file or directory [[Node: get_batch/matching_filenames/MatchingFiles = MatchingFiles_device="/job:localhost/replica:0/task:0/device:CPU:0"]]
The tip says there is no ../data/tfrecords directory, but I do have this directory, and its location is correct.
@chanyixialex
cd $FPN_ROOT/tools
python train.py
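For reference, a minimal sanity check along these lines can show whether the relative pattern resolves from where train.py is launched; the pattern `../data/tfrecords/*` below is an assumption based on the error message, not taken from the repo's cfgs.py:

```python
# Sketch: does the assumed relative tfrecord pattern resolve from the current directory?
import os
import glob

pattern = os.path.join('..', 'data', 'tfrecords', '*')  # assumed pattern, per the error message
print('cwd:', os.getcwd())
print('matched files:', len(glob.glob(pattern)))
# 0 matches usually means train.py was not started from $FPN_ROOT/tools,
# or the tfrecords have not been generated yet.
```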
@yangxue0827 I have the same problem, but the tip says there is no data/tfrecords directory. I think it is a path problem.
tensorflow.python.framework.errors_impl.NotFoundError: data/tfrecords; No such file or directory [[Node: get_batch/matching_filenames/MatchingFiles = MatchingFiles_device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Caused by op 'get_batch/matching_filenames/MatchingFiles', defined at:
File "train.py", line 229, in
NotFoundError (see above for traceback): data/tfrecords; No such file or directory [[Node: get_batch/matching_filenames/MatchingFiles = MatchingFiles_device="/job:localhost/replica:0/task:0/device:CPU:0"]]
It works now. It was a path problem.
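One way to avoid depending on the working directory is to build the pattern from the project root instead of a relative `../` path; a minimal sketch, assuming the pattern is defined in a config module (the names `ROOT_PATH` and `TFRECORD_PATTERN` are illustrative, not necessarily what cfgs.py uses):

```python
# Illustrative: derive the tfrecord pattern from this file's location,
# so it resolves the same way no matter where train.py is launched from.
import os

ROOT_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
TFRECORD_PATTERN = os.path.join(ROOT_PATH, 'data', 'tfrecords', '*')
```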
@yangxue0827 After fixing the above, the next problem occurs. It seems to be a tfrecord data problem, and the first items of data seem fine. Thank you for any help!
restore model
2018-07-23 20:56:06: step0 image_name:b'38bdd525-f626-4554-92ca-7ec4f95e5b2b.jpg' | rpn_loc_loss:0.2607109546661377 | rpn_cla_loss:1.163469672203064 | rpn_total_loss:1.4241806268692017 | fast_rcnn_loc_loss:0.2614363431930542 | fast_rcnn_cla_loss:0.8251640796661377 | fast_rcnn_total_loss:1.086600422859192 | total_loss:3.1513068675994873 | pre_cost_time:9.237893342971802s
Traceback (most recent call last):
  File "/vol/venvs/tf1.7/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1327, in _do_call
    return fn(*args)
  File "/vol/venvs/tf1.7/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1312, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/vol/venvs/tf1.7/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1420, in _call_tf_sessionrun
    status, run_metadata)
  File "/vol/venvs/tf1.7/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.OutOfRangeError: PaddingFIFOQueue '_1_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
  [[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 229, in
Caused by op 'get_batch/batch', defined at:
File "train.py", line 229, in
OutOfRangeError (see above for traceback): PaddingFIFOQueue '_1_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
@chanyixialex Have you solved this problem? I found the image that caused the error by binary search, deleted it and its corresponding xml, and then the error disappeared.
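Instead of a manual binary search, a small scan over the raw image/annotation pairs can list every sample that fails to decode or parse; a sketch assuming a VOC-style layout (the directory names below are illustrative, adjust them to your dataset), after which the tfrecords would need to be regenerated:

```python
# Sketch: report image/annotation pairs that cannot be read, so broken samples
# can be removed before regenerating the tfrecords.
import os
import glob
import xml.etree.ElementTree as ET
from PIL import Image

img_dir = 'VOCdevkit/JPEGImages'    # assumed layout
xml_dir = 'VOCdevkit/Annotations'   # assumed layout

for img_path in sorted(glob.glob(os.path.join(img_dir, '*.jpg'))):
    name = os.path.splitext(os.path.basename(img_path))[0]
    xml_path = os.path.join(xml_dir, name + '.xml')
    try:
        Image.open(img_path).verify()   # raises if the jpeg is corrupt
        ET.parse(xml_path)              # raises if the xml is missing or malformed
    except Exception as e:
        print('bad sample:', name, '->', e)
```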
@FannierPeng It may be a data error; you can check the corresponding xml, or try again with another dataset.
https://github.com/yangxue0827/FPN_Tensorflow/issues/36 @FannierPeng
I recommend the improved code: https://github.com/DetectionTeamUCAS/FPN_Tensorflow. @chanyixialex @FannierPeng