Faster-RCNN_TF
Problem running demo.py
```
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 780 Ti
major: 3 minor: 5 memoryClockRate (GHz) 1.0195
pciBusID 0000:01:00.0
Total memory: 2.95GiB
Free memory: 2.03GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x2262e60
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 1 with properties:
name: Tesla K40c
major: 3 minor: 5 memoryClockRate (GHz) 0.745
pciBusID 0000:02:00.0
Total memory: 11.17GiB
Free memory: 11.10GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 1: Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 780 Ti, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:1) -> (device: 1, name: Tesla K40c, pci bus id: 0000:02:00.0)
Tensor("Placeholder:0", shape=(?, ?, ?, 3), dtype=float32)
Tensor("conv5_3/conv5_3:0", shape=(?, ?, ?, 512), dtype=float32)
Tensor("rpn_conv/3x3/rpn_conv/3x3:0", shape=(?, ?, ?, 512), dtype=float32)
Tensor("rpn_cls_score/rpn_cls_score:0", shape=(?, ?, ?, 18), dtype=float32)
Tensor("rpn_cls_prob:0", shape=(?, ?, ?, ?), dtype=float32)
Tensor("rpn_cls_prob_reshape:0", shape=(?, ?, ?, 18), dtype=float32)
Tensor("rpn_bbox_pred/rpn_bbox_pred:0", shape=(?, ?, ?, 36), dtype=float32)
Tensor("Placeholder_1:0", shape=(?, 3), dtype=float32)
Tensor("conv5_3/conv5_3:0", shape=(?, ?, ?, 512), dtype=float32)
Tensor("rois:0", shape=(?, 5), dtype=float32)
[<tf.Tensor 'conv5_3/conv5_3:0' shape=(?, ?, ?, 512) dtype=float32>, <tf.Tensor 'rois:0' shape=(?, 5) dtype=float32>]
Tensor("fc7/fc7:0", shape=(?, 4096), dtype=float32)
Loaded network /mnt/data/Marcello/prove_faster/Faster-RCNN_TF/VGGnet_fast_rcnn_iter_70000.ckpt
W tensorflow/core/common_runtime/bfc_allocator.cc:217] Ran out of memory trying to allocate 1.08GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
W tensorflow/core/common_runtime/bfc_allocator.cc:217] Ran out of memory trying to allocate 1.08GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
Traceback (most recent call last):
  File "/mnt/data/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/script_ops.py", line 85, in __call__
    ret = func(*args)
  File "/mnt/data/Marcello/prove_faster/Faster-RCNN_TF/tools/../lib/rpn_msr/proposal_layer_tf.py", line 48, in proposal_layer
    pre_nms_topN = cfg[cfg_key].RPN_PRE_NMS_TOP_N
KeyError: b'TEST'
W tensorflow/core/framework/op_kernel.cc:975] Internal: Failed to run py callback pyfunc_0: see error log.
Traceback (most recent call last):
  File "/mnt/data/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1021, in _do_call
    return fn(*args)
  File "/mnt/data/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1003, in _run_fn
    status, run_metadata)
  File "/usr/lib/python3.5/contextlib.py", line 66, in __exit__
    next(self.gen)
  File "/mnt/data/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 469, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InternalError: Failed to run py callback pyfunc_0: see error log.
  [[Node: PyFunc = PyFunc[Tin=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_STRING, DT_INT32, DT_INT32], Tout=[DT_FLOAT], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](rpn_cls_prob_reshape/_99, rpn_bbox_pred/rpn_bbox_pred/_101, _recv_Placeholder_1_0, PyFunc/input_3, PyFunc/input_4, PyFunc/input_5)]]
  [[Node: PyFunc/_103 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_228_PyFunc", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tools/demo.py", line 126, in

Caused by op 'PyFunc', defined at:
  File "tools/demo.py", line 114, in

InternalError (see above for traceback): Failed to run py callback pyfunc_0: see error log.
  [[Node: PyFunc = PyFunc[Tin=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_STRING, DT_INT32, DT_INT32], Tout=[DT_FLOAT], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](rpn_cls_prob_reshape/_99, rpn_bbox_pred/rpn_bbox_pred/_101, _recv_Placeholder_1_0, PyFunc/input_3, PyFunc/input_4, PyFunc/input_5)]]
  [[Node: PyFunc/_103 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_228_PyFunc", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]]
```
I'm having the same problem. I am using TensorFlow 0.11.0.
Same problem here as well on 0.12.1.
I can't fully decipher it yet, but this network seems to use custom layers and may be tied to an old TensorFlow version.
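Judging from the traceback, the proposal layer appears to be called through `tf.py_func`, with the config key going in as a `tf.string` tensor (note the `DT_STRING` input on the `PyFunc` node). Under Python 3, `py_func` hands string tensors to the Python callback as `bytes`, which would explain the `KeyError: b'TEST'`. A minimal sketch of that mechanism follows; `cfg`, `proposal_layer`, and the values here are stand-ins for illustration, not the repo's actual code:

```python
import numpy as np
import tensorflow as tf

# A dict keyed by the *str* 'TEST', standing in for the repo's cfg object.
cfg = {'TEST': {'RPN_PRE_NMS_TOP_N': 6000}}

def proposal_layer(cfg_key):
    # Under Python 3, the tf.string input arrives here as bytes (b'TEST'),
    # so cfg[cfg_key] raises KeyError: b'TEST' and the op fails with
    # "Failed to run py callback pyfunc_0".
    top_n = cfg[cfg_key]['RPN_PRE_NMS_TOP_N']
    return np.float32(top_n)

out = tf.py_func(proposal_layer, [tf.constant('TEST')], tf.float32)

with tf.Session() as sess:
    sess.run(out)  # InternalError on Python 3; decoding cfg_key first fixes it
```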
I had the same problem, but I managed to run demo.py successfully.
My problem was that cfg's key was not a string (it arrived as bytes), so I fixed proposal_layer_tf.py like this:
```python
decode_cfg_key = cfg_key.decode('utf-8')
pre_nms_topN  = cfg[decode_cfg_key].RPN_PRE_NMS_TOP_N
post_nms_topN = cfg[decode_cfg_key].RPN_POST_NMS_TOP_N
nms_thresh    = cfg[decode_cfg_key].RPN_NMS_THRESH
min_size      = cfg[decode_cfg_key].RPN_MIN_SIZE
```
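For what it's worth, a slightly more defensive variant of the same fix decodes only when the key actually arrives as bytes, so the function keeps working whether `cfg_key` comes in as `str` or `bytes`. This is just a sketch of the same idea, not an official patch:

```python
# proposal_layer_tf.py, inside proposal_layer():
# decode only if the key arrived as bytes (Python 3 + tf.py_func);
# leave it untouched if it is already a str.
if isinstance(cfg_key, bytes):
    cfg_key = cfg_key.decode('utf-8')

pre_nms_topN  = cfg[cfg_key].RPN_PRE_NMS_TOP_N
post_nms_topN = cfg[cfg_key].RPN_POST_NMS_TOP_N
nms_thresh    = cfg[cfg_key].RPN_NMS_THRESH
min_size      = cfg[cfg_key].RPN_MIN_SIZE
```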
Same problem with TensorFlow 1.0 and Python 3.5. Solved with abc4698's answer. Thanks!
Thanks to abc3698, I solved this problem!