
OutOfRangeError (see above for traceback): RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0). How do I solve this problem?

Open xingbowei opened this issue 7 years ago • 18 comments

```
xbw@xbw-P65xRP:~/FastMaskRCNN-master/train$ python train.py
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
P2
P3
P4
P5
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py:91: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
[... the same cpu_feature_guard warning repeated for SSE4.1, SSE4.2, AVX, AVX2 and FMA ...]
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 1060
major: 6 minor: 1 memoryClockRate (GHz) 1.6705
pciBusID 0000:01:00.0
Total memory: 5.93GiB
Free memory: 5.45GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0)
--restore_previous_if_exists is set, but failed to restore in ./output/mask_rcnn/ None
restoring resnet_v1_50/conv1/weights:0
restoring resnet_v1_50/conv1/BatchNorm/beta:0
restoring resnet_v1_50/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/conv1/BatchNorm/moving_variance:0
[... "restoring" lines for the remaining resnet_v1_50 block1-block4 variables omitted ...]
restoring resnet_v1_50/logits/weights:0
restoring resnet_v1_50/logits/biases:0
Restored 267(544) vars from /home/xbw/FastMaskRCNN-master/data/pretrained_models/resnet_v1_50.ckpt
W tensorflow/core/framework/op_kernel.cc:993] Failed precondition: /home/xbw/FastMaskRCNN-master/data/coco
	 [[Node: ReaderReadV2 = ReaderReadV2[_device="/job:localhost/replica:0/task:0/cpu:0"](TFRecordReaderV2, input_producer)]]
W tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
	 [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2[component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]]
[... the same "Out of range" warning repeated a dozen more times ...]
Traceback (most recent call last):
  File "train.py", line 222, in <module>
    train()
  File "train.py", line 190, in train
    batch_info )
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 767, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 965, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1015, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1035, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
	 [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2[component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]]
```

```
Caused by op u'random_shuffle_queue_Dequeue', defined at:
  File "train.py", line 222, in <module>
    train()
  File "train.py", line 124, in train
    (image, ih, iw, gt_boxes, gt_masks, num_instances, img_id) = data_queue.dequeue()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 427, in dequeue
    self._queue_ref, self._dtypes, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 1435, in _queue_dequeue_v2
    timeout_ms=timeout_ms, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2395, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1264, in __init__
    self._traceback = _extract_stack()
```

```
OutOfRangeError (see above for traceback): RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
	 [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2[component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]]
```

xingbowei avatar Jun 07 '17 13:06 xingbowei

I also have this error. Does anyone know how to fix it?

corrupt003 avatar Jun 13 '17 07:06 corrupt003

I also have this problem. Please help.

HuangBo-Terraloupe avatar Jun 19 '17 17:06 HuangBo-Terraloupe

In config_v1.py, change the relative path of the tfrecords into an absolute path.
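One way to verify the fix before rerunning train.py is to check that the tfrecords path actually matches files on disk (a minimal sketch; `find_tfrecords` and the example directory are illustrations, not part of the repo):

```python
import glob
import os

def find_tfrecords(records_dir, pattern="*.tfrecord"):
    """Return the tfrecord files matching pattern under records_dir.

    An empty result is exactly what makes the input pipeline fail with
    "RandomShuffleQueue ... insufficient elements": the TFRecord reader
    has nothing to enqueue, so the first dequeue closes the queue.
    """
    full_pattern = os.path.join(os.path.abspath(records_dir), pattern)
    matches = sorted(glob.glob(full_pattern))
    if not matches:
        print("WARNING: no tfrecords matched %s" % full_pattern)
    return matches

# Point this at your absolute records directory before training.
print(len(find_tfrecords("/tmp/example_missing_records_dir")))
```

If this prints 0 (with the warning), the path in the config is wrong and training will hit the same OutOfRangeError.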

xingbowei avatar Jun 20 '17 07:06 xingbowei

Hello xingbowei, did you successfully launch the training? Can you share your FastMaskRCNN repository with me? I did not find the path of the tfrecords in config_v1.py, but I did change the path in dataset_factory.py:

import glob from libs.datasets import coco import libs.preprocessings.coco_v1 as coco_preprocess

def get_dataset(dataset_name, split_name, dataset_dir, im_batch=1, is_training=False, file_pattern=None, reader=None):
    """"""
    if file_pattern is None:
        # file_pattern = dataset_name + '_' + split_name + '.tfrecord'
        file_pattern = '/home/huangbo/FastMaskRCNN/data/coco/' + ''
    temp = file_pattern
    tfrecords = glob.glob(temp)
    # tfrecords = glob.glob(dataset_dir + '/records/' + file_pattern)
    image, ih, iw, gt_boxes, gt_masks, num_instances, img_id = coco.read(tfrecords)

    image, gt_boxes, gt_masks = coco_preprocess.preprocess_image(image, gt_boxes, gt_masks, is_training)

    return image, ih, iw, gt_boxes, gt_masks, num_instances, img_id
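A side note on the snippet above: `glob.glob` on a bare directory path (the `file_pattern` built there ends at `.../data/coco/` with nothing appended) matches the directory itself rather than any `.tfrecord` shards inside it, which is consistent with the later `Failed precondition: /home/huangbo/FastMaskRCNN/data/coco` warning and the empty shuffle queue. A small self-contained sketch of the difference, using a throwaway temporary directory instead of the real dataset:

```python
import glob
import os
import tempfile

# Build a throwaway layout that mirrors data/coco/records/.
root = tempfile.mkdtemp()
records = os.path.join(root, 'records')
os.mkdir(records)
open(os.path.join(records, 'coco_train2014_00000-of-00033.tfrecord'), 'w').close()

# Globbing the bare directory (as the file_pattern above effectively does)
# matches the directory itself, not the shard files inside it.
bad = glob.glob(root + '/')
assert bad == [root + '/'] and os.path.isdir(bad[0])

# Globbing with an explicit wildcard under records/ finds the shards.
good = glob.glob(os.path.join(records, '*.tfrecord'))
assert len(good) == 1
```

Handing a directory (or an empty list) to the TFRecord reader is what leaves the shuffle queue with zero elements.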

But it still has some bugs:

I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
P2
P3
P4
P5
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py:91: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: name: GeForce GTX 950M major: 5 minor: 0 memoryClockRate (GHz) 1.124 pciBusID 0000:01:00.0 Total memory: 3.95GiB Free memory: 3.41GiB I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 950M, pci bus id: 0000:01:00.0) --restore_previous_if_exists is set, but failed to restore in ./output/mask_rcnn/ None restoring resnet_v1_50/conv1/weights:0 restoring resnet_v1_50/conv1/BatchNorm/beta:0 restoring resnet_v1_50/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/weights:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/beta:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv2/BatchNorm/beta:0 restoring 
resnet_v1_50/block1/unit_1/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block1/unit_2/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring 
resnet_v1_50/block1/unit_3/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block1/unit_3/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/shortcut/weights:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/shortcut/BatchNorm/beta:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean:0 
restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block2/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block2/unit_2/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv1/BatchNorm/moving_variance:0 
restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block2/unit_3/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block2/unit_4/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/shortcut/weights:0 restoring 
resnet_v1_50/block3/unit_1/bottleneck_v1/shortcut/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv2/BatchNorm/beta:0 restoring 
resnet_v1_50/block3/unit_2/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_2/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_3/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring 
resnet_v1_50/block3/unit_4/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_4/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_5/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring 
resnet_v1_50/block3/unit_5/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block3/unit_6/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/shortcut/weights:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/shortcut/BatchNorm/beta:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance:0 
restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block4/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv1/weights:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block4/unit_2/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv1/weights:0 restoring 
resnet_v1_50/block4/unit_3/bottleneck_v1/conv1/BatchNorm/beta:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv1/BatchNorm/gamma:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv1/BatchNorm/moving_mean:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv1/BatchNorm/moving_variance:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv2/weights:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv2/BatchNorm/beta:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv2/BatchNorm/gamma:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv2/BatchNorm/moving_mean:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv2/BatchNorm/moving_variance:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/weights:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/beta:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/gamma:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/moving_mean:0 restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/moving_variance:0 restoring resnet_v1_50/logits/weights:0 restoring resnet_v1_50/logits/biases:0 Restored 267(544) vars from /home/huangbo/FastMaskRCNN/data/pretrained_models/resnet_v1_50.ckpt W tensorflow/core/framework/op_kernel.cc:993] Failed precondition: /home/huangbo/FastMaskRCNN/data/coco [[Node: ReaderReadV2 = ReaderReadV2[_device="/job:localhost/replica:0/task:0/cpu:0"](TFRecordReaderV2, input_producer)]] W tensorflow/core/framework/op_kernel.cc:993] Failed precondition: /home/huangbo/FastMaskRCNN/data/coco [[Node: ReaderReadV2 = ReaderReadV2[_device="/job:localhost/replica:0/task:0/cpu:0"](TFRecordReaderV2, input_producer)]] W tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, 
DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]] W tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]] W tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]] W tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]] W tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]] W tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]] W 
tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]] W tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]] W tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]] W tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]] W tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]] W tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue 
'_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]] W tensorflow/core/framework/op_kernel.cc:993] Out of range: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]] Traceback (most recent call last): File "train.py", line 222, in train() File "train.py", line 190, in train batch_info ) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 767, in run run_metadata_ptr) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 965, in _run feed_dict_string, options, run_metadata) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1015, in _do_run target_list, options, run_metadata) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1035, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]]

Caused by op u'random_shuffle_queue_Dequeue', defined at:
  File "train.py", line 222, in <module>
    train()
  File "train.py", line 124, in train
    (image, ih, iw, gt_boxes, gt_masks, num_instances, img_id) = data_queue.dequeue()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 427, in dequeue
    self._queue_ref, self._dtypes, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 1435, in _queue_dequeue_v2
    timeout_ms=timeout_ms, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2395, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1264, in __init__
    self._traceback = _extract_stack()

OutOfRangeError (see above for traceback): RandomShuffleQueue '_1_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
  [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2[component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]]

Segmentation fault (core dumped)

HuangBo-Terraloupe avatar Jun 20 '17 08:06 HuangBo-Terraloupe

Same problem, please help.

HuangBo-Terraloupe avatar Jun 21 '17 08:06 HuangBo-Terraloupe

I solved this problem. The cause is that you need to give the absolute path to the tfrecords. In config_v1.py, change:

tf.app.flags.DEFINE_string(
    'pretrained_model',
    '/home/huangbo/FastMaskRCNN/data/pretrained_models/resnet_v1_50.ckpt',
    'Path to pretrained model')

tf.app.flags.DEFINE_string(
    'dataset_dir',
    '/home/huangbo/FastMaskRCNN/data/coco/',
    'The directory where the dataset files are stored.')

Then the training should run.

HuangBo-Terraloupe avatar Jun 21 '17 14:06 HuangBo-Terraloupe

Change /home/huangbo/ to the corresponding path on your Ubuntu system.

Sorry, I did not make that clear.

HuangBo-Terraloupe avatar Jun 21 '17 14:06 HuangBo-Terraloupe

Hello HuangBo @HuangBo-Terraloupe, I also have this problem, and I have changed the path in both dataset_factory.py and config_v1.py, but it still shows the same bug. Please help!

Duankaiwen avatar Jun 23 '17 15:06 Duankaiwen

@HuangBo-Terraloupe, sorry, the error on my machine is RandomShuffleQueue '_2_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0), not '_1_random_shuffle_queue'. I don't know what the difference is. Help please!

Duankaiwen avatar Jun 23 '17 15:06 Duankaiwen

For this function, I suggest you give it the tfrecord paths directly:

image, ih, iw, gt_boxes, gt_masks, num_instances, img_id = coco.read(tfrecords)

for example:

tfrecords = glob.glob('/home/huangbo/FastMaskRCNN/data/coco/records/coco_train2014_00000-of-00033.tfrecord')

If this does not work, I do not know why; maybe check your TensorFlow version.

HuangBo-Terraloupe avatar Jun 26 '17 07:06 HuangBo-Terraloupe

Thanks a lot! @HuangBo-Terraloupe The problem has been solved with your help! I had made a mistake: I turned the relative path of the tfrecords into an absolute path in both dataset_factory.py and config_v1.py. That was a big mistake; change the path in dataset_factory.py only.

Duankaiwen avatar Jun 26 '17 13:06 Duankaiwen

@Duankaiwen @HuangBo-Terraloupe @xingbowei Hello, thanks for your solution, but after doing as you say I get this error:

P2 P3 P4 P5
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py:93: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
2017-07-07 00:47:45.743178: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-07 00:47:45.743198: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-07 00:47:45.743203: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-07-07 00:47:45.743207: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-07 00:47:45.743210: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
--restore_previous_if_exists is set, but failed to restore in ./output/mask_rcnn/ None
restoring resnet_v1_50/conv1/weights:0
restoring resnet_v1_50/conv1/BatchNorm/beta:0
restoring resnet_v1_50/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/weights:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/beta:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance:0
......
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/logits/weights:0
restoring resnet_v1_50/logits/biases:0
Restored 267(544) vars from /home/cs/FastMaskRCNN/data/pretrained_models/resnet_v1_50.ckpt
2017-07-07 00:47:55.856949: W tensorflow/core/framework/op_kernel.cc:1152] Invalid argument: Name: , Feature: image/encoded (data type: string) is required but could not be found.
2017-07-07 00:47:55.862359: W tensorflow/core/framework/op_kernel.cc:1152] Invalid argument: Name: , Feature: image/encoded (data type: string) is required but could not be found.
2017-07-07 00:47:55.902815: W tensorflow/core/framework/op_kernel.cc:1152] Invalid argument: Name: , Feature: image/encoded (data type: string) is required but could not be found.
2017-07-07 00:47:55.944422: W tensorflow/core/framework/op_kernel.cc:1152] Invalid argument: Name: , Feature: image/encoded (data type: string) is required but could not be found.
2017-07-07 00:47:57.962777: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: RandomShuffleQueue '_2_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
  [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2[component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]]]
Traceback (most recent call last):
  File "train.py", line 222, in <module>
    train()
  File "train.py", line 190, in train
    batch_info )
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 778, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 982, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1032, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1052, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_2_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
  [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2[component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]]]

Caused by op u'random_shuffle_queue_Dequeue', defined at:
  File "train.py", line 222, in <module>
    train()
  File "train.py", line 124, in train
    (image, ih, iw, gt_boxes, gt_masks, num_instances, img_id) = data_queue.dequeue()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 416, in dequeue
    self._queue_ref, self._dtypes, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 1453, in _queue_dequeue_v2
    timeout_ms=timeout_ms, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1228, in __init__
    self._traceback = _extract_stack()

OutOfRangeError (see above for traceback): RandomShuffleQueue '_2_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
  [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2[component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]]]

Above are the errors and warnings. I made this change in dataset_factory.py (I tried both the commented-out tfrecords line and the new one):

print(file_pattern)
#tfrecords = glob.glob('/home/cs/FastMaskRCNN/'+dataset_dir + '/records/' + file_pattern)
tfrecords = glob.glob('/home/cs/FastMaskRCNN/data/coco/records/coco_train2014*.tfrecord')

In config_v1.py only one place was changed, line 14; line 45, "dataset_dir", was not changed:

tf.app.flags.DEFINE_string( 'pretrained_model', '/home/cs/FastMaskRCNN/data/pretrained_models/resnet_v1_50.ckpt', 'Path to pretrained model')

Environment: tensorflow-cpu 1.1.0, Python 2.7, numpy 1.13.0.

Any help would be much appreciated, thank you very much!
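The "Feature: image/encoded (data type: string) is required but could not be found" warnings suggest the shards themselves may have been written incorrectly (for example by an interrupted download or conversion), not only mis-pathed. As a rough check that a shard at least has intact record framing, one can walk the TFRecord container format with the stdlib. This is a sketch: `count_tfrecord_entries` is a hypothetical helper, and it skips CRC validation, so it only detects truncation, not corrupt Example protos:

```python
import struct

def count_tfrecord_entries(path):
    """Count records in a TFRecord file using only the on-disk framing:
    uint64 length, uint32 length-CRC, <length> payload bytes, uint32
    payload-CRC. CRCs are skipped, so this only catches truncation."""
    n = 0
    with open(path, 'rb') as f:
        while True:
            header = f.read(8)
            if not header:
                return n            # clean end of file
            if len(header) != 8:
                raise IOError('truncated record header in %s' % path)
            (length,) = struct.unpack('<Q', header)
            f.seek(4, 1)            # skip the length CRC
            payload = f.read(length)
            if len(payload) != length:
                raise IOError('truncated record payload in %s' % path)
            f.seek(4, 1)            # skip the payload CRC
            n += 1
```

If a shard raises here, or reports far fewer records than expected, regenerating the tfrecords is the likely fix.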

lnuchiyo avatar Jul 07 '17 01:07 lnuchiyo

@lnuchiyo Don't change anything in dataset_factory.py. Change the paths in config_v1.py only:

tf.app.flags.DEFINE_string( 'train_dir', '/home/kwduan/FastMaskRCNN-master3/output/mask_rcnn/', 'Directory where checkpoints and event logs are written to.')

tf.app.flags.DEFINE_string( 'pretrained_model', '/ssd/kwduan/data/pretrained_models/resnet_v1_50.ckpt', 'Path to pretrained model')

tf.app.flags.DEFINE_string( 'dataset_dir', '/ssd/kwduan/data/coco/', 'The directory where the dataset files are stored.')
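It's worth spelling out why the absolute paths matter: a relative 'dataset_dir' resolves against the directory you launch python train.py from (here ~/FastMaskRCNN-master/train), not the repo root. A small stdlib illustration; the paths and the `resolve` helper are hypothetical:

```python
import os

def resolve(path, cwd):
    """Show what a (possibly relative) dataset path becomes for a given
    working directory, mimicking how the OS resolves it at open() time."""
    if os.path.isabs(path):
        return path                # absolute paths are cwd-independent
    return os.path.normpath(os.path.join(cwd, path))
```

So the same relative flag value points at two different directories depending on where training is started, while an absolute value always points at the same place.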

Duankaiwen avatar Jul 07 '17 02:07 Duankaiwen

@Duankaiwen Thanks for your help, but there are still errors:

coco_train2014*.tfrecord
P2 P3 P4 P5
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py:93: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
2017-07-07 11:19:15.838301: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-07 11:19:15.838322: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-07 11:19:15.838326: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-07-07 11:19:15.838329: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-07 11:19:15.838347: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
--restore_previous_if_exists is set, but failed to restore in /home/cs/FastMaskRCNN/output/mask_rcnn/ None
restoring resnet_v1_50/conv1/weights:0
restoring resnet_v1_50/conv1/BatchNorm/beta:0
restoring resnet_v1_50/conv1/BatchNorm/gamma:0
restoring resnet_v1_50/conv1/BatchNorm/moving_mean:0
restoring resnet_v1_50/conv1/BatchNorm/moving_variance:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/weights:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/beta:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance:0
restoring resnet_v1_50/block1/unit_1/bottleneck_v1/conv1/weights:0
......
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/beta:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/gamma:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/moving_mean:0
restoring resnet_v1_50/block4/unit_3/bottleneck_v1/conv3/BatchNorm/moving_variance:0
restoring resnet_v1_50/logits/weights:0
restoring resnet_v1_50/logits/biases:0
Restored 267(544) vars from /home/cs/FastMaskRCNN/data/pretrained_models/resnet_v1_50.ckpt
2017-07-07 11:19:23.525755: W tensorflow/core/framework/op_kernel.cc:1152] Invalid argument: Name: , Feature: image/encoded (data type: string) is required but could not be found.
2017-07-07 11:19:23.590498: W tensorflow/core/framework/op_kernel.cc:1152] Invalid argument: Name: , Feature: image/encoded (data type: string) is required but could not be found.
2017-07-07 11:19:23.663300: W tensorflow/core/framework/op_kernel.cc:1152] Invalid argument: Name: , Feature: image/encoded (data type: string) is required but could not be found.
2017-07-07 11:19:23.720604: W tensorflow/core/framework/op_kernel.cc:1152] Invalid argument: Name: , Feature: image/encoded (data type: string) is required but could not be found.
2017-07-07 11:19:25.543326: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: RandomShuffleQueue '_2_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
  [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2[component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]]]
Traceback (most recent call last):
  File "train.py", line 222, in <module>
    train()
  File "train.py", line 190, in train
    batch_info )
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 778, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 982, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1032, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1052, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_2_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
  [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2[component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]]]

Caused by op u'random_shuffle_queue_Dequeue', defined at:
  File "train.py", line 222, in <module>
    train()
  File "train.py", line 124, in train
    (image, ih, iw, gt_boxes, gt_masks, num_instances, img_id) = data_queue.dequeue()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 416, in dequeue
    self._queue_ref, self._dtypes, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 1453, in _queue_dequeue_v2
    timeout_ms=timeout_ms, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1228, in __init__
    self._traceback = _extract_stack()

OutOfRangeError (see above for traceback): RandomShuffleQueue '_2_random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
  [[Node: random_shuffle_queue_Dequeue = QueueDequeueV2[component_types=[DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]]]

There is also this line: "restore_previous_if_exists is set, but failed to restore in /home/cs/FastMaskRCNN/output/mask_rcnn/ None". I have changed config_v1.py as you suggested, but it still fails to restore from this path. Why, and what should I do?

I had installed both tensorflow-cpu and tensorflow-gpu. I think tensorflow-gpu can interfere with tensorflow-cpu, so I uninstalled the GPU build and reinstalled tensorflow-cpu 1.1.0. Here is my "pip list" output:
adium-theme-ubuntu (0.3.4) backports.weakref (1.0rc1) bleach (1.5.0) cmake (0.7.1) configparser (3.5.0) cycler (0.10.0) Cython (0.25.2) decorator (4.0.6) easydict (1.7) funcsigs (1.0.2) functools32 (3.2.3.post2) html5lib (0.9999999) Markdown (2.2.0) matplotlib (2.0.2) mock (2.0.0) networkx (1.11) numpy (1.13.0) olefile (0.44) opencv-python (3.2.0.7) pbr (3.1.1) Pillow (4.2.0) pip (9.0.1) protobuf (3.3.0) pyparsing (2.2.0) python-dateutil (2.6.0) pytz (2017.2) PyWavelets (0.5.2) scikit-image (0.13.0) scipy (0.17.0) setuptools (36.0.1) six (1.10.0) subprocess32 (3.2.7) tensorflow (1.1.0) unity-lens-photos (1.0) Werkzeug (0.12.2) wheel (0.29.0)

Is there anything wrong in this list?

lnuchiyo avatar Jul 07 '17 03:07 lnuchiyo

I suggest you uninstall tensorflow-cpu and install only tensorflow-gpu (1.0 or 1.1), because by default this code runs on the GPU. If that does not work, follow HuangBo-Terraloupe's suggestion and give the tfrecord path directly:

image, ih, iw, gt_boxes, gt_masks, num_instances, img_id = coco.read(tfrecords)

for example:

tfrecords = glob.glob('///***/FastMaskRCNN/data/coco/records/coco_train2014_00000-of-00033.tfrecord')

If this works, it proves the problem is caused by the path, and you should check your paths carefully. If it still does not work, you can send me your code by email if you like, and I will try to run it on my machine. If even that fails, sorry, I don't know. My email: [email protected]

Duankaiwen avatar Jul 07 '17 05:07 Duankaiwen

@Duankaiwen Thanks a lot, I am following your advice. Can you share your "pip list" output? I want to know your Cython version, because after installing tensorflow-gpu there is a new error:

error Do not use this file, it is the result of a failed Cython compilation.

I have installed Cython, but the solutions I found suggest the Cython version might need to change. My Cython version is "Cython (0.25.2)".

lnuchiyo avatar Jul 07 '17 10:07 lnuchiyo

@lnuchiyo My Cython version is "Cython (0.25.2)" too, here is my "pip list", I hope this could help you ! ~$ pip list alabaster (0.7.10) anaconda-client (1.6.3) anaconda-navigator (1.6.2) anaconda-project (0.6.0) asn1crypto (0.22.0) astroid (1.4.9) astropy (1.3.2) Babel (2.4.0) backports-abc (0.5) backports.shutil-get-terminal-size (1.0.0) backports.ssl-match-hostname (3.4.0.2) backports.weakref (1.0rc1) beautifulsoup4 (4.6.0) bitarray (0.8.1) blaze (0.10.1) bleach (1.5.0) bokeh (0.12.5) boto (2.46.1) Bottleneck (1.2.1) cdecimal (2.3) cffi (1.10.0) chardet (3.0.3) click (6.7) cloudpickle (0.2.2) clyent (1.2.2) colorama (0.3.9) conda (4.3.22) configparser (3.5.0) contextlib2 (0.5.5) cryptography (1.8.1) cycler (0.10.0) Cython (0.25.2) cytoolz (0.8.2) dask (0.14.3) datashape (0.5.4) decorator (4.0.11) distributed (1.16.3) docutils (0.13.1) entrypoints (0.2.2) enum34 (1.1.6) et-xmlfile (1.0.1) fastcache (1.0.2) Flask (0.12.2) Flask-Cors (3.0.2) funcsigs (1.0.2) functools32 (3.2.3.post2) futures (3.1.1) gevent (1.2.1) greenlet (0.4.12) grin (1.2.1) h5py (2.7.0) HeapDict (1.0.0) html5lib (0.9999999) idna (2.5) imagesize (0.7.1) ipaddress (1.0.18) ipykernel (4.6.1) ipython (5.3.0) ipython-genutils (0.2.0) ipywidgets (6.0.0) isort (4.2.5) itsdangerous (0.24) jdcal (1.3) jedi (0.10.2) Jinja2 (2.9.6) jsonschema (2.6.0) jupyter (1.0.0) jupyter-client (5.0.1) jupyter-console (5.1.0) jupyter-core (4.3.0) lazy-object-proxy (1.2.2) llvmlite (0.18.0) locket (0.2.0) lxml (3.7.3) Markdown (2.6.8) MarkupSafe (0.23) matplotlib (2.0.2) mistune (0.7.4) mock (2.0.0) mpmath (0.19) msgpack-python (0.4.8) multipledispatch (0.4.9) navigator-updater (0.1.0) nbconvert (5.1.1) nbformat (4.3.0) networkx (1.11) nltk (3.2.3) nose (1.3.7) notebook (5.0.0) numba (0.33.0+0.ge79330a.dirty) numexpr (2.6.2) numpy (1.13.0) numpydoc (0.6.0) odo (0.5.0) olefile (0.44) openpyxl (2.4.7) packaging (16.8) pandas (0.20.1) pandocfilters (1.4.1) partd (0.3.8) pathlib2 (2.2.1) patsy (0.4.1) pbr (3.1.1) pep8 
(1.7.0) pexpect (4.2.1) pickleshare (0.7.4) Pillow (4.1.1) pip (9.0.1) ply (3.10) prompt-toolkit (1.0.14) protobuf (3.3.0) psutil (5.2.2) ptyprocess (0.5.1) py (1.4.33) pycairo (1.10.0) pycosat (0.6.2) pycparser (2.17) pycrypto (2.6.1) pycurl (7.43.0) pyflakes (1.5.0) Pygments (2.2.0) pylint (1.6.4) pyodbc (4.0.16) pyOpenSSL (17.0.0) pyparsing (2.1.4) pytest (3.0.7) python-dateutil (2.6.0) pytz (2017.2) PyWavelets (0.5.2) PyYAML (3.12) pyzmq (16.0.2) QtAwesome (0.4.4) qtconsole (4.3.0) QtPy (1.2.1) requests (2.14.2) rope (0.9.4) scandir (1.5) scikit-image (0.13.0) scikit-learn (0.18.1) scipy (0.19.0) seaborn (0.7.1) setuptools (36.0.1) simplegeneric (0.8.1) singledispatch (3.4.0.3) six (1.10.0) snowballstemmer (1.2.1) sortedcollections (0.5.3) sortedcontainers (1.5.7) Sphinx (1.5.6) spyder (3.1.4) SQLAlchemy (1.1.9) statsmodels (0.8.0) subprocess32 (3.2.7) sympy (1.0) tables (3.3.0) tblib (1.3.2) tensorflow (1.1.0) terminado (0.6) testpath (0.3) toolz (0.8.2) tornado (4.5.1) traitlets (4.3.2) unicodecsv (0.14.1) wcwidth (0.1.7) Werkzeug (0.12.2) wheel (0.29.0) widgetsnbextension (2.0.0) wrapt (1.10.10) xlrd (1.0.0) XlsxWriter (0.9.6) xlwt (1.2.0) zict (0.1.2)

Duankaiwen avatar Jul 08 '17 07:07 Duankaiwen

@lnuchiyo Did you solve it?

yuye1992 avatar Sep 29 '17 07:09 yuye1992